In ring theory, a branch of abstract algebra, a quotient ring, also known as factor ring, difference ring or residue class ring, is a construction quite similar to the quotient group in group theory and to the quotient space in linear algebra. It is a specific example of a quotient, as viewed from the general setting of universal algebra. Starting with a ring R and a two-sided ideal I in R, a new ring, the quotient ring R / I, is constructed, whose elements are the cosets of I in R subject to special + and ⋅ operations. (Quotient ring notation always uses a fraction slash "/".)
Quotient rings are distinct from the so-called "quotient field", or field of fractions, of an integral domain, as well as from the more general "rings of quotients" obtained by localization.
== Formal quotient ring construction ==
Given a ring R and a two-sided ideal I in R, we may define an equivalence relation ∼ on R as follows: a ∼ b if and only if a − b is in I.
Using the ideal properties, it is not difficult to check that ∼ is a congruence relation. In case a ∼ b, we say that a and b are congruent modulo I (for example, 1 and 3 are congruent modulo 2, as their difference is an element of the ideal 2ℤ, the even integers). The equivalence class of the element a in R is given by:

[a] = a + I := {a + r : r ∈ I}

This equivalence class is also sometimes written as a mod I and called the "residue class of a modulo I".
The set of all such equivalence classes is denoted by R / I; it becomes a ring, the factor ring or quotient ring of R modulo I, if one defines

(a + I) + (b + I) = (a + b) + I;
(a + I)(b + I) = (ab) + I.

(Here one has to check that these definitions are well-defined; compare coset and quotient group.) The zero element of R / I is 0 + I = I, and the multiplicative identity is 1 + I.
The map p from R to R / I defined by p(a) = a + I is a surjective ring homomorphism, sometimes called the natural quotient map or the canonical homomorphism.
== Examples ==
The quotient ring R / {0} is naturally isomorphic to R, and R / R is the zero ring {0}, since, by our definition, for any r ∈ R we have [r] = r + R = {r + b : b ∈ R}, which equals R itself. This fits with the rule of thumb that the larger the ideal I, the smaller the quotient ring R / I. If I is a proper ideal of R, i.e. I ≠ R, then R / I is not the zero ring.
Consider the ring of integers ℤ and the ideal of even numbers, denoted by 2ℤ. Then the quotient ring ℤ / 2ℤ has only two elements: the coset 0 + 2ℤ consisting of the even numbers and the coset 1 + 2ℤ consisting of the odd numbers; applying the definition, [z] = z + 2ℤ = {z + 2y : 2y ∈ 2ℤ}, where 2ℤ is the ideal of even numbers. It is naturally isomorphic to the finite field with two elements, F₂. Intuitively: if you think of all the even numbers as 0, then every integer is either 0 (if it is even) or 1 (if it is odd and therefore differs from an even number by 1). Modular arithmetic is essentially arithmetic in the quotient ring ℤ / nℤ (which has n elements).
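Coset arithmetic in ℤ / nℤ can be sketched in a few lines of Python; the class name `ZmodN` and its interface are illustrative, not from any library.

```python
# A minimal sketch of arithmetic in the quotient ring Z/nZ.
class ZmodN:
    def __init__(self, value, n):
        self.n = n
        self.value = value % n  # canonical coset representative in {0, ..., n-1}

    def __add__(self, other):
        # (a + nZ) + (b + nZ) = (a + b) + nZ
        return ZmodN(self.value + other.value, self.n)

    def __mul__(self, other):
        # (a + nZ)(b + nZ) = ab + nZ
        return ZmodN(self.value * other.value, self.n)

    def __eq__(self, other):
        return self.n == other.n and self.value == other.value

    def __repr__(self):
        return f"[{self.value}] mod {self.n}"

# In Z/2Z, odd + odd = even: [1] + [1] = [0]
print(ZmodN(1, 2) + ZmodN(1, 2))   # [0] mod 2
# Well-definedness: representatives 3 and 5 give the same product as 1 and 1
print(ZmodN(3, 2) * ZmodN(5, 2) == ZmodN(1, 2) * ZmodN(1, 2))  # True
```

Reducing by `% n` in the constructor is what makes the operations depend only on the coset, not on the chosen representative.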
Now consider the ring of polynomials in the variable X with real coefficients, ℝ[X], and the ideal I = (X² + 1) consisting of all multiples of the polynomial X² + 1. The quotient ring ℝ[X] / (X² + 1) is naturally isomorphic to the field of complex numbers ℂ, with the class [X] playing the role of the imaginary unit i. The reason is that we "forced" X² + 1 = 0, i.e. X² = −1, which is the defining property of i. Since any integer power of i must be ±i or ±1, all polynomials essentially simplify to the form a + bi. (To clarify, the quotient ring ℝ[X] / (X² + 1) is actually naturally isomorphic to the field of all linear polynomials aX + b, a, b ∈ ℝ, where the operations are performed modulo X² + 1. In return, we have X² = −1, which matches X to the imaginary unit in the isomorphic field of complex numbers.)
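A minimal sketch of this reduction, representing each residue class by its linear remainder aX + b as a pair `(a, b)`; the function name is an assumption for illustration.

```python
# Multiplication of residue classes in R[X]/(X^2 + 1), with each class
# represented by its linear remainder a*X + b.
def mult_mod_x2_plus_1(p, q):
    a, b = p  # p = a*X + b
    c, d = q  # q = c*X + d
    # (aX + b)(cX + d) = ac*X^2 + (ad + bc)*X + bd; reduce using X^2 = -1
    return (a * d + b * c, b * d - a * c)

# [X] plays the role of i: X * X reduces to -1
print(mult_mod_x2_plus_1((1, 0), (1, 0)))  # (0, -1), i.e. X^2 = -1

# The same product computed in C, identifying a*X + b with b + a*i
z = complex(3, 1) * complex(5, 2)        # (3 + i)(5 + 2i) = 13 + 11i
w = mult_mod_x2_plus_1((1, 3), (2, 5))   # (X + 3)(2X + 5) reduces to 11X + 13
print(z, w)  # (13+11j) (11, 13)
```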
Generalizing the previous example, quotient rings are often used to construct field extensions. Suppose K is some field and f is an irreducible polynomial in K[X]. Then L = K[X] / (f) is a field containing K as well as an element x = X + (f) whose minimal polynomial over K is f.
One important instance of the previous example is the construction of the finite fields. Consider for instance the field F₃ = ℤ/3ℤ with three elements. The polynomial f(X) = X² + 1 is irreducible over F₃ (since it has no root), and we can construct the quotient ring F₃[X] / (f). This is a field with 3² = 9 elements, denoted by F₉. The other finite fields can be constructed in a similar fashion.
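The nine-element field can be tabulated directly, again representing classes aX + b as pairs; the helper name `f9_mul` is hypothetical.

```python
# Sketch of the 9-element field F_3[X]/(X^2 + 1): elements are pairs (a, b)
# standing for a*X + b with a, b in {0, 1, 2}.
from itertools import product

def f9_mul(p, q):
    a, b = p
    c, d = q
    # reduce using X^2 = -1 = 2 (mod 3)
    return ((a * d + b * c) % 3, (b * d - a * c) % 3)

elements = list(product(range(3), repeat=2))
assert len(elements) == 9

# Field check: every nonzero element has a multiplicative inverse;
# (0, 1) is the class of the constant polynomial 1.
for p in elements:
    if p == (0, 0):
        continue
    assert any(f9_mul(p, q) == (0, 1) for q in elements)
print("all 8 nonzero elements of F_9 are invertible")
```

Replacing X² + 1 by a reducible polynomial would break the invertibility check, since the quotient would then have zero divisors.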
The coordinate rings of algebraic varieties are important examples of quotient rings in algebraic geometry. As a simple case, consider the real variety V = {(x, y) | x² = y³} as a subset of the real plane ℝ². The ring of real-valued polynomial functions defined on V can be identified with the quotient ring ℝ[X, Y] / (X² − Y³), and this is the coordinate ring of V. The variety V is now investigated by studying its coordinate ring.
Suppose M is a C^∞-manifold and p is a point of M. Consider the ring R = C^∞(M) of all C^∞-functions defined on M, and let I be the ideal in R consisting of those functions f which are identically zero in some neighborhood U of p (where U may depend on f). Then the quotient ring R / I is the ring of germs of C^∞-functions on M at p.
Consider the ring F of finite elements of a hyperreal field *ℝ. It consists of all hyperreal numbers differing from a standard real by an infinitesimal amount, or equivalently, of all hyperreal numbers x for which a standard integer n with −n < x < n exists. The set I of all infinitesimal numbers in *ℝ, together with 0, is an ideal in F, and the quotient ring F / I is isomorphic to the real numbers ℝ. The isomorphism is induced by associating to every element x of F the standard part of x, i.e. the unique real number that differs from x by an infinitesimal. In fact, one obtains the same result, namely ℝ, if one starts with the ring F of finite hyperrationals (i.e. ratios of pairs of hyperintegers); see construction of the real numbers.
=== Variations of complex planes ===
The quotients ℝ[X]/(X), ℝ[X]/(X + 1), and ℝ[X]/(X − 1) are all isomorphic to ℝ and gain little interest at first. But note that ℝ[X]/(X²) is called the dual number plane in geometric algebra. It consists only of linear binomials as "remainders" after reducing an element of ℝ[X] by X². This variation of a complex plane arises as a subalgebra whenever the algebra contains a real line and a nilpotent.
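Dual-number multiplication can be sketched with an illustrative `Dual` class; the defining relation is that the class of X (written eps here) squares to zero.

```python
# Sketch of the dual numbers R[X]/(X^2): classes a + b*eps with eps^2 = 0.
class Dual:
    def __init__(self, a, b):
        self.a, self.b = a, b  # represents a + b*eps

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    def __repr__(self):
        return f"{self.a} + {self.b}*eps"

eps = Dual(0, 1)
print(eps * eps)               # 0 + 0*eps: eps is nilpotent
print(Dual(2, 3) * Dual(4, 5)) # 8 + 22*eps
```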
Furthermore, the ring quotient ℝ[X]/(X² − 1) does split into ℝ[X]/(X + 1) and ℝ[X]/(X − 1), so this ring is often viewed as the direct sum ℝ ⊕ ℝ.
Nevertheless, a variation on complex numbers z = x + yj is suggested by j as a root of X² − 1 = 0, compared to i as a root of X² + 1 = 0. This plane of split-complex numbers normalizes the direct sum ℝ ⊕ ℝ by providing a basis {1, j} for 2-space where the identity of the algebra is at unit distance from the zero. With this basis a unit hyperbola may be compared to the unit circle of the ordinary complex plane.
=== Quaternions and variations ===
Suppose X and Y are two non-commuting indeterminates and form the free algebra ℝ⟨X, Y⟩. Then Hamilton's quaternions of 1843 can be cast as:

ℝ⟨X, Y⟩ / (X² + 1, Y² + 1, XY + YX)
If Y² − 1 is substituted for Y² + 1, then one obtains the ring of split-quaternions. The anti-commutative property YX = −XY implies that XY has as its square:

(XY)(XY) = X(YX)Y = −X(XY)Y = −(XX)(YY) = −(−1)(+1) = +1

Substituting minus for plus in both the quadratic binomials also results in split-quaternions.
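The relations used in this computation can be checked concretely in a 2×2 real matrix representation; the matrices chosen here are one standard option, used only as a sketch.

```python
# Concrete check of the split-quaternion relations:
# X^2 = -1, Y^2 = +1, YX = -XY, and hence (XY)^2 = +1.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def neg(A):
    return [[-x for x in row] for row in A]

I = [[1, 0], [0, 1]]
X = [[0, -1], [1, 0]]   # a 90-degree rotation: X^2 = -I
Y = [[1, 0], [0, -1]]   # a reflection: Y^2 = I

assert matmul(X, X) == neg(I)             # X^2 + 1 = 0
assert matmul(Y, Y) == I                  # Y^2 - 1 = 0
assert matmul(Y, X) == neg(matmul(X, Y))  # anti-commutation YX = -XY
XY = matmul(X, Y)
assert matmul(XY, XY) == I                # (XY)^2 = +1, as computed above
print("split-quaternion relations verified")
```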
The three types of biquaternions can also be written as quotients by use of the free algebra with three indeterminates ℝ⟨X, Y, Z⟩ and constructing appropriate ideals.
== Properties ==
Clearly, if R is a commutative ring, then so is R / I; the converse, however, is not true in general.
The natural quotient map p has I as its kernel; since the kernel of every ring homomorphism is a two-sided ideal, we can state that two-sided ideals are precisely the kernels of ring homomorphisms.
The intimate relationship between ring homomorphisms, kernels and quotient rings can be summarized as follows: the ring homomorphisms defined on R / I are essentially the same as the ring homomorphisms defined on R that vanish (i.e. are zero) on I. More precisely, given a two-sided ideal I in R and a ring homomorphism f : R → S whose kernel contains I, there exists precisely one ring homomorphism g : R / I → S with g ∘ p = f (where p is the natural quotient map). The map g here is given by the well-defined rule g([a]) = f(a) for all a in R. Indeed, this universal property can be used to define quotient rings and their natural quotient maps.
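The universal property can be illustrated with R = ℤ, I = 6ℤ, and S = ℤ/2ℤ; the function names `p`, `f`, and `g` mirror the text and are otherwise assumptions.

```python
# f(a) = a mod 2 has kernel 2Z, which contains 6Z, so f factors as f = g ∘ p
# through the quotient Z/6Z.
def p(a):          # natural quotient map Z -> Z/6Z
    return a % 6

def f(a):          # ring homomorphism Z -> Z/2Z
    return a % 2

def g(coset):      # induced map Z/6Z -> Z/2Z, g([a]) = f(a)
    return coset % 2

# g depends only on the coset (6 is even), and g(p(a)) = f(a) for all a
assert all(g(p(a)) == f(a) for a in range(-50, 50))
print("f factors through the quotient: f = g ∘ p")
```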
As a consequence of the above, one obtains the fundamental statement: every ring homomorphism f : R → S induces a ring isomorphism between the quotient ring R / ker(f) and the image im(f). (See also: Fundamental theorem on homomorphisms.)
The ideals of R and R / I are closely related: the natural quotient map provides a bijection between the two-sided ideals of R that contain I and the two-sided ideals of R / I (the same is true for left and for right ideals). This relationship between two-sided ideals extends to a relationship between the corresponding quotient rings: if M is a two-sided ideal in R that contains I, and we write M / I for the corresponding ideal in R / I (i.e. M / I = p(M)), then the quotient rings R / M and (R / I) / (M / I) are naturally isomorphic via the (well-defined) mapping a + M ↦ (a + I) + M / I.
The following facts prove useful in commutative algebra and algebraic geometry: for R ≠ {0} commutative, R / I is a field if and only if I is a maximal ideal, while R / I is an integral domain if and only if I is a prime ideal. A number of similar statements relate properties of the ideal I to properties of the quotient ring R / I.
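A small sketch contrasting the two cases over ℤ: the ideal (5) is maximal, so ℤ/5ℤ is a field, while (6) is not even prime, so ℤ/6ℤ has zero divisors. Helper names are illustrative.

```python
# Compare Z/5Z (a field) with Z/6Z (not even an integral domain).
def units(n):
    # residues with a multiplicative inverse mod n
    return [a for a in range(1, n) if any(a * b % n == 1 for b in range(1, n))]

def zero_divisors(n):
    return [a for a in range(1, n) if any(a * b % n == 0 for b in range(1, n))]

assert units(5) == [1, 2, 3, 4]   # every nonzero class invertible: a field
assert 2 in zero_divisors(6)      # 2 * 3 = 6 = 0 in Z/6Z
print("Z/5Z is a field; Z/6Z is not an integral domain")
```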
The Chinese remainder theorem states that, if the ideal I is the intersection (or equivalently, the product) of pairwise coprime ideals I₁, …, I_k, then the quotient ring R / I is isomorphic to the product of the quotient rings R / I_n, n = 1, …, k.
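The theorem can be illustrated with R = ℤ and I = 6ℤ = 2ℤ ∩ 3ℤ; the name `crt_map` is an assumption for illustration.

```python
# The map a + 6Z -> (a + 2Z, a + 3Z) is a ring isomorphism Z/6Z -> Z/2Z x Z/3Z.
def crt_map(a):
    return (a % 2, a % 3)

images = {crt_map(a) for a in range(6)}
assert len(images) == 6        # injective, hence bijective onto the product

# It respects both ring operations, coset-wise
for a in range(6):
    for b in range(6):
        assert crt_map((a + b) % 6) == ((a + b) % 2, (a + b) % 3)
        assert crt_map((a * b) % 6) == ((a * b) % 2, (a * b) % 3)
print("Z/6Z is isomorphic to Z/2Z x Z/3Z")
```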
== For algebras over a ring ==
An associative algebra A over a commutative ring R is a ring itself. If I is an ideal in A (closed under R-multiplication), then A / I inherits the structure of an algebra over R and is the quotient algebra.
== See also ==
Associated graded ring
Residue field
Goldie's theorem
Quotient module
== Notes ==
== Further references ==
F. Kasch (1978) Moduln und Ringe, translated by DAR Wallace (1982) Modules and Rings, Academic Press, page 33.
Neal H. McCoy (1948) Rings and Ideals, §13 Residue class rings, page 61, Carus Mathematical Monographs #8, Mathematical Association of America.
Joseph Rotman (1998). Galois Theory (2nd ed.). Springer. pp. 21–23. ISBN 0-387-98541-7.
B.L. van der Waerden (1970) Algebra, translated by Fred Blum and John R Schulenberger, Frederick Ungar Publishing, New York. See Chapter 3.5, "Ideals. Residue Class Rings", pp. 47–51.
== External links ==
"Quotient ring", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Ideals and factor rings from John Beachy's Abstract Algebra Online
In algebra, the kernel of a homomorphism is the relation describing how elements in the domain of the homomorphism become related in the image. A homomorphism is a function that preserves the underlying algebraic structure in the domain to its image.
When the algebraic structures involved have an underlying group structure, the kernel is taken to be the preimage of the group's identity element in the image, that is, it consists of the elements of the domain mapping to the image's identity. For example, the map that sends every integer to its parity (that is, 0 if the number is even, 1 if the number is odd) is a homomorphism to the integers modulo 2, and its kernel is the even integers, which all have 0 as their parity. The kernel of a homomorphism of group-like structures contains only the identity if and only if the homomorphism is injective, that is, if the inverse image of every element consists of a single element. This means that the kernel can be viewed as a measure of the degree to which the homomorphism fails to be injective.
For some types of structure, such as abelian groups and vector spaces, the possible kernels are exactly the substructures of the same type. This is not always the case, and some kernels have received a special name, such as normal subgroups for groups and two-sided ideals for rings. The concept of a kernel has been extended to structures such that the inverse image of a single element is not sufficient for deciding whether a homomorphism is injective. In these cases, the kernel is a congruence relation.
Kernels allow defining quotient objects (also called quotient algebras in universal algebra). For many types of algebraic structure, the fundamental theorem on homomorphisms (or first isomorphism theorem) states that the image of a homomorphism is isomorphic to the quotient by the kernel.
== Definition ==
=== Group homomorphisms ===
Let G and H be groups and let f be a group homomorphism from G to H. If eH is the identity element of H, then the kernel of f is the preimage of the singleton set {eH}; that is, the subset of G consisting of all those elements of G that are mapped by f to the element eH.
The kernel is usually denoted ker f (or a variation). In symbols:
ker f = {g ∈ G : f(g) = e_H}.
Since a group homomorphism preserves identity elements, the identity element eG of G must belong to the kernel. The homomorphism f is injective if and only if its kernel is only the singleton set {eG}.
ker f is a subgroup of G and further it is a normal subgroup. Thus, there is a corresponding quotient group G / (ker f). This is isomorphic to f(G), the image of G under f (which is a subgroup of H also), by the first isomorphism theorem for groups.
=== Ring homomorphisms ===
Let R and S be rings (assumed unital) and let f be a ring homomorphism from R to S.
If 0_S is the zero element of S, then the kernel of f is its kernel as a map of additive groups. It is the preimage of the zero ideal {0_S}; that is, the subset of R consisting of all those elements of R that are mapped by f to the element 0_S.
The kernel is usually denoted ker f (or a variation).
In symbols:
ker f = {r ∈ R : f(r) = 0_S}.
Since a ring homomorphism preserves zero elements, the zero element 0R of R must belong to the kernel.
The homomorphism f is injective if and only if its kernel is only the singleton set {0R}.
This is always the case if R is a field, and S is not the zero ring.
Since ker f contains the multiplicative identity only when S is the zero ring, it turns out that the kernel is generally not a subring of R. The kernel is a subrng, and, more precisely, a two-sided ideal of R.
Thus, it makes sense to speak of the quotient ring R / (ker f).
The first isomorphism theorem for rings states that this quotient ring is naturally isomorphic to the image of f (which is a subring of S).
=== Linear maps ===
Let V and W be vector spaces over a field (or more generally, modules over a ring) and let T be a linear map from V to W. If 0W is the zero vector of W, then the kernel of T (or null space) is the preimage of the zero subspace {0W}; that is, the subset of V consisting of all those elements of V that are mapped by T to the element 0W. The kernel is usually denoted as ker T, or some variation thereof:
ker T = {v ∈ V : T(v) = 0_W}.
Since a linear map preserves zero vectors, the zero vector 0V of V must belong to the kernel. The transformation T is injective if and only if its kernel is reduced to the zero subspace.
The kernel ker T is always a linear subspace of V. Thus, it makes sense to speak of the quotient space V / (ker T). The first isomorphism theorem for vector spaces states that this quotient space is naturally isomorphic to the image of T (which is a subspace of W). As a consequence, the dimension of V equals the dimension of the kernel plus the dimension of the image.
=== Module homomorphisms ===
Let R be a ring, and let M and N be R-modules. If φ : M → N is a module homomorphism, then the kernel is defined to be:

ker φ = {m ∈ M | φ(m) = 0}

Every kernel is a submodule of the domain module, which means it always contains 0, the additive identity of the module. Kernels of abelian group homomorphisms can be considered a particular kind of module kernel, when the underlying ring is the integers.
== Survey of examples ==
=== Group homomorphisms ===
Let G be the cyclic group on 6 elements {0, 1, 2, 3, 4, 5} with modular addition, H the cyclic group on 2 elements {0, 1} with modular addition, and f the homomorphism that maps each element g in G to the element g modulo 2 in H. Then ker f = {0, 2, 4}, since all these elements are mapped to 0_H. The quotient group G / (ker f) has two elements, {0, 2, 4} and {1, 3, 5}, and is isomorphic to H.
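The example above can be sketched directly:

```python
# G = Z/6Z, H = Z/2Z, f(g) = g mod 2.
G = range(6)
f = lambda g: g % 2

kernel = [g for g in G if f(g) == 0]
assert kernel == [0, 2, 4]

# The two cosets of the kernel partition G and form the quotient group
cosets = {tuple(sorted((g + k) % 6 for k in kernel)) for g in G}
assert cosets == {(0, 2, 4), (1, 3, 5)}
print("ker f =", kernel, "and G/ker f has", len(cosets), "elements")
```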
Given an isomorphism φ : G → H, one has ker φ = 1, the trivial subgroup. On the other hand, if this mapping is merely a homomorphism where H is the trivial group, then φ(g) = 1 for all g ∈ G, so ker φ = G.
Let φ : ℝ² → ℝ be the map defined as φ((x, y)) = x. Then this is a homomorphism whose kernel consists precisely of the points of the form (0, y). This mapping is considered the "projection onto the x-axis." A similar phenomenon occurs with the mapping f : (ℝ^×)² → ℝ^× defined as f(a, b) = b, where the kernel is the points of the form (a, 1).
For a non-abelian example, let Q₈ denote the quaternion group and V₄ the Klein four-group. Define a mapping φ : Q₈ → V₄ by:

φ(±1) = 1
φ(±i) = a
φ(±j) = b
φ(±k) = c

Then this mapping is a homomorphism with ker φ = {±1}.
=== Ring homomorphisms ===
Consider the mapping φ : ℤ → ℤ/2ℤ, where the latter ring is the integers modulo 2 and the map sends each number to its parity: 0 for even numbers and 1 for odd numbers. This mapping is a homomorphism, and since the additive identity of the latter ring is 0, the kernel is precisely the even numbers.
Let φ : ℚ[x] → ℚ be defined as φ(p(x)) = p(0). This mapping, which is a homomorphism, sends each polynomial to its constant term. It maps a polynomial to zero if and only if the polynomial's constant term is 0. Polynomials with real coefficients admit a similar homomorphism, with kernel the polynomials with constant term 0.
=== Linear maps ===
Let φ : ℂ³ → ℂ be defined as φ(x, y, z) = x + 2y + 3z. Then the kernel of φ (that is, the null space) is the set of points (x, y, z) ∈ ℂ³ such that x + 2y + 3z = 0, and this set is a subspace of ℂ³ (the same is true for every kernel of a linear map).
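A quick numerical sketch of this kernel, using two spanning vectors chosen for illustration:

```python
# The kernel of phi(x, y, z) = x + 2y + 3z on C^3 is the plane spanned by
# (-2, 1, 0) and (-3, 0, 1).
phi = lambda v: v[0] + 2 * v[1] + 3 * v[2]

b1, b2 = (-2, 1, 0), (-3, 0, 1)
assert phi(b1) == 0 and phi(b2) == 0

# Any complex linear combination of b1 and b2 also lies in the kernel
s, t = 1 + 2j, 3 - 1j
v = tuple(s * a + t * b for a, b in zip(b1, b2))
assert phi(v) == 0
print("kernel contains the span of", b1, "and", b2)
```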
If D represents the derivative operator on real polynomials, then the kernel of D consists of the polynomials with derivative equal to 0, that is, the constant functions.
Consider the mapping (Tp)(x) = x²p(x), where p is a polynomial with real coefficients. Then T is a linear map whose kernel is precisely the zero polynomial, since that is the only polynomial satisfying x²p(x) = 0 for all x ∈ ℝ.
== Quotient algebras ==
The kernel of a homomorphism can be used to define a quotient algebra. For instance, if
φ
:
G
→
H
{\displaystyle \varphi :G\to H}
denotes a group homomorphism, and denote
K
=
ker
φ
{\displaystyle K=\ker \varphi }
, then consider
G
/
K
{\displaystyle G/K}
to be the set of fibers of the homomorphism
φ
{\displaystyle \varphi }
, where a fiber is merely the set of points of the domain mapping to a single chosen point in the range. If
X
a
∈
G
/
K
{\displaystyle X_{a}\in G/K}
denotes the fiber of the element
a
∈
H
{\displaystyle a\in H}
, then a group operation on the set of fibers can be endowed by
X
a
X
b
=
X
a
b
{\displaystyle X_{a}X_{b}=X_{ab}}
, and
G
/
K
{\displaystyle G/K}
is called the quotient group (or factor group), to be read as "G modulo K" or "G mod K". The terminology arises from the fact that the kernel represents the fiber of the identity element of the range,
H
{\displaystyle H}
, and that the remaining elements are simply "translates" of the kernel, so the quotient group is obtained by "dividing out" by the kernel.
The fibers can also be described by looking at the domain relative to the kernel: given X ∈ G/K and any element u ∈ X, then X = uK = Ku, where:

uK = {uk | k ∈ K}
Ku = {ku | k ∈ K}

These sets are called the left and right cosets respectively, and can be defined in general for any arbitrary subgroup aside from the kernel. The group operation can then be defined as uK ∘ vK = (uv)K, which is well-defined regardless of the choice of representatives of the fibers.
According to the first isomorphism theorem, there is an isomorphism μ : G/K → φ(G), where the latter group is the image of the homomorphism φ; the isomorphism is defined as μ(uK) = φ(u), and this map is likewise well-defined.
For rings, modules, and vector spaces, one can define the respective quotient algebras via the underlying additive group structure, with cosets represented as x + K. Ring multiplication can be defined on the quotient algebra the same way as in the group case (and is well-defined). For a ring R (possibly a field, when describing vector spaces) and a module homomorphism φ : M → N with kernel K = ker φ, one can define scalar multiplication on M/K by r(x + K) = rx + K for r ∈ R and x ∈ M, which is also well-defined.
== Kernel structures ==
The structure of kernels allows for the building of quotient algebras from structures satisfying the properties of kernels. From any subgroup N of a group G one can construct a quotient set G/N consisting of all cosets of N in G. The natural way to turn this into a group, similar to the treatment for the quotient by a kernel, is to define an operation on (left) cosets by uN ⋅ vN = (uv)N; however, this operation is well-defined if and only if the subgroup N is closed under conjugation by G, that is, if g ∈ G and n ∈ N, then gng⁻¹ ∈ N. Furthermore, the operation being well-defined is sufficient for the quotient to be a group. Subgroups satisfying this property are called normal subgroups. Every kernel of a group homomorphism is a normal subgroup, and for a given normal subgroup N of a group G, the natural projection π(g) = gN is a homomorphism with ker π = N, so the normal subgroups are precisely the subgroups which are kernels. Thus, closure under conjugation gives a criterion for when a subgroup is a kernel for some homomorphism.
For a ring R, treating it as a group under addition, one can take a quotient group via an arbitrary subgroup I of the ring, which is automatically normal because the ring's additive group is abelian. To define multiplication on R/I, the multiplication of cosets, defined as (r + I)(s + I) = rs + I, needs to be well defined. Taking representatives r + α and s + β of r + I and s + I respectively, for r, s ∈ R and α, β ∈ I, yields:

(r + α)(s + β) + I = rs + I

Setting r = s = 0 implies that I is closed under multiplication, while setting α = s = 0 shows that rβ ∈ I, that is, I is closed under arbitrary multiplication by elements on the left. Similarly, taking r = β = 0 implies that I is also closed under multiplication by arbitrary elements on the right. Any additive subgroup of R that is closed under multiplication by any element of the ring is called an ideal. Analogously to normal subgroups, the ideals of a ring are precisely the kernels of ring homomorphisms.
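The well-definedness computation above can be verified numerically (a Python sketch; the ideal I = 6ℤ in ℤ and the sample ranges are arbitrary choices): the product of two cosets does not depend on which representatives are chosen.

```python
def same_coset(a, b, n):
    # a ~ b modulo the ideal nZ iff a - b is a multiple of n
    return (a - b) % n == 0

n = 6  # the ideal I = 6Z inside the ring Z
for r in range(-5, 6):
    for s in range(-5, 6):
        for alpha in (-n, 0, n, 2 * n):      # elements of the ideal
            for beta in (-n, 0, n, 2 * n):
                # (r + alpha)(s + beta) must land in the coset rs + nZ
                assert same_coset((r + alpha) * (s + beta), r * s, n)
print("coset multiplication modulo", n, "is well defined")
```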
== Exact sequence ==
Kernels are used to define exact sequences of homomorphisms for groups and modules. If A, B, and C are modules, then a pair of homomorphisms ψ : A → B, φ : B → C is said to be exact if image ψ = ker φ. An exact sequence is then a sequence of modules and homomorphisms

⋯ → Xₙ₋₁ → Xₙ → Xₙ₊₁ → ⋯

where each adjacent pair of homomorphisms is exact.
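A small finite illustration (a Python sketch; the short sequence 0 → ℤ/2 → ℤ/4 → ℤ/2 → 0 with ψ(x) = 2x and φ(y) = y mod 2 is an assumed example): exactness at ℤ/4 means image ψ = ker φ.

```python
Z2 = range(2)
Z4 = range(4)

psi = lambda x: (2 * x) % 4  # Z/2 -> Z/4, injective
phi = lambda y: y % 2        # Z/4 -> Z/2, surjective

image_psi = {psi(x) for x in Z2}
ker_phi = {y for y in Z4 if phi(y) == 0}

print(image_psi == ker_phi)  # True: the sequence is exact at Z/4
```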
== Universal algebra ==
All the above cases may be unified and generalized in universal algebra. Let A and B be algebraic structures of a given type and let f be a homomorphism of that type from A to B. Then the kernel of f is the subset of the direct product A × A consisting of all those ordered pairs of elements of A whose components are both mapped by f to the same element in B. The kernel is usually denoted ker f (or a variation). In symbols:
ker f = {(a, b) ∈ A × A : f(a) = f(b)}.
The homomorphism f is injective if and only if its kernel is exactly the diagonal set {(a, a) : a ∈ A}, which is always at least contained inside the kernel.
It is easy to see that ker f is an equivalence relation on A, and in fact a congruence relation.
Thus, it makes sense to speak of the quotient algebra A / (ker f).
The first isomorphism theorem in general universal algebra states that this quotient algebra is naturally isomorphic to the image of f (which is a subalgebra of B).
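For instance (a Python sketch; the homomorphism f : ℤ/6 → ℤ/3, f(a) = a mod 3, is an assumed example), the kernel in this sense is a set of pairs, and one can check directly that it is a congruence: an equivalence relation compatible with addition.

```python
A = range(6)
f = lambda a: a % 3  # a homomorphism from Z/6 to Z/3

ker_f = {(a, b) for a in A for b in A if f(a) == f(b)}

# reflexive, symmetric, transitive
assert all((a, a) in ker_f for a in A)
assert all((b, a) in ker_f for (a, b) in ker_f)
# compatible with the operation: a ~ b and c ~ d imply a + c ~ b + d
assert all(((a + c) % 6, (b + d) % 6) in ker_f
           for (a, b) in ker_f for (c, d) in ker_f)
print(len(ker_f))  # 12
```

The diagonal {(a, a)} is strictly contained in ker f here, so f is not injective, consistent with the criterion above.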
== See also ==
Kernel (linear algebra)
Kernel (category theory)
Kernel of a function
Equalizer (mathematics)
Zero set
== Notes ==
== References ==
Axler, Sheldon. Linear Algebra Done Right (4th ed.). Springer.
Burris, Stanley; Sankappanavar, H.P. (2012). A Course in Universal Algebra (Millennium ed.). S. Burris and H.P. Sankappanavar. ISBN 978-0-9880552-0-9.
Dummit, David Steven; Foote, Richard M. (2004). Abstract algebra (3rd ed.). Hoboken, NJ: Wiley. ISBN 978-0-471-43334-7.
Fraleigh, John B.; Katz, Victor (2003). A first course in abstract algebra. World student series (7th ed.). Boston: Addison-Wesley. ISBN 978-0-201-76390-4.
Hungerford, Thomas W. (2014). Abstract Algebra: an introduction (3rd ed.). Boston, MA: Brooks/Cole, Cengage Learning. ISBN 978-1-111-56962-4.
McKenzie, Ralph; McNulty, George F.; Taylor, W. (1987). Algebras, lattices, varieties. The Wadsworth & Brooks/Cole mathematics series. Monterey, Calif: Wadsworth & Brooks/Cole Advanced Books & Software. ISBN 978-0-534-07651-1. | Wikipedia/Kernel_(ring_theory) |
In set theory and its applications throughout mathematics, a subclass is a class contained in some other class in the same way that a subset is a set contained in some other set. One may also call this "inclusion of classes".
That is, given classes A and B, A is a subclass of B if and only if every member of A is also a member of B. In fact, when using a definition of classes that requires them to be first-order definable, it is enough that B be a set; the axiom of specification essentially says that A must then also be a set.
As with subsets, the empty set is a subclass of every class, and any class is a subclass of itself. But additionally, every class is a subclass of the class of all sets. Accordingly, the subclass relation makes the collection of all classes into a Boolean lattice, which the subset relation does not do for the collection of all sets. Instead, the collection of all sets is an ideal in the collection of all classes. (Of course, the collection of all classes is something larger than even a class!)
== References == | Wikipedia/Subclass_(set_theory) |
In mathematics, a field F is algebraically closed if every non-constant polynomial in F[x] (the univariate polynomial ring with coefficients in F) has a root in F. In other words, a field is algebraically closed if the fundamental theorem of algebra holds for it.
Every field K is contained in an algebraically closed field C, and the roots in C of the polynomials with coefficients in K form an algebraically closed field called an algebraic closure of K. Given two algebraic closures of K, there are isomorphisms between them that fix the elements of K.
Algebraically closed fields appear in the following chain of class inclusions:
rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ euclidean domains ⊃ fields ⊃ algebraically closed fields
== Examples ==
As an example, the field of real numbers is not algebraically closed, because the polynomial equation x² + 1 = 0 has no solution in real numbers, even though all its coefficients (1 and 0) are real. The same argument proves that no subfield of the real field is algebraically closed; in particular, the field of rational numbers is not algebraically closed. By contrast, the fundamental theorem of algebra states that the field of complex numbers is algebraically closed. Another example of an algebraically closed field is the field of (complex) algebraic numbers.
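This can be checked directly (a Python sketch using the standard library's complex arithmetic; the sampling grid is an arbitrary choice): x² + 1 is bounded below by 1 on the reals, while ±i satisfy it over ℂ.

```python
import cmath

# no real x satisfies x^2 + 1 = 0, since x^2 + 1 >= 1 for real x
assert all(x * x + 1 >= 1 for x in [k / 10 for k in range(-50, 51)])

# over the complex numbers the roots exist: x = +-i
root = cmath.sqrt(-1)
print(root)           # 1j
print(root ** 2 + 1)  # 0j
```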
No finite field F is algebraically closed, because if a1, a2, ..., an are the elements of F, then the polynomial (x − a1)(x − a2) ⋯ (x − an) + 1
has no zero in F. However, the union of all finite fields of a fixed characteristic p (p prime) is an algebraically closed field, which is, in fact, the algebraic closure of the field 𝔽ₚ with p elements.
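The argument can be verified for a small case (a Python sketch; F = 𝔽₅ is an assumed choice): the polynomial (x − 0)(x − 1)⋯(x − 4) + 1 takes the value 1 at every element of F, so it has no root there.

```python
p = 5
F = range(p)  # the field F_5 = {0, 1, 2, 3, 4}

def poly(x):
    # (x - a1)(x - a2)...(x - an) + 1, evaluated modulo p
    prod = 1
    for a in F:
        prod = (prod * (x - a)) % p
    return (prod + 1) % p

values = [poly(x) for x in F]
print(values)  # [1, 1, 1, 1, 1] -- no zero, so no root in F_5
```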
The field ℂ(x) of rational functions with complex coefficients is not algebraically closed; for example, the polynomial y² − x has roots ±√x, which are not elements of ℂ(x).
== Equivalent properties ==
Given a field F, the assertion "F is algebraically closed" is equivalent to other assertions:
=== The only irreducible polynomials are those of degree one ===
The field F is algebraically closed if and only if the only irreducible polynomials in the polynomial ring F[x] are those of degree one.
The assertion "the polynomials of degree one are irreducible" is trivially true for any field. If F is algebraically closed and p(x) is an irreducible polynomial of F[x], then it has some root a and therefore p(x) is a multiple of x − a. Since p(x) is irreducible, this means that p(x) = k(x − a), for some k ∈ F \ {0} . On the other hand, if F is not algebraically closed, then there is some non-constant polynomial p(x) in F[x] without roots in F. Let q(x) be some irreducible factor of p(x). Since p(x) has no roots in F, q(x) also has no roots in F. Therefore, q(x) has degree greater than one, since every first degree polynomial has one root in F.
=== Every polynomial is a product of first degree polynomials ===
The field F is algebraically closed if and only if every polynomial p(x) of degree n ≥ 1, with coefficients in F, splits into linear factors. In other words, there are elements k, x1, x2, ..., xn of the field F such that p(x) = k(x − x1)(x − x2) ⋯ (x − xn).
If F has this property, then clearly every non-constant polynomial in F[x] has some root in F; in other words, F is algebraically closed. On the other hand, that the property stated here holds for F if F is algebraically closed follows from the previous property together with the fact that, for any field K, any polynomial in K[x] can be written as a product of irreducible polynomials.
=== Polynomials of prime degree have roots ===
If every polynomial over F of prime degree has a root in F, then every non-constant polynomial has a root in F. It follows that a field is algebraically closed if and only if every polynomial over F of prime degree has a root in F.
=== The field has no proper algebraic extension ===
The field F is algebraically closed if and only if it has no proper algebraic extension.
If F has no proper algebraic extension, let p(x) be some irreducible polynomial in F[x]. Then the quotient of F[x] modulo the ideal generated by p(x) is an algebraic extension of F whose degree is equal to the degree of p(x). Since it is not a proper extension, its degree is 1 and therefore the degree of p(x) is 1.
On the other hand, if F has some proper algebraic extension K, then the minimal polynomial of an element in K \ F is irreducible and its degree is greater than 1.
=== The field has no proper finite extension ===
The field F is algebraically closed if and only if it has no proper finite extension because if, within the previous proof, the term "algebraic extension" is replaced by the term "finite extension", then the proof is still valid. (Finite extensions are necessarily algebraic.)
=== Every endomorphism of Fn has some eigenvector ===
The field F is algebraically closed if and only if, for each natural number n, every linear map from Fn into itself has some eigenvector.
An endomorphism of Fn has an eigenvector if and only if its characteristic polynomial has some root. Therefore, when F is algebraically closed, every endomorphism of Fn has some eigenvector. On the other hand, if every endomorphism of Fn has an eigenvector, let p(x) be an element of F[x]. Dividing by its leading coefficient, we get another polynomial q(x) which has roots if and only if p(x) has roots. But if q(x) = xn + an − 1 xn − 1 + ⋯ + a0, then q(x) is the characteristic polynomial of the n×n companion matrix
{\displaystyle {\begin{pmatrix}0&0&\cdots &0&-a_{0}\\1&0&\cdots &0&-a_{1}\\0&1&\cdots &0&-a_{2}\\\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&\cdots &1&-a_{n-1}\end{pmatrix}}.}
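For n = 2 the connection can be confirmed by hand (a Python sketch; the polynomial q(x) = x² + 1 is an assumed example): the companion matrix A of q satisfies q(A) = 0, so the eigenvalues of A are exactly the roots of q, and over ℝ the endomorphism has no eigenvector, matching the fact that ℝ is not algebraically closed.

```python
# companion matrix of q(x) = x^2 + a1*x + a0, here q(x) = x^2 + 1
a0, a1 = 1, 0
A = [[0, -a0],
     [1, -a1]]

def matmul(X, Y):
    # product of two 2x2 integer matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = matmul(A, A)
# q(A) = A^2 + a1*A + a0*I should be the zero matrix
qA = [[A2[i][j] + a1 * A[i][j] + a0 * (1 if i == j else 0)
       for j in range(2)] for i in range(2)]
print(qA)  # [[0, 0], [0, 0]]
```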
=== Decomposition of rational expressions ===
The field F is algebraically closed if and only if every rational function in one variable x, with coefficients in F, can be written as the sum of a polynomial function with rational functions of the form a/(x − b)n, where n is a natural number, and a and b are elements of F.
If F is algebraically closed then, since the irreducible polynomials in F[x] are all of degree 1, the property stated above holds by the theorem on partial fraction decomposition.
On the other hand, suppose that the property stated above holds for the field F. Let p(x) be an irreducible element in F[x]. Then the rational function 1/p can be written as the sum of a polynomial function q with rational functions of the form a/(x – b)n. Therefore, the rational expression
1/p(x) − q(x) = (1 − p(x)q(x)) / p(x)
can be written as a quotient of two polynomials in which the denominator is a product of first degree polynomials. Since p(x) is irreducible, it must divide this product and, therefore, it must also be a first degree polynomial.
=== Relatively prime polynomials and roots ===
For any field F, if two polynomials p(x), q(x) ∈ F[x] are relatively prime then they do not have a common root, for if a ∈ F was a common root, then p(x) and q(x) would both be multiples of x − a and therefore they would not be relatively prime. The fields for which the reverse implication holds (that is, the fields such that whenever two polynomials have no common root then they are relatively prime) are precisely the algebraically closed fields.
If the field F is algebraically closed, let p(x) and q(x) be two polynomials which are not relatively prime and let r(x) be their greatest common divisor. Then, since r(x) is not constant, it will have some root a, which will be then a common root of p(x) and q(x).
If F is not algebraically closed, let p(x) be a polynomial whose degree is at least 1 and which has no roots in F. Then p(x) and p(x) are not relativelyly prime is false to say they are: the pair p(x), p(x) is not relatively prime, yet the two have no common root (since neither has any roots at all).
== Other properties ==
If F is an algebraically closed field and n is a natural number, then F contains all nth roots of unity, because these are (by definition) the n (not necessarily distinct) zeroes of the polynomial xn − 1. A field extension that is contained in an extension generated by the roots of unity is a cyclotomic extension, and the extension of a field generated by all roots of unity is sometimes called its cyclotomic closure. Thus algebraically closed fields are cyclotomically closed. The converse is not true. Even assuming that every polynomial of the form xn − a splits into linear factors is not enough to assure that the field is algebraically closed.
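Numerically (a Python sketch; n = 8 is an arbitrary choice), the nth roots of unity are e^{2πik/n}, and each is a zero of xⁿ − 1 up to floating-point error:

```python
import cmath

n = 8
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

# each root satisfies x^n - 1 = 0 (up to rounding)
assert all(abs(z ** n - 1) < 1e-12 for z in roots)
# the n roots are pairwise distinct
assert all(abs(roots[i] - roots[j]) > 1e-9
           for i in range(n) for j in range(i))
print(len(roots))  # 8
```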
If a proposition which can be expressed in the language of first-order logic is true for an algebraically closed field, then it is true for every algebraically closed field with the same characteristic. Furthermore, if such a proposition is valid for an algebraically closed field with characteristic 0, then not only is it valid for all other algebraically closed fields with characteristic 0, but there is some natural number N such that the proposition is valid for every algebraically closed field with characteristic p when p > N.
Every field F has some extension which is algebraically closed. Such an extension is called an algebraically closed extension. Among all such extensions there is one and only one (up to isomorphism, but not unique isomorphism) which is an algebraic extension of F; it is called the algebraic closure of F.
The theory of algebraically closed fields has quantifier elimination.
== Notes ==
== References == | Wikipedia/Algebraically_closed_field |
In mathematics, a module is a generalization of the notion of vector space in which the field of scalars is replaced by a (not necessarily commutative) ring. The concept of a module also generalizes the notion of an abelian group, since the abelian groups are exactly the modules over the ring of integers.
Like a vector space, a module is an additive abelian group, and scalar multiplication is distributive over the operations of addition between elements of the ring or module and is compatible with the ring multiplication.
Modules are very closely related to the representation theory of groups. They are also one of the central notions of commutative algebra and homological algebra, and are used widely in algebraic geometry and algebraic topology.
== Introduction and definition ==
=== Motivation ===
In a vector space, the set of scalars is a field and acts on the vectors by scalar multiplication, subject to certain axioms such as the distributive law. In a module, the scalars need only be a ring, so the module concept represents a significant generalization. In commutative algebra, both ideals and quotient rings are modules, so that many arguments about ideals or quotient rings can be combined into a single argument about modules. In non-commutative algebra, the distinction between left ideals, ideals, and modules becomes more pronounced, though some ring-theoretic conditions can be expressed either about left ideals or left modules.
Much of the theory of modules consists of extending as many of the desirable properties of vector spaces as possible to the realm of modules over a "well-behaved" ring, such as a principal ideal domain. However, modules can be quite a bit more complicated than vector spaces; for instance, not all modules have a basis, and, even for those that do (free modules), the number of elements in a basis need not be the same for all bases (that is to say that they may not have a unique rank) if the underlying ring does not satisfy the invariant basis number condition, unlike vector spaces, which always have a (possibly infinite) basis whose cardinality is then unique. (These last two assertions require the axiom of choice in general, but not in the case of finite-dimensional vector spaces, or certain well-behaved infinite-dimensional vector spaces such as Lp spaces.)
=== Formal definition ===
Suppose that R is a ring, and 1 is its multiplicative identity.
A left R-module M consists of an abelian group (M, +) and an operation · : R × M → M such that for all r, s in R and x, y in M, we have
1. r · (x + y) = r · x + r · y
2. (r + s) · x = r · x + s · x
3. (rs) · x = r · (s · x)
4. 1 · x = x.
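The four axioms can be checked exhaustively for a small example (a Python sketch; the assumed module is ℤ/4 as a left module over the ring ℤ/4, with · the ring multiplication):

```python
n = 4
R = range(n)  # the ring Z/4
M = range(n)  # Z/4 viewed as a left module over itself

mul = lambda r, x: (r * x) % n  # scalar multiplication
add = lambda x, y: (x + y) % n  # addition in M

for r in R:
    for s in R:
        for x in M:
            for y in M:
                assert mul(r, add(x, y)) == add(mul(r, x), mul(r, y))    # axiom 1
                assert mul((r + s) % n, x) == add(mul(r, x), mul(s, x))  # axiom 2
                assert mul((r * s) % n, x) == mul(r, mul(s, x))          # axiom 3
for x in M:
    assert mul(1, x) == x  # axiom 4
print("all four module axioms hold")
```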
The operation · is called scalar multiplication. Often the symbol · is omitted, but in this article we use it and reserve juxtaposition for multiplication in R. One may write RM to emphasize that M is a left R-module. A right R-module MR is defined similarly in terms of an operation · : M × R → M.
Whether a module is a left module or a right module does not depend on whether the scalars are written on the left or on the right, but on property 3: if, in the above definition, property 3 is replaced by

(rs) · x = s · (r · x),

one gets a right module, even if the scalars are written on the left. However, writing the scalars on the left for left modules and on the right for right modules makes the manipulation of property 3 much easier.
Authors who do not require rings to be unital omit condition 4 in the definition above; they would call the structures defined above "unital left R-modules". In this article, consistent with the glossary of ring theory, all rings and modules are assumed to be unital.
An (R,S)-bimodule is an abelian group together with both a left scalar multiplication · by elements of R and a right scalar multiplication ∗ by elements of S, making it simultaneously a left R-module and a right S-module, satisfying the additional condition (r · x) ∗ s = r ⋅ (x ∗ s) for all r in R, x in M, and s in S.
If R is commutative, then left R-modules are the same as right R-modules and are simply called R-modules. Most often the scalars are written on the left in this case.
== Examples ==
If K is a field, then K-modules are called K-vector spaces (vector spaces over K).
If K is a field, and K[x] a univariate polynomial ring, then a K[x]-module M is a K-module with an additional action of x on M by a group homomorphism that commutes with the action of K on M. In other words, a K[x]-module is a K-vector space M combined with a linear map from M to M. Applying the structure theorem for finitely generated modules over a principal ideal domain to this example shows the existence of the rational and Jordan canonical forms.
The concept of a Z-module agrees with the notion of an abelian group. That is, every abelian group is a module over the ring of integers Z in a unique way. For n > 0, let n ⋅ x = x + x + ... + x (n summands), 0 ⋅ x = 0, and (−n) ⋅ x = −(n ⋅ x). Such a module need not have a basis—groups containing torsion elements do not. (For example, in the group of integers modulo 3, one cannot find even one element that satisfies the definition of a linearly independent set, since when an integer such as 3 or 6 multiplies an element, the result is 0. However, if a finite field is considered as a module over the same finite field taken as a ring, it is a vector space and does have a basis.)
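The torsion obstruction is easy to see computationally (a Python sketch for the example ℤ/3 as a ℤ-module, mentioned above): every element is annihilated by the nonzero integer 3, so no singleton is linearly independent.

```python
M = range(3)  # the abelian group Z/3, viewed as a Z-module

scal = lambda k, x: (k * x) % 3  # the unique Z-action: k . x = x + ... + x

# 3 . x = 0 for every x, with 3 != 0 in Z, so {x} is never linearly independent
annihilated = [scal(3, x) for x in M]
print(annihilated)  # [0, 0, 0]
```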
The decimal fractions (including negative ones) form a module over the integers. Only singletons are linearly independent sets, but there is no singleton that can serve as a basis, so the module has no basis and no rank, in the usual sense of linear algebra. However this module has a torsion-free rank equal to 1.
If R is any ring and n a natural number, then the cartesian product Rn is both a left and right R-module over R if we use the component-wise operations. Hence when n = 1, R is an R-module, where the scalar multiplication is just ring multiplication. The case n = 0 yields the trivial R-module {0} consisting only of its identity element. Modules of this type are called free and if R has invariant basis number (e.g. any commutative ring or field) the number n is then the rank of the free module.
If Mn(R) is the ring of n × n matrices over a ring R, M is an Mn(R)-module, and ei is the n × n matrix with 1 in the (i, i)-entry (and zeros elsewhere), then eiM is an R-module, since reim = eirm ∈ eiM. So M breaks up as the direct sum of R-modules, M = e1M ⊕ ... ⊕ enM. Conversely, given an R-module M0, then M0⊕n is an Mn(R)-module. In fact, the category of R-modules and the category of Mn(R)-modules are equivalent. The special case is that the module M is just R as a module over itself, then Rn is an Mn(R)-module.
If S is a nonempty set, M is a left R-module, and MS is the collection of all functions f : S → M, then with addition and scalar multiplication in MS defined pointwise by (f + g)(s) = f(s) + g(s) and (rf)(s) = rf(s), MS is a left R-module. The right R-module case is analogous. In particular, if R is commutative then the collection of R-module homomorphisms h : M → N (see below) is an R-module (and in fact a submodule of NM).
If X is a smooth manifold, then the smooth functions from X to the real numbers form a ring C∞(X). The set of all smooth vector fields defined on X forms a module over C∞(X), and so do the tensor fields and the differential forms on X. More generally, the sections of any vector bundle form a projective module over C∞(X), and by Swan's theorem, every projective module is isomorphic to the module of sections of some vector bundle; the category of C∞(X)-modules and the category of vector bundles over X are equivalent.
If R is any ring and I is any left ideal in R, then I is a left R-module, and analogously right ideals in R are right R-modules.
If R is a ring, we can define the opposite ring Rop, which has the same underlying set and the same addition operation, but the opposite multiplication: if ab = c in R, then ba = c in Rop. Any left R-module M can then be seen to be a right module over Rop, and any right module over R can be considered a left module over Rop.
Modules over a Lie algebra are (associative algebra) modules over its universal enveloping algebra.
If R and S are rings with a ring homomorphism φ : R → S, then every S-module M is an R-module by defining rm = φ(r)m. In particular, S itself is such an R-module.
== Submodules and homomorphisms ==
Suppose M is a left R-module and N is a subgroup of M. Then N is a submodule (or more explicitly an R-submodule) if for any n in N and any r in R, the product r ⋅ n (or n ⋅ r for a right R-module) is in N.
If X is any subset of an R-module M, then the submodule spanned by X is defined to be
⟨X⟩ = ⋂_{N ⊇ X} N

where N runs over the submodules of M that contain X, or explicitly

{∑_{i=1}^{k} rᵢxᵢ : rᵢ ∈ R, xᵢ ∈ X},

which is important in the definition of tensor products of modules.
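For a finite example (a Python sketch; the module ℤ/12 over ℤ and the generating set X = {8, 6} are assumed choices), the submodule spanned by X can be computed as the set of all ℤ-linear combinations, and it coincides with the multiples of gcd(8, 6, 12) = 2:

```python
from math import gcd

n = 12
X = [8, 6]  # generators inside the Z-module Z/12

# all linear combinations r1*8 + r2*6 mod 12 with integer coefficients
span = {(r1 * X[0] + r2 * X[1]) % n for r1 in range(n) for r2 in range(n)}

d = gcd(gcd(*X), n)
print(sorted(span))                 # [0, 2, 4, 6, 8, 10]
print(span == set(range(0, n, d)))  # True
```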
The set of submodules of a given module M, together with the two binary operations + (the module spanned by the union of the arguments) and ∩, forms a lattice that satisfies the modular law:
Given submodules U, N1, N2 of M such that N1 ⊆ N2, then the following two submodules are equal: (N1 + U) ∩ N2 = N1 + (U ∩ N2).
If M and N are left R-modules, then a map f : M → N is a homomorphism of R-modules if for any m, n in M and r, s in R,
f(r · m + s · n) = r · f(m) + s · f(n).
This, like any homomorphism of mathematical objects, is just a mapping that preserves the structure of the objects. Another name for a homomorphism of R-modules is an R-linear map.
A bijective module homomorphism f : M → N is called a module isomorphism, and the two modules M and N are called isomorphic. Two isomorphic modules are identical for all practical purposes, differing solely in the notation for their elements.
The kernel of a module homomorphism f : M → N is the submodule of M consisting of all elements that are sent to zero by f, and the image of f is the submodule of N consisting of values f(m) for all elements m of M. The isomorphism theorems familiar from groups and vector spaces are also valid for R-modules.
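These notions can be tabulated for a small map (a Python sketch; the endomorphism f : ℤ/12 → ℤ/12, f(x) = 3x, is an assumed example): f is ℤ-linear, its kernel is the submodule {0, 4, 8}, and its image is {0, 3, 6, 9}.

```python
n = 12
f = lambda x: (3 * x) % n  # an endomorphism of the Z-module Z/12

# linearity: f(r*m + s*x) = r*f(m) + s*f(x) for integer scalars
assert all(f((r * m + s * x) % n) == (r * f(m) + s * f(x)) % n
           for r in range(n) for s in range(n)
           for m in range(n) for x in range(n))

kernel = sorted(x for x in range(n) if f(x) == 0)
image = sorted({f(x) for x in range(n)})
print(kernel)  # [0, 4, 8]
print(image)   # [0, 3, 6, 9]
```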
Given a ring R, the set of all left R-modules together with their module homomorphisms forms an abelian category, denoted by R-Mod (see category of modules).
== Types of modules ==
Finitely generated
An R-module M is finitely generated if there exist finitely many elements x1, ..., xn in M such that every element of M is a linear combination of those elements with coefficients from the ring R.
Cyclic
A module is called a cyclic module if it is generated by one element.
Free
A free R-module is a module that has a basis, or equivalently, one that is isomorphic to a direct sum of copies of the ring R. These are the modules that behave very much like vector spaces.
Projective
Projective modules are direct summands of free modules and share many of their desirable properties.
Injective
Injective modules are defined dually to projective modules.
Flat
A module is called flat if taking the tensor product of it with any exact sequence of R-modules preserves exactness.
Torsionless
A module is called torsionless if it embeds into its algebraic dual.
Simple
A simple module S is a module that is not {0} and whose only submodules are {0} and S. Simple modules are sometimes called irreducible.
Semisimple
A semisimple module is a direct sum (finite or not) of simple modules. Historically these modules are also called completely reducible.
Indecomposable
An indecomposable module is a non-zero module that cannot be written as a direct sum of two non-zero submodules. Every simple module is indecomposable, but there are indecomposable modules that are not simple (e.g. uniform modules).
Faithful
A faithful module M is one where the action of each r ≠ 0 in R on M is nontrivial (i.e. r ⋅ x ≠ 0 for some x in M). Equivalently, the annihilator of M is the zero ideal.
Torsion-free
A torsion-free module is a module over a ring such that 0 is the only element annihilated by a regular element (non zero-divisor) of the ring, equivalently rm = 0 implies r = 0 or m = 0.
Noetherian
A Noetherian module is a module that satisfies the ascending chain condition on submodules, that is, every increasing chain of submodules becomes stationary after finitely many steps. Equivalently, every submodule is finitely generated.
Artinian
An Artinian module is a module that satisfies the descending chain condition on submodules, that is, every decreasing chain of submodules becomes stationary after finitely many steps.
Graded
A graded module is a module with a decomposition as a direct sum M = ⨁x Mx over a graded ring R = ⨁x Rx such that RxMy ⊆ Mx+y for all x and y.
Uniform
A uniform module is a module in which all pairs of nonzero submodules have nonzero intersection.
== Further notions ==
=== Relation to representation theory ===
A representation of a group G over a field k is a module over the group ring k[G].
If M is a left R-module, then the action of an element r in R is defined to be the map M → M that sends each x to rx (or xr in the case of a right module), and is necessarily a group endomorphism of the abelian group (M, +). The set of all group endomorphisms of M is denoted EndZ(M) and forms a ring under addition and composition, and sending a ring element r of R to its action actually defines a ring homomorphism from R to EndZ(M).
Such a ring homomorphism R → EndZ(M) is called a representation of the abelian group M over the ring R; an alternative and equivalent way of defining left R-modules is to say that a left R-module is an abelian group M together with a representation of M over R. Such a representation R → EndZ(M) may also be called a ring action of R on M.
A representation is called faithful if the map R → EndZ(M) is injective. In terms of modules, this means that if r is an element of R such that rx = 0 for all x in M, then r = 0. Every abelian group is a faithful module over the integers or over the ring of integers modulo n, Z/nZ, for some n.
=== Generalizations ===
A ring R corresponds to a preadditive category R with a single object. With this understanding, a left R-module is just a covariant additive functor from R to the category Ab of abelian groups, and right R-modules are contravariant additive functors. This suggests that, if C is any preadditive category, a covariant additive functor from C to Ab should be considered a generalized left module over C. These functors form a functor category C-Mod, which is the natural generalization of the module category R-Mod.
Modules over commutative rings can be generalized in a different direction: take a ringed space (X, OX) and consider the sheaves of OX-modules (see sheaf of modules). These form a category OX-Mod, and play an important role in modern algebraic geometry. If X has only a single point, then this is a module category in the old sense over the commutative ring OX(X).
One can also consider modules over a semiring. Modules over rings are abelian groups, but modules over semirings are only commutative monoids. Most applications of modules are still possible. In particular, for any semiring S, the matrices over S form a semiring over which the tuples of elements from S are a module (in this generalized sense only). This allows a further generalization of the concept of vector space incorporating the semirings from theoretical computer science.
Over near-rings, one can consider near-ring modules, a nonabelian generalization of modules.
== See also ==
Group ring
Algebra (ring theory)
Module (model theory)
Module spectrum
Annihilator
== Notes ==
== References ==
F.W. Anderson and K.R. Fuller: Rings and Categories of Modules, Graduate Texts in Mathematics, Vol. 13, 2nd Ed., Springer-Verlag, New York, 1992, ISBN 0-387-97845-3, ISBN 3-540-97845-3
Nathan Jacobson. Structure of rings. Colloquium publications, Vol. 37, 2nd Ed., AMS Bookstore, 1964, ISBN 978-0-8218-1037-8
== External links ==
"Module", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
module at the nLab | Wikipedia/Module_(algebra) |
Distributions, also known as Schwartz distributions, are a kind of generalized function in mathematical analysis. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. In particular, any locally integrable function has a distributional derivative.
Distributions are widely used in the theory of partial differential equations, where it may be easier to establish the existence of distributional solutions (weak solutions) than classical solutions, or where appropriate classical solutions may not exist. Distributions are also important in physics and engineering where many problems naturally lead to differential equations whose solutions or initial conditions are singular, such as the Dirac delta function.
A function f is normally thought of as acting on the points in the function domain by "sending" a point x in the domain to the point f(x).
Instead of acting on points, distribution theory reinterprets functions such as f as acting on test functions in a certain way. In applications to physics and engineering, test functions are usually infinitely differentiable complex-valued (or real-valued) functions with compact support that are defined on some given non-empty open subset U ⊆ ℝⁿ. (Bump functions are examples of test functions.) The set of all such test functions forms a vector space that is denoted by C_c^∞(U) or 𝒟(U).
Most commonly encountered functions, including all continuous maps $f : \mathbb{R} \to \mathbb{R}$ (taking $U := \mathbb{R}$), can be canonically reinterpreted as acting via "integration against a test function." Explicitly, this means that such a function $f$ "acts on" a test function $\psi \in \mathcal{D}(\mathbb{R})$ by "sending" it to the number $\int_{\mathbb{R}} f \, \psi \, dx,$ which is often denoted by $D_f(\psi).$ This new action $\psi \mapsto D_f(\psi)$ of $f$ defines a scalar-valued map $D_f : \mathcal{D}(\mathbb{R}) \to \mathbb{C},$ whose domain is the space of test functions $\mathcal{D}(\mathbb{R}).$
This functional $D_f$ turns out to have the two defining properties of what is known as a distribution on $U = \mathbb{R}$: it is linear, and it is also continuous when $\mathcal{D}(\mathbb{R})$ is given a certain topology called the canonical LF topology. The action (the integration $\psi \mapsto \int_{\mathbb{R}} f \, \psi \, dx$) of this distribution $D_f$ on a test function $\psi$ can be interpreted as a weighted average of the distribution on the support of the test function, even if the values of the distribution at a single point are not well-defined. Distributions like $D_f$ that arise from functions in this way are prototypical examples of distributions, but there exist many distributions that cannot be defined by integration against any function. Examples of the latter include the Dirac delta function and distributions defined to act by integration of test functions $\psi \mapsto \int_U \psi \, d\mu$ against certain measures $\mu$ on $U.$ Nonetheless, it is still always possible to reduce any arbitrary distribution down to a simpler family of related distributions that do arise via such actions of integration.
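The action $D_f(\psi) = \int_{\mathbb{R}} f \, \psi \, dx$ can be made concrete numerically. The following sketch (my own illustration, with invented helper names, not from the article) approximates the action of the regular distribution induced by $f(x) = x^2$ on the standard bump test function, and checks linearity of that action.

```python
import numpy as np

def bump(x):
    """The standard bump function: smooth on R, supported on [-1, 1]."""
    out = np.zeros_like(x, dtype=float)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def D_f(f, psi, a=-1.0, b=1.0, n=100001):
    """Approximate D_f(psi) = integral of f*psi by the trapezoid rule
    over [a, b], an interval containing supp(psi)."""
    x = np.linspace(a, b, n)
    y = f(x) * psi(x)
    dx = (b - a) / (n - 1)
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

value = D_f(lambda x: x ** 2, bump)                       # positive: the integrand is >= 0
doubled = D_f(lambda x: x ** 2, lambda x: 2.0 * bump(x))  # linearity: D_f(2*psi) = 2*D_f(psi)
```

Linearity of the integral is exactly what makes $D_f$ a linear functional on the test functions; continuity is the deeper property discussed below.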
More generally, a distribution on $U$ is by definition a linear functional on $C_c^\infty(U)$ that is continuous when $C_c^\infty(U)$ is given a topology called the canonical LF topology. This leads to the space of (all) distributions on $U$, usually denoted by $\mathcal{D}'(U)$ (note the prime), which by definition is the space of all distributions on $U$ (that is, it is the continuous dual space of $C_c^\infty(U)$); it is these distributions that are the main focus of this article.
Definitions of the appropriate topologies on spaces of test functions and distributions are given in the article on spaces of test functions and distributions. This article is primarily concerned with the definition of distributions, together with their properties and some important examples.
== History ==
The practical use of distributions can be traced back to the use of Green's functions in the 1830s to solve ordinary differential equations, but was not formalized until much later. According to Kolmogorov & Fomin (1957), generalized functions originated in the work of Sergei Sobolev (1936) on second-order hyperbolic partial differential equations, and the ideas were developed in somewhat extended form by Laurent Schwartz in the late 1940s. According to his autobiography, Schwartz introduced the term "distribution" by analogy with a distribution of electrical charge, possibly including not only point charges but also dipoles and so on. Gårding (1997) comments that although the ideas in the transformative book by Schwartz (1951) were not entirely new, it was Schwartz's broad attack and conviction that distributions would be useful almost everywhere in analysis that made the difference. A detailed history of the theory of distributions was given by Lützen (1982).
== Notation ==
The following notation will be used throughout this article:

$n$ is a fixed positive integer and $U$ is a fixed non-empty open subset of Euclidean space $\mathbb{R}^n.$
$\mathbb{N}_0 = \{0, 1, 2, \ldots\}$ denotes the natural numbers.
$k$ will denote a non-negative integer or $\infty.$
If $f$ is a function then $\operatorname{Dom}(f)$ will denote its domain, and the support of $f,$ denoted by $\operatorname{supp}(f),$ is defined to be the closure of the set $\{x \in \operatorname{Dom}(f) : f(x) \neq 0\}$ in $\operatorname{Dom}(f).$
For two functions $f, g : U \to \mathbb{C},$ the following notation defines a canonical pairing:
$$\langle f, g \rangle := \int_U f(x) g(x) \, dx.$$
A multi-index of size $n$ is an element of $\mathbb{N}^n$ (given that $n$ is fixed, if the size of multi-indices is omitted then the size should be assumed to be $n$). The length of a multi-index $\alpha = (\alpha_1, \ldots, \alpha_n) \in \mathbb{N}^n$ is defined as $\alpha_1 + \cdots + \alpha_n$ and denoted by $|\alpha|.$ Multi-indices are particularly useful when dealing with functions of several variables; in particular, we introduce the following notation for a given multi-index $\alpha = (\alpha_1, \ldots, \alpha_n) \in \mathbb{N}^n$:
$$x^\alpha = x_1^{\alpha_1} \cdots x_n^{\alpha_n} \qquad \partial^\alpha = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}$$
We also introduce a partial order on all multi-indices: $\beta \geq \alpha$ if and only if $\beta_i \geq \alpha_i$ for all $1 \leq i \leq n.$ When $\beta \geq \alpha$ we define their multi-index binomial coefficient as:
$$\binom{\beta}{\alpha} := \binom{\beta_1}{\alpha_1} \cdots \binom{\beta_n}{\alpha_n}.$$
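These multi-index conventions are easy to mechanize. A minimal sketch (the helper names are my own) implementing the length $|\alpha|$, the partial order, the monomial $x^\alpha$, and the multi-index binomial coefficient:

```python
from math import comb, prod

def length(alpha):
    """|alpha| = alpha_1 + ... + alpha_n."""
    return sum(alpha)

def leq(alpha, beta):
    """alpha <= beta iff alpha_i <= beta_i for every coordinate i."""
    return all(a <= b for a, b in zip(alpha, beta))

def monomial(x, alpha):
    """x^alpha = x_1^{alpha_1} * ... * x_n^{alpha_n}."""
    return prod(xi ** a for xi, a in zip(x, alpha))

def multi_binom(beta, alpha):
    """Multi-index binomial coefficient: the product of coordinatewise
    binomial coefficients; defined only when alpha <= beta."""
    if not leq(alpha, beta):
        raise ValueError("requires alpha <= beta")
    return prod(comb(b, a) for b, a in zip(beta, alpha))

# For alpha = (1, 2) and beta = (2, 3): |alpha| = 3 and
# C(beta, alpha) = C(2,1) * C(3,2) = 2 * 3 = 6.
```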
== Definitions of test functions and distributions ==
In this section, some basic notions and definitions needed to define real-valued distributions on U are introduced. Further discussion of the topologies on the spaces of test functions and distributions is given in the article on spaces of test functions and distributions.
For all $j, k \in \{0, 1, 2, \ldots, \infty\}$ and any compact subsets $K$ and $L$ of $U,$ we have:
$$\begin{aligned} C^k(K) &\subseteq C_c^k(U) \subseteq C^k(U) \\ C^k(K) &\subseteq C^k(L) && \text{if } K \subseteq L \\ C^k(K) &\subseteq C^j(K) && \text{if } j \leq k \\ C_c^k(U) &\subseteq C_c^j(U) && \text{if } j \leq k \\ C^k(U) &\subseteq C^j(U) && \text{if } j \leq k \end{aligned}$$
Distributions on $U$ are continuous linear functionals on $C_c^\infty(U)$ when this vector space is endowed with a particular topology called the canonical LF-topology. The following proposition states two necessary and sufficient conditions for the continuity of a linear function on $C_c^\infty(U)$ that are often straightforward to verify.
Proposition: A linear functional $T$ on $C_c^\infty(U)$ is continuous, and therefore a distribution, if and only if either of the following equivalent conditions is satisfied:

For every compact subset $K \subseteq U$ there exist constants $C > 0$ and $N \in \mathbb{N}$ (dependent on $K$) such that for all $f \in C_c^\infty(U)$ with support contained in $K,$ $$|T(f)| \leq C \sup\{|\partial^\alpha f(x)| : x \in U, |\alpha| \leq N\}.$$
For every compact subset $K \subseteq U$ and every sequence $\{f_i\}_{i=1}^\infty$ in $C_c^\infty(U)$ whose supports are contained in $K,$ if $\{\partial^\alpha f_i\}_{i=1}^\infty$ converges uniformly to zero on $U$ for every multi-index $\alpha,$ then $T(f_i) \to 0.$
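As a sanity check of the first condition, the Dirac delta $\delta(f) = f(0)$ satisfies the stated bound with $C = 1$ and $N = 0$: no derivatives of $f$ are needed at all. A small numerical sketch (names are my own; the supremum is approximated on a grid, and the smooth function below merely stands in for a test function):

```python
import numpy as np

def delta(f):
    """Dirac delta as a linear functional: delta(f) = f(0)."""
    return f(0.0)

def sup_norm(f, K=(-1.0, 1.0), n=10001):
    """Approximate sup over K of |f(x)| on a uniform grid.
    N = 0 in the continuity bound: only f itself is measured."""
    x = np.linspace(K[0], K[1], n)
    return float(np.max(np.abs(f(x))))

f = lambda x: np.sin(x + 1.0)  # smooth stand-in for a test function
bound_holds = abs(delta(f)) <= 1.0 * sup_norm(f)  # the bound with C = 1, N = 0
```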
=== Topology on Ck(U) ===
We now introduce the seminorms that will define the topology on $C^k(U).$ Different authors sometimes use different families of seminorms, so we list the most common families below. However, the resulting topology is the same no matter which family is used.
All of the functions above are non-negative $\mathbb{R}$-valued seminorms on $C^k(U).$ As explained in this article, every set of seminorms on a vector space induces a locally convex vector topology.
Each of the following sets of seminorms
$$\begin{aligned} A &:= \{q_{i,K} : K \text{ compact and } i \in \mathbb{N} \text{ satisfies } 0 \leq i \leq k\} \\ B &:= \{r_{i,K} : K \text{ compact and } i \in \mathbb{N} \text{ satisfies } 0 \leq i \leq k\} \\ C &:= \{t_{i,K} : K \text{ compact and } i \in \mathbb{N} \text{ satisfies } 0 \leq i \leq k\} \\ D &:= \{s_{p,K} : K \text{ compact and } p \in \mathbb{N}^n \text{ satisfies } |p| \leq k\} \end{aligned}$$
generates the same locally convex vector topology on $C^k(U)$ (so, for example, the topology generated by the seminorms in $A$ is equal to the topology generated by those in $C$).
With this topology, $C^k(U)$ becomes a locally convex Fréchet space that is not normable. Every element of $A \cup B \cup C \cup D$ is a continuous seminorm on $C^k(U).$ Under this topology, a net $(f_i)_{i \in I}$ in $C^k(U)$ converges to $f \in C^k(U)$ if and only if for every multi-index $p$ with $|p| < k + 1$ and every compact $K,$ the net of partial derivatives $(\partial^p f_i)_{i \in I}$ converges uniformly to $\partial^p f$ on $K.$
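This notion of convergence is strictly stronger than uniform convergence of the functions alone. A standard illustrative example (my own sketch, approximating suprema on a grid): $f_i(x) = \sin(ix)/i$ converges uniformly to $0$ on $K = [-\pi, \pi]$, but the derivatives $f_i'(x) = \cos(ix)$ do not, so $f_i \to 0$ in $C^0(K)$ yet not in $C^1(K)$.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 20001)  # grid on the compact set K = [-pi, pi]

def f(i):
    return np.sin(i * x) / i   # f_i -> 0 uniformly on K

def df(i):
    return np.cos(i * x)       # f_i' = cos(i*x): its sup norm stays at 1

sup_f = {i: float(np.max(np.abs(f(i)))) for i in (1, 10, 100)}
sup_df = {i: float(np.max(np.abs(df(i)))) for i in (1, 10, 100)}
# sup_f[i] shrinks like 1/i while sup_df[i] stays at 1 for every i,
# so (f_i) converges in C^0(K) but not in C^1(K).
```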
For any $k \in \{0, 1, 2, \ldots, \infty\},$ any (von Neumann) bounded subset of $C^{k+1}(U)$ is a relatively compact subset of $C^k(U).$ In particular, a subset of $C^\infty(U)$ is bounded if and only if it is bounded in $C^i(U)$ for all $i \in \mathbb{N}.$ The space $C^k(U)$ is a Montel space if and only if $k = \infty.$
A subset $W$ of $C^\infty(U)$ is open in this topology if and only if there exists $i \in \mathbb{N}$ such that $W$ is open when $C^\infty(U)$ is endowed with the subspace topology induced on it by $C^i(U).$
==== Topology on Ck(K) ====
As before, fix $k \in \{0, 1, 2, \ldots, \infty\}.$ Recall that if $K$ is any compact subset of $U$ then $C^k(K) \subseteq C^k(U).$ If $k$ is finite then $C^k(K)$ is a Banach space with a topology that can be defined by the norm
$$r_K(f) := \sup_{|p| \leq k} \left( \sup_{x_0 \in K} \left| \partial^p f(x_0) \right| \right).$$
==== Trivial extensions and independence of Ck(K)'s topology from U ====
Suppose $U$ is an open subset of $\mathbb{R}^n$ and $K \subseteq U$ is a compact subset. By definition, elements of $C^k(K)$ are functions with domain $U$ (in symbols, $C^k(K) \subseteq C^k(U)$), so the space $C^k(K)$ and its topology depend on $U;$ to make this dependence on the open set $U$ clear, temporarily denote $C^k(K)$ by $C^k(K; U).$ Importantly, changing the set $U$ to a different open subset $U'$ (with $K \subseteq U'$) will change the set $C^k(K)$ from $C^k(K; U)$ to $C^k(K; U'),$ so that elements of $C^k(K)$ will be functions with domain $U'$ instead of $U.$ Despite $C^k(K)$ depending on the open set ($U$ or $U'$), the standard notation for $C^k(K)$ makes no mention of it.
This is justified because, as this subsection will now explain, the space $C^k(K; U)$ is canonically identified as a subspace of $C^k(K; U')$ (both algebraically and topologically). It is enough to explain how to canonically identify $C^k(K; U)$ with $C^k(K; U')$ when one of $U$ and $U'$ is a subset of the other. The reason is that if $V$ and $W$ are arbitrary open subsets of $\mathbb{R}^n$ containing $K$ then the open set $U := V \cap W$ also contains $K,$ so that each of $C^k(K; V)$ and $C^k(K; W)$ is canonically identified with $C^k(K; V \cap W),$ and now by transitivity, $C^k(K; V)$ is thus identified with $C^k(K; W).$
So assume $U \subseteq V$ are open subsets of $\mathbb{R}^n$ containing $K.$ Given $f \in C_c^k(U),$ its trivial extension to $V$ is the function $F : V \to \mathbb{C}$ defined by:
$$F(x) = \begin{cases} f(x) & x \in U, \\ 0 & \text{otherwise}. \end{cases}$$
This trivial extension belongs to $C^k(V)$ (because $f \in C_c^k(U)$ has compact support) and it will be denoted by $I(f)$ (that is, $I(f) := F$). The assignment $f \mapsto I(f)$ thus induces a map $I : C_c^k(U) \to C^k(V)$ that sends a function in $C_c^k(U)$ to its trivial extension on $V.$
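The trivial extension is straightforward to model in code. A minimal sketch (function names are my own; the example $f$ is merely a continuous stand-in for a compactly supported $C^k$ function):

```python
def trivially_extend(f, in_U):
    """I(f): agrees with f on U and is identically zero elsewhere.
    `in_U` is a membership predicate for the open set U."""
    def F(x):
        return f(x) if in_U(x) else 0.0
    return F

# f is supported inside U = (0, 2); extend it trivially to V = R.
f = lambda x: (x * (2.0 - x)) ** 2 if 0.0 < x < 2.0 else 0.0
F = trivially_extend(f, lambda x: 0.0 < x < 2.0)

inside = F(1.0)    # agrees with f on U: f(1.0) = 1.0
outside = F(-5.0)  # vanishes on V \ U
```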
This map is a linear injection, and for every compact subset $K \subseteq U$ (where $K$ is also a compact subset of $V$ since $K \subseteq U \subseteq V$),
$$I\left(C^k(K; U)\right) = C^k(K; V) \qquad \text{and thus} \qquad I\left(C_c^k(U)\right) \subseteq C_c^k(V).$$
If $I$ is restricted to $C^k(K; U)$ then the following induced linear map is a homeomorphism (linear homeomorphisms are called TVS-isomorphisms):
$$\begin{aligned} C^k(K; U) &\to C^k(K; V) \\ f &\mapsto I(f) \end{aligned}$$
and thus the next map is a topological embedding:
$$\begin{aligned} C^k(K; U) &\to C^k(V) \\ f &\mapsto I(f). \end{aligned}$$
Using the injection $I : C_c^k(U) \to C^k(V),$ the vector space $C_c^k(U)$ is canonically identified with its image in $C_c^k(V) \subseteq C^k(V).$ Because $C^k(K; U) \subseteq C_c^k(U),$ through this identification $C^k(K; U)$ can also be considered as a subset of $C^k(V).$
Thus the topology on $C^k(K; U)$ is independent of the open subset $U$ of $\mathbb{R}^n$ that contains $K,$ which justifies the practice of writing $C^k(K)$ instead of $C^k(K; U).$
=== Canonical LF topology ===
Recall that $C_c^k(U)$ denotes all functions in $C^k(U)$ that have compact support in $U,$ and note that $C_c^k(U)$ is the union of all $C^k(K)$ as $K$ ranges over all compact subsets of $U.$ Moreover, for each $k,$ $C_c^k(U)$ is a dense subset of $C^k(U).$ The special case when $k = \infty$ gives us the space of test functions.
The canonical LF-topology is not metrizable and, importantly, it is strictly finer than the subspace topology that $C^\infty(U)$ induces on $C_c^\infty(U).$ However, the canonical LF-topology does make $C_c^\infty(U)$ into a complete reflexive nuclear Montel bornological barrelled Mackey space; the same is true of its strong dual space (that is, the space of all distributions with its usual topology). The canonical LF-topology can be defined in various ways.
=== Distributions ===
As discussed earlier, continuous linear functionals on $C_c^\infty(U)$ are known as distributions on $U.$ Other equivalent definitions are described below.
There is a canonical duality pairing between a distribution $T$ on $U$ and a test function $f \in C_c^\infty(U),$ which is denoted using angle brackets by
$$\begin{cases} \mathcal{D}'(U) \times C_c^\infty(U) \to \mathbb{R} \\ (T, f) \mapsto \langle T, f \rangle := T(f) \end{cases}$$
One interprets this notation as the distribution $T$ acting on the test function $f$ to give a scalar, or symmetrically, as the test function $f$ acting on the distribution $T.$
==== Characterizations of distributions ====
Proposition. If $T$ is a linear functional on $C_c^\infty(U)$ then the following are equivalent:
T is a distribution;
T is continuous;
T is continuous at the origin;
T is uniformly continuous;
T is a bounded operator;
T is sequentially continuous;
explicitly, for every sequence $(f_i)_{i=1}^\infty$ in $C_c^\infty(U)$ that converges in $C_c^\infty(U)$ to some $f \in C_c^\infty(U),$ $\lim_{i \to \infty} T(f_i) = T(f);$
T is sequentially continuous at the origin; in other words, T maps null sequences to null sequences;
explicitly, for every sequence $(f_i)_{i=1}^\infty$ in $C_c^\infty(U)$ that converges in $C_c^\infty(U)$ to the origin (such a sequence is called a null sequence), $\lim_{i \to \infty} T(f_i) = 0;$ a null sequence is by definition any sequence that converges to the origin;
T maps null sequences to bounded subsets;
explicitly, for every sequence $(f_i)_{i=1}^\infty$ in $C_c^\infty(U)$ that converges in $C_c^\infty(U)$ to the origin, the sequence $(T(f_i))_{i=1}^\infty$ is bounded;
T maps Mackey convergent null sequences to bounded subsets;
explicitly, for every Mackey convergent null sequence $(f_i)_{i=1}^\infty$ in $C_c^\infty(U),$ the sequence $(T(f_i))_{i=1}^\infty$ is bounded; a sequence $f_\bullet = (f_i)_{i=1}^\infty$ is said to be Mackey convergent to the origin if there exists a divergent sequence $r_\bullet = (r_i)_{i=1}^\infty \to \infty$ of positive real numbers such that the sequence $(r_i f_i)_{i=1}^\infty$ is bounded; every sequence that is Mackey convergent to the origin necessarily converges to the origin (in the usual sense);
The kernel of $T$ is a closed subspace of $C_c^\infty(U);$
The graph of T is closed;
There exists a continuous seminorm $g$ on $C_c^\infty(U)$ such that $|T| \leq g;$
There exists a constant $C > 0$ and a finite subset $\{g_1, \ldots, g_m\} \subseteq \mathcal{P}$ (where $\mathcal{P}$ is any collection of continuous seminorms that defines the canonical LF topology on $C_c^\infty(U)$) such that $|T| \leq C(g_1 + \cdots + g_m);$
For every compact subset $K \subseteq U$ there exist constants $C > 0$ and $N \in \mathbb{N}$ such that for all $f \in C^\infty(K),$ $$|T(f)| \leq C \sup\{|\partial^\alpha f(x)| : x \in U, |\alpha| \leq N\};$$
For every compact subset $K \subseteq U$ there exist constants $C_K > 0$ and $N_K \in \mathbb{N}$ such that for all $f \in C_c^\infty(U)$ with support contained in $K,$ $$|T(f)| \leq C_K \sup\{|\partial^\alpha f(x)| : x \in K, |\alpha| \leq N_K\};$$
For any compact subset $K \subseteq U$ and any sequence $\{f_i\}_{i=1}^\infty$ in $C^\infty(K),$ if $\{\partial^p f_i\}_{i=1}^\infty$ converges uniformly to zero for all multi-indices $p,$ then $T(f_i) \to 0.$
==== Topology on the space of distributions and its relation to the weak-* topology ====
The set of all distributions on $U$ is the continuous dual space of $C_c^\infty(U),$ which when endowed with the strong dual topology is denoted by $\mathcal{D}'(U).$ Importantly, unless indicated otherwise, the topology on $\mathcal{D}'(U)$ is the strong dual topology; if the topology is instead the weak-* topology then this will be indicated. Neither topology is metrizable, although unlike the weak-* topology, the strong dual topology makes $\mathcal{D}'(U)$ into a complete nuclear space, to name just a few of its desirable properties.
Neither $C_c^\infty(U)$ nor its strong dual $\mathcal{D}'(U)$ is a sequential space, and so neither of their topologies can be fully described by sequences (in other words, defining only what sequences converge in these spaces is not enough to fully/correctly define their topologies).
However, a sequence in $\mathcal{D}'(U)$ converges in the strong dual topology if and only if it converges in the weak-* topology (this leads many authors to use pointwise convergence to define the convergence of a sequence of distributions; this is fine for sequences, but it is not guaranteed to extend to the convergence of nets of distributions, because a net may converge pointwise but fail to converge in the strong dual topology).
More information about the topology that $\mathcal{D}'(U)$ is endowed with can be found in the article on spaces of test functions and distributions and the articles on polar topologies and dual systems.
A linear map from $\mathcal{D}'(U)$ into another locally convex topological vector space (such as any normed space) is continuous if and only if it is sequentially continuous at the origin. However, this is no longer guaranteed if the map is not linear, or for maps valued in more general topological spaces (for example, spaces that are not also locally convex topological vector spaces). The same is true of maps from $C_c^\infty(U)$ (and more generally, of maps from any locally convex bornological space).
== Localization of distributions ==
There is no way to define the value of a distribution in $\mathcal{D}'(U)$ at a particular point of $U.$ However, as is the case with functions, distributions on $U$ restrict to give distributions on open subsets of $U.$ Furthermore, distributions are locally determined in the sense that a distribution on all of $U$ can be assembled from distributions on an open cover of $U$ satisfying some compatibility conditions on the overlaps. Such a structure is known as a sheaf.
=== Extensions and restrictions to an open subset ===
Let $V \subseteq U$ be open subsets of $\mathbb{R}^n.$ Every function $f \in \mathcal{D}(V)$ can be extended by zero from its domain $V$ to a function on $U$ by setting it equal to $0$ on the complement $U \setminus V.$ This extension is a smooth compactly supported function called the trivial extension of $f$ to $U$ and it will be denoted by $E_{VU}(f).$ This assignment $f \mapsto E_{VU}(f)$ defines the trivial extension operator $E_{VU} : \mathcal{D}(V) \to \mathcal{D}(U),$ which is a continuous injective linear map. It is used to canonically identify $\mathcal{D}(V)$ as a vector subspace of $\mathcal{D}(U)$ (although not as a topological subspace).
Its transpose
$$\rho_{VU} := {}^t E_{VU} : \mathcal{D}'(U) \to \mathcal{D}'(V)$$
is called the restriction to $V$ of distributions in $U$ and, as the name suggests, the image $\rho_{VU}(T)$ of a distribution $T \in \mathcal{D}'(U)$ under this map is a distribution on $V$ called the restriction of $T$ to $V.$ The defining condition of the restriction $\rho_{VU}(T)$ is:
$$\langle \rho_{VU} T, \phi \rangle = \langle T, E_{VU} \phi \rangle \quad \text{for all } \phi \in \mathcal{D}(V).$$
If $V \neq U$ then the (continuous injective linear) trivial extension map $E_{VU} : \mathcal{D}(V) \to \mathcal{D}(U)$ is not a topological embedding (in other words, if this linear injection were used to identify $\mathcal{D}(V)$ as a subset of $\mathcal{D}(U),$ then $\mathcal{D}(V)$'s topology would be strictly finer than the subspace topology that $\mathcal{D}(U)$ induces on it; importantly, it would not be a topological subspace, since that requires equality of topologies), and its range is also not dense in its codomain $\mathcal{D}(U).$ Consequently, if $V \neq U$ then the restriction mapping is neither injective nor surjective. A distribution $S \in \mathcal{D}'(V)$ is said to be extendible to $U$ if it belongs to the range of the transpose of $E_{VU},$ and it is called extendible if it is extendible to $\mathbb{R}^n.$
Unless $U = V,$ the restriction to $V$ is neither injective nor surjective. Lack of surjectivity follows since distributions can blow up towards the boundary of $V.$ For instance, if $U = \mathbb{R}$ and $V = (0, 2),$ then the distribution
$$T(x) = \sum_{n=1}^{\infty} n \, \delta\left(x - \frac{1}{n}\right)$$
is in $\mathcal{D}'(V)$ but admits no extension to $\mathcal{D}'(U).$
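One can see directly why this $T$ is well-defined on $V = (0, 2)$: any test function $\psi \in \mathcal{D}(V)$ has support bounded away from $0$, so only finitely many of the points $1/n$ meet $\operatorname{supp}(\psi)$ and the sum is finite. A sketch (names are my own; $\psi$ below is a continuous stand-in for a test function supported in $[0.3, 1.7]$):

```python
def T(psi, supp_min):
    """Action of T = sum over n >= 1 of n * delta(x - 1/n) on psi, where
    supp(psi) is contained in [supp_min, 2) with supp_min > 0: only the
    finitely many n with 1/n >= supp_min contribute to the sum."""
    total, n = 0.0, 1
    while 1.0 / n >= supp_min:
        total += n * psi(1.0 / n)
        n += 1
    return total

psi = lambda x: max(0.0, (x - 0.3) * (1.7 - x))  # vanishes outside [0.3, 1.7]
value = T(psi, 0.3)  # finite: only n = 1, 2, 3 satisfy 1/n >= 0.3
```

An extension to $\mathcal{D}'(\mathbb{R})$ would have to pair with test functions that are nonzero near $0$, where the weights $n$ grow without bound; this is the blow-up obstruction described above.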
=== Gluing and distributions that vanish in a set ===
Let $V$ be an open subset of $U.$ $T \in \mathcal{D}'(U)$ is said to vanish in $V$ if for all $f \in \mathcal{D}(U)$ such that $\operatorname{supp}(f) \subseteq V$ we have $Tf = 0.$ $T$ vanishes in $V$ if and only if the restriction of $T$ to $V$ is equal to $0,$ or equivalently, if and only if $T$ lies in the kernel of the restriction map $\rho_{VU}.$
=== Support of a distribution ===
This last corollary implies that for every distribution T on U, there exists a unique largest subset V of U such that T vanishes in V (and does not vanish in any open subset of U that is not contained in V); the complement in U of this unique largest open subset is called the support of T. Thus
supp
(
T
)
=
U
∖
⋃
{
V
∣
ρ
V
U
T
=
0
}
.
{\displaystyle \operatorname {supp} (T)=U\setminus \bigcup \{V\mid \rho _{VU}T=0\}.}
If
f
{\displaystyle f}
is a locally integrable function on U and if
D
f
{\displaystyle D_{f}}
is its associated distribution, then the support of
D
f
{\displaystyle D_{f}}
is the smallest closed subset of U in the complement of which
f
{\displaystyle f}
is almost everywhere equal to 0. If
f
{\displaystyle f}
is continuous, then the support of
D
f
{\displaystyle D_{f}}
is equal to the closure of the set of points in U at which
f
{\displaystyle f}
does not vanish. The support of the distribution associated with the Dirac measure at a point
x
0
{\displaystyle x_{0}}
is the set
{
x
0
}
.
{\displaystyle \{x_{0}\}.}
If the support of a test function
f
{\displaystyle f}
does not intersect the support of a distribution T then
T
f
=
0.
{\displaystyle Tf=0.}
A distribution T is 0 if and only if its support is empty. If
f
∈
C
∞
(
U
)
{\displaystyle f\in C^{\infty }(U)}
is identically 1 on some open set containing the support of a distribution T then
f
T
=
T
.
{\displaystyle fT=T.}
If the support of a distribution T is compact then it has finite order and there is a constant
C
{\displaystyle C}
and a non-negative integer
N
{\displaystyle N}
such that:
|
T
ϕ
|
≤
C
‖
ϕ
‖
N
:=
C
sup
{
|
∂
α
ϕ
(
x
)
|
:
x
∈
U
,
|
α
|
≤
N
}
for all
ϕ
∈
D
(
U
)
.
{\displaystyle |T\phi |\leq C\|\phi \|_{N}:=C\sup \left\{\left|\partial ^{\alpha }\phi (x)\right|:x\in U,|\alpha |\leq N\right\}\quad {\text{ for all }}\phi \in {\mathcal {D}}(U).}
If T has compact support, then it has a unique extension to a continuous linear functional
T
^
{\displaystyle {\widehat {T}}}
on
C
∞
(
U
)
{\displaystyle C^{\infty }(U)}
; this functional can be defined by
T
^
(
f
)
:=
T
(
ψ
f
)
,
{\displaystyle {\widehat {T}}(f):=T(\psi f),}
where
ψ
∈
D
(
U
)
{\displaystyle \psi \in {\mathcal {D}}(U)}
is any function that is identically 1 on an open set containing the support of T.
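The independence of the extension from the choice of cutoff can be illustrated symbolically. In the sketch below (an assumed setup, not the article's own example), T = δ′ has compact support {0}; since δ′ only probes the value and first derivative at 0, any ψ with ψ(0) = 1 and ψ′(0) = 0 behaves like a genuine cutoff that is identically 1 near supp(T), and both choices give the same extension.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)                  # a smooth function, not compactly supported

def T(expr):
    """T = delta', a distribution with compact support {0}: T(phi) = -phi'(0)."""
    return -sp.diff(expr, x).subs(x, 0)

def T_hat(expr, psi):
    """Extension T_hat(f) := T(psi*f) via a cutoff psi equal to 1 near supp(T)."""
    return T(psi * expr)

# Two different cutoff-like functions, both equal to 1 to first order at 0:
psi1 = sp.exp(-x**4)
psi2 = sp.cos(x)**2

print(T_hat(f, psi1), T_hat(f, psi2), T(f))   # all equal -1
```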
If
S
,
T
∈
D
′
(
U
)
{\displaystyle S,T\in {\mathcal {D}}'(U)}
and
λ
≠
0
{\displaystyle \lambda \neq 0}
then
supp
(
S
+
T
)
⊆
supp
(
S
)
∪
supp
(
T
)
{\displaystyle \operatorname {supp} (S+T)\subseteq \operatorname {supp} (S)\cup \operatorname {supp} (T)}
and
supp
(
λ
T
)
=
supp
(
T
)
.
{\displaystyle \operatorname {supp} (\lambda T)=\operatorname {supp} (T).}
Thus, distributions with support in a given subset
A
⊆
U
{\displaystyle A\subseteq U}
form a vector subspace of
D
′
(
U
)
.
{\displaystyle {\mathcal {D}}'(U).}
Furthermore, if
P
{\displaystyle P}
is a differential operator in U, then for all distributions T on U and all
f
∈
C
∞
(
U
)
{\displaystyle f\in C^{\infty }(U)}
we have
supp
(
P
(
x
,
∂
)
T
)
⊆
supp
(
T
)
{\displaystyle \operatorname {supp} (P(x,\partial )T)\subseteq \operatorname {supp} (T)}
and
supp
(
f
T
)
⊆
supp
(
f
)
∩
supp
(
T
)
.
{\displaystyle \operatorname {supp} (fT)\subseteq \operatorname {supp} (f)\cap \operatorname {supp} (T).}
=== Distributions with compact support ===
==== Support in a point set and Dirac measures ====
For any
x
∈
U
,
{\displaystyle x\in U,}
let
δ
x
∈
D
′
(
U
)
{\displaystyle \delta _{x}\in {\mathcal {D}}'(U)}
denote the distribution induced by the Dirac measure at
x
.
{\displaystyle x.}
For any
x
0
∈
U
{\displaystyle x_{0}\in U}
and distribution
T
∈
D
′
(
U
)
,
{\displaystyle T\in {\mathcal {D}}'(U),}
the support of T is contained in
{
x
0
}
{\displaystyle \{x_{0}\}}
if and only if T is a finite linear combination of derivatives of the Dirac measure at
x
0
.
{\displaystyle x_{0}.}
If in addition the order of T is
≤
k
{\displaystyle \leq k}
then there exist constants
α
p
{\displaystyle \alpha _{p}}
such that:
T
=
∑
|
p
|
≤
k
α
p
∂
p
δ
x
0
.
{\displaystyle T=\sum _{|p|\leq k}\alpha _{p}\partial ^{p}\delta _{x_{0}}.}
Said differently, if T has support at a single point
{
P
}
,
{\displaystyle \{P\},}
then T is in fact a finite linear combination of distributional derivatives of the
δ
{\displaystyle \delta }
function at P. That is, there exists an integer m and complex constants
a
α
{\displaystyle a_{\alpha }}
such that
T
=
∑
|
α
|
≤
m
a
α
∂
α
(
τ
P
δ
)
{\displaystyle T=\sum _{|\alpha |\leq m}a_{\alpha }\partial ^{\alpha }(\tau _{P}\delta )}
where
τ
P
{\displaystyle \tau _{P}}
is the translation operator.
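The coefficients in such a finite combination can be recovered by pairing with monomials, since ⟨∂^p δ, x^q⟩ = (−1)^p p! when p = q and 0 otherwise. A small symbolic sketch (the coefficients below are hypothetical, chosen for illustration):

```python
import sympy as sp

x = sp.symbols('x')
alphas = {0: 3, 1: -2, 2: 5}     # hypothetical coefficients alpha_p

def T(phi):
    """T = 3*delta - 2*delta' + 5*delta'' acting on a test function:
    <d^p delta, phi> = (-1)^p * phi^{(p)}(0)."""
    return sum(a * (-1)**p * sp.diff(phi, x, p).subs(x, 0)
               for p, a in alphas.items())

# Recover each coefficient by testing against the monomial x^q:
recovered = {q: T(x**q) * (-1)**q / sp.factorial(q) for q in range(3)}
print(recovered)   # {0: 3, 1: -2, 2: 5}
```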
==== Distribution with compact support ====
==== Distributions of finite order with support in an open subset ====
=== Global structure of distributions ===
The formal definition of distributions exhibits them as a subspace of a very large space, namely the topological dual of
D
(
U
)
{\displaystyle {\mathcal {D}}(U)}
(or the Schwartz space
S
(
R
n
)
{\displaystyle {\mathcal {S}}(\mathbb {R} ^{n})}
for tempered distributions). It is not immediately clear from the definition how exotic a distribution might be. To answer this question, it is instructive to see distributions built up from a smaller space, namely the space of continuous functions. Roughly, any distribution is locally a (multiple) derivative of a continuous function. A precise version of this result, given below, holds for distributions of compact support, tempered distributions, and general distributions. Generally speaking, no proper subset of the space of distributions contains all continuous functions and is closed under differentiation. This says that distributions are not particularly exotic objects; they are only as complicated as necessary.
==== Distributions as sheaves ====
==== Decomposition of distributions as sums of derivatives of continuous functions ====
By combining the above results, one may express any distribution on U as the sum of a series of distributions with compact support, where each of these distributions can in turn be written as a finite sum of distributional derivatives of continuous functions on U. In other words, for arbitrary
T
∈
D
′
(
U
)
{\displaystyle T\in {\mathcal {D}}'(U)}
we can write:
T
=
∑
i
=
1
∞
∑
p
∈
P
i
∂
p
f
i
p
,
{\displaystyle T=\sum _{i=1}^{\infty }\sum _{p\in P_{i}}\partial ^{p}f_{ip},}
where
P
1
,
P
2
,
…
{\displaystyle P_{1},P_{2},\ldots }
are finite sets of multi-indices and the functions
f
i
p
{\displaystyle f_{ip}}
are continuous.
Note that the infinite sum above is well-defined as a distribution. The value of T for a given
f
∈
D
(
U
)
{\displaystyle f\in {\mathcal {D}}(U)}
can be computed using the finitely many summands whose supports intersect the support of
f
.
{\displaystyle f.}
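A one-line instance of this structure theorem is the Dirac distribution on R: δ is the second distributional derivative of the continuous ramp function x₊ = max(x, 0). The sketch below verifies ⟨(x₊)″, φ⟩ = ⟨x₊, φ″⟩ = φ(0) symbolically, using a rapidly decaying stand-in for a test function (the boundary terms in the integration by parts vanish for this choice).

```python
import sympy as sp

x = sp.symbols('x')
phi = sp.exp(-x**2)   # stand-in test function; phi(0) = 1

# The ramp x_+ = max(x, 0) is continuous; distributionally, (x_+)'' = delta:
# <(x_+)'', phi> = <x_+, phi''> = integral_0^oo x * phi''(x) dx
lhs = sp.integrate(x * sp.diff(phi, x, 2), (x, 0, sp.oo))
rhs = phi.subs(x, 0)  # <delta, phi> = phi(0)
print(lhs, rhs)       # 1 1
```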
== Operations on distributions ==
Many operations which are defined on smooth functions with compact support can also be defined for distributions. In general, if
A
:
D
(
U
)
→
D
(
U
)
{\displaystyle A:{\mathcal {D}}(U)\to {\mathcal {D}}(U)}
is a linear map that is continuous with respect to the weak topology, then it is not always possible to extend
A
{\displaystyle A}
to a map
A
′
:
D
′
(
U
)
→
D
′
(
U
)
{\displaystyle A':{\mathcal {D}}'(U)\to {\mathcal {D}}'(U)}
by classic extension theorems of topology or linear functional analysis. The “distributional” extension of the above linear continuous operator A is possible if and only if A admits a Schwartz adjoint, that is another linear continuous operator B of the same type such that
⟨
A
f
,
g
⟩
=
⟨
f
,
B
g
⟩
{\displaystyle \langle Af,g\rangle =\langle f,Bg\rangle }
,
for every pair of test functions. When such a B exists, it is unique, and the extension A′ is the transpose of the Schwartz adjoint B.
=== Preliminaries: Transpose of a linear operator ===
Operations on distributions and spaces of distributions are often defined using the transpose of a linear operator. This is because the transpose allows for a unified presentation of the many definitions in the theory of distributions and also because its properties are well-known in functional analysis. For instance, the well-known Hermitian adjoint of a linear operator between Hilbert spaces is just the operator's transpose (but with the Riesz representation theorem used to identify each Hilbert space with its continuous dual space). In general, the transpose of a continuous linear map
A
:
X
→
Y
{\displaystyle A:X\to Y}
is the linear map
t
A
:
Y
′
→
X
′
defined by
t
A
(
y
′
)
:=
y
′
∘
A
,
{\displaystyle {}^{t}A:Y'\to X'\qquad {\text{ defined by }}\qquad {}^{t}A(y'):=y'\circ A,}
or equivalently, it is the unique map satisfying
⟨
y
′
,
A
(
x
)
⟩
=
⟨
t
A
(
y
′
)
,
x
⟩
{\displaystyle \langle y',A(x)\rangle =\left\langle {}^{t}A(y'),x\right\rangle }
for all
x
∈
X
{\displaystyle x\in X}
and all
y
′
∈
Y
′
{\displaystyle y'\in Y'}
(the prime symbol in
y
′
{\displaystyle y'}
does not denote a derivative of any kind; it merely indicates that
y
′
{\displaystyle y'}
is an element of the continuous dual space
Y
′
{\displaystyle Y'}
). Since
A
{\displaystyle A}
is continuous, the transpose
t
A
:
Y
′
→
X
′
{\displaystyle {}^{t}A:Y'\to X'}
is also continuous when both duals are endowed with their respective strong dual topologies; it is also continuous when both duals are endowed with their respective weak* topologies (see the articles polar topology and dual system for more details).
In the context of distributions, the characterization of the transpose can be refined slightly. Let
A
:
D
(
U
)
→
D
(
U
)
{\displaystyle A:{\mathcal {D}}(U)\to {\mathcal {D}}(U)}
be a continuous linear map. Then by definition, the transpose of
A
{\displaystyle A}
is the unique linear operator
t
A
:
D
′
(
U
)
→
D
′
(
U
)
{\displaystyle {}^{t}A:{\mathcal {D}}'(U)\to {\mathcal {D}}'(U)}
that satisfies:
⟨
t
A
(
T
)
,
ϕ
⟩
=
⟨
T
,
A
(
ϕ
)
⟩
for all
ϕ
∈
D
(
U
)
and all
T
∈
D
′
(
U
)
.
{\displaystyle \langle {}^{t}A(T),\phi \rangle =\langle T,A(\phi )\rangle \quad {\text{ for all }}\phi \in {\mathcal {D}}(U){\text{ and all }}T\in {\mathcal {D}}'(U).}
Since
D
(
U
)
{\displaystyle {\mathcal {D}}(U)}
is dense in
D
′
(
U
)
{\displaystyle {\mathcal {D}}'(U)}
(here,
D
(
U
)
{\displaystyle {\mathcal {D}}(U)}
actually refers to the set of distributions
{
D
ψ
:
ψ
∈
D
(
U
)
}
{\displaystyle \left\{D_{\psi }:\psi \in {\mathcal {D}}(U)\right\}}
) it is sufficient that the defining equality hold for all distributions of the form
T
=
D
ψ
{\displaystyle T=D_{\psi }}
where
ψ
∈
D
(
U
)
.
{\displaystyle \psi \in {\mathcal {D}}(U).}
Explicitly, this means that a continuous linear map
B
:
D
′
(
U
)
→
D
′
(
U
)
{\displaystyle B:{\mathcal {D}}'(U)\to {\mathcal {D}}'(U)}
is equal to
t
A
{\displaystyle {}^{t}A}
if and only if the condition below holds:
⟨
B
(
D
ψ
)
,
ϕ
⟩
=
⟨
t
A
(
D
ψ
)
,
ϕ
⟩
for all
ϕ
,
ψ
∈
D
(
U
)
{\displaystyle \langle B(D_{\psi }),\phi \rangle =\langle {}^{t}A(D_{\psi }),\phi \rangle \quad {\text{ for all }}\phi ,\psi \in {\mathcal {D}}(U)}
where the right-hand side equals
⟨
t
A
(
D
ψ
)
,
ϕ
⟩
=
⟨
D
ψ
,
A
(
ϕ
)
⟩
=
⟨
ψ
,
A
(
ϕ
)
⟩
=
∫
U
ψ
⋅
A
(
ϕ
)
d
x
.
{\displaystyle \langle {}^{t}A(D_{\psi }),\phi \rangle =\langle D_{\psi },A(\phi )\rangle =\langle \psi ,A(\phi )\rangle =\int _{U}\psi \cdot A(\phi )\,dx.}
=== Differential operators ===
==== Differentiation of distributions ====
Let
A
:
D
(
U
)
→
D
(
U
)
{\displaystyle A:{\mathcal {D}}(U)\to {\mathcal {D}}(U)}
be the partial derivative operator
∂
∂
x
k
.
{\displaystyle {\tfrac {\partial }{\partial x_{k}}}.}
To extend
A
{\displaystyle A}
we compute its transpose:
⟨
t
A
(
D
ψ
)
,
ϕ
⟩
=
∫
U
ψ
(
A
ϕ
)
d
x
(See above.)
=
∫
U
ψ
∂
ϕ
∂
x
k
d
x
=
−
∫
U
ϕ
∂
ψ
∂
x
k
d
x
(integration by parts)
=
−
⟨
∂
ψ
∂
x
k
,
ϕ
⟩
=
−
⟨
A
ψ
,
ϕ
⟩
=
⟨
−
A
ψ
,
ϕ
⟩
{\displaystyle {\begin{aligned}\langle {}^{t}A(D_{\psi }),\phi \rangle &=\int _{U}\psi (A\phi )\,dx&&{\text{(See above.)}}\\&=\int _{U}\psi {\frac {\partial \phi }{\partial x_{k}}}\,dx\\[4pt]&=-\int _{U}\phi {\frac {\partial \psi }{\partial x_{k}}}\,dx&&{\text{(integration by parts)}}\\[4pt]&=-\left\langle {\frac {\partial \psi }{\partial x_{k}}},\phi \right\rangle \\[4pt]&=-\langle A\psi ,\phi \rangle =\langle -A\psi ,\phi \rangle \end{aligned}}}
Therefore
t
A
=
−
A
.
{\displaystyle {}^{t}A=-A.}
Thus, the partial derivative of
T
{\displaystyle T}
with respect to the coordinate
x
k
{\displaystyle x_{k}}
is defined by the formula
⟨
∂
T
∂
x
k
,
ϕ
⟩
=
−
⟨
T
,
∂
ϕ
∂
x
k
⟩
for all
ϕ
∈
D
(
U
)
.
{\displaystyle \left\langle {\frac {\partial T}{\partial x_{k}}},\phi \right\rangle =-\left\langle T,{\frac {\partial \phi }{\partial x_{k}}}\right\rangle \qquad {\text{ for all }}\phi \in {\mathcal {D}}(U).}
With this definition, every distribution is infinitely differentiable, and the derivative in the direction
x
k
{\displaystyle x_{k}}
is a linear operator on
D
′
(
U
)
.
{\displaystyle {\mathcal {D}}'(U).}
More generally, if
α
{\displaystyle \alpha }
is an arbitrary multi-index, then the partial derivative
∂
α
T
{\displaystyle \partial ^{\alpha }T}
of the distribution
T
∈
D
′
(
U
)
{\displaystyle T\in {\mathcal {D}}'(U)}
is defined by
⟨
∂
α
T
,
ϕ
⟩
=
(
−
1
)
|
α
|
⟨
T
,
∂
α
ϕ
⟩
for all
ϕ
∈
D
(
U
)
.
{\displaystyle \langle \partial ^{\alpha }T,\phi \rangle =(-1)^{|\alpha |}\langle T,\partial ^{\alpha }\phi \rangle \qquad {\text{ for all }}\phi \in {\mathcal {D}}(U).}
Differentiation of distributions is a continuous operator on
D
′
(
U
)
;
{\displaystyle {\mathcal {D}}'(U);}
this is an important and desirable property that is not shared by most other notions of differentiation.
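The defining formula can be checked numerically in the classic case of the Heaviside step function H, whose distributional derivative is δ. The sketch below (grid sizes and the bump test function are illustrative choices) evaluates −⟨H, φ′⟩ on a grid and compares it with φ(0).

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 40001)
dx = x[1] - x[0]
H = (x > 0).astype(float)            # Heaviside step function

# Smooth test function supported in (-1, 1):
phi = np.where(np.abs(x) < 1,
               np.exp(-1.0 / np.maximum(1 - x**2, 1e-300)), 0.0)
dphi = np.gradient(phi, dx)

# <H', phi> := -<H, phi'>, which should equal <delta, phi> = phi(0) = 1/e
lhs = -np.sum(H * dphi) * dx
rhs = np.exp(-1.0)
print(lhs, rhs)                      # both approximately 0.3679
```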
If
T
{\displaystyle T}
is a distribution in
R
{\displaystyle \mathbb {R} }
then
lim
x
→
0
T
−
τ
x
T
x
=
T
′
∈
D
′
(
R
)
,
{\displaystyle \lim _{x\to 0}{\frac {T-\tau _{x}T}{x}}=T'\in {\mathcal {D}}'(\mathbb {R} ),}
where
T
′
{\displaystyle T'}
is the derivative of
T
{\displaystyle T}
and
τ
x
{\displaystyle \tau _{x}}
is a translation by
x
;
{\displaystyle x;}
thus the derivative of
T
{\displaystyle T}
may be viewed as a limit of difference quotients.
==== Differential operators acting on smooth functions ====
A linear differential operator in
U
{\displaystyle U}
with smooth coefficients acts on the space of smooth functions on
U
.
{\displaystyle U.}
Given such an operator
P
:=
∑
α
c
α
∂
α
,
{\textstyle P:=\sum _{\alpha }c_{\alpha }\partial ^{\alpha },}
we would like to define a continuous linear map,
D
P
{\displaystyle D_{P}}
that extends the action of
P
{\displaystyle P}
on
C
∞
(
U
)
{\displaystyle C^{\infty }(U)}
to distributions on
U
.
{\displaystyle U.}
In other words, we would like to define
D
P
{\displaystyle D_{P}}
such that the following diagram commutes:
D
′
(
U
)
⟶
D
P
D
′
(
U
)
↑
↑
C
∞
(
U
)
⟶
P
C
∞
(
U
)
{\displaystyle {\begin{matrix}{\mathcal {D}}'(U)&{\stackrel {D_{P}}{\longrightarrow }}&{\mathcal {D}}'(U)\\[2pt]\uparrow &&\uparrow \\[2pt]C^{\infty }(U)&{\stackrel {P}{\longrightarrow }}&C^{\infty }(U)\end{matrix}}}
where the vertical maps are given by assigning
f
∈
C
∞
(
U
)
{\displaystyle f\in C^{\infty }(U)}
its canonical distribution
D
f
∈
D
′
(
U
)
,
{\displaystyle D_{f}\in {\mathcal {D}}'(U),}
which is defined by:
D
f
(
ϕ
)
=
⟨
f
,
ϕ
⟩
:=
∫
U
f
(
x
)
ϕ
(
x
)
d
x
for all
ϕ
∈
D
(
U
)
.
{\displaystyle D_{f}(\phi )=\langle f,\phi \rangle :=\int _{U}f(x)\phi (x)\,dx\quad {\text{ for all }}\phi \in {\mathcal {D}}(U).}
With this notation, the diagram commuting is equivalent to:
D
P
(
f
)
=
D
P
D
f
for all
f
∈
C
∞
(
U
)
.
{\displaystyle D_{P(f)}=D_{P}D_{f}\qquad {\text{ for all }}f\in C^{\infty }(U).}
To find
D
P
,
{\displaystyle D_{P},}
the transpose
t
P
:
D
′
(
U
)
→
D
′
(
U
)
{\displaystyle {}^{t}P:{\mathcal {D}}'(U)\to {\mathcal {D}}'(U)}
of the continuous induced map
P
:
D
(
U
)
→
D
(
U
)
{\displaystyle P:{\mathcal {D}}(U)\to {\mathcal {D}}(U)}
defined by
ϕ
↦
P
(
ϕ
)
{\displaystyle \phi \mapsto P(\phi )}
is considered in the lemma below.
This leads to the following definition of the differential operator on
U
{\displaystyle U}
called the formal transpose of
P
,
{\displaystyle P,}
which will be denoted by
P
∗
{\displaystyle P_{*}}
to avoid confusion with the transpose map; it is defined by
P
∗
:=
∑
α
b
α
∂
α
where
b
α
:=
∑
β
≥
α
(
−
1
)
|
β
|
(
β
α
)
∂
β
−
α
c
β
.
{\displaystyle P_{*}:=\sum _{\alpha }b_{\alpha }\partial ^{\alpha }\quad {\text{ where }}\quad b_{\alpha }:=\sum _{\beta \geq \alpha }(-1)^{|\beta |}{\binom {\beta }{\alpha }}\partial ^{\beta -\alpha }c_{\beta }.}
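The defining property of the formal transpose, ∫ (Pφ)ψ = ∫ φ(P₊ψ), can be verified symbolically in one variable. In the sketch below (coefficient and test functions are illustrative; rapidly decaying Gaussians stand in for compactly supported test functions), the general formula for P = c·d/dx specializes to b₁ = −c and b₀ = −c′, i.e. P₊(g) = −(c·g)′.

```python
import sympy as sp

x = sp.symbols('x')
c = x**2                         # smooth coefficient
phi = x * sp.exp(-x**2)          # rapidly decaying stand-ins for
psi = (1 + x) * sp.exp(-x**2)    # compactly supported test functions

P = lambda g: c * sp.diff(g, x)              # P = c(x) * d/dx
# Formal transpose from the general formula, specialized to P = c*d/dx:
#   b_1 = -c and b_0 = -c', that is, P_*(g) = -(c*g)'
P_star = lambda g: -sp.diff(c * g, x)

lhs = sp.integrate(P(phi) * psi, (x, -sp.oo, sp.oo))
rhs = sp.integrate(phi * P_star(psi), (x, -sp.oo, sp.oo))
print(sp.simplify(lhs - rhs))    # 0
```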
The Lemma combined with the fact that the formal transpose of the formal transpose is the original differential operator, that is,
P
∗
∗
=
P
,
{\displaystyle P_{**}=P,}
enables us to arrive at the correct definition: the formal transpose induces the (continuous) canonical linear operator
P
∗
:
C
c
∞
(
U
)
→
C
c
∞
(
U
)
{\displaystyle P_{*}:C_{c}^{\infty }(U)\to C_{c}^{\infty }(U)}
defined by
ϕ
↦
P
∗
(
ϕ
)
.
{\displaystyle \phi \mapsto P_{*}(\phi ).}
We claim that the transpose of this map,
t
P
∗
:
D
′
(
U
)
→
D
′
(
U
)
,
{\displaystyle {}^{t}P_{*}:{\mathcal {D}}'(U)\to {\mathcal {D}}'(U),}
can be taken as
D
P
.
{\displaystyle D_{P}.}
To see this, for every
ϕ
∈
D
(
U
)
,
{\displaystyle \phi \in {\mathcal {D}}(U),}
compute its action on a distribution of the form
D
f
{\displaystyle D_{f}}
with
f
∈
C
∞
(
U
)
{\displaystyle f\in C^{\infty }(U)}
:
⟨
t
P
∗
(
D
f
)
,
ϕ
⟩
=
⟨
D
P
∗
∗
(
f
)
,
ϕ
⟩
Using Lemma above with
P
∗
in place of
P
=
⟨
D
P
(
f
)
,
ϕ
⟩
P
∗
∗
=
P
{\displaystyle {\begin{aligned}\left\langle {}^{t}P_{*}\left(D_{f}\right),\phi \right\rangle &=\left\langle D_{P_{**}(f)},\phi \right\rangle &&{\text{Using Lemma above with }}P_{*}{\text{ in place of }}P\\&=\left\langle D_{P(f)},\phi \right\rangle &&P_{**}=P\end{aligned}}}
We call the continuous linear operator
D
P
:=
t
P
∗
:
D
′
(
U
)
→
D
′
(
U
)
{\displaystyle D_{P}:={}^{t}P_{*}:{\mathcal {D}}'(U)\to {\mathcal {D}}'(U)}
the differential operator on distributions extending
P
{\displaystyle P}
. Its action on an arbitrary distribution
S
{\displaystyle S}
is defined via:
D
P
(
S
)
(
ϕ
)
=
S
(
P
∗
(
ϕ
)
)
for all
ϕ
∈
D
(
U
)
.
{\displaystyle D_{P}(S)(\phi )=S\left(P_{*}(\phi )\right)\quad {\text{ for all }}\phi \in {\mathcal {D}}(U).}
If
(
T
i
)
i
=
1
∞
{\displaystyle (T_{i})_{i=1}^{\infty }}
converges to
T
∈
D
′
(
U
)
{\displaystyle T\in {\mathcal {D}}'(U)}
then for every multi-index
α
,
(
∂
α
T
i
)
i
=
1
∞
{\displaystyle \alpha ,(\partial ^{\alpha }T_{i})_{i=1}^{\infty }}
converges to
∂
α
T
∈
D
′
(
U
)
.
{\displaystyle \partial ^{\alpha }T\in {\mathcal {D}}'(U).}
==== Multiplication of distributions by smooth functions ====
A differential operator of order 0 is just multiplication by a smooth function. And conversely, if
f
{\displaystyle f}
is a smooth function then
P
:=
f
(
x
)
{\displaystyle P:=f(x)}
is a differential operator of order 0, whose formal transpose is itself (that is,
P
∗
=
P
{\displaystyle P_{*}=P}
). The induced differential operator
D
P
:
D
′
(
U
)
→
D
′
(
U
)
{\displaystyle D_{P}:{\mathcal {D}}'(U)\to {\mathcal {D}}'(U)}
maps a distribution
T
{\displaystyle T}
to a distribution denoted by
f
T
:=
D
P
(
T
)
.
{\displaystyle fT:=D_{P}(T).}
We have thus defined the multiplication of a distribution by a smooth function.
We now give an alternative presentation of the multiplication of a distribution
T
{\displaystyle T}
on
U
{\displaystyle U}
by a smooth function
m
:
U
→
R
.
{\displaystyle m:U\to \mathbb {R} .}
The product
m
T
{\displaystyle mT}
is defined by
⟨
m
T
,
ϕ
⟩
=
⟨
T
,
m
ϕ
⟩
for all
ϕ
∈
D
(
U
)
.
{\displaystyle \langle mT,\phi \rangle =\langle T,m\phi \rangle \qquad {\text{ for all }}\phi \in {\mathcal {D}}(U).}
This definition coincides with the transpose definition since if
M
:
D
(
U
)
→
D
(
U
)
{\displaystyle M:{\mathcal {D}}(U)\to {\mathcal {D}}(U)}
is the operator of multiplication by the function
m
{\displaystyle m}
(that is,
(
M
ϕ
)
(
x
)
=
m
(
x
)
ϕ
(
x
)
{\displaystyle (M\phi )(x)=m(x)\phi (x)}
), then
∫
U
(
M
ϕ
)
(
x
)
ψ
(
x
)
d
x
=
∫
U
m
(
x
)
ϕ
(
x
)
ψ
(
x
)
d
x
=
∫
U
ϕ
(
x
)
m
(
x
)
ψ
(
x
)
d
x
=
∫
U
ϕ
(
x
)
(
M
ψ
)
(
x
)
d
x
,
{\displaystyle \int _{U}(M\phi )(x)\psi (x)\,dx=\int _{U}m(x)\phi (x)\psi (x)\,dx=\int _{U}\phi (x)m(x)\psi (x)\,dx=\int _{U}\phi (x)(M\psi )(x)\,dx,}
so that
t
M
=
M
.
{\displaystyle {}^{t}M=M.}
Under multiplication by smooth functions,
D
′
(
U
)
{\displaystyle {\mathcal {D}}'(U)}
is a module over the ring
C
∞
(
U
)
.
{\displaystyle C^{\infty }(U).}
With this definition of multiplication by a smooth function, the ordinary product rule of calculus remains valid. However, some unusual identities also arise. For example, if
δ
{\displaystyle \delta }
is the Dirac delta distribution on
R
,
{\displaystyle \mathbb {R} ,}
then
m
δ
=
m
(
0
)
δ
,
{\displaystyle m\delta =m(0)\delta ,}
and if
δ
′
{\displaystyle \delta ^{'}}
is the derivative of the delta distribution, then
m
δ
′
=
m
(
0
)
δ
′
−
m
′
δ
=
m
(
0
)
δ
′
−
m
′
(
0
)
δ
.
{\displaystyle m\delta '=m(0)\delta '-m'\delta =m(0)\delta '-m'(0)\delta .}
The bilinear multiplication map
C
∞
(
R
n
)
×
D
′
(
R
n
)
→
D
′
(
R
n
)
{\displaystyle C^{\infty }(\mathbb {R} ^{n})\times {\mathcal {D}}'(\mathbb {R} ^{n})\to {\mathcal {D}}'\left(\mathbb {R} ^{n}\right)}
given by
(
f
,
T
)
↦
f
T
{\displaystyle (f,T)\mapsto fT}
is not continuous; it is, however, hypocontinuous.
Example. The product of any distribution
T
{\displaystyle T}
with the function that is identically 1 on
U
{\displaystyle U}
is equal to
T
.
{\displaystyle T.}
Example. Suppose
(
f
i
)
i
=
1
∞
{\displaystyle (f_{i})_{i=1}^{\infty }}
is a sequence of test functions on
U
{\displaystyle U}
that converges to the constant function
1
∈
C
∞
(
U
)
.
{\displaystyle 1\in C^{\infty }(U).}
For any distribution
T
{\displaystyle T}
on
U
,
{\displaystyle U,}
the sequence
(
f
i
T
)
i
=
1
∞
{\displaystyle (f_{i}T)_{i=1}^{\infty }}
converges to
T
∈
D
′
(
U
)
.
{\displaystyle T\in {\mathcal {D}}'(U).}
If
(
T
i
)
i
=
1
∞
{\displaystyle (T_{i})_{i=1}^{\infty }}
converges to
T
∈
D
′
(
U
)
{\displaystyle T\in {\mathcal {D}}'(U)}
and
(
f
i
)
i
=
1
∞
{\displaystyle (f_{i})_{i=1}^{\infty }}
converges to
f
∈
C
∞
(
U
)
{\displaystyle f\in C^{\infty }(U)}
then
(
f
i
T
i
)
i
=
1
∞
{\displaystyle (f_{i}T_{i})_{i=1}^{\infty }}
converges to
f
T
∈
D
′
(
U
)
.
{\displaystyle fT\in {\mathcal {D}}'(U).}
===== Problem of multiplying distributions =====
It is easy to define the product of a distribution with a smooth function, or more generally the product of two distributions whose singular supports are disjoint. With more effort, it is possible to define a well-behaved product of several distributions provided their wave front sets at each point are compatible. A limitation of the theory of distributions (and hyperfunctions) is that there is no associative product of two distributions extending the product of a distribution by a smooth function, as was proved by Laurent Schwartz in the 1950s. For example, if
p
.
v
.
1
x
{\displaystyle \operatorname {p.v.} {\frac {1}{x}}}
is the distribution obtained by the Cauchy principal value
(
p
.
v
.
1
x
)
(
ϕ
)
=
lim
ε
→
0
+
∫
|
x
|
≥
ε
ϕ
(
x
)
x
d
x
for all
ϕ
∈
S
(
R
)
.
{\displaystyle \left(\operatorname {p.v.} {\frac {1}{x}}\right)(\phi )=\lim _{\varepsilon \to 0^{+}}\int _{|x|\geq \varepsilon }{\frac {\phi (x)}{x}}\,dx\quad {\text{ for all }}\phi \in {\mathcal {S}}(\mathbb {R} ).}
If
δ
{\displaystyle \delta }
is the Dirac delta distribution then
(
δ
×
x
)
×
p
.
v
.
1
x
=
0
{\displaystyle (\delta \times x)\times \operatorname {p.v.} {\frac {1}{x}}=0}
but,
δ
×
(
x
×
p
.
v
.
1
x
)
=
δ
{\displaystyle \delta \times \left(x\times \operatorname {p.v.} {\frac {1}{x}}\right)=\delta }
so the product of a distribution by a smooth function (which is always well-defined) cannot be extended to an associative product on the space of distributions.
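The middle identity x · p.v.(1/x) = 1 can be seen numerically: pairing p.v.(1/x) with x·φ gives ∫φ, because the symmetric truncation cancels the singularity. The following sketch (grid parameters and the bump function are illustrative) folds the principal-value integral onto the positive half-line. Since x·δ = 0, associativity would then force δ = δ·(x·p.v. 1/x) = (δ·x)·p.v.(1/x) = 0, a contradiction.

```python
import numpy as np

def bump(x):
    """Smooth test function supported in (-1, 1)."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 1,
                    np.exp(-1.0 / np.maximum(1 - x**2, 1e-300)), 0.0)

def pv_recip(phi, eps=1e-8, n=400001):
    """<p.v. 1/x, phi>, computed by folding the symmetric region
    eps <= |x| <= 2 onto the interval (eps, 2)."""
    x = np.linspace(eps, 2.0, n)
    dx = x[1] - x[0]
    return np.sum((phi(x) - phi(-x)) / x) * dx

# x * p.v.(1/x) = 1: the pairing with x*phi gives the integral of phi.
lhs = pv_recip(lambda x: x * bump(x))
grid = np.linspace(-2.0, 2.0, 400001)
int_phi = np.sum(bump(grid)) * (grid[1] - grid[0])
print(lhs, int_phi)   # approximately equal
```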
Thus, nonlinear problems cannot in general even be posed, let alone solved, within distribution theory alone. In the context of quantum field theory, however, solutions can be found. In more than two spacetime dimensions the problem is related to the regularization of divergences. Here Henri Epstein and Vladimir Glaser developed the mathematically rigorous (but extremely technical) causal perturbation theory. This does not solve the problem in other situations. Many other interesting theories are nonlinear, for example the Navier–Stokes equations of fluid dynamics.
Several not entirely satisfactory theories of algebras of generalized functions have been developed, among which Colombeau's (simplified) algebra is maybe the most popular in use today.
Inspired by Lyons' rough path theory, Martin Hairer proposed a consistent way of multiplying distributions with certain structures (regularity structures), available in many examples from stochastic analysis, notably stochastic partial differential equations. See also Gubinelli–Imkeller–Perkowski (2015) for a related development based on Bony's paraproduct from Fourier analysis.
=== Composition with a smooth function ===
Let
T
{\displaystyle T}
be a distribution on
U
.
{\displaystyle U.}
Let
V
{\displaystyle V}
be an open set in
R
n
{\displaystyle \mathbb {R} ^{n}}
and
F
:
V
→
U
.
{\displaystyle F:V\to U.}
If
F
{\displaystyle F}
is a submersion then it is possible to define
T
∘
F
∈
D
′
(
V
)
.
{\displaystyle T\circ F\in {\mathcal {D}}'(V).}
This is the composition of the distribution
T
{\displaystyle T}
with
F
{\displaystyle F}
, and is also called the pullback of
T
{\displaystyle T}
along
F
{\displaystyle F}
, sometimes written
F
♯
:
T
↦
F
♯
T
=
T
∘
F
.
{\displaystyle F^{\sharp }:T\mapsto F^{\sharp }T=T\circ F.}
The pullback is often denoted
F
∗
,
{\displaystyle F^{*},}
although this notation should not be confused with the use of '*' to denote the adjoint of a linear mapping.
The condition that
F
{\displaystyle F}
be a submersion is equivalent to the requirement that the Jacobian derivative
d
F
(
x
)
{\displaystyle dF(x)}
of
F
{\displaystyle F}
is a surjective linear map for every
x
∈
V
.
{\displaystyle x\in V.}
A necessary (but not sufficient) condition for extending
F
#
{\displaystyle F^{\#}}
to distributions is that
F
{\displaystyle F}
be an open mapping. The inverse function theorem ensures that a submersion satisfies this condition.
If
F
{\displaystyle F}
is a submersion, then
F
#
{\displaystyle F^{\#}}
is defined on distributions by finding the transpose map. The uniqueness of this extension is guaranteed since
F
#
{\displaystyle F^{\#}}
is a continuous linear operator on
D
(
U
)
.
{\displaystyle {\mathcal {D}}(U).}
Existence, however, requires using the change of variables formula, the inverse function theorem (locally), and a partition of unity argument.
In the special case when
F
{\displaystyle F}
is a diffeomorphism from an open subset
V
{\displaystyle V}
of
R
n
{\displaystyle \mathbb {R} ^{n}}
onto an open subset
U
{\displaystyle U}
of
R
n
{\displaystyle \mathbb {R} ^{n}}
change of variables under the integral gives:
∫
V
ϕ
∘
F
(
x
)
ψ
(
x
)
d
x
=
∫
U
ϕ
(
x
)
ψ
(
F
−
1
(
x
)
)
|
det
d
F
−
1
(
x
)
|
d
x
.
{\displaystyle \int _{V}\phi \circ F(x)\psi (x)\,dx=\int _{U}\phi (x)\psi \left(F^{-1}(x)\right)\left|\det dF^{-1}(x)\right|\,dx.}
In this particular case, then,
F
#
{\displaystyle F^{\#}}
is defined by the transpose formula:
⟨
F
♯
T
,
ϕ
⟩
=
⟨
T
,
|
det
d
(
F
−
1
)
|
ϕ
∘
F
−
1
⟩
.
{\displaystyle \left\langle F^{\sharp }T,\phi \right\rangle =\left\langle T,\left|\det d(F^{-1})\right|\phi \circ F^{-1}\right\rangle .}
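For distributions induced by functions, this transpose formula must agree with composing on the function level, F#(D_f) = D_{f∘F}; this can be verified symbolically for a concrete affine diffeomorphism (the particular F, f, and test function below are illustrative choices, with rapidly decaying Gaussians standing in for test functions).

```python
import sympy as sp

x = sp.symbols('x')
F = 2*x + 1                    # a diffeomorphism of R; F^{-1}(x) = (x - 1)/2
Finv = (x - 1) / 2             # |det d(F^{-1})| = 1/2
f = sp.exp(-x**2)              # T = D_f, the distribution induced by f
phi = (1 + x) * sp.exp(-x**2)  # stand-in test function

# Pullback computed directly on the function level: <D_{f o F}, phi>
lhs = sp.integrate(f.subs(x, F) * phi, (x, -sp.oo, sp.oo))
# Pullback via the transpose formula: <T, |det d(F^{-1})| * (phi o F^{-1})>
rhs = sp.integrate(f * sp.Rational(1, 2) * phi.subs(x, Finv),
                   (x, -sp.oo, sp.oo))
print(sp.N(lhs - rhs))   # 0 (up to numerical precision)
```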
=== Convolution ===
Under some circumstances, it is possible to define the convolution of a function with a distribution, or even the convolution of two distributions.
Recall that if
f
{\displaystyle f}
and
g
{\displaystyle g}
are functions on
R
n
{\displaystyle \mathbb {R} ^{n}}
then we denote by
f
∗
g
{\displaystyle f\ast g}
the convolution of
f
{\displaystyle f}
and
g
,
{\displaystyle g,}
defined at
x
∈
R
n
{\displaystyle x\in \mathbb {R} ^{n}}
to be the integral
(
f
∗
g
)
(
x
)
:=
∫
R
n
f
(
x
−
y
)
g
(
y
)
d
y
=
∫
R
n
f
(
y
)
g
(
x
−
y
)
d
y
{\displaystyle (f\ast g)(x):=\int _{\mathbb {R} ^{n}}f(x-y)g(y)\,dy=\int _{\mathbb {R} ^{n}}f(y)g(x-y)\,dy}
provided that the integral exists. If
1
≤
p
,
q
,
r
≤
∞
{\displaystyle 1\leq p,q,r\leq \infty }
are such that
1
r
=
1
p
+
1
q
−
1
{\textstyle {\frac {1}{r}}={\frac {1}{p}}+{\frac {1}{q}}-1}
then for any functions
f
∈
L
p
(
R
n
)
{\displaystyle f\in L^{p}(\mathbb {R} ^{n})}
and
g
∈
L
q
(
R
n
)
{\displaystyle g\in L^{q}(\mathbb {R} ^{n})}
we have
f
∗
g
∈
L
r
(
R
n
)
{\displaystyle f\ast g\in L^{r}(\mathbb {R} ^{n})}
and
‖
f
∗
g
‖
L
r
≤
‖
f
‖
L
p
‖
g
‖
L
q
.
{\displaystyle \|f\ast g\|_{L^{r}}\leq \|f\|_{L^{p}}\|g\|_{L^{q}}.}
If
f
{\displaystyle f}
and
g
{\displaystyle g}
are continuous functions on
R
n
,
{\displaystyle \mathbb {R} ^{n},}
at least one of which has compact support, then
supp
(
f
∗
g
)
⊆
supp
(
f
)
+
supp
(
g
)
{\displaystyle \operatorname {supp} (f\ast g)\subseteq \operatorname {supp} (f)+\operatorname {supp} (g)}
and if
A
⊆
R
n
{\displaystyle A\subseteq \mathbb {R} ^{n}}
then the value of
f
∗
g
{\displaystyle f\ast g}
on
A
{\displaystyle A}
does not depend on the values of
f
{\displaystyle f}
outside of the Minkowski sum
A
−
supp
(
g
)
=
{
a
−
s
:
a
∈
A
,
s
∈
supp
(
g
)
}
.
{\displaystyle A-\operatorname {supp} (g)=\{a-s:a\in A,s\in \operatorname {supp} (g)\}.}
Importantly, if
g
∈
L
1
(
R
n
)
{\displaystyle g\in L^{1}(\mathbb {R} ^{n})}
has compact support then for any
0
≤
k
≤
∞
,
{\displaystyle 0\leq k\leq \infty ,}
the convolution map
f
↦
f
∗
g
{\displaystyle f\mapsto f\ast g}
is continuous when considered as the map
C
k
(
R
n
)
→
C
k
(
R
n
)
{\displaystyle C^{k}(\mathbb {R} ^{n})\to C^{k}(\mathbb {R} ^{n})}
or as the map
C
c
k
(
R
n
)
→
C
c
k
(
R
n
)
.
{\displaystyle C_{c}^{k}(\mathbb {R} ^{n})\to C_{c}^{k}(\mathbb {R} ^{n}).}
==== Translation and symmetry ====
Given
a
∈
R
n
,
{\displaystyle a\in \mathbb {R} ^{n},}
the translation operator
τ
a
{\displaystyle \tau _{a}}
sends
f
:
R
n
→
C
{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {C} }
to
τ
a
f
:
R
n
→
C
,
{\displaystyle \tau _{a}f:\mathbb {R} ^{n}\to \mathbb {C} ,}
defined by
τ
a
f
(
y
)
=
f
(
y
−
a
)
.
{\displaystyle \tau _{a}f(y)=f(y-a).}
This can be extended by the transpose to distributions in the following way: given a distribution
T
,
{\displaystyle T,}
the translation of
T
{\displaystyle T}
by
a
{\displaystyle a}
is the distribution
τ
a
T
:
D
(
R
n
)
→
C
{\displaystyle \tau _{a}T:{\mathcal {D}}(\mathbb {R} ^{n})\to \mathbb {C} }
defined by
τ
a
T
(
ϕ
)
:=
⟨
T
,
τ
−
a
ϕ
⟩
.
{\displaystyle \tau _{a}T(\phi ):=\left\langle T,\tau _{-a}\phi \right\rangle .}
Given
f
:
R
n
→
C
,
{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {C} ,}
define the function
f
~
:
R
n
→
C
{\displaystyle {\tilde {f}}:\mathbb {R} ^{n}\to \mathbb {C} }
by
f
~
(
x
)
:=
f
(
−
x
)
.
{\displaystyle {\tilde {f}}(x):=f(-x).}
Given a distribution
T
,
{\displaystyle T,}
let
T
~
:
D
(
R
n
)
→
C
{\displaystyle {\tilde {T}}:{\mathcal {D}}(\mathbb {R} ^{n})\to \mathbb {C} }
be the distribution defined by
T
~
(
ϕ
)
:=
T
(
ϕ
~
)
.
{\displaystyle {\tilde {T}}(\phi ):=T\left({\tilde {\phi }}\right).}
The operator
T
↦
T
~
{\displaystyle T\mapsto {\tilde {T}}}
is called the symmetry with respect to the origin.
==== Convolution of a test function with a distribution ====
Convolution with {\displaystyle f\in {\mathcal {D}}(\mathbb {R} ^{n})} defines a linear map:
{\displaystyle {\begin{alignedat}{4}C_{f}:\,&{\mathcal {D}}(\mathbb {R} ^{n})&&\to \,&&{\mathcal {D}}(\mathbb {R} ^{n})\\&g&&\mapsto \,&&f\ast g\\\end{alignedat}}}
which is continuous with respect to the canonical LF space topology on {\displaystyle {\mathcal {D}}(\mathbb {R} ^{n}).}
Convolution of {\displaystyle f} with a distribution {\displaystyle T\in {\mathcal {D}}'(\mathbb {R} ^{n})} can be defined by taking the transpose of {\displaystyle C_{f}} relative to the duality pairing of {\displaystyle {\mathcal {D}}(\mathbb {R} ^{n})} with the space {\displaystyle {\mathcal {D}}'(\mathbb {R} ^{n})} of distributions. If {\displaystyle f,g,\phi \in {\mathcal {D}}(\mathbb {R} ^{n}),} then by Fubini's theorem
{\displaystyle \langle C_{f}g,\phi \rangle =\int _{\mathbb {R} ^{n}}\phi (x)\int _{\mathbb {R} ^{n}}f(x-y)g(y)\,dy\,dx=\left\langle g,C_{\tilde {f}}\phi \right\rangle .}
Extending by continuity, the convolution of {\displaystyle f} with a distribution {\displaystyle T} is defined by
{\displaystyle \langle f\ast T,\phi \rangle =\left\langle T,{\tilde {f}}\ast \phi \right\rangle ,\quad {\text{ for all }}\phi \in {\mathcal {D}}(\mathbb {R} ^{n}).}
An alternative way to define the convolution of a test function {\displaystyle f} and a distribution {\displaystyle T} is to use the translation operator {\displaystyle \tau _{a}.} The convolution of the compactly supported function {\displaystyle f} and the distribution {\displaystyle T} is then the function defined for each {\displaystyle x\in \mathbb {R} ^{n}} by
{\displaystyle (f\ast T)(x)=\left\langle T,\tau _{x}{\tilde {f}}\right\rangle .}
It can be shown that the convolution of a smooth, compactly supported function and a distribution is a smooth function. If the distribution {\displaystyle T} has compact support, and if {\displaystyle f} is a polynomial (resp. an exponential function, an analytic function, the restriction of an entire analytic function on {\displaystyle \mathbb {C} ^{n}} to {\displaystyle \mathbb {R} ^{n},} the restriction of an entire function of exponential type in {\displaystyle \mathbb {C} ^{n}} to {\displaystyle \mathbb {R} ^{n}}), then the same is true of {\displaystyle T\ast f.}
If the distribution {\displaystyle T} has compact support as well, then {\displaystyle f\ast T} is a compactly supported function, and the Titchmarsh convolution theorem (Hörmander 1983, Theorem 4.3.3) implies that:
{\displaystyle \operatorname {ch} (\operatorname {supp} (f\ast T))=\operatorname {ch} (\operatorname {supp} (f))+\operatorname {ch} (\operatorname {supp} (T))}
where {\displaystyle \operatorname {ch} } denotes the convex hull and {\displaystyle \operatorname {supp} } denotes the support.
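The formula {\displaystyle (f\ast T)(x)=\left\langle T,\tau _{x}{\tilde {f}}\right\rangle } can be checked in the simplest case by hand: if {\displaystyle T} is the Dirac measure at a point a (so the pairing is evaluation at a), then the convolution is the translate {\displaystyle \tau _{a}f.} The following Python sketch (illustrative code, not part of the article's sources; the bump function and names are chosen for the example) traces the definition literally:

```python
import math

# A smooth, compactly supported bump on (-1, 1); values outside are 0.
def f(x):
    if abs(x) >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - x * x))

# f~(y) := f(-y) and (tau_x g)(y) := g(y - x).
# For T = delta_a (the Dirac measure at a), the pairing <T, phi> is phi(a),
# so (f * delta_a)(x) = <delta_a, tau_x f~> = f~(a - x) = f(x - a).
def convolve_with_dirac(f, a, x):
    f_tilde = lambda y: f(-y)
    tau_x_f_tilde = lambda y: f_tilde(y - x)
    return tau_x_f_tilde(a)  # pairing with delta_a = evaluation at a

a = 0.75
for x in [-0.5, 0.0, 0.3, 1.2]:
    assert convolve_with_dirac(f, a, x) == f(x - a)
```

So convolving with a point mass at a reproduces the translate {\displaystyle x\mapsto f(x-a),} as the definition predicts.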
==== Convolution of a smooth function with a distribution ====
Let {\displaystyle f\in C^{\infty }(\mathbb {R} ^{n})} and {\displaystyle T\in {\mathcal {D}}'(\mathbb {R} ^{n})} and assume that at least one of {\displaystyle f} and {\displaystyle T} has compact support. The convolution of {\displaystyle f} and {\displaystyle T,} denoted by {\displaystyle f\ast T} or by {\displaystyle T\ast f,} is the smooth function:
{\displaystyle {\begin{alignedat}{4}f\ast T:\,&\mathbb {R} ^{n}&&\to \,&&\mathbb {C} \\&x&&\mapsto \,&&\left\langle T,\tau _{x}{\tilde {f}}\right\rangle \\\end{alignedat}}}
satisfying:
{\displaystyle {\begin{aligned}&\operatorname {supp} (f\ast T)\subseteq \operatorname {supp} (f)+\operatorname {supp} (T)\\[6pt]&{\text{ for all }}p\in \mathbb {N} ^{n}:\quad {\begin{cases}\partial ^{p}\left\langle T,\tau _{x}{\tilde {f}}\right\rangle =\left\langle T,\partial ^{p}\tau _{x}{\tilde {f}}\right\rangle \\\partial ^{p}(T\ast f)=(\partial ^{p}T)\ast f=T\ast (\partial ^{p}f).\end{cases}}\end{aligned}}}
Let {\displaystyle M} be the map {\displaystyle f\mapsto T\ast f}. If {\displaystyle T} is a distribution, then {\displaystyle M} is continuous as a map {\displaystyle {\mathcal {D}}(\mathbb {R} ^{n})\to C^{\infty }(\mathbb {R} ^{n})}. If {\displaystyle T} also has compact support, then {\displaystyle M} is also continuous as the map {\displaystyle C^{\infty }(\mathbb {R} ^{n})\to C^{\infty }(\mathbb {R} ^{n})} and continuous as the map {\displaystyle {\mathcal {D}}(\mathbb {R} ^{n})\to {\mathcal {D}}(\mathbb {R} ^{n}).}
If {\displaystyle L:{\mathcal {D}}(\mathbb {R} ^{n})\to C^{\infty }(\mathbb {R} ^{n})} is a continuous linear map such that {\displaystyle L\partial ^{\alpha }\phi =\partial ^{\alpha }L\phi } for all {\displaystyle \alpha } and all {\displaystyle \phi \in {\mathcal {D}}(\mathbb {R} ^{n}),} then there exists a distribution {\displaystyle T\in {\mathcal {D}}'(\mathbb {R} ^{n})} such that {\displaystyle L\phi =T\ast \phi } for all {\displaystyle \phi \in {\mathcal {D}}(\mathbb {R} ^{n}).}
Example. Let {\displaystyle H} be the Heaviside function on {\displaystyle \mathbb {R} .} For any {\displaystyle \phi \in {\mathcal {D}}(\mathbb {R} ),}
{\displaystyle (H\ast \phi )(x)=\int _{-\infty }^{x}\phi (t)\,dt.}
Let {\displaystyle \delta } be the Dirac measure at 0 and let {\displaystyle \delta '} be its derivative as a distribution. Then {\displaystyle \delta '\ast H=\delta } and {\displaystyle 1\ast \delta '=0.} Importantly, the associative law fails to hold:
{\displaystyle 1=1\ast \delta =1\ast (\delta '\ast H)\neq (1\ast \delta ')\ast H=0\ast H=0.}
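The identity {\displaystyle (H\ast \phi )(x)=\int _{-\infty }^{x}\phi (t)\,dt} is easy to verify numerically. The sketch below (illustrative; the polynomial bump {\displaystyle \phi (t)=(1-t^{2})^{2}} on [-1, 1] is chosen because its integrals are known exactly) computes the convolution by quadrature:

```python
# Numerical check of (H * phi)(x) = integral of phi from -inf to x, for the
# bump phi(t) = (1 - t^2)^2 on [-1, 1] (0 outside), whose total mass is 16/15.
def phi(t):
    return (1.0 - t * t) ** 2 if abs(t) < 1.0 else 0.0

def H_conv_phi(x, n=20000):
    # (H * phi)(x) = integral of H(x - t) phi(t) dt = integral of phi over [-1, min(x, 1)]
    lo, hi = -1.0, min(x, 1.0)
    if hi <= lo:
        return 0.0
    h = (hi - lo) / n
    s = 0.5 * (phi(lo) + phi(hi)) + sum(phi(lo + i * h) for i in range(1, n))
    return s * h

assert abs(H_conv_phi(2.0) - 16.0 / 15.0) < 1e-6   # full mass once x >= 1
assert abs(H_conv_phi(0.0) - 8.0 / 15.0) < 1e-6    # half the mass, by symmetry
```

For x past the support of {\displaystyle \phi } the value saturates at the total integral, exactly as the antiderivative formula requires.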
==== Convolution of distributions ====
It is also possible to define the convolution of two distributions {\displaystyle S} and {\displaystyle T} on {\displaystyle \mathbb {R} ^{n},} provided one of them has compact support. Informally, to define {\displaystyle S\ast T} where {\displaystyle T} has compact support, the idea is to extend the definition of the convolution {\displaystyle \,\ast \,} to a linear operation on distributions so that the associativity formula
{\displaystyle S\ast (T\ast \phi )=(S\ast T)\ast \phi }
continues to hold for all test functions {\displaystyle \phi .}
It is also possible to provide a more explicit characterization of the convolution of distributions. Suppose that {\displaystyle S} and {\displaystyle T} are distributions and that {\displaystyle S} has compact support. Then the linear maps
{\displaystyle {\begin{alignedat}{9}\bullet \ast {\tilde {S}}:\,&{\mathcal {D}}(\mathbb {R} ^{n})&&\to \,&&{\mathcal {D}}(\mathbb {R} ^{n})&&\quad {\text{ and }}\quad &&\bullet \ast {\tilde {T}}:\,&&{\mathcal {D}}(\mathbb {R} ^{n})&&\to \,&&{\mathcal {D}}(\mathbb {R} ^{n})\\&f&&\mapsto \,&&f\ast {\tilde {S}}&&&&&&f&&\mapsto \,&&f\ast {\tilde {T}}\\\end{alignedat}}}
are continuous. The transposes of these maps:
{\displaystyle {}^{t}\left(\bullet \ast {\tilde {S}}\right):{\mathcal {D}}'(\mathbb {R} ^{n})\to {\mathcal {D}}'(\mathbb {R} ^{n})\qquad {}^{t}\left(\bullet \ast {\tilde {T}}\right):{\mathcal {E}}'(\mathbb {R} ^{n})\to {\mathcal {D}}'(\mathbb {R} ^{n})}
are consequently continuous, and it can also be shown that
{\displaystyle {}^{t}\left(\bullet \ast {\tilde {S}}\right)(T)={}^{t}\left(\bullet \ast {\tilde {T}}\right)(S).}
This common value is called the convolution of {\displaystyle S} and {\displaystyle T} and it is a distribution that is denoted by {\displaystyle S\ast T} or {\displaystyle T\ast S.} It satisfies
{\displaystyle \operatorname {supp} (S\ast T)\subseteq \operatorname {supp} (S)+\operatorname {supp} (T).}
If {\displaystyle S} and {\displaystyle T} are two distributions, at least one of which has compact support, then for any {\displaystyle a\in \mathbb {R} ^{n},}
{\displaystyle \tau _{a}(S\ast T)=\left(\tau _{a}S\right)\ast T=S\ast \left(\tau _{a}T\right).}
If {\displaystyle T} is a distribution in {\displaystyle \mathbb {R} ^{n}} and if {\displaystyle \delta } is a Dirac measure then {\displaystyle T\ast \delta =T=\delta \ast T}; thus {\displaystyle \delta } is the identity element of the convolution operation. Moreover, if {\displaystyle f} is a function then {\displaystyle f\ast \delta ^{\prime }=f^{\prime }=\delta ^{\prime }\ast f} where now the associativity of convolution implies that {\displaystyle f^{\prime }\ast g=g^{\prime }\ast f} for all functions {\displaystyle f} and {\displaystyle g.}
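The identity {\displaystyle f\ast \delta ^{\prime }=f^{\prime }} can be seen numerically by replacing {\displaystyle \delta ^{\prime }} with the derivative of a narrow Gaussian, a standard mollifier approximation. The following sketch (illustrative code; the width {\displaystyle \varepsilon } and tolerances are ad hoc choices) recovers {\displaystyle \cos } from {\displaystyle \sin }:

```python
import math

# delta' approximated by the derivative of a narrow Gaussian; convolving a
# smooth f with it should approximate f'.  Here f = sin, so f' = cos.
def delta_prime_eps(x, eps):
    return -x / (eps ** 3 * math.sqrt(2 * math.pi)) * math.exp(-x * x / (2 * eps * eps))

def conv_with_delta_prime(f, x0, eps=0.02, n=4000):
    # (f * delta'_eps)(x0) = integral of f(x0 - y) delta'_eps(y) dy,
    # truncated to y in [-8 eps, 8 eps] where the Gaussian is non-negligible.
    lo, hi = -8 * eps, 8 * eps
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        y = lo + i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoid weights
        s += w * f(x0 - y) * delta_prime_eps(y, eps)
    return s * h

for x0 in [0.0, 0.7, 1.5]:
    assert abs(conv_with_delta_prime(math.sin, x0) - math.cos(x0)) < 1e-3
```

As {\displaystyle \varepsilon \to 0} the approximation converges to {\displaystyle f^{\prime },} which is the sense in which convolution with {\displaystyle \delta ^{\prime }} differentiates.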
Suppose that it is {\displaystyle T} that has compact support. For {\displaystyle \phi \in {\mathcal {D}}(\mathbb {R} ^{n})} consider the function
{\displaystyle \psi (x)=\langle T,\tau _{-x}\phi \rangle .}
It can be readily shown that this defines a smooth function of {\displaystyle x,} which moreover has compact support. The convolution of {\displaystyle S} and {\displaystyle T} is defined by
{\displaystyle \langle S\ast T,\phi \rangle =\langle S,\psi \rangle .}
This generalizes the classical notion of convolution of functions and is compatible with differentiation in the following sense: for every multi-index {\displaystyle \alpha ,}
{\displaystyle \partial ^{\alpha }(S\ast T)=(\partial ^{\alpha }S)\ast T=S\ast (\partial ^{\alpha }T).}
The convolution of a finite number of distributions, all of which (except possibly one) have compact support, is associative.
This definition of convolution remains valid under less restrictive assumptions about {\displaystyle S} and {\displaystyle T.}
The convolution of distributions with compact support induces a continuous bilinear map {\displaystyle {\mathcal {E}}'\times {\mathcal {E}}'\to {\mathcal {E}}'} defined by {\displaystyle (S,T)\mapsto S*T,} where {\displaystyle {\mathcal {E}}'} denotes the space of distributions with compact support. However, the convolution map as a function {\displaystyle {\mathcal {E}}'\times {\mathcal {D}}'\to {\mathcal {D}}'} is not continuous, although it is separately continuous. The convolution maps {\displaystyle {\mathcal {D}}(\mathbb {R} ^{n})\times {\mathcal {D}}'\to {\mathcal {D}}'} and {\displaystyle {\mathcal {D}}(\mathbb {R} ^{n})\times {\mathcal {D}}'\to {\mathcal {D}}(\mathbb {R} ^{n})} given by {\displaystyle (f,T)\mapsto f*T} both fail to be continuous. Each of these non-continuous maps is, however, separately continuous and hypocontinuous.
==== Convolution versus multiplication ====
In general, regularity is required for multiplication products, and locality is required for convolution products. This is expressed in the following extension of the convolution theorem, which guarantees the existence of both convolution and multiplication products. Let {\displaystyle F(\alpha )=f\in {\mathcal {O}}'_{C}} be a rapidly decreasing tempered distribution or, equivalently, let {\displaystyle F(f)=\alpha \in {\mathcal {O}}_{M}} be an ordinary (slowly growing, smooth) function within the space of tempered distributions, and let {\displaystyle F} be the normalized (unitary, ordinary frequency) Fourier transform. Then, according to Schwartz (1951),
{\displaystyle F(f*g)=F(f)\cdot F(g)\qquad {\text{ and }}\qquad F(\alpha \cdot g)=F(\alpha )*F(g)}
hold within the space of tempered distributions. In particular, these equations become the Poisson summation formula if {\displaystyle g\equiv \operatorname {\text{Ш}} } is the Dirac comb. The space of all rapidly decreasing tempered distributions is also called the space of convolution operators {\displaystyle {\mathcal {O}}'_{C}} and the space of all ordinary functions within the space of tempered distributions is also called the space of multiplication operators {\displaystyle {\mathcal {O}}_{M}.} More generally, {\displaystyle F({\mathcal {O}}'_{C})={\mathcal {O}}_{M}} and {\displaystyle F({\mathcal {O}}_{M})={\mathcal {O}}'_{C}.}
A particular case is the Paley-Wiener-Schwartz theorem, which states that {\displaystyle F({\mathcal {E}}')=\operatorname {PW} } and {\displaystyle F(\operatorname {PW} )={\mathcal {E}}'.} This is because {\displaystyle {\mathcal {E}}'\subseteq {\mathcal {O}}'_{C}} and {\displaystyle \operatorname {PW} \subseteq {\mathcal {O}}_{M}.} In other words, compactly supported tempered distributions {\displaystyle {\mathcal {E}}'} belong to the space of convolution operators {\displaystyle {\mathcal {O}}'_{C}} and Paley-Wiener functions {\displaystyle \operatorname {PW} ,} better known as bandlimited functions, belong to the space of multiplication operators {\displaystyle {\mathcal {O}}_{M}.}
For example, let {\displaystyle g\equiv \operatorname {\text{Ш}} \in {\mathcal {S}}'} be the Dirac comb and {\displaystyle f\equiv \delta \in {\mathcal {E}}'} be the Dirac delta; then {\displaystyle \alpha \equiv 1\in \operatorname {PW} } is the function that is constantly one and both equations yield the Dirac-comb identity. Another example is to let {\displaystyle g} be the Dirac comb and {\displaystyle f\equiv \operatorname {rect} \in {\mathcal {E}}'} be the rectangular function; then {\displaystyle \alpha \equiv \operatorname {sinc} \in \operatorname {PW} } is the sinc function and both equations yield the classical sampling theorem for suitable {\displaystyle \operatorname {rect} } functions. More generally, if {\displaystyle g} is the Dirac comb and {\displaystyle f\in {\mathcal {S}}\subseteq {\mathcal {O}}'_{C}\cap {\mathcal {O}}_{M}} is a smooth window function (Schwartz function), for example the Gaussian, then {\displaystyle \alpha \in {\mathcal {S}}} is another smooth window function (Schwartz function). They are known as mollifiers, especially in partial differential equations theory, or as regularizers in physics, because they allow turning generalized functions into regular functions.
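The finite-dimensional shadow of {\displaystyle F(f*g)=F(f)\cdot F(g)} is the circular convolution theorem for the discrete Fourier transform, which can be checked directly. The sketch below (illustrative code with a naive DFT; no external libraries are assumed) verifies it on two short sequences:

```python
import cmath

# Discrete analogue of F(f*g) = F(f)·F(g): the circular convolution theorem,
# checked with a naive O(N^2) DFT.
def dft(a):
    N = len(a)
    return [sum(a[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def circular_convolve(a, b):
    N = len(a)
    return [sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)]

a = [1.0, 2.0, 0.0, -1.0]
b = [0.5, 0.0, 1.0, 0.0]
lhs = dft(circular_convolve(a, b))            # DFT of the convolution
rhs = [x * y for x, y in zip(dft(a), dft(b))]  # pointwise product of DFTs
assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
```

The distributional statement above extends exactly this exchange of convolution and multiplication to the pairing of {\displaystyle {\mathcal {O}}'_{C}} with {\displaystyle {\mathcal {O}}_{M}.}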
=== Tensor products of distributions ===
Let {\displaystyle U\subseteq \mathbb {R} ^{m}} and {\displaystyle V\subseteq \mathbb {R} ^{n}} be open sets. Assume all vector spaces to be over the field {\displaystyle \mathbb {F} ,} where {\displaystyle \mathbb {F} =\mathbb {R} } or {\displaystyle \mathbb {C} .} For {\displaystyle f\in {\mathcal {D}}(U\times V)} define for every {\displaystyle u\in U} and every {\displaystyle v\in V} the following functions:
{\displaystyle {\begin{alignedat}{9}f_{u}:\,&V&&\to \,&&\mathbb {F} &&\quad {\text{ and }}\quad &&f^{v}:\,&&U&&\to \,&&\mathbb {F} \\&y&&\mapsto \,&&f(u,y)&&&&&&x&&\mapsto \,&&f(x,v)\\\end{alignedat}}}
Given {\displaystyle S\in {\mathcal {D}}^{\prime }(U)} and {\displaystyle T\in {\mathcal {D}}^{\prime }(V),} define the following functions:
{\displaystyle {\begin{alignedat}{9}\langle S,f^{\bullet }\rangle :\,&V&&\to \,&&\mathbb {F} &&\quad {\text{ and }}\quad &&\langle T,f_{\bullet }\rangle :\,&&U&&\to \,&&\mathbb {F} \\&v&&\mapsto \,&&\langle S,f^{v}\rangle &&&&&&u&&\mapsto \,&&\langle T,f_{u}\rangle \\\end{alignedat}}}
where {\displaystyle \langle T,f_{\bullet }\rangle \in {\mathcal {D}}(U)} and {\displaystyle \langle S,f^{\bullet }\rangle \in {\mathcal {D}}(V).} These definitions associate every {\displaystyle S\in {\mathcal {D}}'(U)} and {\displaystyle T\in {\mathcal {D}}'(V)} with the (respective) continuous linear map:
{\displaystyle {\begin{alignedat}{9}\,&&{\mathcal {D}}(U\times V)&\to \,&&{\mathcal {D}}(V)&&\quad {\text{ and }}\quad &&\,&{\mathcal {D}}(U\times V)&&\to \,&&{\mathcal {D}}(U)\\&&f\ &\mapsto \,&&\langle S,f^{\bullet }\rangle &&&&&f\ &&\mapsto \,&&\langle T,f_{\bullet }\rangle \\\end{alignedat}}}
Moreover, if either {\displaystyle S} (resp. {\displaystyle T}) has compact support then it also induces a continuous linear map of {\displaystyle C^{\infty }(U\times V)\to C^{\infty }(V)} (resp. {\displaystyle C^{\infty }(U\times V)\to C^{\infty }(U)}).
The tensor product of {\displaystyle S\in {\mathcal {D}}'(U)} and {\displaystyle T\in {\mathcal {D}}'(V),} denoted by {\displaystyle S\otimes T} or {\displaystyle T\otimes S,} is the distribution in {\displaystyle U\times V} defined by:
{\displaystyle (S\otimes T)(f):=\langle S,\langle T,f_{\bullet }\rangle \rangle =\langle T,\langle S,f^{\bullet }\rangle \rangle .}
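When {\displaystyle S} and {\displaystyle T} are given by integrable densities, the equality of the two iterated pairings in the definition of {\displaystyle S\otimes T} is exactly Fubini's theorem. The following sketch (illustrative code; the densities and the domain [0,1]×[0,1] are arbitrary choices for the example) checks both orders of pairing agree:

```python
# For S = T_s and T = T_t with integrable densities s(u), t(v), the identity
# <S, <T, f_.>> = <T, <S, f^.>> is Fubini's theorem; checked by midpoint rule.
def pair(density, g, n=400):
    # <density, g> = integral over [0, 1] of density(x) g(x) dx (midpoint rule)
    h = 1.0 / n
    return sum(density((i + 0.5) * h) * g((i + 0.5) * h) for i in range(n)) * h

s = lambda u: 1.0 + u          # density of S
t = lambda v: v * v            # density of T
f = lambda u, v: u + 2.0 * v   # a smooth function of (u, v)

lhs = pair(s, lambda u: pair(t, lambda v: f(u, v)))   # <S, <T, f_u>>
rhs = pair(t, lambda v: pair(s, lambda u: f(u, v)))   # <T, <S, f^v>>
assert abs(lhs - rhs) < 1e-9
assert abs(lhs - 37.0 / 36.0) < 1e-4  # exact value of the double integral
```

For general distributions the same equality is a theorem (the Schwartz kernel machinery), not an application of Fubini, but the density case is the model it generalizes.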
== Spaces of distributions ==
For all {\displaystyle 0<k<\infty } and all {\displaystyle 1<p<\infty ,} every one of the following canonical injections is continuous and has an image (also called the range) that is a dense subset of its codomain:
{\displaystyle {\begin{matrix}C_{c}^{\infty }(U)&\to &C_{c}^{k}(U)&\to &C_{c}^{0}(U)&\to &L_{c}^{\infty }(U)&\to &L_{c}^{p}(U)&\to &L_{c}^{1}(U)\\\downarrow &&\downarrow &&\downarrow \\C^{\infty }(U)&\to &C^{k}(U)&\to &C^{0}(U)\\{}\end{matrix}}}
where the topologies on {\displaystyle L_{c}^{q}(U)} ({\displaystyle 1\leq q\leq \infty }) are defined as direct limits of the spaces {\displaystyle L_{c}^{q}(K)} in a manner analogous to how the topologies on {\displaystyle C_{c}^{k}(U)} were defined (so in particular, they are not the usual norm topologies). The range of each of the maps above (and of any composition of the maps above) is dense in its codomain.
Suppose that {\displaystyle X} is one of the spaces {\displaystyle C_{c}^{k}(U)} (for {\displaystyle k\in \{0,1,\ldots ,\infty \}}) or {\displaystyle L_{c}^{p}(U)} (for {\displaystyle 1\leq p\leq \infty }) or {\displaystyle L^{p}(U)} (for {\displaystyle 1\leq p<\infty }). Because the canonical injection {\displaystyle \operatorname {In} _{X}:C_{c}^{\infty }(U)\to X} is a continuous injection whose image is dense in the codomain, this map's transpose {\displaystyle {}^{t}\operatorname {In} _{X}:X'_{b}\to {\mathcal {D}}'(U)=\left(C_{c}^{\infty }(U)\right)'_{b}} is a continuous injection. This injective transpose map thus allows the continuous dual space {\displaystyle X'} of {\displaystyle X} to be identified with a certain vector subspace of the space {\displaystyle {\mathcal {D}}'(U)} of all distributions (specifically, it is identified with the image of this transpose map). This transpose map is continuous, but it is not necessarily a topological embedding.
A linear subspace of {\displaystyle {\mathcal {D}}'(U)} carrying a locally convex topology that is finer than the subspace topology induced on it by {\displaystyle {\mathcal {D}}'(U)=\left(C_{c}^{\infty }(U)\right)'_{b}} is called a space of distributions.
Almost all of the spaces of distributions mentioned in this article arise in this way (for example, tempered distributions, restrictions, distributions of order {\displaystyle \leq } some integer, distributions induced by a positive Radon measure, distributions induced by an {\displaystyle L^{p}}-function, etc.) and any representation theorem about the continuous dual space of {\displaystyle X} may, through the transpose {\displaystyle {}^{t}\operatorname {In} _{X}:X'_{b}\to {\mathcal {D}}'(U),} be transferred directly to elements of the space {\displaystyle \operatorname {Im} \left({}^{t}\operatorname {In} _{X}\right).}
=== Radon measures ===
The inclusion map {\displaystyle \operatorname {In} :C_{c}^{\infty }(U)\to C_{c}^{0}(U)} is a continuous injection whose image is dense in its codomain, so the transpose {\displaystyle {}^{t}\operatorname {In} :(C_{c}^{0}(U))'_{b}\to {\mathcal {D}}'(U)=(C_{c}^{\infty }(U))'_{b}} is also a continuous injection.
The continuous dual space {\displaystyle (C_{c}^{0}(U))'_{b}} can be identified as the space of Radon measures, where there is a one-to-one correspondence between the continuous linear functionals {\displaystyle T\in (C_{c}^{0}(U))'_{b}} and integration with respect to a Radon measure; that is: if {\displaystyle T\in (C_{c}^{0}(U))'_{b}} then there exists a Radon measure {\displaystyle \mu } on U such that for all {\textstyle f\in C_{c}^{0}(U),T(f)=\int _{U}f\,d\mu ,} and conversely, if {\displaystyle \mu } is a Radon measure on U then the linear functional on {\displaystyle C_{c}^{0}(U)} defined by sending {\textstyle f\in C_{c}^{0}(U)} to {\textstyle \int _{U}f\,d\mu } is continuous.
Through the injection {\displaystyle {}^{t}\operatorname {In} :(C_{c}^{0}(U))'_{b}\to {\mathcal {D}}'(U),} every Radon measure becomes a distribution on U. If {\displaystyle f} is a locally integrable function on U then the distribution {\textstyle \phi \mapsto \int _{U}f(x)\phi (x)\,dx} is a Radon measure; so Radon measures form a large and important space of distributions.
The following is the theorem on the structure of distributions that are Radon measures, which shows that every Radon measure can be written as a sum of derivatives of locally {\displaystyle L^{\infty }} functions on U:
==== Positive Radon measures ====
A linear function {\displaystyle T} on a space of functions is called positive if, whenever a function {\displaystyle f} that belongs to the domain of {\displaystyle T} is non-negative (that is, {\displaystyle f} is real-valued and {\displaystyle f\geq 0}), then {\displaystyle T(f)\geq 0.} One may show that every positive linear functional on {\displaystyle C_{c}^{0}(U)} is necessarily continuous (that is, necessarily a Radon measure). Lebesgue measure is an example of a positive Radon measure.
==== Locally integrable functions as distributions ====
One particularly important class of Radon measures are those induced by locally integrable functions. A function {\displaystyle f:U\to \mathbb {R} } is called locally integrable if it is Lebesgue integrable over every compact subset K of U. This is a large class of functions that includes all continuous functions and all {\displaystyle L^{p}} functions. The topology on {\displaystyle {\mathcal {D}}(U)} is defined in such a fashion that any locally integrable function {\displaystyle f} yields a continuous linear functional on {\displaystyle {\mathcal {D}}(U)} – that is, an element of {\displaystyle {\mathcal {D}}'(U)} – denoted here by {\displaystyle T_{f},} whose value on the test function {\displaystyle \phi } is given by the Lebesgue integral:
{\displaystyle \langle T_{f},\phi \rangle =\int _{U}f\phi \,dx.}
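The point of local integrability is that {\displaystyle f} may be unbounded and still define a distribution. The sketch below (illustrative code; the integrand {\displaystyle 1-x} stands in for a test function and is chosen so the exact value is known) evaluates the pairing for the unbounded but locally integrable {\displaystyle f(x)=x^{-1/2}} on {\displaystyle U=(0,1)}:

```python
# f(x) = x^(-1/2) on U = (0, 1) is unbounded yet locally integrable, so
# T_f(phi) = integral over (0, 1) of x^(-1/2) phi(x) dx is finite.
# Substituting x = u^2 removes the singularity:
#   T_f(phi) = integral over (0, 1) of 2 * phi(u^2) du.
# For the sample integrand phi(x) = 1 - x the exact value is 2 - 2/3 = 4/3.
def T_f(phi, n=100000):
    h = 1.0 / n
    return sum(2.0 * phi(((i + 0.5) * h) ** 2) for i in range(n)) * h

assert abs(T_f(lambda x: 1.0 - x) - 4.0 / 3.0) < 1e-6
```

A bounded {\displaystyle \phi } paired against this singular {\displaystyle f} always gives a finite number, which is exactly what makes {\displaystyle T_{f}} a well-defined linear functional on test functions.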
Conventionally, one abuses notation by identifying {\displaystyle T_{f}} with {\displaystyle f,} provided no confusion can arise, and thus the pairing between {\displaystyle T_{f}} and {\displaystyle \phi } is often written
{\displaystyle \langle f,\phi \rangle =\langle T_{f},\phi \rangle .}
If {\displaystyle f} and {\displaystyle g} are two locally integrable functions, then the associated distributions {\displaystyle T_{f}} and {\displaystyle T_{g}} are equal to the same element of {\displaystyle {\mathcal {D}}'(U)} if and only if {\displaystyle f} and {\displaystyle g} are equal almost everywhere (see, for instance, Hörmander (1983, Theorem 1.2.5)). Similarly, every Radon measure {\displaystyle \mu } on {\displaystyle U} defines an element of {\displaystyle {\mathcal {D}}'(U)} whose value on the test function {\displaystyle \phi } is {\textstyle \int \phi \,d\mu .} As above, it is conventional to abuse notation and write the pairing between a Radon measure {\displaystyle \mu } and a test function {\displaystyle \phi } as {\displaystyle \langle \mu ,\phi \rangle .} Conversely, as shown in a theorem by Schwartz (similar to the Riesz representation theorem), every distribution which is non-negative on non-negative functions is of this form for some (positive) Radon measure.
==== Test functions as distributions ====
The test functions are themselves locally integrable, and so define distributions. The space of test functions {\displaystyle C_{c}^{\infty }(U)} is sequentially dense in {\displaystyle {\mathcal {D}}'(U)} with respect to the strong topology on {\displaystyle {\mathcal {D}}'(U).} This means that for any {\displaystyle T\in {\mathcal {D}}'(U),} there is a sequence of test functions {\displaystyle (\phi _{i})_{i=1}^{\infty }} that converges to {\displaystyle T\in {\mathcal {D}}'(U)} (in its strong dual topology) when considered as a sequence of distributions. Equivalently,
{\displaystyle \langle \phi _{i},\psi \rangle \to \langle T,\psi \rangle \qquad {\text{ for all }}\psi \in {\mathcal {D}}(U).}
=== Distributions with compact support ===
The inclusion map {\displaystyle \operatorname {In} :C_{c}^{\infty }(U)\to C^{\infty }(U)} is a continuous injection whose image is dense in its codomain, so the transpose map {\displaystyle {}^{t}\operatorname {In} :(C^{\infty }(U))'_{b}\to {\mathcal {D}}'(U)=(C_{c}^{\infty }(U))'_{b}} is also a continuous injection. Thus the image of the transpose, denoted by {\displaystyle {\mathcal {E}}'(U),} forms a space of distributions.
The elements of {\displaystyle {\mathcal {E}}'(U)=(C^{\infty }(U))'_{b}} can be identified as the space of distributions with compact support. Explicitly, if {\displaystyle T} is a distribution on U then the following are equivalent: (1) {\displaystyle T\in {\mathcal {E}}'(U)}; (2) the support of {\displaystyle T} is compact; (3) the restriction of {\displaystyle T} to {\displaystyle C_{c}^{\infty }(U),} when that space is equipped with the subspace topology inherited from {\displaystyle C^{\infty }(U)} (a coarser topology than the canonical LF topology), is continuous; (4) there is a compact subset K of U such that for every test function {\displaystyle \phi } whose support is completely outside of K, we have {\displaystyle T(\phi )=0.}
Compactly supported distributions define continuous linear functionals on the space {\displaystyle C^{\infty }(U)}; recall that the topology on {\displaystyle C^{\infty }(U)} is defined such that a sequence of test functions {\displaystyle \phi _{k}} converges to 0 if and only if all derivatives of {\displaystyle \phi _{k}} converge uniformly to 0 on every compact subset of U. Conversely, it can be shown that every continuous linear functional on this space defines a distribution of compact support. Thus compactly supported distributions can be identified with those distributions that can be extended from {\displaystyle C_{c}^{\infty }(U)} to {\displaystyle C^{\infty }(U).}
=== Distributions of finite order ===
Let {\displaystyle k\in \mathbb {N} .} The inclusion map {\displaystyle \operatorname {In} :C_{c}^{\infty }(U)\to C_{c}^{k}(U)} is a continuous injection whose image is dense in its codomain, so the transpose {\displaystyle {}^{t}\operatorname {In} :(C_{c}^{k}(U))'_{b}\to {\mathcal {D}}'(U)=(C_{c}^{\infty }(U))'_{b}} is also a continuous injection. Consequently, the image of {\displaystyle {}^{t}\operatorname {In} ,} denoted by {\displaystyle {\mathcal {D}}'^{k}(U),} forms a space of distributions. The elements of {\displaystyle {\mathcal {D}}'^{k}(U)} are the distributions of order {\displaystyle \,\leq k.} The distributions of order {\displaystyle \,\leq 0,} which are also called distributions of order 0, are exactly the distributions that are Radon measures (described above).
For {\displaystyle 0\neq k\in \mathbb {N} ,} a distribution of order k is a distribution of order {\displaystyle \,\leq k} that is not a distribution of order {\displaystyle \,\leq k-1}.
A distribution is said to be of finite order if there is some integer {\displaystyle k} such that it is a distribution of order {\displaystyle \,\leq k,} and the set of distributions of finite order is denoted by {\displaystyle {\mathcal {D}}'^{F}(U).}
If {\displaystyle k\leq l} then {\displaystyle {\mathcal {D}}'^{k}(U)\subseteq {\mathcal {D}}'^{l}(U)} so that {\displaystyle {\mathcal {D}}'^{F}(U):=\bigcup _{n=0}^{\infty }{\mathcal {D}}'^{n}(U)} is a vector subspace of {\displaystyle {\mathcal {D}}'(U)}; the inclusion {\displaystyle {\mathcal {D}}'^{F}(U)\subseteq {\mathcal {D}}'(U)} may be proper, as the example of a distribution of infinite order below shows.
==== Structure of distributions of finite order ====
Every distribution with compact support in U is a distribution of finite order. Indeed, every distribution in U is locally a distribution of finite order, in the following sense: if V is an open and relatively compact subset of U and if {\displaystyle \rho _{VU}} is the restriction mapping from U to V, then the image of {\displaystyle {\mathcal {D}}'(U)} under {\displaystyle \rho _{VU}} is contained in {\displaystyle {\mathcal {D}}'^{F}(V).}
The following is the theorem on the structure of distributions of finite order, which shows that every distribution of finite order can be written as a sum of derivatives of Radon measures:
Example. (Distributions of infinite order) Let {\displaystyle U:=(0,\infty )} and for every test function {\displaystyle f,} let
{\displaystyle Sf:=\sum _{m=1}^{\infty }(\partial ^{m}f)\left({\frac {1}{m}}\right).}
Then {\displaystyle S} is a distribution of infinite order on U. Moreover, {\displaystyle S} cannot be extended to a distribution on {\displaystyle \mathbb {R} }; that is, there exists no distribution {\displaystyle T} on {\displaystyle \mathbb {R} } such that the restriction of {\displaystyle T} to U is equal to {\displaystyle S.}
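The reason {\displaystyle Sf} is well defined is that a test function on {\displaystyle (0,\infty )} has compact support, so only finitely many of the sample points {\displaystyle 1/m} meet it and the sum is finite. The sketch below (illustrative code; the particular bump, its support (0.4, 2), and the finite-difference derivatives are ad hoc choices) counts the contributing terms:

```python
import math

# For S f = sum over m of (d^m f)(1/m) on U = (0, inf), each test function
# meets only finitely many points 1/m.  Here f is a smooth bump supported in
# (0.4, 2): only m = 1 and m = 2 (points 1.0 and 0.5) lie in its support.
def f(x):
    if not (0.4 < x < 2.0):
        return 0.0
    return math.exp(-1.0 / ((x - 0.4) * (2.0 - x)))

def deriv(g, x, m, h=1e-3):
    # m-th derivative by central differences (adequate here, where m <= 2)
    if m == 1:
        return (g(x + h) - g(x - h)) / (2 * h)
    return (g(x + h) - 2 * g(x) + g(x - h)) / (h * h)

terms = []
for m in range(1, 200):
    x = 1.0 / m
    if 0.4 < x < 2.0:      # otherwise f vanishes near 1/m and the term is 0
        terms.append(deriv(f, x, m))
assert len(terms) == 2     # only m = 1 and m = 2 contribute for this bump
```

Pushing the support toward 0 forces ever higher derivatives into the sum, which is why {\displaystyle S} has infinite order and cannot extend across the endpoint 0.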
=== Tempered distributions and Fourier transform ===
Defined below are the tempered distributions, which form a subspace of {\displaystyle {\mathcal {D}}'(\mathbb {R} ^{n}),} the space of distributions on {\displaystyle \mathbb {R} ^{n}.} This is a proper subspace: while every tempered distribution is a distribution and an element of {\displaystyle {\mathcal {D}}'(\mathbb {R} ^{n}),} the converse is not true. Tempered distributions are useful if one studies the Fourier transform, since all tempered distributions have a Fourier transform, which is not true for an arbitrary distribution in {\displaystyle {\mathcal {D}}'(\mathbb {R} ^{n}).}
==== Schwartz space ====
The Schwartz space {\displaystyle {\mathcal {S}}(\mathbb {R} ^{n})} is the space of all smooth functions that are rapidly decreasing at infinity along with all their partial derivatives. Thus {\displaystyle \phi :\mathbb {R} ^{n}\to \mathbb {R} } is in the Schwartz space provided that any derivative of {\displaystyle \phi ,} multiplied with any power of {\displaystyle |x|,} converges to 0 as {\displaystyle |x|\to \infty .} These functions form a complete TVS with a suitably defined family of seminorms. More precisely, for any multi-indices {\displaystyle \alpha } and {\displaystyle \beta } define
{\displaystyle p_{\alpha ,\beta }(\phi )=\sup _{x\in \mathbb {R} ^{n}}\left|x^{\alpha }\partial ^{\beta }\phi (x)\right|.}
Then {\displaystyle \phi } is in the Schwartz space if all the values satisfy {\displaystyle p_{\alpha ,\beta }(\phi )<\infty .} The family of seminorms {\displaystyle p_{\alpha ,\beta }} defines a locally convex topology on the Schwartz space. For {\displaystyle n=1,} the seminorms are, in fact, norms on the Schwartz space. One can also use the following family of seminorms to define the topology:
{\displaystyle |f|_{m,k}=\sup _{|p|\leq m}\left(\sup _{x\in \mathbb {R} ^{n}}\left\{(1+|x|)^{k}\left|(\partial ^{p}f)(x)\right|\right\}\right),\qquad k,m\in \mathbb {N} .}
Alternatively, one can define a norm on {\displaystyle {\mathcal {S}}(\mathbb {R} ^{n})} via
{\displaystyle \|\phi \|_{k}=\max _{|\alpha |+|\beta |\leq k}\sup _{x\in \mathbb {R} ^{n}}\left|x^{\alpha }\partial ^{\beta }\phi (x)\right|,\qquad k\geq 1.}
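As an illustrative aside (not part of the standard exposition), the seminorms {\displaystyle p_{\alpha ,\beta }} can be estimated numerically on a grid for a concrete Schwartz function such as the Gaussian e^(−x²), whose derivatives are Hermite-polynomial multiples of itself; the grid and the choice of test function are assumptions of this sketch.

```python
import numpy as np

def seminorm(phi_deriv, alpha, beta, x):
    """Estimate p_{alpha,beta}(phi) = sup_x |x^alpha * phi^(beta)(x)| on a grid.

    One-dimensional case, so alpha and beta are plain integers;
    phi_deriv(beta, x) must return the beta-th derivative of phi at x.
    """
    return np.max(np.abs(x**alpha * phi_deriv(beta, x)))

def gauss_deriv(beta, x):
    # Rodrigues formula: d^n/dx^n e^{-x^2} = (-1)^n H_n(x) e^{-x^2},
    # with H_n the (physicists') Hermite polynomial.
    H = np.polynomial.hermite.Hermite.basis(beta)
    return (-1)**beta * H(x) * np.exp(-x**2)

# Every seminorm of the Gaussian is finite, as required for membership
# in the Schwartz space.
x = np.linspace(-20, 20, 400001)
p00 = seminorm(gauss_deriv, 0, 0, x)   # sup |e^{-x^2}| = 1
p20 = seminorm(gauss_deriv, 2, 0, x)   # sup |x^2 e^{-x^2}| = 1/e
```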
The Schwartz space is a Fréchet space (that is, a complete metrizable locally convex space). Because the Fourier transform changes {\displaystyle \partial ^{\alpha }} into multiplication by {\displaystyle x^{\alpha }} and vice versa, this symmetry implies that the Fourier transform of a Schwartz function is also a Schwartz function.
A sequence {\displaystyle \{f_{i}\}} in {\displaystyle {\mathcal {S}}(\mathbb {R} ^{n})} converges to 0 in {\displaystyle {\mathcal {S}}(\mathbb {R} ^{n})} if and only if the functions {\displaystyle (1+|x|)^{k}(\partial ^{p}f_{i})(x)} converge to 0 uniformly on the whole of {\displaystyle \mathbb {R} ^{n},} which implies that such a sequence must converge to zero in {\displaystyle C^{\infty }(\mathbb {R} ^{n}).}
{\displaystyle {\mathcal {D}}(\mathbb {R} ^{n})} is dense in {\displaystyle {\mathcal {S}}(\mathbb {R} ^{n}).} The subset of all analytic Schwartz functions is dense in {\displaystyle {\mathcal {S}}(\mathbb {R} ^{n})} as well.
The Schwartz space is nuclear, and the tensor product of two maps induces a canonical surjective TVS-isomorphism
{\displaystyle {\mathcal {S}}(\mathbb {R} ^{m})\ {\widehat {\otimes }}\ {\mathcal {S}}(\mathbb {R} ^{n})\to {\mathcal {S}}(\mathbb {R} ^{m+n}),}
where {\displaystyle {\widehat {\otimes }}} represents the completion of the injective tensor product (which in this case is identical to the completion of the projective tensor product).
==== Tempered distributions ====
The inclusion map {\displaystyle \operatorname {In} :{\mathcal {D}}(\mathbb {R} ^{n})\to {\mathcal {S}}(\mathbb {R} ^{n})} is a continuous injection whose image is dense in its codomain, so the transpose {\displaystyle {}^{t}\operatorname {In} :({\mathcal {S}}(\mathbb {R} ^{n}))'_{b}\to {\mathcal {D}}'(\mathbb {R} ^{n})} is also a continuous injection. Thus, the image of the transpose map, denoted by {\displaystyle {\mathcal {S}}'(\mathbb {R} ^{n}),} forms a space of distributions.
The space {\displaystyle {\mathcal {S}}'(\mathbb {R} ^{n})} is called the space of tempered distributions. It is the continuous dual space of the Schwartz space. Equivalently, a distribution {\displaystyle T} is a tempered distribution if and only if
{\displaystyle \left({\text{ for all }}\alpha ,\beta \in \mathbb {N} ^{n}:\lim _{m\to \infty }p_{\alpha ,\beta }(\phi _{m})=0\right)\Longrightarrow \lim _{m\to \infty }T(\phi _{m})=0.}
The derivative of a tempered distribution is again a tempered distribution. Tempered distributions generalize the bounded (or slow-growing) locally integrable functions; all distributions with compact support and all square-integrable functions are tempered distributions. More generally, all functions that are products of polynomials with elements of the Lp space {\displaystyle L^{p}(\mathbb {R} ^{n})} for {\displaystyle p\geq 1} are tempered distributions.
The tempered distributions can also be characterized as slowly growing, meaning that each derivative of {\displaystyle T} grows at most as fast as some polynomial. This characterization is dual to the rapidly falling behaviour of the derivatives of a function in the Schwartz space, where each derivative of {\displaystyle \phi } decays faster than every inverse power of {\displaystyle |x|.} An example of a rapidly falling function is {\displaystyle |x|^{n}\exp(-\lambda |x|^{\beta })} for any positive {\displaystyle n,\lambda ,\beta .}
==== Fourier transform ====
To study the Fourier transform, it is best to consider complex-valued test functions and complex-linear distributions. The ordinary continuous Fourier transform {\displaystyle F:{\mathcal {S}}(\mathbb {R} ^{n})\to {\mathcal {S}}(\mathbb {R} ^{n})} is a TVS-automorphism of the Schwartz space, and the Fourier transform is defined to be its transpose {\displaystyle {}^{t}F:{\mathcal {S}}'(\mathbb {R} ^{n})\to {\mathcal {S}}'(\mathbb {R} ^{n}),} which (abusing notation) will again be denoted by {\displaystyle F.} So the Fourier transform of the tempered distribution {\displaystyle T} is defined by {\displaystyle (FT)(\psi )=T(F\psi )} for every Schwartz function {\displaystyle \psi .} {\displaystyle FT} is thus again a tempered distribution. The Fourier transform is a TVS isomorphism from the space of tempered distributions onto itself. This operation is compatible with differentiation in the sense that
{\displaystyle F{\dfrac {dT}{dx}}=ixFT}
and also with convolution: if {\displaystyle T} is a tempered distribution and {\displaystyle \psi } is a slowly increasing smooth function on {\displaystyle \mathbb {R} ^{n},} then {\displaystyle \psi T} is again a tempered distribution and
{\displaystyle F(\psi T)=F\psi *FT}
is the convolution of {\displaystyle FT} and {\displaystyle F\psi .} In particular, the Fourier transform of the constant function equal to 1 is the {\displaystyle \delta } distribution.
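The compatibility with differentiation can be checked numerically for the tempered distribution given by a Gaussian; this is an illustrative sketch only, and the sign convention (Ff)(ξ) = ∫ f(x)e^(−iξx) dx assumed below is one of several in use.

```python
import numpy as np

x = np.linspace(-10, 10, 20001)

f  = np.exp(-x**2)           # a Schwartz function
df = -2 * x * np.exp(-x**2)  # its derivative

def ft(vals, xi):
    # trapezoidal approximation of (Ff)(xi) = integral of f(x) e^{-i xi x} dx
    return np.trapz(vals * np.exp(-1j * xi * x), x)

# F(dT/dx) = i * xi * (FT), tested at a few frequencies
for xi in (0.5, 1.0, 2.0):
    lhs = ft(df, xi)
    rhs = 1j * xi * ft(f, xi)
    assert abs(lhs - rhs) < 1e-8
```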
==== Expressing tempered distributions as sums of derivatives ====
If {\displaystyle T\in {\mathcal {S}}'(\mathbb {R} ^{n})} is a tempered distribution, then there exists a constant {\displaystyle C>0} and positive integers {\displaystyle M} and {\displaystyle N} such that for all Schwartz functions {\displaystyle \phi \in {\mathcal {S}}(\mathbb {R} ^{n})}
{\displaystyle \langle T,\phi \rangle \leq C\sum \nolimits _{|\alpha |\leq N,|\beta |\leq M}\sup _{x\in \mathbb {R} ^{n}}\left|x^{\alpha }\partial ^{\beta }\phi (x)\right|=C\sum \nolimits _{|\alpha |\leq N,|\beta |\leq M}p_{\alpha ,\beta }(\phi ).}
This estimate, along with some techniques from functional analysis, can be used to show that there is a continuous slowly increasing function {\displaystyle F} and a multi-index {\displaystyle \alpha } such that {\displaystyle T=\partial ^{\alpha }F.}
==== Restriction of distributions to compact sets ====
If {\displaystyle T\in {\mathcal {D}}'(\mathbb {R} ^{n}),} then for any compact set {\displaystyle K\subseteq \mathbb {R} ^{n},} there exists a continuous function {\displaystyle F} compactly supported in {\displaystyle \mathbb {R} ^{n}} (possibly on a larger set than K itself) and a multi-index {\displaystyle \alpha } such that {\displaystyle T=\partial ^{\alpha }F} on {\displaystyle C_{c}^{\infty }(K).}
== Using holomorphic functions as test functions ==
The success of the theory led to an investigation of the idea of hyperfunction, in which spaces of holomorphic functions are used as test functions. A refined theory has been developed, in particular Mikio Sato's algebraic analysis, using sheaf theory and several complex variables. This extends the range of symbolic methods that can be made into rigorous mathematics, for example, Feynman integrals.
== See also ==
Cauchy principal value – Method for assigning values to certain improper integrals which would otherwise be undefined
Gelfand triple – Construction linking the study of "bound" and continuous eigenvalues in functional analysis
Gelfand–Shilov space
Generalized function – Objects extending the notion of functions
Hilbert transform – Integral transform and linear operator
Homogeneous distribution – Type of mathematical distribution
Laplacian of the indicator – Limit of sequence of smooth functions
Limit of distributions
Mollifier – Integration kernels for smoothing out sharp features
Vague topology
Ultradistribution – An extension of a mathematical distribution
Differential equations related
Fundamental solution – Concept in the solution of linear partial differential equations
Pseudo-differential operator – Type of differential operator
Weak solution – Mathematical solution
Generalizations of distributions
Colombeau algebra – commutative associative differential algebra of generalized functions into which smooth functions (but not arbitrary continuous ones) embed as a subalgebra and distributions embed as a subspace
Current (mathematics) – Distributions on spaces of differential forms
Distribution (number theory) – function on finite sets which is analogous to an integral
Distribution on a linear algebraic group – Linear function satisfying a support condition
Green's function – Impulse response of an inhomogeneous linear differential operator
Hyperfunction – Type of generalized function
Malgrange–Ehrenpreis theorem
== Notes ==
== References ==
== Bibliography ==
Barros-Neto, José (1973). An Introduction to the Theory of Distributions. New York, NY: Dekker.
Benedetto, J.J. (1997), Harmonic Analysis and Applications, CRC Press.
Lützen, J. (1982). The prehistory of the theory of distributions. New York, Berlin: Springer Verlag.
Folland, G.B. (1989). Harmonic Analysis in Phase Space. Princeton, NJ: Princeton University Press.
Friedlander, F.G.; Joshi, M.S. (1998). Introduction to the Theory of Distributions. Cambridge, UK: Cambridge University Press.
Gårding, L. (1997), Some Points of Analysis and their History, American Mathematical Society.
Gel'fand, I.M.; Shilov, G.E. (1966–1968), Generalized functions, vol. 1–5, Academic Press.
Grubb, G. (2009), Distributions and Operators, Springer.
Hörmander, L. (1983), The analysis of linear partial differential operators I, Grundl. Math. Wissenschaft., vol. 256, Springer, doi:10.1007/978-3-642-96750-4, ISBN 3-540-12104-8, MR 0717035.
Horváth, John (1966). Topological Vector Spaces and Distributions. Addison-Wesley series in mathematics. Vol. 1. Reading, MA: Addison-Wesley Publishing Company. ISBN 978-0201029857.
Kolmogorov, Andrey; Fomin, Sergei V. (2012) [1957]. Elements of the Theory of Functions and Functional Analysis. Dover Books on Mathematics. New York: Dover Books. ISBN 978-1-61427-304-2. OCLC 912495626.
Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
Petersen, Bent E. (1983). Introduction to the Fourier Transform and Pseudo-Differential Operators. Boston, MA: Pitman Publishing.
Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Schwartz, Laurent (1954), "Sur l'impossibilité de la multiplication des distributions", C. R. Acad. Sci. Paris, 239: 847–848.
Schwartz, Laurent (1951), Théorie des distributions, vol. 1–2, Hermann.
Sobolev, S.L. (1936), "Méthode nouvelle à résoudre le problème de Cauchy pour les équations linéaires hyperboliques normales", Mat. Sbornik, 1: 39–72.
Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, ISBN 0-691-08078-X.
Strichartz, R. (1994), A Guide to Distribution Theory and Fourier Transforms, CRC Press, ISBN 0-8493-8273-4.
Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
Woodward, P.M. (1953). Probability and Information Theory with Applications to Radar. Oxford, UK: Pergamon Press.
== Further reading ==
M. J. Lighthill (1959). Introduction to Fourier Analysis and Generalised Functions. Cambridge University Press. ISBN 0-521-09128-4 (requires very little knowledge of analysis; defines distributions as limits of sequences of functions under integrals)
V.S. Vladimirov (2002). Methods of the theory of generalized functions. Taylor & Francis. ISBN 0-415-27356-0
Vladimirov, V.S. (2001) [1994], "Generalized function", Encyclopedia of Mathematics, EMS Press.
Vladimirov, V.S. (2001) [1994], "Generalized functions, space of", Encyclopedia of Mathematics, EMS Press.
Vladimirov, V.S. (2001) [1994], "Generalized function, derivative of a", Encyclopedia of Mathematics, EMS Press.
Vladimirov, V.S. (2001) [1994], "Generalized functions, product of", Encyclopedia of Mathematics, EMS Press.
Oberguggenberger, Michael (2001) [1994], "Generalized function algebras", Encyclopedia of Mathematics, EMS Press. | Wikipedia/Theory_of_distributions |
In ring theory, a branch of mathematics, an idempotent element or simply idempotent of a ring is an element a such that a² = a. That is, the element is idempotent under the ring's multiplication. Inductively then, one can also conclude that a = a² = a³ = a⁴ = ⋯ = aⁿ for any positive integer n. For example, an idempotent element of a matrix ring is precisely an idempotent matrix.
For general rings, elements idempotent under multiplication are involved in decompositions of modules, and connected to homological properties of the ring. In Boolean algebra, the main objects of study are rings in which all elements are idempotent under both addition and multiplication.
== Examples ==
=== Quotients of Z ===
One may consider the ring of integers modulo n, where n is square-free. By the Chinese remainder theorem, this ring factors into the product of rings of integers modulo p, where p is prime. Now each of these factors is a field, so it is clear that the factors' only idempotents will be 0 and 1. That is, each factor has two idempotents. So if there are m factors, there will be 2ᵐ idempotents.
We can check this for the integers mod 6, R = Z / 6Z. Since 6 has two prime factors (2 and 3) it should have 2² = 4 idempotents.
0² ≡ 0 ≡ 0 (mod 6)
1² ≡ 1 ≡ 1 (mod 6)
2² ≡ 4 ≡ 4 (mod 6)
3² ≡ 9 ≡ 3 (mod 6)
4² ≡ 16 ≡ 4 (mod 6)
5² ≡ 25 ≡ 1 (mod 6)
From these computations, 0, 1, 3, and 4 are idempotents of this ring, while 2 and 5 are not. This also demonstrates the decomposition properties described below: because 3 + 4 ≡ 1 (mod 6), there is a ring decomposition 3Z / 6Z ⊕ 4Z / 6Z. In 3Z / 6Z the multiplicative identity is 3 + 6Z and in 4Z / 6Z the multiplicative identity is 4 + 6Z.
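The count 2ᵐ can be confirmed by brute force; this small sketch (an illustration, not part of the article) enumerates the idempotents of Z/nZ directly.

```python
def idempotents(n):
    """All idempotents of Z/nZ, found by brute force."""
    return [a for a in range(n) if (a * a) % n == a]

# 6 = 2 * 3 is squarefree with two prime factors, so Z/6Z has
# 2^2 = 4 idempotents, matching the table above.
print(idempotents(6))    # [0, 1, 3, 4]

# 30 = 2 * 3 * 5 gives 2^3 = 8 idempotents.
print(len(idempotents(30)))
```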
=== Quotient of polynomial ring ===
Given a ring R and an element f ∈ R such that f² ≠ 0, the quotient ring
R / (f² − f)
has the idempotent f. For example, this could be applied to x ∈ Z[x], or any polynomial f ∈ k[x1, ..., xn].
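For the case f = x in Z[x], the claim can be checked with a computer algebra system; this sketch assumes SymPy is available.

```python
from sympy import symbols, rem

x = symbols('x')

# In Z[x]/(x^2 - x), the class of x is idempotent: reducing x^2
# modulo x^2 - x returns x itself.
assert rem(x**2, x**2 - x, x) == x
```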
=== Idempotents in the ring of split-quaternions ===
There is a circle of idempotents in the ring of split-quaternions. Split-quaternions have the structure of a real algebra, so elements can be written w + xi + yj + zk over a basis {1, i, j, k}, with j² = k² = +1. For any θ,
{\displaystyle s=j\cos \theta +k\sin \theta }
satisfies s² = +1 since j and k anticommute. Now
{\displaystyle \left({\frac {1+s}{2}}\right)^{2}={\frac {1+2s+s^{2}}{4}}={\frac {1+s}{2}},}
the idempotent property.
The element s is called a hyperbolic unit and, so far, the i-coordinate has been taken to be zero. When this coordinate is non-zero, there is a hyperboloid of one sheet of hyperbolic units in the split-quaternions. The same equality shows the idempotent property of {\displaystyle {\frac {1+s}{2}}} where s is on the hyperboloid.
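The circle of idempotents can be verified numerically. Split-quaternions admit a representation by real 2×2 matrices; the particular matrices chosen below for 1, j, k are a standard such representation, assumed for this sketch.

```python
import numpy as np

# 2x2 real matrix representation of the split-quaternion basis elements
# 1, j, k (an assumed, standard choice for this sketch).
one = np.eye(2)
j = np.array([[0., 1.], [1., 0.]])
k = np.array([[1., 0.], [0., -1.]])

assert np.allclose(j @ j, one) and np.allclose(k @ k, one)  # j^2 = k^2 = +1
assert np.allclose(j @ k, -(k @ j))                          # j, k anticommute

for theta in np.linspace(0.0, 2 * np.pi, 13):
    s = j * np.cos(theta) + k * np.sin(theta)
    e = (one + s) / 2
    assert np.allclose(s @ s, one)   # s is a hyperbolic unit
    assert np.allclose(e @ e, e)     # (1 + s)/2 is idempotent
```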
== Types of ring idempotents ==
A partial list of important types of idempotents includes:
Two idempotents a and b are called orthogonal if ab = ba = 0. If a is idempotent in the ring R (with unity), then so is b = 1 − a; moreover, a and b are orthogonal.
An idempotent a in R is called a central idempotent if ax = xa for all x in R, that is, if a is in the center of R.
A trivial idempotent refers to either of the elements 0 and 1, which are always idempotent.
A primitive idempotent of a ring R is a nonzero idempotent a such that aR is indecomposable as a right R-module; that is, such that aR is not a direct sum of two nonzero submodules. Equivalently, a is a primitive idempotent if it cannot be written as a = e + f, where e and f are nonzero orthogonal idempotents in R.
A local idempotent is an idempotent a such that aRa is a local ring. This implies that aR is directly indecomposable, so local idempotents are also primitive.
A right irreducible idempotent is an idempotent a for which aR is a simple module. By Schur's lemma, EndR(aR) = aRa is a division ring, and hence is a local ring, so right (and left) irreducible idempotents are local.
A centrally primitive idempotent is a central idempotent a that cannot be written as the sum of two nonzero orthogonal central idempotents.
An idempotent a + I in the quotient ring R / I is said to lift modulo I if there is an idempotent b in R such that b + I = a + I.
An idempotent a of R is called a full idempotent if RaR = R.
A separability idempotent; see Separable algebra.
Any non-trivial idempotent a is a zero divisor (because ab = 0 with neither a nor b being zero, where b = 1 − a). This shows that integral domains and division rings do not have such idempotents. Local rings also do not have such idempotents, but for a different reason. The only idempotent contained in the Jacobson radical of a ring is 0.
== Rings characterized by idempotents ==
A ring in which all elements are idempotent is called a Boolean ring. Some authors use the term "idempotent ring" for this type of ring. In such a ring, multiplication is commutative and every element is its own additive inverse.
A ring is semisimple if and only if every right (or every left) ideal is generated by an idempotent.
A ring is von Neumann regular if and only if every finitely generated right (or every finitely generated left) ideal is generated by an idempotent.
A ring for which the annihilator r.Ann(S) of every subset S of R is generated by an idempotent is called a Baer ring. If the condition only holds for all singleton subsets of R, then the ring is a right Rickart ring. Both of these types of rings are interesting even when they lack a multiplicative identity.
A ring in which all idempotents are central is called an abelian ring. Such rings need not be commutative.
A ring is directly irreducible if and only if 0 and 1 are the only central idempotents.
A ring R can be written as e1R ⊕ e2R ⊕ ... ⊕ enR with each ei a local idempotent if and only if R is a semiperfect ring.
A ring is called an SBI ring or Lift/rad ring if all idempotents of R lift modulo the Jacobson radical.
A ring satisfies the ascending chain condition on right direct summands if and only if the ring satisfies the descending chain condition on left direct summands if and only if every set of pairwise orthogonal idempotents is finite.
If a is idempotent in the ring R, then aRa is again a ring, with multiplicative identity a. The ring aRa is often referred to as a corner ring of R. The corner ring arises naturally since the ring of endomorphisms EndR(aR) ≅ aRa.
== Role in decompositions ==
The idempotents of R have an important connection to decomposition of R-modules. If M is an R-module and E = EndR(M) is its ring of endomorphisms, then A ⊕ B = M if and only if there is a unique idempotent e in E such that A = eM and B = (1 − e)M. Clearly then, M is directly indecomposable if and only if 0 and 1 are the only idempotents in E.
In the case when M = R (assumed unital), the endomorphism ring EndR(R) = R, where each endomorphism arises as left multiplication by a fixed ring element. With this modification of notation, A ⊕ B = R as right modules if and only if there exists a unique idempotent e such that eR = A and (1 − e)R = B. Thus every direct summand of R is generated by an idempotent.
If a is a central idempotent, then the corner ring aRa = Ra is a ring with multiplicative identity a. Just as idempotents determine the direct decompositions of R as a module, the central idempotents of R determine the decompositions of R as a direct sum of rings. If R is the direct sum of the rings R1, ..., Rn, then the identity elements of the rings Ri are central idempotents in R, pairwise orthogonal, and their sum is 1. Conversely, given central idempotents a1, ..., an in R that are pairwise orthogonal and have sum 1, then R is the direct sum of the rings Ra1, ..., Ran. So in particular, every central idempotent a in R gives rise to a decomposition of R as a direct sum of the corner rings aRa and (1 − a)R(1 − a). As a result, a ring R is directly indecomposable as a ring if and only if the identity 1 is centrally primitive.
Working inductively, one can attempt to decompose 1 into a sum of centrally primitive elements. If 1 is centrally primitive, we are done. If not, it is a sum of central orthogonal idempotents, which in turn are primitive or sums of more central idempotents, and so on. The problem that may occur is that this may continue without end, producing an infinite family of central orthogonal idempotents. The condition "R does not contain infinite sets of central orthogonal idempotents" is a type of finiteness condition on the ring. It can be achieved in many ways, such as requiring the ring to be right Noetherian. If a decomposition R = c1R ⊕ c2R ⊕ ... ⊕ cnR exists with each ci a centrally primitive idempotent, then R is a direct sum of the corner rings ciRci, each of which is ring irreducible.
For associative algebras or Jordan algebras over a field, the Peirce decomposition is a decomposition of an algebra as a sum of eigenspaces of commuting idempotent elements.
== Relation with involutions ==
If a is an idempotent of the endomorphism ring EndR(M), then the endomorphism f = 1 − 2a is an R-module involution of M. That is, f is an R-module homomorphism such that f² is the identity endomorphism of M.
An idempotent element a of R and its associated involution f gives rise to two involutions of the module R, depending on viewing R as a left or right module. If r represents an arbitrary element of R, f can be viewed as a right R-module homomorphism r ↦ fr so that ffr = r, or f can also be viewed as a left R-module homomorphism r ↦ rf, where rff = r.
This process can be reversed if 2 is an invertible element of R: if b is an involution, then 2⁻¹(1 − b) and 2⁻¹(1 + b) are orthogonal idempotents, corresponding to a and 1 − a. Thus for a ring in which 2 is invertible, the idempotent elements correspond to involutions in a one-to-one manner.
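For a concrete check of this correspondence, take R = Z/15Z, where 2 is invertible; this brute-force sketch verifies the bijection between idempotents and involutions.

```python
n = 15                     # 2 is invertible mod 15 (its inverse is 8)
inv2 = pow(2, -1, n)

idem = [a for a in range(n) if a * a % n == a]   # idempotents: a^2 = a
invo = [b for b in range(n) if b * b % n == 1]   # involutions: b^2 = 1

# a -> 1 - 2a sends idempotents onto involutions ...
image = sorted((1 - 2 * a) % n for a in idem)
assert image == sorted(invo)

# ... and b -> 2^{-1}(1 - b) inverts the map.
for a in idem:
    b = (1 - 2 * a) % n
    assert inv2 * (1 - b) % n == a
```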
== Category of R-modules ==
Lifting idempotents also has major consequences for the category of R-modules. All idempotents lift modulo I if and only if every direct summand of R/I has a projective cover as an R-module. Idempotents always lift modulo nil ideals, and also modulo ideals I for which R is I-adically complete.
Lifting is most important when I = J(R), the Jacobson radical of R. Yet another characterization of semiperfect rings is that they are semilocal rings whose idempotents lift modulo J(R).
== Lattice of idempotents ==
One may define a partial order on the idempotents of a ring as follows: if a and b are idempotents, we write a ≤ b if and only if ab = ba = a. With respect to this order, 0 is the smallest and 1 the largest idempotent. For orthogonal idempotents a and b, a + b is also idempotent, and we have a ≤ a + b and b ≤ a + b. The atoms of this partial order are precisely the primitive idempotents.
When the above partial order is restricted to the central idempotents of R, a lattice structure, or even a Boolean algebra structure, can be given. For two central idempotents e and f, the complement is given by
¬e = 1 − e,
the meet is given by
e ∧ f = ef,
and the join is given by
e ∨ f = ¬(¬e ∧ ¬f) = e + f − ef
The ordering now becomes simply e ≤ f if and only if eR ⊆ fR, and the join and meet satisfy (e ∨ f)R = eR + fR and (e ∧ f)R = eR ∩ fR = (eR)(fR). It is shown in Goodearl 1991, p. 99 that if R is von Neumann regular and right self-injective, then the lattice is a complete lattice.
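The Boolean-algebra structure can be verified by brute force in a small commutative ring, where every idempotent is automatically central; this sketch uses Z/30Z as an illustration.

```python
n = 30   # Z/30Z is commutative, so all of its idempotents are central
idem = [e for e in range(n) if e * e % n == e]   # 8 idempotents, since 30 = 2*3*5

meet = lambda e, f: (e * f) % n          # e ∧ f = ef
join = lambda e, f: (e + f - e * f) % n  # e ∨ f = e + f - ef
comp = lambda e: (1 - e) % n             # ¬e = 1 - e

for e in idem:
    assert comp(e) in idem
    for f in idem:
        # the lattice operations stay inside the set of idempotents
        assert meet(e, f) in idem and join(e, f) in idem
        # De Morgan: complement exchanges meet and join
        assert comp(meet(e, f)) == join(comp(e), comp(f))
```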
== Notes ==
== Citations ==
== References == | Wikipedia/Idempotent_element_(ring_theory) |
In mathematical analysis, a bump function (also called a test function) is a function {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } on a Euclidean space {\displaystyle \mathbb {R} ^{n}} which is both smooth (in the sense of having continuous derivatives of all orders) and compactly supported. The set of all bump functions with domain {\displaystyle \mathbb {R} ^{n}} forms a vector space, denoted {\displaystyle \mathrm {C} _{0}^{\infty }(\mathbb {R} ^{n})} or {\displaystyle \mathrm {C} _{\mathrm {c} }^{\infty }(\mathbb {R} ^{n}).} The dual space of this space endowed with a suitable topology is the space of distributions.
== Examples ==
The function {\displaystyle \Psi :\mathbb {R} \to \mathbb {R} } given by
{\displaystyle \Psi (x)={\begin{cases}\exp \left({\frac {1}{x^{2}-1}}\right),&{\text{ if }}|x|<1,\\0,&{\text{ if }}|x|\geq 1,\end{cases}}}
is an example of a bump function in one dimension. Note that the support of this function is the closed interval {\displaystyle [-1,1]}. In fact, by definition of support, we have that {\displaystyle \operatorname {supp} (\Psi ):={\overline {\{x\in \mathbb {R} :\Psi (x)\neq 0\}}}={\overline {(-1,1)}}}
, where the closure is taken with respect to the Euclidean topology of the real line. The proof of smoothness follows along the same lines as for the related function discussed in the Non-analytic smooth function article. This function can be interpreted as the Gaussian function {\displaystyle \exp \left(-y^{2}\right)} scaled to fit into the unit disc: the substitution {\displaystyle y^{2}={1}/{\left(1-x^{2}\right)}} corresponds to sending {\displaystyle x=\pm 1} to {\displaystyle y=\infty .}
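A direct numerical rendering of Ψ can make the support property concrete; the grid and tolerances below are choices of this illustrative sketch.

```python
import numpy as np

def bump(x):
    """The one-dimensional bump function Psi, vanishing for |x| >= 1."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(1.0 / (x[inside]**2 - 1.0))
    return out

# Value e^{-1} at the center, zero at and beyond the endpoints of [-1, 1]:
assert abs(float(bump(0.0)) - np.exp(-1.0)) < 1e-15
assert float(bump(1.0)) == 0.0 and float(bump(-2.0)) == 0.0

# All values lie in [0, e^{-1}], the maximum being attained at x = 0:
xs = np.linspace(-2, 2, 1001)
assert np.all(bump(xs) <= np.exp(-1.0) + 1e-15)
```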
A simple example of a (square) bump function in {\displaystyle n} variables is obtained by taking the product of {\displaystyle n} copies of the above bump function in one variable, so
{\displaystyle \Phi (x_{1},x_{2},\dots ,x_{n})=\Psi (x_{1})\Psi (x_{2})\cdots \Psi (x_{n}).}
A radially symmetric bump function in {\displaystyle n} variables can be formed by taking the function {\displaystyle \Psi _{n}:\mathbb {R} ^{n}\to \mathbb {R} } defined by {\displaystyle \Psi _{n}(\mathbf {x} )=\Psi (|\mathbf {x} |)}. This function is supported on the unit ball centered at the origin.
For another example, take a function {\displaystyle h} that is positive on {\displaystyle (c,d)} and zero elsewhere, for example
{\displaystyle h(x)={\begin{cases}\exp \left(-{\frac {1}{(x-c)(d-x)}}\right),&c<x<d\\0,&\mathrm {otherwise} \end{cases}}}.
=== Smooth transition functions ===
Consider the function
{\displaystyle f(x)={\begin{cases}e^{-{\frac {1}{x}}}&{\text{if }}x>0,\\0&{\text{if }}x\leq 0,\end{cases}}}
defined for every real number x.
The function
{\displaystyle g(x)={\frac {f(x)}{f(x)+f(1-x)}},\qquad x\in \mathbb {R} ,}
has a strictly positive denominator everywhere on the real line, hence g is also smooth. Furthermore, g(x) = 0 for x ≤ 0 and g(x) = 1 for x ≥ 1, hence it provides a smooth transition from the level 0 to the level 1 in the unit interval [0, 1]. To have the smooth transition in the real interval [a, b] with a < b, consider the function
{\displaystyle \mathbb {R} \ni x\mapsto g{\Bigl (}{\frac {x-a}{b-a}}{\Bigr )}.}
For real numbers a < b < c < d, the smooth function
{\displaystyle \mathbb {R} \ni x\mapsto g{\Bigl (}{\frac {x-a}{b-a}}{\Bigr )}\,g{\Bigl (}{\frac {d-x}{d-c}}{\Bigr )}}
equals 1 on the closed interval [b, c] and vanishes outside the open interval (a, d), hence it can serve as a bump function.
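The construction of f, g, and the resulting bump function transcribes directly into code; the particular interval endpoints a, b, c, d below are illustrative choices of this sketch.

```python
import math

def f(x):
    """Smooth but non-analytic: e^{-1/x} for x > 0, else 0."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def g(x):
    """Smooth transition: equals 0 for x <= 0 and 1 for x >= 1."""
    return f(x) / (f(x) + f(1.0 - x))

def bump(x, a, b, c, d):
    """Smooth function equal to 1 on [b, c] and vanishing outside (a, d)."""
    return g((x - a) / (b - a)) * g((d - x) / (d - c))

# With a < b < c < d chosen as -2 < -1 < 1 < 2:
assert bump(0.0, -2, -1, 1, 2) == 1.0          # inside [b, c]
assert bump(3.0, -2, -1, 1, 2) == 0.0          # outside (a, d)
assert 0.0 < bump(-1.5, -2, -1, 1, 2) < 1.0    # transition region
```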
Caution must be taken since, for example, taking {\displaystyle \{a=-1\}<\{b=c=0\}<\{d=1\}} leads to
{\displaystyle q(x)={\frac {1}{1+e^{\frac {1-2|x|}{x^{2}-|x|}}}},}
which is not an infinitely differentiable function (so is not "smooth"); thus the constraints a < b < c < d must be strictly fulfilled.
An interesting fact about the family
{\displaystyle q(x,a)={\frac {1}{1+e^{\frac {a(1-2|x|)}{x^{2}-|x|}}}}}
is that {\displaystyle q\left(x,{\frac {\sqrt {3}}{2}}\right)} gives a smooth transition curve with "almost" constant-slope edges (a bump function with truly straight slope edges is portrayed in another example).
A proper example of a smooth bump function would be
{\displaystyle u(x)={\begin{cases}1,&{\text{if }}x=0,\\0,&{\text{if }}|x|\geq 1,\\{\frac {1}{1+e^{\frac {1-2|x|}{x^{2}-|x|}}}},&{\text{otherwise}},\end{cases}}}
A proper example of a smooth transition function would be
{\displaystyle w(x)={\begin{cases}{\frac {1}{1+e^{\frac {2x-1}{x^{2}-x}}}}&{\text{if }}0<x<1,\\0&{\text{if }}x\leq 0,\\1&{\text{if }}x\geq 1,\end{cases}}}
where could be noticed that it can be represented also through Hyperbolic functions:
{\displaystyle {\frac {1}{1+e^{\frac {2x-1}{x^{2}-x}}}}={\frac {1}{2}}\left(1-\tanh \left({\frac {2x-1}{2(x^{2}-x)}}\right)\right)}
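Both forms of this transition function can be checked numerically against each other (a sketch; the function names are ours):

```python
import math

def w(x):
    # smooth transition: 0 for x <= 0, 1 for x >= 1
    if x <= 0:
        return 0.0
    if x >= 1:
        return 1.0
    return 1.0 / (1.0 + math.exp((2 * x - 1) / (x * x - x)))

def w_tanh(x):
    # the same transition written via tanh, valid for 0 < x < 1
    return 0.5 * (1 - math.tanh((2 * x - 1) / (2 * (x * x - x))))
```

The agreement follows from the identity 1/(1 + e^t) = (1 − tanh(t/2))/2 with t = (2x − 1)/(x² − x).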
== Existence of bump functions ==
It is possible to construct bump functions "to specifications". Stated formally, if K is an arbitrary compact set in n dimensions and U is an open set containing K, there exists a bump function φ which is 1 on K and 0 outside of U. Since U can be taken to be a very small neighborhood of K, this amounts to being able to construct a function that is 1 on K and falls off rapidly to 0 outside of K, while still being smooth.
=== Bump functions defined in terms of convolution ===
The construction proceeds as follows. One considers a compact neighborhood V of K contained in U, so K ⊆ V° ⊆ V ⊆ U. The characteristic function {\displaystyle \chi _{V}} of V will be equal to 1 on V and 0 outside of V, so in particular it will be 1 on K and 0 outside of U. This function is not smooth, however. The key idea is to smooth it a bit, by taking the convolution of {\displaystyle \chi _{V}} with a mollifier. The latter is just a bump function with a very small support and whose integral is 1. Such a mollifier can be obtained, for example, by taking the bump function {\displaystyle \Phi } from the previous section and performing appropriate scalings.
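The mollification step can be sketched numerically in one dimension. Here χ is the indicator of [−1, 1] and the mollifier is a rescaled, renormalized copy of the standard bump e^(−1/(1 − t²)); the names and the Riemann-sum discretization are ours, so this is only an approximation of the convolution integral:

```python
import math

def standard_bump(t):
    # exp(-1/(1 - t^2)) on (-1, 1), zero outside: a smooth bump profile
    return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1 else 0.0

def mollified_indicator(x, eps=0.25, n=2000):
    # Riemann-sum approximation of (chi_[-1,1] * phi_eps)(x), where phi_eps
    # is the standard bump rescaled to support [-eps, eps] with integral 1
    norm = sum(standard_bump(i / n) for i in range(-n, n + 1)) / n
    total = 0.0
    for i in range(-n, n + 1):
        t = eps * i / n
        if abs(x - t) <= 1:          # chi_[-1,1](x - t)
            total += standard_bump(i / n)
    return total / (n * norm)
```

The result is approximately 1 well inside [−1, 1], exactly 0 at distance more than eps from the interval, and interpolates smoothly near the endpoints.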
=== Bump functions defined in terms of a function c : ℝ → [0, ∞) with support [0, ∞) ===
An alternative construction that does not involve convolution is now detailed. It begins by constructing a smooth function {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } that is positive on a given open subset {\displaystyle U\subseteq \mathbb {R} ^{n}} and vanishes off of U. This function's support is equal to the closure {\displaystyle {\overline {U}}} of U in {\displaystyle \mathbb {R} ^{n},} so if {\displaystyle {\overline {U}}} is compact, then f is a bump function.
Start with any smooth function {\displaystyle c:\mathbb {R} \to \mathbb {R} } that vanishes on the negative reals and is positive on the positive reals (that is, c = 0 on (−∞, 0) and c > 0 on (0, ∞), where continuity from the left necessitates c(0) = 0); an example of such a function is {\displaystyle c(x):=e^{-1/x}} for x > 0 and c(x) := 0 otherwise.
Fix an open subset U of {\displaystyle \mathbb {R} ^{n}} and denote the usual Euclidean norm by {\displaystyle \|\cdot \|} (so {\displaystyle \mathbb {R} ^{n}} is endowed with the usual Euclidean metric). The following construction defines a smooth function {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } that is positive on U and vanishes outside of U. So in particular, if U is relatively compact then this function f will be a bump function.
If {\displaystyle U=\mathbb {R} ^{n}} then let f = 1, while if {\displaystyle U=\varnothing } then let f = 0; so assume U is neither of these. Let {\displaystyle \left(U_{k}\right)_{k=1}^{\infty }} be an open cover of U by open balls, where the open ball {\displaystyle U_{k}} has radius {\displaystyle r_{k}>0} and center {\displaystyle a_{k}\in U.}
Then the map {\displaystyle f_{k}:\mathbb {R} ^{n}\to \mathbb {R} } defined by
{\displaystyle f_{k}(x)=c\left(r_{k}^{2}-\left\|x-a_{k}\right\|^{2}\right)}
is a smooth function that is positive on {\displaystyle U_{k}} and vanishes off of {\displaystyle U_{k}.}
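For instance, with c(t) = e^(−1/t) for t > 0 as in the text, one such ball function looks like this (a sketch; function names are ours):

```python
import math

def c(t):
    # smooth, zero for t <= 0 and positive for t > 0
    return math.exp(-1.0 / t) if t > 0 else 0.0

def f_k(x, center, radius):
    # c(r^2 - |x - a|^2): smooth, positive inside the open ball B(a, r),
    # identically zero on its boundary and outside it
    sq = sum((xi - ai) ** 2 for xi, ai in zip(x, center))
    return c(radius ** 2 - sq)
```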
For every {\displaystyle k\in \mathbb {N} ,} let
{\displaystyle M_{k}=\sup \left\{\left|{\frac {\partial ^{p}f_{k}}{\partial ^{p_{1}}x_{1}\cdots \partial ^{p_{n}}x_{n}}}(x)\right|~:~x\in \mathbb {R} ^{n}{\text{ and }}p_{1},\ldots ,p_{n}\in \mathbb {Z} {\text{ satisfy }}0\leq p_{i}\leq k{\text{ and }}p=\sum _{i}p_{i}\right\},}
where this supremum is not equal to {\displaystyle +\infty } (so {\displaystyle M_{k}} is a non-negative real number) because {\displaystyle \left(\mathbb {R} ^{n}\setminus U_{k}\right)\cup {\overline {U_{k}}}=\mathbb {R} ^{n}:} the partial derivatives all vanish (equal 0) at any x outside of {\displaystyle U_{k},} while on the compact set {\displaystyle {\overline {U_{k}}},} the values of each of the (finitely many) partial derivatives are (uniformly) bounded above by some non-negative real number.
The series
{\displaystyle f~:=~\sum _{k=1}^{\infty }{\frac {f_{k}}{2^{k}M_{k}}}}
converges uniformly on {\displaystyle \mathbb {R} ^{n}} to a smooth function {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } that is positive on U and vanishes off of U.
Moreover, for any non-negative integers {\displaystyle p_{1},\ldots ,p_{n}\in \mathbb {Z} ,}
{\displaystyle {\frac {\partial ^{p_{1}+\cdots +p_{n}}}{\partial ^{p_{1}}x_{1}\cdots \partial ^{p_{n}}x_{n}}}f~=~\sum _{k=1}^{\infty }{\frac {1}{2^{k}M_{k}}}{\frac {\partial ^{p_{1}+\cdots +p_{n}}f_{k}}{\partial ^{p_{1}}x_{1}\cdots \partial ^{p_{n}}x_{n}}}}
where this series also converges uniformly on {\displaystyle \mathbb {R} ^{n}} (because whenever {\displaystyle k\geq p_{1}+\cdots +p_{n}} the kth term's absolute value is {\displaystyle \leq {\tfrac {M_{k}}{2^{k}M_{k}}}={\tfrac {1}{2^{k}}}}). This completes the construction.
As a corollary, given two disjoint closed subsets A, B of {\displaystyle \mathbb {R} ^{n},} the above construction guarantees the existence of smooth non-negative functions {\displaystyle f_{A},f_{B}:\mathbb {R} ^{n}\to [0,\infty )} such that for any {\displaystyle x\in \mathbb {R} ^{n},} {\displaystyle f_{A}(x)=0} if and only if {\displaystyle x\in A,} and similarly {\displaystyle f_{B}(x)=0} if and only if {\displaystyle x\in B.} Then the function
{\displaystyle h~:=~{\frac {f_{A}}{f_{A}+f_{B}}}:\mathbb {R} ^{n}\to [0,1]}
is smooth and for any {\displaystyle x\in \mathbb {R} ^{n},} h(x) = 0 if and only if x ∈ A, h(x) = 1 if and only if x ∈ B, and 0 < h(x) < 1 if and only if x ∉ A ∪ B. In particular, h(x) ≠ 0 if and only if {\displaystyle x\in \mathbb {R} ^{n}\smallsetminus A,} so if in addition {\displaystyle U:=\mathbb {R} ^{n}\smallsetminus A} is relatively compact in {\displaystyle \mathbb {R} ^{n}} (where A ∩ B = ∅ implies B ⊆ U), then h will be a smooth bump function with support in {\displaystyle {\overline {U}}.}
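In one dimension, with A = (−∞, 0] and B = [1, ∞), the corollary's function h can be realized concretely (a sketch; all names are ours):

```python
import math

def c(t):
    # smooth, zero exactly on (-inf, 0]
    return math.exp(-1.0 / t) if t > 0 else 0.0

# disjoint closed sets A = (-inf, 0] and B = [1, inf)
def f_A(x):
    return c(x)          # vanishes exactly on A

def f_B(x):
    return c(1.0 - x)    # vanishes exactly on B

def h(x):
    # smooth, 0 exactly on A, 1 exactly on B, strictly between elsewhere
    return f_A(x) / (f_A(x) + f_B(x))
```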
== Properties and uses ==
While bump functions are smooth, the identity theorem prohibits their being analytic unless they vanish identically. Bump functions are often used as mollifiers, as smooth cutoff functions, and to form smooth partitions of unity. They are the most common class of test functions used in analysis. The space of bump functions is closed under many operations. For instance, the sum, product, or convolution of two bump functions is again a bump function, and any differential operator with smooth coefficients, when applied to a bump function, will produce another bump function.
If the boundary of the bump function's domain is ∂x, then to fulfill the requirement of smoothness the function must preserve the continuity of all its derivatives, which leads to the following requirement at the boundaries of its domain:
{\displaystyle \lim _{x\to \partial x^{\pm }}{\frac {d^{n}}{dx^{n}}}f(x)=0,\,{\text{ for all }}n\geq 0,\,n\in \mathbb {Z} }
The Fourier transform of a bump function is a (real) analytic function, and it can be extended to the whole complex plane: hence it cannot be compactly supported unless it is zero, since the only entire analytic bump function is the zero function (see Paley–Wiener theorem and Liouville's theorem). Because the bump function is infinitely differentiable, its Fourier transform must decay faster than any finite power of
1/k for a large angular frequency |k|.
The Fourier transform of the particular bump function
{\displaystyle \Psi (x)=e^{-1/(1-x^{2})}\mathbf {1} _{\{|x|<1\}}}
from above can be analyzed by a saddle-point method, and decays asymptotically as
{\displaystyle |k|^{-3/4}e^{-{\sqrt {|k|}}}}
for large |k|.
== See also ==
Cutoff function – Integration kernels for smoothing out sharp features
Laplacian of the indicator – Limit of sequence of smooth functions
Non-analytic smooth function – Mathematical functions which are smooth but not analytic
Schwartz space – Function space of all functions whose derivatives are rapidly decreasing
== Citations ==
== References ==
Nestruev, Jet (10 September 2020). Smooth Manifolds and Observables. Graduate Texts in Mathematics. Vol. 220. Cham, Switzerland: Springer Nature. ISBN 978-3-030-45649-8. OCLC 1195920718.
In mathematics, a group action of a group G on a set S is a group homomorphism from G to some group (under function composition) of functions from S to itself. It is said that G acts on S.
Many sets of transformations form a group under function composition; for example, the rotations around a point in the plane. It is often useful to consider the group as an abstract group, and to say that one has a group action of the abstract group that consists of performing the transformations of the group of transformations. The reason for distinguishing the group from the transformations is that, generally, a group of transformations of a structure acts also on various related structures; for example, the above rotation group also acts on triangles by transforming triangles into triangles.
If a group acts on a structure, it will usually also act on objects built from that structure. For example, the group of Euclidean isometries acts on Euclidean space and also on the figures drawn in it; in particular, it acts on the set of all triangles. Similarly, the group of symmetries of a polyhedron acts on the vertices, the edges, and the faces of the polyhedron.
A group action on a vector space is called a representation of the group. In the case of a finite-dimensional vector space, it allows one to identify many groups with subgroups of the general linear group {\displaystyle \operatorname {GL} (n,K)}, the group of the invertible matrices of dimension n over a field K.
The symmetric group {\displaystyle S_{n}} acts on any set with n elements by permuting the elements of the set. Although the group of all permutations of a set depends formally on the set, the concept of group action allows one to consider a single group for studying the permutations of all sets with the same cardinality.
== Definition ==
=== Left group action ===
If G is a group with identity element e, and X is a set, then a (left) group action α of G on X is a function {\displaystyle \alpha :G\times X\to X} that satisfies the following two axioms:

Identity: α(e, x) = x;
Compatibility: α(g, α(h, x)) = α(gh, x);

for all g and h in G and all x in X.
The group G is then said to act on X (from the left). A set X together with an action of G is called a (left) G-set.
It can be notationally convenient to curry the action α, so that, instead, one has a collection of transformations αg : X → X, with one transformation αg for each group element g ∈ G. The identity and compatibility relations then read
{\displaystyle \alpha _{e}(x)=x}
and
{\displaystyle \alpha _{g}(\alpha _{h}(x))=(\alpha _{g}\circ \alpha _{h})(x)=\alpha _{gh}(x)}
The second axiom states that the function composition is compatible with the group multiplication; they form a commutative diagram. This axiom can be shortened even further, and written as {\displaystyle \alpha _{g}\circ \alpha _{h}=\alpha _{gh}}.
With the above understanding, it is very common to avoid writing α entirely, and to replace it with either a dot, or with nothing at all. Thus, α(g, x) can be shortened to g⋅x or gx, especially when the action is clear from context. The axioms are then
{\displaystyle e{\cdot }x=x}
{\displaystyle g{\cdot }(h{\cdot }x)=(gh){\cdot }x}
From these two axioms, it follows that for any fixed g in G, the function from X to itself which maps x to g⋅x is a bijection, with inverse bijection the corresponding map for g−1. Therefore, one may equivalently define a group action of G on X as a group homomorphism from G into the symmetric group Sym(X) of all bijections from X to itself.
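The axioms are easy to verify mechanically on a small example; here Z/4Z acts on the set {0, 1, 2, 3} by cyclic shifting (a sketch; all names are ours):

```python
from itertools import product

n = 4
G = range(n)   # Z/4Z with addition modulo 4; the identity element is 0
X = range(n)

def act(g, x):
    # left action of Z/4Z on X by cyclic shift
    return (g + x) % n

# identity axiom: e.x = x
assert all(act(0, x) == x for x in X)
# compatibility axiom: g.(h.x) = (gh).x
assert all(act(g, act(h, x)) == act((g + h) % n, x)
           for g, h, x in product(G, G, X))
# each act(g, -) is a bijection of X, as noted in the text
assert all(sorted(act(g, x) for x in X) == list(X) for g in G)
```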
=== Right group action ===
Likewise, a right group action of G on X is a function {\displaystyle \alpha :X\times G\to X,} that satisfies the analogous axioms:

Identity: α(x, e) = x;
Compatibility: α(α(x, g), h) = α(x, gh);

for all g and h in G and all x in X (with α(x, g) often shortened to xg or x⋅g when the action being considered is clear from context).
The difference between left and right actions is in the order in which a product gh acts on x. For a left action, h acts first, followed by g second. For a right action, g acts first, followed by h second. Because of the formula (gh)−1 = h−1g−1, a left action can be constructed from a right action by composing with the inverse operation of the group. Also, a right action of a group G on X can be considered as a left action of its opposite group Gop on X.
Thus, for establishing general properties of group actions, it suffices to consider only left actions. However, there are cases where this is not possible. For example, the multiplication of a group induces both a left action and a right action on the group itself—multiplication on the left and on the right, respectively.
== Notable properties of actions ==
Let G be a group acting on a set X. The action is called faithful or effective if g⋅x = x for all x ∈ X implies that g = eG. Equivalently, the homomorphism from G to the group of bijections of X corresponding to the action is injective.
The action is called free (or semiregular or fixed-point free) if the statement that g⋅x = x for some x ∈ X already implies that g = eG. In other words, no non-trivial element of G fixes a point of X. This is a much stronger property than faithfulness.
For example, the action of any group on itself by left multiplication is free. This observation implies Cayley's theorem that any group can be embedded in a symmetric group (which is infinite when the group is). A finite group may act faithfully on a set of size much smaller than its cardinality (however such an action cannot be free). For instance the abelian 2-group (Z / 2Z)^n (of cardinality 2^n) acts faithfully on a set of size 2n. This is not always the case; for example the cyclic group Z / 2^nZ cannot act faithfully on a set of size less than 2^n.
In general the smallest set on which a faithful action can be defined can vary greatly for groups of the same size. For example, three groups of size 120 are the symmetric group S5, the icosahedral group A5 × Z / 2Z and the cyclic group Z / 120Z. The smallest sets on which faithful actions can be defined for these groups are of size 5, 7, and 16 respectively.
=== Transitivity properties ===
The action of G on X is called transitive if for any two points x, y ∈ X there exists a g ∈ G so that g ⋅ x = y.
The action is simply transitive (or sharply transitive, or regular) if it is both transitive and free. This means that given x, y ∈ X there is exactly one g ∈ G such that g ⋅ x = y. If X is acted upon simply transitively by a group G then it is called a principal homogeneous space for G or a G-torsor.
For an integer n ≥ 1, the action is n-transitive if X has at least n elements, and for any pair of n-tuples (x1, ..., xn), (y1, ..., yn) ∈ Xn with pairwise distinct entries (that is xi ≠ xj, yi ≠ yj when i ≠ j) there exists a g ∈ G such that g⋅xi = yi for i = 1, ..., n. In other words, the action on the subset of Xn of tuples without repeated entries is transitive. For n = 2, 3 this is often called double, respectively triple, transitivity. The class of 2-transitive groups (that is, subgroups of a finite symmetric group whose action is 2-transitive) and more generally multiply transitive groups is well-studied in finite group theory.
An action is sharply n-transitive when the action on tuples without repeated entries in Xn is sharply transitive.
==== Examples ====
The action of the symmetric group of X is transitive, in fact n-transitive for any n up to the cardinality of X. If X has cardinality n, the action of the alternating group is (n − 2)-transitive but not (n − 1)-transitive.
The action of the general linear group of a vector space V on the set V ∖ {0} of non-zero vectors is transitive, but not 2-transitive (similarly for the action of the special linear group if the dimension of V is at least 2). The action of the orthogonal group of a Euclidean space is not transitive on nonzero vectors but it is on the unit sphere.
=== Primitive actions ===
The action of G on X is called primitive if there is no partition of X preserved by all elements of G apart from the trivial partitions (the partition in a single piece and its dual, the partition into singletons).
=== Topological properties ===
Assume that X is a topological space and the action of G is by homeomorphisms.
The action is wandering if every x ∈ X has a neighbourhood U such that there are only finitely many g ∈ G with g⋅U ∩ U ≠ ∅.
More generally, a point x ∈ X is called a point of discontinuity for the action of G if there is an open subset U ∋ x such that there are only finitely many g ∈ G with g⋅U ∩ U ≠ ∅. The domain of discontinuity of the action is the set of all points of discontinuity. Equivalently it is the largest G-stable open subset Ω ⊂ X such that the action of G on Ω is wandering. In a dynamical context this is also called a wandering set.
The action is properly discontinuous if for every compact subset K ⊂ X there are only finitely many g ∈ G such that g⋅K ∩ K ≠ ∅. This is strictly stronger than wandering; for instance the action of Z on R2 ∖ {(0, 0)} given by n⋅(x, y) = (2^n x, 2^−n y) is wandering and free but not properly discontinuous.
The action by deck transformations of the fundamental group of a locally simply connected space on a universal cover is wandering and free. Such actions can be characterized by the following property: every x ∈ X has a neighbourhood U such that g⋅U ∩ U = ∅ for every g ∈ G ∖ {eG}. Actions with this property are sometimes called freely discontinuous, and the largest subset on which the action is freely discontinuous is then called the free regular set.
An action of a group G on a locally compact space X is called cocompact if there exists a compact subset A ⊂ X such that X = G ⋅ A. For a properly discontinuous action, cocompactness is equivalent to compactness of the quotient space X / G.
=== Actions of topological groups ===
Now assume G is a topological group and X a topological space on which it acts by homeomorphisms. The action is said to be continuous if the map G × X → X is continuous for the product topology.
The action is said to be proper if the map G × X → X × X defined by (g, x) ↦ (x, g⋅x) is proper. This means that given compact sets K, K′ the set of g ∈ G such that g⋅K ∩ K′ ≠ ∅ is compact. In particular, this is equivalent to proper discontinuity if G is a discrete group.
It is said to be locally free if there exists a neighbourhood U of eG such that g⋅x ≠ x for all x ∈ X and g ∈ U ∖ {eG}.
The action is said to be strongly continuous if the orbital map g ↦ g⋅x is continuous for every x ∈ X. Contrary to what the name suggests, this is a weaker property than continuity of the action.
If G is a Lie group and X a differentiable manifold, then the subspace of smooth points for the action is the set of points x ∈ X such that the map g ↦ g⋅x is smooth. There is a well-developed theory of Lie group actions, i.e. action which are smooth on the whole space.
=== Linear actions ===
If g acts by linear transformations on a module over a commutative ring, the action is said to be irreducible if there are no proper nonzero g-invariant submodules. It is said to be semisimple if it decomposes as a direct sum of irreducible actions.
== Orbits and stabilizers ==
Consider a group G acting on a set X. The orbit of an element x in X is the set of elements in X to which x can be moved by the elements of G. The orbit of x is denoted by G⋅x:
{\displaystyle G{\cdot }x=\{g{\cdot }x:g\in G\}.}
The defining properties of a group guarantee that the set of orbits of (points x in) X under the action of G form a partition of X. The associated equivalence relation is defined by saying x ~ y if and only if there exists a g in G with g⋅x = y. The orbits are then the equivalence classes under this relation; two elements x and y are equivalent if and only if their orbits are the same, that is, G⋅x = G⋅y.
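Orbits can be computed by closing a point under the generators of the acting group; for a finite set, forward closure suffices because each generator permutes the finite closed set (a sketch; names ours):

```python
def orbit(x, gens):
    # orbit of x under the group generated by the permutations in gens
    # (each generator given as a dict mapping every point to its image)
    seen, frontier = {x}, [x]
    while frontier:
        y = frontier.pop()
        for g in gens:
            z = g[y]
            if z not in seen:
                seen.add(z)
                frontier.append(z)
    return frozenset(seen)

# the permutation (0 1 2)(3 4) acting on {0, ..., 4}
g = {0: 1, 1: 2, 2: 0, 3: 4, 4: 3}
orbits = {orbit(x, [g]) for x in range(5)}
# the orbits partition the set into {0, 1, 2} and {3, 4}
```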
The group action is transitive if and only if it has exactly one orbit, that is, if there exists x in X with G⋅x = X. This is the case if and only if G⋅x = X for all x in X (given that X is non-empty).
The set of all orbits of X under the action of G is written as X / G (or, less frequently, as G \ X), and is called the quotient of the action. In geometric situations it may be called the orbit space, while in algebraic situations it may be called the space of coinvariants, and written XG, by contrast with the invariants (fixed points), denoted XG: the coinvariants are a quotient while the invariants are a subset. The coinvariant terminology and notation are used particularly in group cohomology and group homology, which use the same superscript/subscript convention.
=== Invariant subsets ===
If Y is a subset of X, then G⋅Y denotes the set {g⋅y : g ∈ G and y ∈ Y}. The subset Y is said to be invariant under G if G⋅Y = Y (which is equivalent G⋅Y ⊆ Y). In that case, G also operates on Y by restricting the action to Y. The subset Y is called fixed under G if g⋅y = y for all g in G and all y in Y. Every subset that is fixed under G is also invariant under G, but not conversely.
Every orbit is an invariant subset of X on which G acts transitively. Conversely, any invariant subset of X is a union of orbits. The action of G on X is transitive if and only if all elements are equivalent, meaning that there is only one orbit.
A G-invariant element of X is x ∈ X such that g⋅x = x for all g ∈ G. The set of all such x is denoted XG and called the G-invariants of X. When X is a G-module, XG is the zeroth cohomology group of G with coefficients in X, and the higher cohomology groups are the derived functors of the functor of G-invariants.
=== Fixed points and stabilizer subgroups ===
Given g in G and x in X with g⋅x = x, it is said that "x is a fixed point of g" or that "g fixes x". For every x in X, the stabilizer subgroup of G with respect to x (also called the isotropy group or little group) is the set of all elements in G that fix x:
{\displaystyle G_{x}=\{g\in G:g{\cdot }x=x\}.}
This is a subgroup of G, though typically not a normal one. The action of G on X is free if and only if all stabilizers are trivial. The kernel N of the homomorphism with the symmetric group, G → Sym(X), is given by the intersection of the stabilizers Gx for all x in X. If N is trivial, the action is said to be faithful (or effective).
Let x and y be two elements in X, and let g be a group element such that y = g⋅x. Then the two stabilizer groups Gx and Gy are related by Gy = gGxg−1. Proof: by definition, h ∈ Gy if and only if h⋅(g⋅x) = g⋅x. Applying g−1 to both sides of this equality yields (g−1hg)⋅x = x; that is, g−1hg ∈ Gx. An opposite inclusion follows similarly by taking h ∈ Gx and x = g−1⋅y.
The above says that the stabilizers of elements in the same orbit are conjugate to each other. Thus, to each orbit, we can associate a conjugacy class of a subgroup of G (that is, the set of all conjugates of the subgroup). Let (H) denote the conjugacy class of H. Then the orbit O has type (H) if the stabilizer Gx of some/any x in O belongs to (H). A maximal orbit type is often called a principal orbit type.
=== Orbit-stabilizer theorem ===
Orbits and stabilizers are closely related. For a fixed x in X, consider the map f : G → X given by g ↦ g⋅x. By definition the image f(G) of this map is the orbit G⋅x. The condition for two elements to have the same image is
{\displaystyle f(g)=f(h)\iff g{\cdot }x=h{\cdot }x\iff g^{-1}h{\cdot }x=x\iff g^{-1}h\in G_{x}\iff h\in gG_{x}.}
In other words, f(g) = f(h) if and only if g and h lie in the same coset for the stabilizer subgroup Gx. Thus, the fiber f−1({y}) of f over any y in G⋅x is contained in such a coset, and every such coset also occurs as a fiber. Therefore f induces a bijection between the set G / Gx of cosets for the stabilizer subgroup and the orbit G⋅x, which sends gGx ↦ g⋅x. This result is known as the orbit-stabilizer theorem.
If G is finite then the orbit-stabilizer theorem, together with Lagrange's theorem, gives
{\displaystyle |G\cdot x|=[G\,:\,G_{x}]=|G|/|G_{x}|,}
in other words the length of the orbit of x times the order of its stabilizer is the order of the group. In particular that implies that the orbit length is a divisor of the group order.
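This counting identity can be checked directly for S3 acting on {0, 1, 2} (a sketch; names ours):

```python
from itertools import permutations

X = (0, 1, 2)
G = list(permutations(X))            # S3 as tuples; g acts on x by x -> g[x]

x = 0
orbit = {g[x] for g in G}            # orbit of x: all of X, by transitivity
stabilizer = [g for g in G if g[x] == x]

# orbit-stabilizer: |G| = |G.x| * |G_x|, here 6 = 3 * 2
assert len(G) == len(orbit) * len(stabilizer)
```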
Example: Let G be a group of prime order p acting on a set X with k elements. Since each orbit has either 1 or p elements, there are at least k mod p orbits of length 1 which are G-invariant elements. More specifically, k and the number of G-invariant elements are congruent modulo p.
This result is especially useful since it can be employed for counting arguments (typically in situations where X is finite as well).
Example: We can use the orbit-stabilizer theorem to count the automorphisms of a graph. Consider the cubical graph as pictured, and let G denote its automorphism group. Then G acts on the set of vertices {1, 2, ..., 8}, and this action is transitive as can be seen by composing rotations about the center of the cube. Thus, by the orbit-stabilizer theorem, |G| = |G ⋅ 1| |G1| = 8 |G1|. Applying the theorem now to the stabilizer G1, we can obtain |G1| = |(G1) ⋅ 2| |(G1)2|. Any element of G that fixes 1 must send 2 to either 2, 4, or 5. As an example of such automorphisms consider the rotation around the diagonal axis through 1 and 7 by 2π/3, which permutes 2, 4, 5 and 3, 6, 8, and fixes 1 and 7. Thus, |(G1) ⋅ 2| = 3. Applying the theorem a third time gives |(G1)2| = |((G1)2) ⋅ 3| |((G1)2)3|. Any element of G that fixes 1 and 2 must send 3 to either 3 or 6. Reflecting the cube at the plane through 1, 2, 7 and 8 is such an automorphism sending 3 to 6, thus |((G1)2) ⋅ 3| = 2. One also sees that ((G1)2)3 consists only of the identity automorphism, as any element of G fixing 1, 2 and 3 must also fix all other vertices, since they are determined by their adjacency to 1, 2 and 3. Combining the preceding calculations, we can now obtain |G| = 8 ⋅ 3 ⋅ 2 ⋅ 1 = 48.
=== Burnside's lemma ===
A result closely related to the orbit-stabilizer theorem is Burnside's lemma:
{\displaystyle |X/G|={\frac {1}{|G|}}\sum _{g\in G}|X^{g}|,}
where Xg is the set of points fixed by g. This result is mainly of use when G and X are finite, when it can be interpreted as follows: the number of orbits is equal to the average number of points fixed per group element.
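For example, Burnside's lemma counts the 2-colorings of 4 beads on a necklace up to rotation: the four rotations fix 16, 2, 4, and 2 colorings, giving (16 + 2 + 4 + 2)/4 = 6 orbits (a sketch; names ours):

```python
from itertools import product

n, colors = 4, 2
X = list(product(range(colors), repeat=n))   # all 2^4 colorings of 4 beads

def rotate(x, k):
    # rotation by k positions, an element of the cyclic group C4
    return x[k:] + x[:k]

# Burnside: number of orbits = average number of colorings fixed per rotation
fixed = [sum(1 for x in X if rotate(x, k) == x) for k in range(n)]
num_orbits = sum(fixed) // n

# cross-check by counting orbits directly via canonical representatives
direct = len({min(rotate(x, k) for k in range(n)) for x in X})
assert num_orbits == direct
```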
Fixing a group G, the set of formal differences of finite G-sets forms a ring called the Burnside ring of G, where addition corresponds to disjoint union, and multiplication to Cartesian product.
== Examples ==
The trivial action of any group G on any set X is defined by g⋅x = x for all g in G and all x in X; that is, every group element induces the identity permutation on X.
In every group G, left multiplication is an action of G on G: g⋅x = gx for all g, x in G. This action is free and transitive (regular), and forms the basis of a rapid proof of Cayley's theorem – that every group is isomorphic to a subgroup of the symmetric group of permutations of the set G.
In every group G with subgroup H, left multiplication is an action of G on the set of cosets G / H: g⋅aH = gaH for all g, a in G. In particular if H contains no nontrivial normal subgroups of G this induces an isomorphism from G to a subgroup of the permutation group of degree [G : H].
In every group G, conjugation is an action of G on G: g⋅x = gxg−1. An exponential notation is commonly used for the right-action variant: xg = g−1xg; it satisfies (xg)h = xgh.
In every group G with subgroup H, conjugation is an action of G on conjugates of H: g⋅K = gKg−1 for all g in G and K conjugates of H.
An action of Z on a set X uniquely determines and is determined by an automorphism of X, given by the action of 1. Similarly, an action of Z / 2Z on X is equivalent to the data of an involution of X.
The symmetric group Sn and its subgroups act on the set {1, ..., n} by permuting its elements.
The symmetry group of a polyhedron acts on the set of vertices of that polyhedron. It also acts on the set of faces or the set of edges of the polyhedron.
The symmetry group of any geometrical object acts on the set of points of that object.
For a coordinate space V over a field F with group of units F*, the mapping F* × V → V given by a × (x1, x2, ..., xn) ↦ (ax1, ax2, ..., axn) is a group action called scalar multiplication.
The automorphism group of a vector space (or graph, or group, or ring ...) acts on the vector space (or set of vertices of the graph, or group, or ring ...).
The general linear group GL(n, K) and its subgroups, particularly its Lie subgroups (including the special linear group SL(n, K), orthogonal group O(n, K), special orthogonal group SO(n, K), and symplectic group Sp(n, K)) are Lie groups that act on the vector space Kn. The group operations are given by multiplying the matrices from the groups with the vectors from Kn.
The general linear group GL(n, Z) acts on Zn by natural matrix action. The orbits of its action are classified by the greatest common divisor of coordinates of the vector in Zn.
The affine group acts transitively on the points of an affine space, and the subgroup V of the affine group (that is, a vector space) has transitive and free (that is, regular) action on these points; indeed this can be used to give a definition of an affine space.
The projective linear group PGL(n + 1, K) and its subgroups, particularly its Lie subgroups, are Lie groups that act on the projective space Pn(K). This is a quotient of the action of the general linear group on projective space. Particularly notable is PGL(2, K), the symmetries of the projective line, which is sharply 3-transitive, preserving the cross ratio; the Möbius group PGL(2, C) is of particular interest.
The isometries of the plane act on the set of 2D images and patterns, such as wallpaper patterns. The definition can be made more precise by specifying what is meant by image or pattern, for example, a function of position with values in a set of colors. Isometries are in fact one example of affine group (action).
The sets acted on by a group G comprise the category of G-sets in which the objects are G-sets and the morphisms are G-set homomorphisms: functions f : X → Y such that g⋅(f(x)) = f(g⋅x) for every g in G.
The Galois group of a field extension L / K acts on the field L but has only a trivial action on elements of the subfield K. Subgroups of Gal(L / K) correspond to subfields of L that contain K, that is, intermediate field extensions between L and K.
The additive group of the real numbers (R, +) acts on the phase space of "well-behaved" systems in classical mechanics (and in more general dynamical systems) by time translation: if t is in R and x is in the phase space, then x describes a state of the system, and t + x is defined to be the state of the system t seconds later if t is positive or −t seconds ago if t is negative.
The additive group of the real numbers (R, +) acts on the set of real functions of a real variable in various ways, with (t⋅f)(x) equal to, for example, f(x + t), f(x) + t, f(xet), f(x)et, f(x + t)et, or f(xet) + t, but not f(xet + t).
Given a group action of G on X, we can define an induced action of G on the power set of X, by setting g⋅U = {g⋅u : u ∈ U} for every subset U of X and every g in G. This is useful, for instance, in studying the action of the large Mathieu group on a 24-set and in studying symmetry in certain models of finite geometries.
The quaternions with norm 1 (the versors), as a multiplicative group, act on R3: for any such quaternion z = cos α/2 + v sin α/2, the mapping f(x) = zxz* is a counterclockwise rotation through an angle α about an axis given by a unit vector v; −z gives the same rotation; see quaternions and spatial rotation. This is not a faithful action because the quaternion −1 leaves all points where they were, as does the quaternion 1.
Given left G-sets X, Y, there is a left G-set YX whose elements are G-equivariant maps α : X × G → Y, and with left G-action given by g⋅α = α ∘ (idX × –g) (where "–g" indicates right multiplication by g). This G-set has the property that its fixed points correspond to equivariant maps X → Y; more generally, it is an exponential object in the category of G-sets.
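The gcd classification of the GL(n, Z)-orbits mentioned in the list above is easy to spot-check. The sketch below (plain Python; S and T are the standard generators of SL(2, Z)) applies a random word in S and T to a vector and verifies that the gcd of its coordinates never changes:

```python
from math import gcd
from random import choice, seed

S = ((0, -1), (1, 0))    # standard generators of SL(2, Z)
T = ((1, 1), (0, 1))

def act(m, v):
    # natural action of the matrix m on the vector v
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

seed(0)
v = (4, 6)
g = gcd(v[0], v[1])               # the orbit invariant, here 2
for _ in range(100):              # random word in S and T
    v = act(choice((S, T)), v)
    assert gcd(v[0], v[1]) == g   # gcd of the coordinates never changes
print(v, g)
```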
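The versor action on R3 described above can likewise be verified numerically. This sketch (our own helper names) implements the Hamilton product and the map f(x) = zxz*, rotates (1, 0, 0) by α = 90° about the z-axis, and confirms that z and −z give the same rotation:

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions represented as (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(z, v):
    # f(x) = z x z*, embedding the vector v as a pure quaternion
    zc = (z[0], -z[1], -z[2], -z[3])           # conjugate z*
    return qmul(qmul(z, (0.0,) + v), zc)[1:]   # drop the (zero) real part

alpha = math.pi / 2                            # 90° about the unit axis (0, 0, 1)
z = (math.cos(alpha / 2), 0.0, 0.0, math.sin(alpha / 2))
neg_z = tuple(-c for c in z)

print(rotate(z, (1.0, 0.0, 0.0)))      # ≈ (0, 1, 0): counterclockwise quarter turn
print(rotate(neg_z, (1.0, 0.0, 0.0)))  # −z acts as the same rotation
```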
== Group actions and groupoids ==
The notion of group action can be encoded by the action groupoid G′ = G ⋉ X associated to the group action. The stabilizers of the action are the vertex groups of the groupoid and the orbits of the action are its components.
== Morphisms and isomorphisms between G-sets ==
If X and Y are two G-sets, a morphism from X to Y is a function f : X → Y such that f(g⋅x) = g⋅f(x) for all g in G and all x in X. Morphisms of G-sets are also called equivariant maps or G-maps.
The composition of two morphisms is again a morphism. If a morphism f is bijective, then its inverse is also a morphism. In this case f is called an isomorphism, and the two G-sets X and Y are called isomorphic; for all practical purposes, isomorphic G-sets are indistinguishable.
Some example isomorphisms:
Every regular G action is isomorphic to the action of G on G given by left multiplication.
Every free G action is isomorphic to G × S, where S is some set and G acts on G × S by left multiplication on the first coordinate. (S can be taken to be the set of orbits X / G.)
Every transitive G action is isomorphic to left multiplication by G on the set of left cosets of some subgroup H of G. (H can be taken to be the stabilizer group of any element of the original G-set.)
With this notion of morphism, the collection of all G-sets forms a category; this category is a Grothendieck topos (in fact, assuming a classical metalogic, this topos will even be Boolean).
== Variants and generalizations ==
We can also consider actions of monoids on sets, using the same two axioms as above. However, this does not yield bijective maps and equivalence relations. See semigroup action.
Instead of actions on sets, we can define actions of groups and monoids on objects of an arbitrary category: start with an object X of some category, and then define an action on X as a monoid homomorphism into the monoid of endomorphisms of X. If X has an underlying set, then all definitions and facts stated above can be carried over. For example, if we take the category of vector spaces, we obtain group representations in this fashion.
We can view a group G as a category with a single object in which every morphism is invertible. A (left) group action is then nothing but a (covariant) functor from G to the category of sets, and a group representation is a functor from G to the category of vector spaces. A morphism between G-sets is then a natural transformation between the group action functors. In analogy, an action of a groupoid is a functor from the groupoid to the category of sets or to some other category.
In addition to continuous actions of topological groups on topological spaces, one also often considers smooth actions of Lie groups on smooth manifolds, regular actions of algebraic groups on algebraic varieties, and actions of group schemes on schemes. All of these are examples of group objects acting on objects of their respective category.
== Gallery ==
== See also ==
Gain graph
Group with operators
Measurable group action
Monoid action
Young–Deruyts development
== Notes ==
== Citations ==
== References ==
Aschbacher, Michael (2000). Finite Group Theory. Cambridge University Press. ISBN 978-0-521-78675-1. MR 1777008.
Dummit, David; Richard Foote (2003). Abstract Algebra (3rd ed.). Wiley. ISBN 0-471-43334-9.
Eie, Minking; Chang, Shou-Te (2010). A Course on Abstract Algebra. World Scientific. ISBN 978-981-4271-88-2.
Hatcher, Allen (2002), Algebraic Topology, Cambridge University Press, ISBN 978-0-521-79540-1, MR 1867354.
Rotman, Joseph (1995). An Introduction to the Theory of Groups. Graduate Texts in Mathematics 148 (4th ed.). Springer-Verlag. ISBN 0-387-94285-8.
Smith, Jonathan D.H. (2008). Introduction to abstract algebra. Textbooks in mathematics. CRC Press. ISBN 978-1-4200-6371-4.
Kapovich, Michael (2009), Hyperbolic manifolds and discrete groups, Modern Birkhäuser Classics, Birkhäuser, pp. xxvii+467, ISBN 978-0-8176-4912-8, Zbl 1180.57001
Maskit, Bernard (1988), Kleinian groups, Grundlehren der Mathematischen Wissenschaften, vol. 287, Springer-Verlag, pp. XIII+326, Zbl 0627.30039
Perrone, Paolo (2024), Starting Category Theory, World Scientific, doi:10.1142/9789811286018_0005, ISBN 978-981-12-8600-1
Thurston, William (1980), The geometry and topology of three-manifolds, Princeton lecture notes, p. 175, archived from the original on 2020-07-27, retrieved 2016-02-08
Thurston, William P. (1997), Three-dimensional geometry and topology. Vol. 1., Princeton Mathematical Series, vol. 35, Princeton University Press, pp. x+311, Zbl 0873.57001
tom Dieck, Tammo (1987), Transformation groups, de Gruyter Studies in Mathematics, vol. 8, Berlin: Walter de Gruyter & Co., p. 29, doi:10.1515/9783110858372.312, ISBN 978-3-11-009745-0, MR 0889050
== External links ==
"Action of a group on a manifold", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Group Action". MathWorld.
Derived algebraic geometry is a branch of mathematics that generalizes algebraic geometry to a situation where commutative rings, which provide local charts, are replaced by either differential graded algebras (over Q), simplicial commutative rings or E∞-ring spectra from algebraic topology, whose higher homotopy groups account for the non-discreteness (e.g., Tor) of the structure sheaf. Grothendieck's scheme theory allows the structure sheaf to carry nilpotent elements. Derived algebraic geometry can be thought of as an extension of this idea, and provides natural settings for intersection theory (or motivic homotopy theory) of singular algebraic varieties and cotangent complexes in deformation theory (cf. J. Francis), among the other applications.
== Introduction ==
Basic objects of study in the field are derived schemes and derived stacks. The oft-cited motivation is Serre's intersection formula. In the usual formulation, the formula involves the Tor functor and thus, unless higher Tor vanish, the scheme-theoretic intersection (i.e., fiber product of immersions) does not yield the correct intersection number. In the derived context, one takes the derived tensor product A ⊗^L B, whose higher homotopy is higher Tor, and whose Spec is not a scheme but a derived scheme. Hence, the "derived" fiber product yields the correct intersection number. See Theorem 3.22 in Khan, where derived intersection theory has been developed.
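The intersection formula alluded to here is, in its standard form (our notation, not taken from this article), an alternating sum of lengths of Tor modules; the i = 0 term alone is the naive scheme-theoretic intersection number:

```latex
i(z;\, V, W) \;=\; \sum_{i \ge 0} (-1)^{i}\,
  \operatorname{length}_{\mathcal{O}_{X,z}}
  \operatorname{Tor}_{i}^{\mathcal{O}_{X,z}}
  \bigl(\mathcal{O}_{V,z},\, \mathcal{O}_{W,z}\bigr)
```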
The term "derived" is used in the same way as derived functor or derived category, in the sense that the category of commutative rings is being replaced with an ∞-category of "derived rings." In classical algebraic geometry, the derived category of quasi-coherent sheaves is viewed as a triangulated category, but it has a natural enhancement to a stable ∞-category, which can be thought of as the ∞-categorical analogue of an abelian category.
== Definitions ==
Derived algebraic geometry is fundamentally the study of geometric objects using homological algebra and homotopy. Since objects in this field should encode the homological and homotopy information, there are various notions of what derived spaces encapsulate. The basic objects of study in derived algebraic geometry are derived schemes, and more generally, derived stacks. Heuristically, derived schemes should be functors from some category of derived rings to the category of sets,
F : DerRings → Sets,
which can be generalized further to have targets of higher groupoids (which are expected to be modelled by homotopy types). These derived stacks are suitable functors of the form
F : DerRings → HoT.
Many authors model such functors as functors with values in simplicial sets, since they model homotopy types and are well-studied. Differing definitions on these derived spaces depend on a choice of what the derived rings are, and what the homotopy types should look like. Some examples of derived rings include commutative differential graded algebras, simplicial rings, and E∞-rings.
=== Derived geometry over characteristic 0 ===
Over characteristic 0 many of the derived geometries agree since the derived rings are the same: E∞-algebras are just commutative differential graded algebras over characteristic zero. We can then define derived schemes similarly to schemes in algebraic geometry. Similar to algebraic geometry, we could also view these objects as a pair (X, O_X^•), which is a topological space X with a sheaf of commutative differential graded algebras. Sometimes authors take the convention that these are negatively graded, so O_X^n = 0 for n > 0. The sheaf condition could also be weakened so that for a cover U_i of X, the sheaves O_{U_i}^• would glue on overlaps U_{ij} only by quasi-isomorphism.
Unfortunately, over characteristic p, differential graded algebras work poorly for homotopy theory, due to the fact d[x^p] = p·d[x^{p−1}] [1]. This can be overcome by using simplicial algebras.
=== Derived geometry over arbitrary characteristic ===
Derived rings over arbitrary characteristic are taken as simplicial commutative rings because of the nice categorical properties these have. In particular, the category of simplicial rings is simplicially enriched, meaning the hom-sets are themselves simplicial sets. Also, there is a canonical model structure on simplicial commutative rings coming from simplicial sets. In fact, it is a theorem of Quillen's that the model structure on simplicial sets can be transferred over to simplicial commutative rings.
=== Higher stacks ===
It is conjectured there is a final theory of higher stacks which model homotopy types. Grothendieck conjectured these would be modelled by globular groupoids, or a weak form of their definition. Simpson gives a useful definition in the spirit of Grothendieck's ideas. Recall that an algebraic stack (here a 1-stack) is called representable if the fiber product of any two schemes is isomorphic to a scheme. If we take the ansatz that a 0-stack is just an algebraic space and a 1-stack is just a stack, we can recursively define an n-stack as an object such that the fiber product along any two schemes is an (n-1)-stack. If we go back to the definition of an algebraic stack, this new definition agrees.
== Spectral schemes ==
Another theory of derived algebraic geometry is encapsulated by the theory of spectral schemes. Their definition requires a fair amount of technology in order to precisely state. But, in short, spectral schemes X = (𝔛, O_𝔛) are given by a spectrally ringed ∞-topos 𝔛 together with a sheaf of E∞-rings O_𝔛 on it, subject to some locality conditions similar to the definition of affine schemes. In particular:
𝔛 ≅ Shv(X_top) must be equivalent to the ∞-topos of some topological space.
There must exist a cover U_i of X_top such that the induced topos (𝔛_{U_i}, O_{𝔛_{U_i}}) is equivalent to a spectrally ringed topos Spec(A_i) for some E∞-ring A_i.
Moreover, the spectral scheme X is called connective if π_i(O_𝔛) = 0 for i < 0.
=== Examples ===
Recall that the topos of a point Sh(∗) is equivalent to the category of sets. Then, in the ∞-topos setting, we instead consider ∞-sheaves of ∞-groupoids (which are ∞-categories with all morphisms invertible), denoted Shv(∗), giving an analogue of the point topos in the ∞-topos setting. Then, the structure of a spectrally ringed space can be given by attaching an E∞-ring A. Notice this implies that spectrally ringed spaces generalize E∞-rings since every E∞-ring can be associated with a spectrally ringed site.
This spectrally ringed topos can be a spectral scheme if the spectrum of this ring gives an equivalent ∞-topos, so its underlying space is a point. For example, this can be given by the ring spectrum HQ, called the Eilenberg–MacLane spectrum, constructed from the Eilenberg–MacLane spaces K(Q, n).
== Applications ==
Derived algebraic geometry was used by Kerz, Strunk & Tamme (2018) to prove Weibel's conjecture on vanishing of negative K-theory.
The formulation of the Geometric Langlands conjecture by Arinkin and Gaitsgory uses derived algebraic geometry.
== See also ==
Derived scheme
Pursuing Stacks
Noncommutative algebraic geometry
Simplicial commutative ring
Derivator
Algebra over an operad
En-ring
Higher Topos Theory
∞-topos
étale spectrum
== Notes ==
== References ==
=== Simplicial DAG ===
Toën, Bertrand (2014-01-06). "Derived Algebraic Geometry". arXiv:1401.1044 [math.AG].
Toën, Bertrand; Vezzosi, Gabriele (2004). "From HAG to DAG: derived moduli stacks". In Greenlees, J. P. C. (ed.). Axiomatic, enriched and motivic homotopy theory. Proceedings of the NATO Advanced Study Institute, Cambridge, UK, September 9–20, 2002. NATO Science Series II: Mathematics, Physics and Chemistry. Vol. 131. Dordrecht: Kluwer Academic Publishers. pp. 173–216. ISBN 1-4020-1833-9. Zbl 1076.14002.
Vezzosi, Gabriele (2011). "What is ...a derived stack?" (PDF). Notices Am. Math. Soc. 58 (7): 955–958. Zbl 1228.14004.
=== Differential graded DAG ===
Eugster, J.; Pridham, J.P. (2021-10-25). "An introduction to derived (algebraic) geometry". arXiv:2109.14594 [math.AG].
=== En and E∞ -rings ===
Spectral algebraic geometry - Rezk
Operads and Sheaf Cohomology - JP May - E∞-rings over characteristic 0 and E∞-structure for sheaf cohomology
Tangent complex and Hochschild cohomology of En-rings https://arxiv.org/abs/1104.0181
Francis, John; Derived Algebraic Geometry Over E_n-Rings
=== Applications ===
Lowrey, Parker; Schürg, Timo. (2018). Grothendieck-Riemann-Roch for Derived Schemes
Ciocan-Fontanine, I., Kapranov, M. (2007). Virtual fundamental classes via dg-manifolds
Mann, E., Robalo M. (2018). Gromov-Witten theory with derived algebraic geometry
Ben-Zvi, D., Francis, J., and D. Nadler. Integral Transforms and Drinfeld Centers in Derived Algebraic Geometry.
Kerz, Moritz; Strunk, Florian; Tamme, Georg (2018), "Algebraic K-theory and descent for blow-ups", Invent. Math., 211 (2): 523–577, arXiv:1611.08466, Bibcode:2018InMat.211..523K, doi:10.1007/s00222-017-0752-2, MR 3748313, S2CID 119165673
==== Quantum Field Theories ====
Notes on supersymmetric and holomorphic field theories in dimensions 2 and 4
== External links ==
Jacob Lurie's Home Page
Overview of Spectral Algebraic Geometry
DAG reading group (Fall 2011) at Harvard
http://ncatlab.org/nlab/show/derived+algebraic+geometry
Michigan Derived Algebraic Geometry RTG Learning Workshop, 2012
Derived algebraic geometry: how to reach research level math?
Derived Algebraic Geometry and Chow Rings/Chow Motives
Gabriele Vezzosi, An overview of derived algebraic geometry, October 2013
In mathematics, Artin–Schreier theory is a branch of Galois theory, specifically a positive characteristic analogue of Kummer theory, for Galois extensions of degree equal to the characteristic p. Artin and Schreier (1927) introduced Artin–Schreier theory for extensions of prime degree p, and Witt (1936) generalized it to extensions of prime power degree pn.
If K is a field of characteristic p, a prime number, any polynomial of the form
X^p − X − α,
for α in K, is called an Artin–Schreier polynomial. When α ≠ β^p − β for all β ∈ K, this polynomial is irreducible in K[X], and its splitting field over K is a cyclic extension of K of degree p. This follows since for any root β, the numbers β + i, for 1 ≤ i ≤ p, form all the roots—by Fermat's little theorem—so the splitting field is K(β).
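This root structure can be verified mechanically. The sketch below (plain Python; helper names are ours) builds the quotient ring F_p[X]/(X^p − X − α), in which the class β of X is a root by construction, and checks that every β + i is also a root:

```python
p, alpha = 5, 2   # since b^p = b in F_p, any alpha != 0 satisfies alpha != b^p - b

def reduce_mod(c):
    # reduce a coefficient list modulo f(X) = X^p - X - alpha, using X^p ≡ X + alpha
    c = [x % p for x in c]
    while len(c) > p:
        top = c.pop()                             # coefficient of X^d, d >= p
        d = len(c)
        c[d - p + 1] = (c[d - p + 1] + top) % p   # X^d ≡ X^{d-p+1} + alpha·X^{d-p}
        c[d - p] = (c[d - p] + top * alpha) % p
    return c + [0] * (p - len(c))

def mul(a, b):
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] += ai * bj
    return reduce_mod(prod)

beta = [0, 1] + [0] * (p - 2)                 # the class of X in F_p[X]/(f)
for i in range(p):
    e = [(beta[0] + i) % p] + beta[1:]        # beta + i
    power = [1] + [0] * (p - 1)
    for _ in range(p):                        # e^p by repeated multiplication
        power = mul(power, e)
    value = [(power[k] - e[k]) % p for k in range(p)]
    value[0] = (value[0] - alpha) % p         # value = e^p - e - alpha
    assert value == [0] * p                   # each beta + i is a root
print("beta + i is a root of X^p - X - alpha for every i in F_p")
```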
Conversely, any Galois extension of K of degree p equal to the characteristic of K is the splitting field of an Artin–Schreier polynomial. This can be proved using additive counterparts of the methods involved in Kummer theory, such as Hilbert's theorem 90 and additive Galois cohomology. These extensions are called Artin–Schreier extensions.
Artin–Schreier extensions play a role in the theory of solvability by radicals, in characteristic p, representing one of the possible classes of extensions in a solvable chain.
They also play a part in the theory of abelian varieties and their isogenies. In characteristic p, an isogeny of degree p of abelian varieties must, for their function fields, give either an Artin–Schreier extension or a purely inseparable extension.
== Artin–Schreier–Witt extensions ==
There is an analogue of Artin–Schreier theory which describes cyclic extensions in characteristic p of p-power degree (not just degree p itself), using Witt vectors, developed by Witt (1936).
== References ==
Artin, Emil; Schreier, Otto (1927), "Eine Kennzeichnung der reell abgeschlossenen Körper", Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg, 5, Springer Berlin / Heidelberg: 225–231, doi:10.1007/BF02952522, ISSN 0025-5858
Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556, Zbl 0984.00001 Section VI.6
Neukirch, Jürgen; Schmidt, Alexander; Wingberg, Kay (2000), Cohomology of Number Fields, Grundlehren der Mathematischen Wissenschaften, vol. 323, Berlin: Springer-Verlag, ISBN 978-3-540-66671-4, MR 1737196, Zbl 0948.11001 Section VI.1
Witt, Ernst (1936), "Zyklische Körper und Algebren der Characteristik p vom Grad pn. Struktur diskret bewerteter perfekter Körper mit vollkommenem Restklassenkörper der Charakteristik pn", Journal für die reine und angewandte Mathematik (in German), 176: 126–140, doi:10.1515/crll.1937.176.126
In mathematics, a function of n variables is symmetric if its value is the same no matter the order of its arguments. For example, a function f(x1, x2) of two arguments is a symmetric function if and only if f(x1, x2) = f(x2, x1) for all x1 and x2 such that (x1, x2) and (x2, x1) are in the domain of f.
The most commonly encountered symmetric functions are polynomial functions, which are given by the symmetric polynomials.
A related notion is alternating polynomials, which change sign under an interchange of variables. Aside from polynomial functions, tensors that act as functions of several vectors can be symmetric, and in fact the space of symmetric k-tensors on a vector space V is isomorphic to the space of homogeneous polynomials of degree k on V.
Symmetric functions should not be confused with even and odd functions, which have a different sort of symmetry.
== Symmetrization ==
Given any function f in n variables with values in an abelian group, a symmetric function can be constructed by summing values of f over all permutations of the arguments. Similarly, an anti-symmetric function can be constructed by summing over even permutations and subtracting the sum over odd permutations. These operations are of course not invertible, and could well result in a function that is identically zero for nontrivial functions f. The only general case where f can be recovered if both its symmetrization and antisymmetrization are known is when n = 2 and the abelian group admits a division by 2 (inverse of doubling); then f is equal to half the sum of its symmetrization and its antisymmetrization.
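For n = 2 with rational values (so halving is available), the decomposition is easy to exhibit. A small sketch (our own function names) symmetrizes and antisymmetrizes f(x, y) = x^2·y and recovers f as half the sum:

```python
from itertools import permutations

def symmetrize(f):
    # sum f over all permutations of its arguments
    return lambda *xs: sum(f(*p) for p in permutations(xs))

def antisymmetrize(f):
    # signed sum; for n = 2 this is f(x, y) - f(y, x)
    return lambda x, y: f(x, y) - f(y, x)

def f(x, y):
    return x * x * y

f_sym, f_alt = symmetrize(f), antisymmetrize(f)

assert f_sym(3, 5) == f_sym(5, 3)       # symmetric
assert f_alt(3, 5) == -f_alt(5, 3)      # antisymmetric
# f is half the sum of its symmetrization and antisymmetrization
assert all((f_sym(x, y) + f_alt(x, y)) / 2 == f(x, y)
           for x in range(-3, 4) for y in range(-3, 4))
print("f recovered from its symmetric and antisymmetric parts")
```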
== Examples ==
Consider the real function f(x1, x2, x3) = (x − x1)(x − x2)(x − x3).
By definition, a symmetric function with n variables has the property that
f(x1, x2, …, xn) = f(x2, x1, …, xn) = f(x3, x1, …, xn, xn−1), etc.
In general, the function remains the same for every permutation of its variables. This means that, in this case,
(x − x1)(x − x2)(x − x3) = (x − x2)(x − x1)(x − x3) = (x − x3)(x − x1)(x − x2)
and so on, for all permutations of x1, x2, x3.
Consider the function f(x, y) = x^2 + y^2 − r^2. If x and y are interchanged the function becomes f(y, x) = y^2 + x^2 − r^2, which yields exactly the same results as the original f(x, y).
Consider now the function f(x, y) = ax^2 + by^2 − r^2. If x and y are interchanged, the function becomes f(y, x) = ay^2 + bx^2 − r^2. This function is not the same as the original if a ≠ b, which makes it non-symmetric.
== Applications ==
=== U-statistics ===
In statistics, an n-sample statistic (a function in n variables) that is obtained by bootstrapping symmetrization of a k-sample statistic, yielding a symmetric function in n variables, is called a U-statistic. Examples include the sample mean and sample variance.
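Concretely, the unbiased sample variance is the U-statistic of the symmetric two-argument kernel h(x, y) = (x − y)^2 / 2, averaged over all unordered pairs of observations. A quick numerical check (our own code, using Python's statistics module for comparison):

```python
from itertools import combinations
from statistics import variance

def u_statistic(h, data, k):
    # average the symmetric kernel h over all k-element subsets of the sample
    subsets = list(combinations(data, k))
    return sum(h(*s) for s in subsets) / len(subsets)

def h(x, y):
    return (x - y) ** 2 / 2   # two-sample kernel whose expectation is the variance

data = [1.0, 2.0, 4.0, 7.0]
print(u_statistic(h, data, 2), variance(data))  # both equal 7.0 for this sample
```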
== See also ==
Alternating polynomial
Elementary symmetric polynomial – Mathematical function
Even and odd functions – Functions such that f(–x) equals f(x) or –f(x)
Exchangeable random variables – Concept in statistics
Quasisymmetric function
Ring of symmetric functions
Symmetrization – process that converts any function in n variables to a symmetric function in n variables
Vandermonde polynomial – determinant of the Vandermonde matrix
== References ==
F. N. David, M. G. Kendall & D. E. Barton (1966) Symmetric Function and Allied Tables, Cambridge University Press.
Joseph P. S. Kung, Gian-Carlo Rota, & Catherine H. Yan (2009) Combinatorics: The Rota Way, §5.1 Symmetric functions, pp 222–5, Cambridge University Press, ISBN 978-0-521-73794-4.
A lattice is an abstract structure studied in the mathematical subdisciplines of order theory and abstract algebra. It consists of a partially ordered set in which every pair of elements has a unique supremum (also called a least upper bound or join) and a unique infimum (also called a greatest lower bound or meet). An example is given by the power set of a set, partially ordered by inclusion, for which the supremum is the union and the infimum is the intersection. Another example is given by the natural numbers, partially ordered by divisibility, for which the supremum is the least common multiple and the infimum is the greatest common divisor.
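The divisibility example can be checked directly by brute force. The sketch below (plain Python) verifies on a small sample that lcm really is the least upper bound and gcd the greatest lower bound for the divisibility order:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def divides(a, b):
    return b % a == 0      # the divisibility order: a ≤ b iff a divides b

universe = range(1, 61)
for a in (4, 6, 10, 15):
    for b in (9, 12, 25):
        j, m = lcm(a, b), gcd(a, b)
        assert divides(a, j) and divides(b, j)              # join is an upper bound
        assert all(divides(j, u) for u in universe          # ... and the least one
                   if divides(a, u) and divides(b, u))
        assert divides(m, a) and divides(m, b)              # meet is a lower bound
        assert all(divides(d, m) for d in universe          # ... and the greatest one
                   if divides(d, a) and divides(d, b))
print("lcm/gcd are join/meet under divisibility on the sample")
```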
Lattices can also be characterized as algebraic structures satisfying certain axiomatic identities. Since the two definitions are equivalent, lattice theory draws on both order theory and universal algebra. Semilattices include lattices, which in turn include Heyting and Boolean algebras. These lattice-like structures all admit order-theoretic as well as algebraic descriptions.
The sub-field of abstract algebra that studies lattices is called lattice theory.
== Definition ==
A lattice can be defined either order-theoretically as a partially ordered set, or as an algebraic structure.
=== As partially ordered set ===
A partially ordered set (poset) (L, ≤) is called a lattice if it is both a join- and a meet-semilattice, i.e. each two-element subset {a, b} ⊆ L has a join (i.e. least upper bound, denoted by a ∨ b) and dually a meet (i.e. greatest lower bound, denoted by a ∧ b). This definition makes ∧ and ∨ binary operations. Both operations are monotone with respect to the given order: a1 ≤ a2 and b1 ≤ b2 implies that a1 ∨ b1 ≤ a2 ∨ b2 and a1 ∧ b1 ≤ a2 ∧ b2.
It follows by an induction argument that every non-empty finite subset of a lattice has a least upper bound and a greatest lower bound. With additional assumptions, further conclusions may be possible; see Completeness (order theory) for more discussion of this subject. That article also discusses how one may rephrase the above definition in terms of the existence of suitable Galois connections between related partially ordered sets—an approach of special interest for the category theoretic approach to lattices, and for formal concept analysis.
Given a subset of a lattice, H ⊆ L, meet and join restrict to partial functions – they are undefined if their value is not in the subset H. The resulting structure on H is called a partial lattice. In addition to this extrinsic definition as a subset of some other algebraic structure (a lattice), a partial lattice can also be intrinsically defined as a set with two partial binary operations satisfying certain axioms.
=== As algebraic structure ===
A lattice is an algebraic structure
(
L
,
∨
,
∧
)
{\displaystyle (L,\vee ,\wedge )}
, consisting of a set
L
{\displaystyle L}
and two binary, commutative and associative operations
∨
{\displaystyle \vee }
and
∧
{\displaystyle \wedge }
on
L
{\displaystyle L}
satisfying the following axiomatic identities for all elements
a
,
b
∈
L
{\displaystyle a,b\in L}
(sometimes called absorption laws):
a
∨
(
a
∧
b
)
=
a
{\displaystyle a\vee (a\wedge b)=a}
a
∧
(
a
∨
b
)
=
a
{\displaystyle a\wedge (a\vee b)=a}
The following two identities are also usually regarded as axioms, even though they follow from the two absorption laws taken together. These are called idempotent laws.
a
∨
a
=
a
{\displaystyle a\vee a=a}
a
∧
a
=
a
{\displaystyle a\wedge a=a}
These axioms assert that both {\displaystyle (L,\vee )} and {\displaystyle (L,\wedge )} are semilattices. The absorption laws, the only axioms above in which both meet and join appear, distinguish a lattice from an arbitrary pair of semilattice structures and assure that the two semilattices interact appropriately. In particular, each semilattice is the dual of the other. The absorption laws can be viewed as a requirement that the meet and join semilattices define the same partial order.
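These axioms can be checked mechanically on a concrete example. The following sketch (illustrative only, not part of the article) verifies the absorption and idempotent laws for the power set of a three-element set, with union as join and intersection as meet:

```python
from itertools import combinations, product

def powerset(xs):
    """All subsets of xs, as frozensets."""
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

L = powerset([0, 1, 2])
join = frozenset.union          # set union as the join
meet = frozenset.intersection   # set intersection as the meet

# Absorption laws: a v (a ^ b) = a and a ^ (a v b) = a.
assert all(join(a, meet(a, b)) == a and meet(a, join(a, b)) == a
           for a, b in product(L, repeat=2))

# Idempotence follows by taking b = a.
assert all(join(a, a) == a and meet(a, a) == a for a in L)
```

Running the assertions over all pairs confirms that this pair of semilattices interacts as the axioms require.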
=== Connection between the two definitions ===
An order-theoretic lattice gives rise to the two binary operations {\displaystyle \vee } and {\displaystyle \wedge .} Since the commutative, associative and absorption laws can easily be verified for these operations, they make {\displaystyle (L,\vee ,\wedge )} into a lattice in the algebraic sense.
The converse is also true. Given an algebraically defined lattice {\displaystyle (L,\vee ,\wedge ),} one can define a partial order {\displaystyle \leq } on {\displaystyle L} by setting
{\displaystyle a\leq b{\text{ if }}a=a\wedge b,{\text{ or }}}
{\displaystyle a\leq b{\text{ if }}b=a\vee b,}
for all elements {\displaystyle a,b\in L.}
The laws of absorption ensure that both definitions are equivalent:
{\displaystyle a=a\wedge b{\text{ implies }}b=b\vee (b\wedge a)=(a\wedge b)\vee b=a\vee b}
and dually for the other direction.
One can now check that the relation {\displaystyle \leq } introduced in this way defines a partial ordering within which binary meets and joins are given through the original operations {\displaystyle \vee } and {\displaystyle \wedge .}
Since the two definitions of a lattice are equivalent, one may freely invoke aspects of either definition in any way that suits the purpose at hand.
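This translation between the two definitions can be illustrated with a small sketch (a hypothetical example using gcd as the meet on the divisors of 12): defining a ≤ b by a = a ∧ b recovers exactly the divisibility order.

```python
from math import gcd

# Divisors of 12, with gcd playing the role of the meet operation.
L = [1, 2, 3, 4, 6, 12]

def leq(a, b):
    return gcd(a, b) == a   # a <= b  iff  a = a ^ b

# The order recovered from the meet coincides with divisibility.
assert all(leq(a, b) == (b % a == 0) for a in L for b in L)
```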
== Bounded lattice ==
A bounded lattice is a lattice that additionally has a greatest element (also called maximum, or top element, and denoted by {\displaystyle 1,} or by {\displaystyle \top }) and a least element (also called minimum, or bottom, denoted by {\displaystyle 0} or by {\displaystyle \bot }), which satisfy
{\displaystyle 0\leq x\leq 1\;{\text{ for every }}x\in L.}
A bounded lattice may also be defined as an algebraic structure of the form {\displaystyle (L,\vee ,\wedge ,0,1)} such that {\displaystyle (L,\vee ,\wedge )} is a lattice, {\displaystyle 0} (the lattice's bottom) is the identity element for the join operation {\displaystyle \vee ,} and {\displaystyle 1} (the lattice's top) is the identity element for the meet operation {\displaystyle \wedge .}
{\displaystyle a\vee 0=a}
{\displaystyle a\wedge 1=a}
It can be shown that a partially ordered set is a bounded lattice if and only if every finite set of elements (including the empty set) has a join and a meet.
Every lattice can be embedded into a bounded lattice by adding a greatest and a least element. Furthermore, every non-empty finite lattice is bounded, by taking the join (respectively, meet) of all elements, denoted by {\textstyle 1=\bigvee L=a_{1}\lor \cdots \lor a_{n}} (respectively {\textstyle 0=\bigwedge L=a_{1}\land \cdots \land a_{n}}) where {\displaystyle L=\left\{a_{1},\ldots ,a_{n}\right\}} is the set of all elements.
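Computing the top and bottom of a non-empty finite lattice is just a fold of join (respectively meet) over all elements. A minimal sketch, using the divisors of 60 with lcm and gcd as an assumed example:

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Divisors of 60 form a finite lattice under divisibility.
L = [d for d in range(1, 61) if 60 % d == 0]

top = reduce(lcm, L)     # 1 = join of all elements
bottom = reduce(gcd, L)  # 0 = meet of all elements
assert (top, bottom) == (60, 1)
```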
== Connection to other algebraic structures ==
Lattices have some connections to the family of group-like algebraic structures. Because meet and join both commute and associate, a lattice can be viewed as consisting of two commutative semigroups having the same domain. For a bounded lattice, these semigroups are in fact commutative monoids. The absorption law is the only defining identity that is peculiar to lattice theory. A bounded lattice can also be thought of as a commutative rig without the distributive axiom.
By commutativity, associativity and idempotence one can think of join and meet as operations on non-empty finite sets, rather than on pairs of elements. In a bounded lattice the join and meet of the empty set can also be defined (as {\displaystyle 0} and {\displaystyle 1,} respectively). This makes bounded lattices somewhat more natural than general lattices, and many authors require all lattices to be bounded.
The algebraic interpretation of lattices plays an essential role in universal algebra.
== Examples ==
For any set {\displaystyle A,} the collection of all subsets of {\displaystyle A} (called the power set of {\displaystyle A}) can be ordered via subset inclusion to obtain a lattice bounded by {\displaystyle A} itself and the empty set. In this lattice, the supremum is provided by set union and the infimum is provided by set intersection (see Pic. 1).
For any set {\displaystyle A,} the collection of all finite subsets of {\displaystyle A,} ordered by inclusion, is also a lattice, and will be bounded if and only if {\displaystyle A} is finite.
For any set {\displaystyle A,} the collection of all partitions of {\displaystyle A,} ordered by refinement, is a lattice (see Pic. 3).
The positive integers in their usual order form an unbounded lattice, under the operations of "min" and "max". 1 is bottom; there is no top (see Pic. 4).
The Cartesian square of the natural numbers, ordered so that {\displaystyle (a,b)\leq (c,d)} if {\displaystyle a\leq c{\text{ and }}b\leq d.} The pair {\displaystyle (0,0)} is the bottom element; there is no top (see Pic. 5).
The natural numbers also form a lattice under the operations of taking the greatest common divisor and least common multiple, with divisibility as the order relation: {\displaystyle a\leq b} if {\displaystyle a} divides {\displaystyle b.} {\displaystyle 1} is bottom; {\displaystyle 0} is top. Pic. 2 shows a finite sublattice.
Every complete lattice (also see below) is a (rather specific) bounded lattice. This class gives rise to a broad range of practical examples.
The set of compact elements of an arithmetic complete lattice is a lattice with a least element, where the lattice operations are given by restricting the respective operations of the arithmetic lattice. This is the specific property that distinguishes arithmetic lattices from algebraic lattices, for which the compacts only form a join-semilattice. Both of these classes of complete lattices are studied in domain theory.
Further examples of lattices are given for each of the additional properties discussed below.
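The divisibility example above (meet = gcd, join = lcm, 1 as bottom, 0 as top) can be checked directly; a minimal sketch:

```python
from math import gcd

def lcm(a, b):
    # Join under divisibility; lcm(a, 0) = 0, so 0 absorbs joins (it is top).
    return 0 if a == 0 or b == 0 else a * b // gcd(a, b)

assert gcd(12, 18) == 6    # meet of 12 and 18
assert lcm(12, 18) == 36   # join of 12 and 18
assert all(n % 1 == 0 for n in range(1, 100))   # 1 divides every n: bottom
assert all(0 % n == 0 for n in range(1, 100))   # every n divides 0: top
```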
== Examples of non-lattices ==
Most partially ordered sets are not lattices, including the following.
A discrete poset, meaning a poset such that {\displaystyle x\leq y} implies {\displaystyle x=y,} is a lattice if and only if it has at most one element. In particular the two-element discrete poset is not a lattice.
Although the set {\displaystyle \{1,2,3,6\}} partially ordered by divisibility is a lattice, the set {\displaystyle \{1,2,3\}} so ordered is not a lattice because the pair 2, 3 lacks a join; similarly, 2, 3 lacks a meet in {\displaystyle \{2,3,6\}.}
The set {\displaystyle \{1,2,3,12,18,36\}} partially ordered by divisibility is not a lattice. Every pair of elements has an upper bound and a lower bound, but the pair 2, 3 has three upper bounds, namely 12, 18, and 36, none of which is the least of those three under divisibility (12 and 18 do not divide each other). Likewise the pair 12, 18 has three lower bounds, namely 1, 2, and 3, none of which is the greatest of those three under divisibility (2 and 3 do not divide each other).
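The failure in the last example can be verified by brute force: the pair (2, 3) has upper bounds, but none of them is least, so no join exists.

```python
# The poset {1, 2, 3, 12, 18, 36} under divisibility.
P = [1, 2, 3, 12, 18, 36]

def divides(a, b):
    return b % a == 0

def join_of(a, b):
    ubs = [u for u in P if divides(a, u) and divides(b, u)]
    least = [u for u in ubs if all(divides(u, v) for v in ubs)]
    return least[0] if least else None

# Upper bounds of (2, 3) exist, but no least one: not a lattice.
assert sorted(u for u in P if divides(2, u) and divides(3, u)) == [12, 18, 36]
assert join_of(2, 3) is None
```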
== Morphisms of lattices ==
The appropriate notion of a morphism between two lattices flows easily from the above algebraic definition. Given two lattices {\displaystyle \left(L,\vee _{L},\wedge _{L}\right)} and {\displaystyle \left(M,\vee _{M},\wedge _{M}\right),} a lattice homomorphism from L to M is a function {\displaystyle f:L\to M} such that for all {\displaystyle a,b\in L:}
{\displaystyle f\left(a\vee _{L}b\right)=f(a)\vee _{M}f(b),{\text{ and }}}
{\displaystyle f\left(a\wedge _{L}b\right)=f(a)\wedge _{M}f(b).}
Thus {\displaystyle f} is a homomorphism of the two underlying semilattices. When lattices with more structure are considered, the morphisms should "respect" the extra structure, too. In particular, a bounded-lattice homomorphism (usually called just "lattice homomorphism") {\displaystyle f} between two bounded lattices {\displaystyle L} and {\displaystyle M} should also have the following property:
{\displaystyle f\left(0_{L}\right)=0_{M},{\text{ and }}}
{\displaystyle f\left(1_{L}\right)=1_{M}.}
In the order-theoretic formulation, these conditions just state that a homomorphism of lattices is a function preserving binary meets and joins. For bounded lattices, preservation of least and greatest elements is just preservation of join and meet of the empty set.
Any homomorphism of lattices is necessarily monotone with respect to the associated ordering relation; see Limit preserving function. The converse is not true: monotonicity by no means implies the required preservation of meets and joins (see Pic. 9), although an order-preserving bijection is a homomorphism if its inverse is also order-preserving.
Given the standard definition of isomorphisms as invertible morphisms, a lattice isomorphism is just a bijective lattice homomorphism. Similarly, a lattice endomorphism is a lattice homomorphism from a lattice to itself, and a lattice automorphism is a bijective lattice endomorphism. Lattices and their homomorphisms form a category.
Let {\displaystyle \mathbb {L} } and {\displaystyle \mathbb {L} '} be two lattices with 0 and 1. A homomorphism from {\displaystyle \mathbb {L} } to {\displaystyle \mathbb {L} '} is called 0,1-separating if and only if {\displaystyle f^{-1}\{f(0)\}=\{0\}} ({\displaystyle f} separates 0) and {\displaystyle f^{-1}\{f(1)\}=\{1\}} ({\displaystyle f} separates 1).
== Sublattices ==
A sublattice of a lattice {\displaystyle L} is a subset of {\displaystyle L} that is a lattice with the same meet and join operations as {\displaystyle L.} That is, if {\displaystyle L} is a lattice and {\displaystyle M} is a subset of {\displaystyle L} such that for every pair of elements {\displaystyle a,b\in M} both {\displaystyle a\wedge b} and {\displaystyle a\vee b} are in {\displaystyle M,} then {\displaystyle M} is a sublattice of {\displaystyle L.}
A sublattice {\displaystyle M} of a lattice {\displaystyle L} is a convex sublattice of {\displaystyle L,} if {\displaystyle x\leq z\leq y} and {\displaystyle x,y\in M} imply that {\displaystyle z} belongs to {\displaystyle M,} for all elements {\displaystyle x,y,z\in L.}
== Properties of lattices ==
We now introduce a number of important properties that lead to interesting special classes of lattices. One, boundedness, has already been discussed.
=== Completeness ===
A poset is called a complete lattice if all its subsets have both a join and a meet. In particular, every complete lattice is a bounded lattice. While bounded lattice homomorphisms in general preserve only finite joins and meets, complete lattice homomorphisms are required to preserve arbitrary joins and meets.
Every poset that is a complete semilattice is also a complete lattice. Related to this result is the interesting phenomenon that there are various competing notions of homomorphism for this class of posets, depending on whether they are seen as complete lattices, complete join-semilattices, complete meet-semilattices, or as join-complete or meet-complete lattices.
"Partial lattice" is not the opposite of "complete lattice" – rather, "partial lattice", "lattice", and "complete lattice" are increasingly restrictive definitions.
=== Conditional completeness ===
A conditionally complete lattice is a lattice in which every nonempty subset that has an upper bound has a join (that is, a least upper bound). Such lattices provide the most direct generalization of the completeness axiom of the real numbers. A conditionally complete lattice is either a complete lattice, or a complete lattice without its maximum element {\displaystyle 1,} its minimum element {\displaystyle 0,} or both.
=== Distributivity ===
Since lattices come with two binary operations, it is natural to ask whether one of them distributes over the other, that is, whether one or the other of the following dual laws holds for every three elements {\displaystyle a,b,c\in L:}
Distributivity of {\displaystyle \vee } over {\displaystyle \wedge }: {\displaystyle a\vee (b\wedge c)=(a\vee b)\wedge (a\vee c).}
Distributivity of {\displaystyle \wedge } over {\displaystyle \vee }: {\displaystyle a\wedge (b\vee c)=(a\wedge b)\vee (a\wedge c).}
A lattice that satisfies the first or, equivalently (as it turns out), the second axiom, is called a distributive lattice.
The only non-distributive lattices with fewer than 6 elements are called M3 and N5; they are shown in Pictures 10 and 11, respectively. A lattice is distributive if and only if it does not have a sublattice isomorphic to M3 or N5. Each distributive lattice is isomorphic to a lattice of sets (with union and intersection as join and meet, respectively).
For an overview of stronger notions of distributivity that are appropriate for complete lattices and that are used to define more special classes of lattices such as frames and completely distributive lattices, see distributivity in order theory.
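The failure of distributivity in N5 can be exhibited concretely. A brute-force sketch (element names 0, a, b, c, 1 as in the usual Hasse diagram of the pentagon: 0 < a < b < 1 and 0 < c < 1, with c incomparable to a and b), computing meets and joins from the order relation:

```python
E = ["0", "a", "b", "c", "1"]
LEQ = {("0", "a"), ("0", "b"), ("0", "c"), ("0", "1"),
       ("a", "b"), ("a", "1"), ("b", "1"), ("c", "1")} | {(x, x) for x in E}

def leq(x, y):
    return (x, y) in LEQ

def join(x, y):
    ubs = [u for u in E if leq(x, u) and leq(y, u)]
    return next(u for u in ubs if all(leq(u, v) for v in ubs))

def meet(x, y):
    lbs = [l for l in E if leq(l, x) and leq(l, y)]
    return next(l for l in lbs if all(leq(v, l) for v in lbs))

# Distributivity fails at the triple (a, c, b):
assert join("a", meet("c", "b")) == "a"                  # a v (c ^ b) = a v 0 = a
assert meet(join("a", "c"), join("a", "b")) == "b"       # (a v c) ^ (a v b) = 1 ^ b = b
```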
=== Modularity ===
For some applications the distributivity condition is too strong, and the following weaker property is often useful. A lattice {\displaystyle (L,\vee ,\wedge )} is modular if, for all elements {\displaystyle a,b,c\in L,} the following identity holds:
{\displaystyle (a\wedge c)\vee (b\wedge c)=((a\wedge c)\vee b)\wedge c.} (Modular identity)
This condition is equivalent to the following axiom: {\displaystyle a\leq c} implies {\displaystyle a\vee (b\wedge c)=(a\vee b)\wedge c.} (Modular law)
A lattice is modular if and only if it does not have a sublattice isomorphic to N5 (shown in Pic. 11).
Besides distributive lattices, examples of modular lattices are the lattice of submodules of a module (hence modular), the lattice of two-sided ideals of a ring, and the lattice of normal subgroups of a group. The set of first-order terms with the ordering "is more specific than" is a non-modular lattice used in automated reasoning.
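Since every distributive lattice is modular, the modular identity can be verified by brute force on, say, the divisor lattice of 60 (an assumed example chosen for illustration):

```python
from itertools import product
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Divisors of 60 under divisibility form a distributive, hence modular, lattice.
L = [d for d in range(1, 61) if 60 % d == 0]

# Modular identity: (a ^ c) v (b ^ c) = ((a ^ c) v b) ^ c for all triples.
assert all(lcm(gcd(a, c), gcd(b, c)) == gcd(lcm(gcd(a, c), b), c)
           for a, b, c in product(L, repeat=3))
```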
=== Semimodularity ===
A finite lattice is modular if and only if it is both upper and lower semimodular.
For a lattice of finite length, (upper) semimodularity is equivalent to the condition that the lattice is graded and its rank function {\displaystyle r} satisfies the following condition:
{\displaystyle r(x)+r(y)\geq r(x\wedge y)+r(x\vee y).}
Another equivalent (for graded lattices) condition is Birkhoff's condition: for each {\displaystyle x} and {\displaystyle y} in {\displaystyle L,} if {\displaystyle x} and {\displaystyle y} both cover {\displaystyle x\wedge y,} then {\displaystyle x\vee y} covers both {\displaystyle x} and {\displaystyle y.}
A lattice is called lower semimodular if its dual is semimodular. For finite lattices this means that the previous conditions hold with {\displaystyle \vee } and {\displaystyle \wedge } exchanged, "covers" exchanged with "is covered by", and inequalities reversed.
=== Continuity and algebraicity ===
In domain theory, it is natural to seek to approximate the elements in a partial order by "much simpler" elements. This leads to the class of continuous posets, consisting of posets where every element can be obtained as the supremum of a directed set of elements that are way-below the element. If one can additionally restrict these to the compact elements of a poset for obtaining these directed sets, then the poset is even algebraic. Both concepts can be applied to lattices as follows:
A continuous lattice is a complete lattice that is continuous as a poset.
An algebraic lattice is a complete lattice that is algebraic as a poset.
Both of these classes have interesting properties. For example, continuous lattices can be characterized as algebraic structures (with infinitary operations) satisfying certain identities. While such a characterization is not known for algebraic lattices, they can be described "syntactically" via Scott information systems.
=== Complements and pseudo-complements ===
Let {\displaystyle L} be a bounded lattice with greatest element 1 and least element 0. Two elements {\displaystyle x} and {\displaystyle y} of {\displaystyle L} are complements of each other if and only if:
{\displaystyle x\vee y=1\quad {\text{ and }}\quad x\wedge y=0.}
In general, some elements of a bounded lattice might not have a complement, and others might have more than one complement. For example, the set {\displaystyle \{0,1/2,1\}} with its usual ordering is a bounded lattice, and {\displaystyle {\tfrac {1}{2}}} does not have a complement. In the bounded lattice N5, the element {\displaystyle a} has two complements, viz. {\displaystyle b} and {\displaystyle c} (see Pic. 11). A bounded lattice for which every element has a complement is called a complemented lattice.
A complemented lattice that is also distributive is a Boolean algebra. For a distributive lattice, the complement of {\displaystyle x,} when it exists, is unique. In the case that the complement is unique, we write {\textstyle \lnot x=y} and equivalently, {\textstyle \lnot y=x.} The corresponding unary operation over {\displaystyle L,} called complementation, introduces an analogue of logical negation into lattice theory.
Heyting algebras are an example of distributive lattices where some members might be lacking complements. Every element {\displaystyle x} of a Heyting algebra has, on the other hand, a pseudo-complement, also denoted {\textstyle \lnot x.} The pseudo-complement is the greatest element {\displaystyle y} such that {\displaystyle x\wedge y=0.}
If the pseudo-complement of every element of a Heyting algebra is in fact a complement, then the Heyting algebra is in fact a Boolean algebra.
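Uniqueness of complements in a Boolean algebra can be checked exhaustively. A sketch on the power set lattice of {0, 1, 2}, where the complement of each subset is its set-theoretic complement:

```python
from itertools import combinations

# Power set lattice of {0, 1, 2}: a Boolean algebra, so every element
# has exactly one complement, namely the set-theoretic complement.
U = frozenset({0, 1, 2})
L = [frozenset(c) for r in range(4) for c in combinations(sorted(U), r)]

def complements(x):
    # All y with x v y = top and x ^ y = bottom.
    return [y for y in L if x | y == U and x & y == frozenset()]

assert all(complements(x) == [U - x] for x in L)
```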
=== Jordan–Dedekind chain condition ===
A chain from {\displaystyle x_{0}} to {\displaystyle x_{n}} is a set {\displaystyle \left\{x_{0},x_{1},\ldots ,x_{n}\right\},} where {\displaystyle x_{0}<x_{1}<x_{2}<\ldots <x_{n}.}
The length of this chain is n, or one less than its number of elements. A chain is maximal if {\displaystyle x_{i}} covers {\displaystyle x_{i-1}} for all {\displaystyle 1\leq i\leq n.}
If for any pair, {\displaystyle x} and {\displaystyle y,} where {\displaystyle x<y,} all maximal chains from {\displaystyle x} to {\displaystyle y} have the same length, then the lattice is said to satisfy the Jordan–Dedekind chain condition.
=== Graded/ranked ===
A lattice {\displaystyle (L,\leq )} is called graded, sometimes ranked (but see Ranked poset for an alternative meaning), if it can be equipped with a rank function {\displaystyle r:L\to \mathbb {N} } (sometimes to {\displaystyle \mathbb {Z} }), compatible with the ordering (so {\displaystyle r(x)<r(y)} whenever {\displaystyle x<y}) such that whenever {\displaystyle y} covers {\displaystyle x,} then {\displaystyle r(y)=r(x)+1.} The value of the rank function for a lattice element is called its rank.
A lattice element {\displaystyle y} is said to cover another element {\displaystyle x,} if {\displaystyle y>x,} but there does not exist a {\displaystyle z} such that {\displaystyle y>z>x.} Here, {\displaystyle y>x} means {\displaystyle x\leq y} and {\displaystyle x\neq y.}
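As an illustration (an assumed example, not from the article): in the divisor lattice of 60, y covers x exactly when nothing lies strictly between them, and counting prime factors with multiplicity gives a rank function, so every covering step raises the rank by one.

```python
# Divisors of 60 under divisibility.
L = [d for d in range(1, 61) if 60 % d == 0]

def rank(n):
    """Number of prime factors of n, counted with multiplicity."""
    r, d = 0, 2
    while n > 1:
        while n % d == 0:
            n //= d
            r += 1
        d += 1
    return r

def covers(y, x):
    # y covers x: x < y with nothing strictly between them in L.
    return (x != y and y % x == 0
            and not any(x != z != y and y % z == 0 and z % x == 0 for z in L))

# Each covering step raises the rank by exactly one: the lattice is graded.
assert all(rank(y) == rank(x) + 1 for y in L for x in L if covers(y, x))
```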
== Free lattices ==
Any set {\displaystyle X} may be used to generate the free semilattice {\displaystyle FX.} The free semilattice is defined to consist of all of the finite subsets of {\displaystyle X,} with the semilattice operation given by ordinary set union. The free semilattice has the universal property. For the free lattice over a set {\displaystyle X,} Whitman gave a construction based on polynomials over {\displaystyle X}'s members.
== Important lattice-theoretic notions ==
We now define some order-theoretic notions of importance to lattice theory. In the following, let {\displaystyle x} be an element of some lattice {\displaystyle L.} {\displaystyle x} is called:
Join irreducible if {\displaystyle x=a\vee b} implies {\displaystyle x=a{\text{ or }}x=b} for all {\displaystyle a,b\in L.} If {\displaystyle L} has a bottom element {\displaystyle 0,} some authors require {\displaystyle x\neq 0}. When the first condition is generalized to arbitrary joins {\displaystyle \bigvee _{i\in I}a_{i},} {\displaystyle x} is called completely join irreducible (or {\displaystyle \vee }-irreducible). The dual notion is meet irreducibility ({\displaystyle \wedge }-irreducible). For example, in Pic. 2, the elements 2, 3, 4, and 5 are join irreducible, while 12, 15, 20, and 30 are meet irreducible. Depending on definition, the bottom element 1 and top element 60 may or may not be considered join irreducible and meet irreducible, respectively. In the lattice of real numbers with the usual order, each element is join irreducible, but none is completely join irreducible.
Join prime if {\displaystyle x\leq a\vee b} implies {\displaystyle x\leq a{\text{ or }}x\leq b.} Again some authors require {\displaystyle x\neq 0}, although this is unusual. This too can be generalized to obtain the notion completely join prime. The dual notion is meet prime. Every join-prime element is also join irreducible, and every meet-prime element is also meet irreducible. The converse holds if {\displaystyle L} is distributive.
Let {\displaystyle L} have a bottom element 0. An element {\displaystyle x} of {\displaystyle L} is an atom if {\displaystyle 0<x} and there exists no element {\displaystyle y\in L} such that {\displaystyle 0<y<x.} Then {\displaystyle L} is called:
Atomic if for every nonzero element {\displaystyle x} of {\displaystyle L,} there exists an atom {\displaystyle a} of {\displaystyle L} such that {\displaystyle a\leq x;}
Atomistic if every element of {\displaystyle L} is a supremum of atoms.
However, many sources and mathematical communities use the term "atomic" to mean "atomistic" as defined above.
The notions of ideals and the dual notion of filters refer to particular kinds of subsets of a partially ordered set, and are therefore important for lattice theory. Details can be found in the respective entries.
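The atomistic property is easy to check on a small example. A sketch on the power set lattice of {0, 1, 2}, whose atoms are the singletons (the elements covering the bottom, the empty set):

```python
from itertools import combinations

# Power set lattice of {0, 1, 2}.
U = (0, 1, 2)
L = [frozenset(c) for r in range(4) for c in combinations(U, r)]

atoms = [x for x in L if len(x) == 1]   # elements covering the bottom

# Atomistic: every nonzero element is the join (union) of the atoms below it.
assert all(frozenset().union(*(a for a in atoms if a <= x)) == x
           for x in L if x)
```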
== See also ==
Join and meet – Concept in order theory
Map of lattices – Concept in mathematics
Orthocomplemented lattice – Bounded lattice in which every element has a complement
Total order – Order whose elements are all comparable
Ideal – Nonempty, upper-bounded, downward-closed subset and filter (dual notions)
Skew lattice – Algebraic structure (generalization to non-commutative join and meet)
Eulerian lattice
Post's lattice – lattice of all clones (sets of logical connectives closed under composition and containing all projections) on a two-element set {0, 1}, ordered by inclusion
Tamari lattice – mathematical object formed by an order on the ways of parenthesizing an expression
Young–Fibonacci lattice
0,1-simple lattice
=== Applications that use lattice theory ===
Note that in many applications the sets are only partial lattices: not every pair of elements has a meet or join.
Pointless topology
Lattice of subgroups
Spectral space
Invariant subspace
Closure operator
Abstract interpretation
Subsumption lattice
Fuzzy set theory
Algebraizations of first-order logic
Semantics of programming languages
Domain theory
Ontology (computer science)
Multiple inheritance
Formal concept analysis and Lattice Miner (theory and tool)
Bloom filter
Information flow
Ordinal optimization
Quantum logic
Median graph
Knowledge space
Regular language learning
Analogical modeling
== Notes ==
== References ==
== External links == | Wikipedia/Lattice_(order_theory) |
In mathematics, Grothendieck's Galois theory is an abstract approach to the Galois theory of fields, developed around 1960 to provide a way to study the fundamental group of algebraic topology in the setting of algebraic geometry. It provides, in the classical setting of field theory, an alternative perspective to that of Emil Artin based on linear algebra, which became standard from about the 1930s.
The approach of Alexander Grothendieck is concerned with the category-theoretic properties that characterise the categories of finite G-sets for a fixed profinite group G. For example, G might be the group denoted
{\displaystyle {\hat {\mathbb {Z} }}} (see profinite integer), which is the inverse limit of the cyclic additive groups {\displaystyle \mathbb {Z} /n\mathbb {Z} } — or equivalently the completion of the infinite cyclic group {\displaystyle \mathbb {Z} } for the topology of subgroups of finite index. A finite G-set is then a finite set X on which G acts through a quotient finite cyclic group, so that it is specified by giving some permutation of X.
In the above example, a connection with classical Galois theory can be seen by regarding {\displaystyle {\hat {\mathbb {Z} }}} as the profinite Galois group Gal(F̄/F) of the algebraic closure F̄ of any finite field F, over F. That is, the automorphisms of F̄ fixing F are described by the inverse limit, as we take larger and larger finite splitting fields over F. The connection with geometry can be seen when we look at covering spaces of the unit disk in the complex plane with the origin removed: the finite covering realised by the z^n map of the disk, thought of by means of a complex number variable z, corresponds to the subgroup {\displaystyle n\mathbb {Z} } of the fundamental group of the punctured disk.
The theory of Grothendieck, published in SGA1, shows how to reconstruct the category of G-sets from a fibre functor {\displaystyle \Phi }, which in the geometric setting takes the fibre of a covering above a fixed base point (as a set). In fact there is an isomorphism proved of the type {\displaystyle G\cong \operatorname {Aut} (\Phi )}, the latter being the group of automorphisms (self-natural equivalences) of {\displaystyle \Phi }. An abstract classification of categories with a functor to the category of sets is given, by means of which one can recognise categories of G-sets for G profinite.
To see how this applies to the case of fields, one has to study the tensor product of fields. In topos theory this is a part of the study of atomic toposes.
== See also ==
Tannakian formalism
Fiber functor
Anabelian geometry
== References ==
Grothendieck, Alexander; et al. (1971). SGA1 Revêtements étales et groupe fondamental, 1960–1961. Lecture Notes in Mathematics. Vol. 224. Springer Verlag. arXiv:math/0206203. ISBN 978-3-540-36910-3.
Joyal, André; Tierney, Myles (1984). An Extension of the Galois Theory of Grothendieck. Memoirs of the American Mathematical Society. ISBN 0-8218-2312-4.
Borceux, F.; Janelidze, G. (2001). Galois theories. Cambridge University Press. ISBN 0-521-80309-8. (This book introduces the reader to the Galois theory of Grothendieck, and some generalisations, leading to Galois groupoids.)
Szamuely, Tamás (2009). Galois Groups and Fundamental Groups. Cambridge University Press. ISBN 978-1-139-48114-4.
Dubuc, E.J; de la Vega, C.S. (2000). "On the Galois theory of Grothendieck". arXiv:math/0009145.
Caramello, Olivia (2016). "Topological Galois theory". Advances in Mathematics. 291: 646–695. arXiv:1301.0300. doi:10.1016/j.aim.2015.11.050. | Wikipedia/Grothendieck's_Galois_theory |
The French Academy of Sciences (French: Académie des sciences, [akademi de sjɑ̃s]) is a learned society, founded in 1666 by Louis XIV at the suggestion of Jean-Baptiste Colbert, to encourage and protect the spirit of French scientific research. It was at the forefront of scientific developments in Europe in the 17th and 18th centuries, and is one of the earliest Academies of Sciences.
Currently headed by Patrick Flandrin (President of the academy), it is one of the five Academies of the Institut de France.
== History ==
The Academy of Sciences traces its origin to Colbert's plan to create a general academy. He chose a small group of scholars who met on 22 December 1666 in the King's library, near the present-day Bibliothèque Nationale, and thereafter held twice-weekly working meetings there in the two rooms assigned to the group. The first 30 years of the academy's existence were relatively informal, since no statutes had as yet been laid down for the institution.
In contrast to its British counterpart, the academy was founded as an organ of government. In Paris there were few membership openings, so elections to fill positions were contentious. The election process had at least six stages, with rules and regulations that allowed chosen candidates to canvass other members and allowed current members to postpone certain stages of the process if the need arose. Elections in the early days of the academy were important activities, and as such made up a large part of the proceedings at the academy, with many meetings being held regarding the election to fill a single vacancy. That is not to say that discussion of candidates and the election process as a whole was confined to the meetings: members belonging to the vacancy's respective field would continue discussing potential candidates in private. Being elected into the academy did not necessarily guarantee full membership; in some cases, one would enter the academy as an associate or correspondent before being appointed a full member.
The election process was originally only to replace members from a specific section. For example, if someone whose study was mathematics was either removed or resigned from his position, the following election process nominated only those whose focus was also mathematics in order to fill that discipline's vacancy. That led to some periods of time in which no specialists for specific fields of study could be found, which left positions in those fields vacant since they could not be filled with people in other disciplines.
The needed reform came late in the 20th century, in 1987, when the academy decided to abandon this practice and begin filling vacancies with people from new disciplines. The reform was aimed not only at further diversifying the disciplines represented in the academy, but also at combating the internal aging of the academy itself. The academy was expected to remain apolitical, and to avoid discussion of religious and social issues.
On 20 January 1699, Louis XIV gave the Company its first rules. The academy received the name of Royal Academy of Sciences and was installed in the Louvre in Paris. Following this reform, the academy began publishing a volume each year with information on all the work done by its members and obituaries for members who had died. This reform also codified the method by which members of the academy could receive pensions for their work.
The academy was originally organized by the royal reform hierarchically into the following groups: Pensionaires, Pupils, Honoraires, and Associés.
The reform also added new groups not previously recognized, such as the Vétérans. Some of these roles' membership limits were expanded, and some roles were even removed or combined over the course of the academy's history. The Honoraires group established by this reform in 1699, whose members were directly appointed by the King, was recognized until its abolition in 1793.
Membership in the academy first exceeded 100 officially recognised full members in 1976, 310 years after the academy's inception in 1666. The increase came with a large-scale reorganization that year, under which 130 resident members, 160 correspondents, and 80 foreign associates could be elected.
A vacancy opens only upon the death of a member, as members serve for life. During elections, half of the vacancies are reserved for candidates less than 55 years old, a rule created to encourage younger members to join the academy.
The reorganization also divided the academy into two divisions: Division 1 covers the applications of mathematics and the physical sciences, while Division 2 covers the applications of the chemical, natural, biological, and medical sciences.
On 8 August 1793, the National Convention abolished all the academies. On 22 August 1795, a National Institute of Sciences and Arts was put in place, bringing together the old academies of the sciences, literature and arts, among them the Académie française and the Académie des sciences.
Also in 1795, the academy determined these ten titles (the first four in Division 1 and the others in Division 2) to be its newly accepted branches of scientific study:
Mathematics
Mechanics
Astronomy
Physics
Chemistry
Mineralogy
Botany
Agriculture
Anatomy and Zoology
Medicine and Surgery
The last two sections were bundled because there were many good candidates fit to be elected in those practices, and the competition was stiff. Some individuals, such as François Magendie, had made such stellar advancements in their fields of study that the addition of new fields seemed warranted. However, even Magendie, who had made breakthroughs in physiology and impressed the academy with his hands-on vivisection experiments, could not get his specialty recognized as its own category. Despite being one of the leading innovators of his time, it was still a battle for Magendie to become an official member of the academy, a feat he accomplished in 1821. He further enhanced the academy's renown when he and anatomist Charles Bell produced the widely known Bell–Magendie law.
From 1795 until the First World War in 1914, the French Academy of Sciences was the pre-eminent organization of French science. Almost all the old members of the previously abolished Académie were formally re-elected and retook their former seats. Among the exceptions was Dominique, comte de Cassini, who refused to take his seat. Membership in the academy was not restricted to scientists: in 1798 Napoleon Bonaparte was elected a member of the academy, and three years later its president, in connection with his Egyptian expedition, which had a scientific component. In 1816 the again-renamed "Royal Academy of Sciences" became autonomous while forming part of the Institute of France; the head of state became its patron. In the Second Republic the name returned to Académie des sciences. During this period the academy was funded by and accountable to the Ministry of Public Instruction.
The academy came to control French patent laws in the course of the eighteenth century, acting as the liaison of artisans' knowledge to the public domain. As a result, academicians dominated technological activities in France.
The academy proceedings were published under the name Comptes rendus de l'Académie des Sciences (1835–1965). The Comptes rendus is now a journal series with seven titles. The publications can be found on the site of the French National Library.
In 1818 the French Academy of Sciences launched a competition to explain the properties of light. The civil engineer Augustin-Jean Fresnel entered the competition by submitting a new wave theory of light. Siméon Denis Poisson, one of the members of the judging committee, studied Fresnel's theory in detail. As a supporter of the particle theory of light, he looked for a way to disprove it. Poisson thought that he had found a flaw when he demonstrated that Fresnel's theory predicts an on-axis bright spot in the shadow of a circular obstacle, where the particle theory of light predicts complete darkness. The Poisson spot is not easily observed in everyday situations, so it was natural for Poisson to interpret it as an absurd result that should disprove Fresnel's theory. However, the head of the committee, Dominique-François-Jean Arago, who incidentally later became Prime Minister of France, decided to perform the experiment in more detail. He attached a 2-mm metallic disk to a glass plate with wax. To everyone's surprise he succeeded in observing the predicted spot, which convinced most scientists of the wave nature of light.
For three centuries women were not allowed as members of the academy. This meant that many women scientists were excluded, including two-time Nobel Prize winner Marie Curie, Nobel winner Irène Joliot-Curie, mathematician Sophie Germain, and many other deserving women scientists. The first woman admitted as a correspondent member was a student of Curie's, Marguerite Perey, in 1962. The first female full member was Yvonne Choquet-Bruhat in 1979.
Membership in the academy is strongly geared towards representing the demographics of the French populace. French population increases and changes in the early 21st century led the academy to expand its reference population sizes through a reform in early 2002.
The overwhelming majority of members leave the academy posthumously, with a few exceptions of removals, transfers, and resignations. The last member removed from the academy was expelled in 1944. Removal from the academy was often for failing to perform to standards, not performing at all, leaving the country, or political reasons. On some rare occasions a member has been elected twice and subsequently removed twice; this was the case for Marie-Adolphe Carnot.
== Government interference ==
The most direct involvement of the government in the affairs of the institute came with the initial nomination of members in 1795, but since the nominated members constituted only one third of the membership, and most of these had previously been elected to the respective academies under the old regime, few objections were raised. Moreover, these nominated members were then completely free to nominate the remaining members of the institute. Members expected to remain such for life, but in a few cases the government suddenly terminated memberships for political reasons. The other main interference came when the government refused to accept the results of academy elections. The government's control of the academies was apparent in 1803, when Bonaparte decided on a general reorganization. His principal concern was not the First Class but the Second, which included political scientists who were potential critics of his government. Bonaparte abolished the Second Class completely and, after a few expulsions, redistributed its remaining members, together with those of the Third Class, into a new Second Class concerned with literature and a new Third Class devoted to the fine arts. Still, the relationship between the academy and the government was not a one-way affair, as members expected to receive payment of an honorarium.
== Decline ==
Although the academy still exists today, its reputation and status were widely questioned after World War I. One factor behind the decline was its development from a meritocracy into a gerontocracy: a shift from leadership by those with demonstrated scientific ability to leadership by those with seniority. It became known as a sort of "hall of fame" that had lost control, real and symbolic, of the professional scientific diversity in France at the time. Another factor was that between 1909 and 1914 funding to science faculties dropped considerably, eventually leading to a financial crisis in France.
== Present use ==
Today the academy is one of five academies comprising the Institut de France. Its members are elected for life. Currently there are 150 full members, 300 corresponding members, and 120 foreign associates, divided into two scientific groups: the mathematical and physical sciences and their applications, and the chemical, biological, geological and medical sciences and their applications. The academy currently pursues five missions: encouraging scientific life, promoting the teaching of science, transmitting knowledge between scientific communities, fostering international collaborations, and ensuring a dual role of expertise and advice. The French Academy of Sciences originally focused its development efforts on creating a true co-development Euro-African program, beginning in 1997; it has since broadened its scope of action to other regions of the world. The standing committee COPED is in charge of the international development projects undertaken by the academy and its associates. The current president of COPED is Pierre Auger, the vice-president is Michel Delseny, and the honorary president is François Gros, all current members of the academy. COPED has hosted several workshops and colloquia in Paris, involving representatives from African academies, universities and research centers, addressing a variety of themes and challenges of African development: specifically, higher education in the sciences, and research practices in basic and applied sciences relevant to development (renewable energy, infectious diseases, animal pathologies, food resources, access to safe water, agriculture, urban health, etc.).
== Current committees and working parties ==
The Academic Standing Committees and Working Parties prepare the advice notes, policy statements and academic reports. Some have a statutory remit, such as the Select Committee, the Committee for International Affairs and the Committee for Scientists' Rights; others are created ad hoc by the academy and approved formally by vote in a members-only session.
Today the academy's standing committees and working parties include:
The Academic Standing Committee in charge of the Biennial Report on Science and Technology
The Academic Standing Committee for Science, Ethics and Society
The Academic Standing Committee for the Environment
The Academic Standing Committee for Space Research
The Academic Standing Committee for Science and Metrology
The Academic Standing Committee for the Science History and Epistemology
The Academic Standing Committee for Science and Safety Issues
The Academic Standing Committee for Science Education and Training
The Academic Standing La main à la pâte Committee
The Academic Standing Committee for the Defense of Scientists' Rights (CODHOS)
The Academic Standing Committee for International Affairs (CORI)
The French Committee for International Scientific Unions (COFUSI)
The Academic Standing Committee for Scientific and Technological International Relations (CARIST)
The Academic Standing Committee for Developing Countries (COPED)
The Inter-academic Group for Development (GID) – Cf. for further reading
The Academic Standing Commission for Sealed Deposits
The Academic Standing Committee for Terminology and Neologisms
The Antoine Lavoisier Standing Committee
The Academic Standing Committee for Prospects in Energy Procurement
The Special Academic Working Party on Scientific Computing
The Special Academic Working Party on Material Sciences and Engineering
== Medals, awards and prizes ==
Each year, the Academy of Sciences distributes about 80 prizes. These include:
Marie Skłodowska-Curie and Pierre Curie Polish-French Science Award, created in 2022.
the Grande Médaille, awarded annually, in rotation, in the relevant disciplines of each division of the academy, to a French or foreign scholar who has contributed to the development of science in a decisive way.
the Lalande Prize, awarded from 1802 through 1970, for outstanding achievement in astronomy
the Valz Prize, awarded from 1877 through 1970, to honor advances in astronomy
the Richard Lounsbery Award, jointly with the National Academy of Sciences
the Prix Jacques Herbrand, for mathematics and physics
the Prix Paul Pascal, for chemistry
the Louis Bachelier Prize for major contributions to mathematical modeling in finance
the Prix Michel Montpetit for computer science and applied mathematics, awarded since 1977
the Leconte Prize, awarded annually since 1886, to recognize important discoveries in mathematics, physics, chemistry, natural history or medicine
the Prix Tchihatcheff (Tchihatchef; Chikhachev)
== People ==
The following are incomplete lists of the officers of the academy. See also Category:Officers of the French Academy of Sciences.
For a list of the academy's members past and present, see Category:Members of the French Academy of Sciences
=== Presidents ===
Source: French Academy of Sciences
=== Treasurers ===
?–1788 Georges-Louis Leclerc, Comte de Buffon
1788–1791 Mathieu Tillet
=== Permanent secretaries ===
==== General ====
==== Mathematical Sciences ====
==== Physical Sciences ====
==== Chemistry and Biology ====
== Publications ==
Publications of the French Academy of Sciences "Histoire de l'Académie royale des sciences" (1700–1790)
== See also ==
French art salons and academies
French Geodesic Mission
History of the metre
Seconds pendulum
Royal Commission on Animal Magnetism
== Notes ==
== References ==
== External links ==
Official website (in French) – English-language version
Complete listing of current members
Notes on the Académie des Sciences from the Scholarly Societies project (includes information on the society journals)
Search the Proceedings of the Académie des sciences in the French National Library (search item: Comptes Rendus)
Comptes rendus de l'Académie des sciences. Série 1, Mathématique in Gallica, the digital library of the BnF.
In mathematics, differential Galois theory is the field that studies extensions of differential fields.
Whereas algebraic Galois theory studies extensions of algebraic fields, differential Galois theory studies extensions of differential fields, i.e. fields that are equipped with a derivation, D. Much of differential Galois theory is parallel to algebraic Galois theory. One difference between the two constructions is that the Galois groups in differential Galois theory tend to be matrix Lie groups, as compared with the finite groups often encountered in algebraic Galois theory.
== Motivation and basic concepts ==
In mathematics, the indefinite integrals of some elementary functions cannot themselves be expressed as elementary functions. A well-known example is {\displaystyle e^{-x^{2}}}, whose indefinite integral is the error function {\displaystyle \operatorname {erf} x}, familiar in statistics. Other examples include the sinc function {\displaystyle {\tfrac {\sin x}{x}}} and {\displaystyle x^{x}}.
The concept of elementary functions is merely conventional: if we redefine elementary functions to include the error function, then under this definition the indefinite integral of {\displaystyle e^{-x^{2}}} would be considered elementary. However, no matter how many functions are added to the definition of elementary functions, there will always be functions whose indefinite integrals are not elementary.
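This can be seen concretely with a computer algebra system: SymPy expresses the antiderivative only in terms of the (conventionally non-elementary) error function. A minimal sketch, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')

# The antiderivative of exp(-x**2) cannot be written with elementary
# functions alone; SymPy returns it in terms of the error function erf.
antideriv = sp.integrate(sp.exp(-x**2), x)
print(antideriv)  # sqrt(pi)*erf(x)/2

# Differentiating back recovers the original integrand.
assert sp.simplify(sp.diff(antideriv, x) - sp.exp(-x**2)) == 0
```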
Using differential Galois theory, it is possible to determine which indefinite integrals of elementary functions cannot be expressed as elementary functions. Differential Galois theory is based on the framework of Galois theory. While algebraic Galois theory studies field extensions of fields, differential Galois theory studies extensions of differential fields, i.e. fields with a derivation D.
Most of differential Galois theory is analogous to algebraic Galois theory. The significant difference in the structure is that the Galois group in differential Galois theory is an algebraic group, whereas in algebraic Galois theory, it is a profinite group equipped with the Krull topology.
== Definition ==
For any differential field F with derivation D, there exists a subfield called the field of constants of F, defined as:
Con(F) = {f ∈ F | Df = 0}.
The field of constants contains the prime field of F.
Given two differential fields F and G, G is called a simple differential extension of F if G = F(t) is generated over F by a single element t. If t satisfies
∃s∈F; Dt = Ds/s,
then G is called a logarithmic extension of F.
This has the form of a logarithmic derivative. Intuitively, t can be thought of as the logarithm of some element s in F, corresponding to the usual chain rule. The logarithm of an element of F is not necessarily uniquely defined, and various logarithmic extensions of F can be considered. Similarly, an exponential extension satisfies
∃s∈F; Dt = tDs,
and a differential extension satisfies
∃s∈F; Dt = s.
A differential extension or exponential extension becomes a Picard-Vessiot extension when the field has characteristic zero and the constant fields of the extended fields match.
Keeping the above caveat in mind, this element can be regarded as the exponential of an element s in F. Finally, if there is a finite sequence of intermediate fields from F to G with Con(F) = Con(G), such that each extension in the sequence is either a finite algebraic extension, a logarithmic extension, or an exponential extension, then G is called an elementary differential extension.
Consider the homogeneous linear differential equation for {\displaystyle a_{1},\cdots ,a_{n}\in F}:
{\displaystyle D^{n}y+a_{1}D^{n-1}y+\cdots +a_{n-1}Dy+a_{n}y=0} … (1).
There exist at most n linearly independent solutions over the field of constants. An extension G of F is a Picard-Vessiot extension for the differential equation (1) if G is generated by all solutions of (1) and satisfies Con(F) = Con(G).
An extension G of F is a Liouville extension if Con(F) = Con(G) is an algebraically closed field, and there exists an increasing chain of subfields
F = F0 ⊂ F1 ⊂ … ⊂ Fn = G
such that each extension Fk+1 : Fk is either a finite algebraic extension, a differential extension, or an exponential extension. A Liouville extension of the rational function field C(x) consists of functions obtained by finite combinations of rational functions, exponential functions, roots of algebraic equations, and their indefinite integrals. Clearly, logarithmic functions, trigonometric functions, and their inverses are Liouville functions over C(x), and especially elementary differential extensions are Liouville extensions.
An example of a function that is contained in a Liouville extension of C(x) but not in an elementary differential extension is the indefinite integral of {\displaystyle e^{-x^{2}}}.
== Basic properties ==
For a differential field F, if G is a separable algebraic extension of F, the derivation of F uniquely extends to a derivation of G. Hence, G uniquely inherits the differential structure of F.
Suppose F and G are differential fields satisfying Con(F) = Con(G), and G is an elementary differential extension of F. Let a ∈ F and y ∈ G such that Dy = a (i.e., G contains the indefinite integral of a). Then there exist c1, …, cn ∈ Con(F) and u1, …, un, v ∈ F such that
{\displaystyle a=c_{1}{\frac {Du_{1}}{u_{1}}}+\dotsb +c_{n}{\frac {Du_{n}}{u_{n}}}+Dv}
(Liouville's theorem). In other words, only functions whose indefinite integrals are elementary (i.e., at worst contained within the elementary differential extension of F) have the form stated in the theorem. Intuitively, only elementary indefinite integrals can be expressed as the sum of a finite number of logarithms of simple functions.
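For instance, a = 1/(x² − 1) has an elementary antiderivative, and its partial-fraction decomposition exhibits exactly the form required by Liouville's theorem, with c1 = 1/2, u1 = x − 1, c2 = −1/2, u2 = x + 1 and v = 0. A sketch with SymPy (the concrete function is an illustrative choice, not part of the theorem's statement):

```python
import sympy as sp

x = sp.symbols('x')
a = 1 / (x**2 - 1)

# Liouville form: a = c1*Du1/u1 + c2*Du2/u2 + Dv
c1, u1 = sp.Rational(1, 2), x - 1
c2, u2 = sp.Rational(-1, 2), x + 1
v = sp.Integer(0)

liouville_form = c1*sp.diff(u1, x)/u1 + c2*sp.diff(u2, x)/u2 + sp.diff(v, x)
assert sp.simplify(a - liouville_form) == 0

# Correspondingly, the antiderivative is a finite sum of logarithms.
antideriv = c1*sp.log(u1) + c2*sp.log(u2)
assert sp.simplify(sp.diff(antideriv, x) - a) == 0
```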
If G/F is a Picard-Vessiot extension, then G being a Liouville extension of F is equivalent to the differential Galois group having a solvable identity component. Furthermore, G being a Liouville extension of F is equivalent to G being embeddable in some Liouville extension field of F.
== Examples ==
The field of rational functions of one complex variable C(x) becomes a differential field when taking the usual differentiation with respect to the variable x as the derivation. The field of constants of this field is the complex number field C.
By Liouville's theorem mentioned above, if f(z) and g(z) are rational functions in z, f(z) is non-zero, and g(z) is non-constant, then
{\displaystyle \textstyle \int f(z)e^{g(z)}\,dz}
is an elementary function if and only if there exists a rational function h(z) such that
{\displaystyle f(z)=h'(z)+h(z)g'(z).}
The fact that the error function and the sine integral (the indefinite integral of the sinc function) cannot be expressed as elementary functions follows immediately from this property.
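As a positive example of the criterion, take f(z) = z and g(z) = z²: the rational function h(z) = 1/2 satisfies f = h′ + hg′, so ∫ z·e^(z²) dz is elementary, namely e^(z²)/2. A sketch with SymPy:

```python
import sympy as sp

z = sp.symbols('z')
f = z
g = z**2

# h(z) = 1/2 witnesses the criterion f = h' + h*g'.
h = sp.Rational(1, 2)
assert sp.simplify(f - (sp.diff(h, z) + h*sp.diff(g, z))) == 0

# The elementary antiderivative is then h(z)*exp(g(z)) = exp(z**2)/2.
assert sp.simplify(sp.diff(h*sp.exp(g), z) - f*sp.exp(g)) == 0
```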
In the case of the differential equation {\displaystyle y''+y=0}, the Galois group is the multiplicative group of complex numbers with absolute value 1, also known as the circle group. This is an example of a solvable group, and indeed the solutions to this differential equation are elementary functions (trigonometric functions in this case).
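A sketch with SymPy confirms that the general solution is built from trigonometric functions, in line with the solvable (circle) Galois group:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y'' + y = 0 has elementary (trigonometric) solutions.
ode = sp.Eq(y(x).diff(x, 2) + y(x), 0)
sol = sp.dsolve(ode, y(x))

# The general solution is C1*sin(x) + C2*cos(x).
assert sp.checkodesol(ode, sol)[0]
assert sol.rhs.has(sp.sin) and sol.rhs.has(sp.cos)
```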
The differential Galois group of the Airy equation, {\displaystyle y''-xy=0}, over the complex numbers is the special linear group of degree two, SL(2,C). This group is not solvable, indicating that its solutions cannot be expressed using elementary functions. Instead, the solutions are known as Airy functions.
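SymPy likewise solves the Airy equation, returning the non-elementary Airy functions rather than any elementary expression. A sketch, assuming a SymPy version that knows the Airy functions:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# The Airy equation y'' - x*y = 0.
ode = sp.Eq(y(x).diff(x, 2) - x*y(x), 0)
sol = sp.dsolve(ode, y(x))

# The solutions are the special Airy functions Ai and Bi.
assert sp.checkodesol(ode, sol)[0]
assert sol.rhs.has(sp.airyai) and sol.rhs.has(sp.airybi)
```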
== Applications ==
Differential Galois theory has numerous applications in mathematics and physics. It is used, for instance, in determining whether a given differential equation can be solved by quadrature (integration). It also has applications in the study of dynamical systems, including the integrability of Hamiltonian systems in classical mechanics.
One significant application is the analysis of integrability conditions for differential equations, which has implications in the study of symmetries and conservation laws in physics.
== See also ==
Picard–Vessiot theory
== References ==
== Sources ==
Kolchin, E. R., Differential Algebra and Algebraic Groups, Academic Press, 1973.
Hubbard, John H.; Lundell, Benjamin E. (2011). "A First Look at Differential Algebra" (PDF). The American Mathematical Monthly. 118 (3): 245–261. doi:10.4169/amer.math.monthly.118.03.245. JSTOR 10.4169/amer.math.monthly.118.03.245. S2CID 1567399.
Bertrand, D. (1996), "Review of "Lectures on differential Galois theory"" (PDF), Bulletin of the American Mathematical Society, 33 (2), doi:10.1090/s0273-0979-96-00652-0, ISSN 0002-9904
Beukers, Frits (1992), "8. Differential Galois theory", in Waldschmidt, Michel; Moussa, Pierre; Luck, Jean-Marc; Itzykson, Claude (eds.), From number theory to physics. Lectures of a meeting on number theory and physics held at the Centre de Physique, Les Houches (France), March 7–16, 1989, Berlin: Springer-Verlag, pp. 413–439, ISBN 3-540-53342-7, Zbl 0813.12001
Magid, Andy R. (1994), Lectures on differential Galois theory, University Lecture Series, vol. 7, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-7004-4, MR 1301076
Magid, Andy R. (1999), "Differential Galois theory" (PDF), Notices of the American Mathematical Society, 46 (9): 1041–1049, ISSN 0002-9920, MR 1710665
van der Put, Marius; Singer, Michael F. (2003), Galois theory of linear differential equations, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 328, Berlin, New York: Springer-Verlag, ISBN 978-3-540-44228-8, MR 1960772
Morales Ruiz, Juan J. (1999), Differential Galois Theory and Non-Integrability of Hamiltonian Systems, Birkhäuser, ISBN 978-3764360788.
In mathematics, topological Galois theory is a mathematical theory which originated from a topological proof of Abel's impossibility theorem found by Vladimir Arnold and concerns the applications of some topological concepts to some problems in the field of Galois theory. It connects many ideas from algebra to ideas in topology. As described in Askold Khovanskii's book: "According to this theory, the way the Riemann surface of an analytic function covers the plane of complex numbers can obstruct the representability of this function by explicit formulas. The strongest known results on the unexpressibility of functions by explicit formulas have been obtained in this way."
== References ==
Alekseev, Valerij B. (2004). Abel's theorem in problems and solutions: based on the lectures of Professor V. I. Arnold. Dordrecht: Kluwer. ISBN 978-1-4020-2186-2. MR 2110624.
Khovanskii, Askold G. (2014). Topological Galois Theory. Springer Monographs in Mathematics. Heidelberg: Springer. ISBN 978-3-642-38870-5. MR 3289210.
Burda, Yuri (2012). Topological Methods in Galois Theory (PDF) (Thesis). University of Toronto. ISBN 978-0494-79401-2. MR 3153194.
In mathematics, a derivation is a function on an algebra that generalizes certain features of the derivative operator. Specifically, given an algebra A over a ring or a field K, a K-derivation is a K-linear map D : A → A that satisfies Leibniz's law:
{\displaystyle D(ab)=aD(b)+D(a)b.}
More generally, if M is an A-bimodule, a K-linear map D : A → M that satisfies the Leibniz law is also called a derivation. The collection of all K-derivations of A to itself is denoted by DerK(A). The collection of K-derivations of A into an A-module M is denoted by DerK(A, M).
Derivations occur in many different contexts in diverse areas of mathematics. The partial derivative with respect to a variable is an R-derivation on the algebra of real-valued differentiable functions on Rn. The Lie derivative with respect to a vector field is an R-derivation on the algebra of differentiable functions on a differentiable manifold; more generally it is a derivation on the tensor algebra of a manifold. It follows that the adjoint representation of a Lie algebra is a derivation on that algebra. The Pincherle derivative is an example of a derivation in abstract algebra. If the algebra A is noncommutative, then the commutator with respect to an element of the algebra A defines a linear endomorphism of A to itself, which is a derivation over K. That is,
{\displaystyle [FG,N]=[F,N]G+F[G,N],}
where {\displaystyle [\cdot ,N]} is the commutator with respect to {\displaystyle N}. An algebra A equipped with a distinguished derivation d forms a differential algebra, and is itself a significant object of study in areas such as differential Galois theory.
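The derivation property of the commutator can be checked on concrete matrices. A small sketch with NumPy, using arbitrary example matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
F, G, N = (rng.standard_normal((3, 3)) for _ in range(3))

def comm(A, B):
    """Commutator [A, B] = AB - BA."""
    return A @ B - B @ A

# [FG, N] = [F, N]G + F[G, N]: the map X -> [X, N] obeys the Leibniz law,
# so it is a derivation on the (noncommutative) matrix algebra.
lhs = comm(F @ G, N)
rhs = comm(F, N) @ G + F @ comm(G, N)
assert np.allclose(lhs, rhs)
```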
== Properties ==
If A is a K-algebra, for K a ring, and D: A → A is a K-derivation, then
If A has a unit 1, then D(1) = D(1^2) = 2D(1), so that D(1) = 0. Thus by K-linearity, D(k) = 0 for all k ∈ K.
If A is commutative, D(x^2) = xD(x) + D(x)x = 2xD(x), and D(x^n) = nx^(n−1)D(x), by the Leibniz rule.
More generally, for any x1, x2, …, xn ∈ A, it follows by induction that
{\displaystyle D(x_{1}x_{2}\cdots x_{n})=\sum _{i}x_{1}\cdots x_{i-1}D(x_{i})x_{i+1}\cdots x_{n}}
which is {\textstyle \sum _{i}D(x_{i})\prod _{j\neq i}x_{j}} if, for all i, D(xi) commutes with {\displaystyle x_{1},x_{2},\ldots ,x_{i-1}}.
For n > 1, D^n is not a derivation, instead satisfying a higher-order Leibniz rule:
{\displaystyle D^{n}(uv)=\sum _{k=0}^{n}{\binom {n}{k}}\cdot D^{n-k}(u)\cdot D^{k}(v).}
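These identities can be spot-checked with the usual derivative d/dx, which is an R-derivation on differentiable functions. A sketch with SymPy, with n = 5 and n = 3 as arbitrary illustrative choices:

```python
import sympy as sp

x = sp.symbols('x')
D = lambda expr: sp.diff(expr, x)

# D(x^n) = n*x^(n-1)*D(x) in a commutative algebra.
n = 5
assert sp.simplify(D(x**n) - n*x**(n - 1)*D(x)) == 0

# Higher-order Leibniz rule: D^m(uv) = sum_k C(m,k) D^(m-k)(u) D^k(v).
u, v = sp.sin(x), sp.exp(2*x)
m = 3
lhs = sp.diff(u*v, x, m)
rhs = sum(sp.binomial(m, k)*sp.diff(u, x, m - k)*sp.diff(v, x, k)
          for k in range(m + 1))
assert sp.simplify(lhs - rhs) == 0
```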
Moreover, if M is an A-bimodule, write {\displaystyle \operatorname {Der} _{K}(A,M)} for the set of K-derivations from A to M.
DerK(A, M) is a module over K.
DerK(A) is a Lie algebra with Lie bracket defined by the commutator:
{\displaystyle [D_{1},D_{2}]=D_{1}\circ D_{2}-D_{2}\circ D_{1},}
since it is readily verified that the commutator of two derivations is again a derivation.
There is an A-module ΩA/K (called the Kähler differentials) with a K-derivation d: A → ΩA/K through which any derivation D: A → M factors. That is, for any derivation D there is an A-module map φ with
{\displaystyle D:A{\stackrel {d}{\longrightarrow }}\Omega _{A/K}{\stackrel {\varphi }{\longrightarrow }}M}
The correspondence {\displaystyle D\leftrightarrow \varphi } is an isomorphism of A-modules:
{\displaystyle \operatorname {Der} _{K}(A,M)\simeq \operatorname {Hom} _{A}(\Omega _{A/K},M)}
If k ⊂ K is a subring, then A inherits a k-algebra structure, so there is an inclusion
{\displaystyle \operatorname {Der} _{K}(A,M)\subset \operatorname {Der} _{k}(A,M),}
since any K-derivation is a fortiori a k-derivation.
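The Lie bracket defined above can be checked concretely: with D1 = d/dx and D2 = x²·d/dx acting on smooth functions (an illustrative choice), their commutator again satisfies the Leibniz law. A sketch with SymPy:

```python
import sympy as sp

x = sp.symbols('x')

D1 = lambda g: sp.diff(g, x)          # D1 = d/dx
D2 = lambda g: x**2 * sp.diff(g, x)   # D2 = x^2 * d/dx

def bracket(g):
    """Commutator [D1, D2] = D1 o D2 - D2 o D1."""
    return D1(D2(g)) - D2(D1(g))

# [D1, D2] satisfies the Leibniz law, so it is again a derivation.
a, b = sp.sin(x), sp.exp(x)
assert sp.simplify(bracket(a*b) - (a*bracket(b) + bracket(a)*b)) == 0

# Here [D1, D2] = 2x * d/dx, itself visibly a derivation.
f = sp.Function('f')
assert sp.simplify(bracket(f(x)) - 2*x*sp.diff(f(x), x)) == 0
```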
== Graded derivations ==
Given a graded algebra A and a homogeneous linear map D of grade |D| on A, D is a homogeneous derivation if
{\displaystyle {D(ab)=D(a)b+\varepsilon ^{|a||D|}aD(b)}}
for every homogeneous element a and every element b of A, for a commutator factor ε = ±1. A graded derivation is a sum of homogeneous derivations with the same ε.
If ε = 1, this definition reduces to the usual case. If ε = −1, however, then
{\displaystyle {D(ab)=D(a)b+(-1)^{|a||D|}aD(b)}}
for odd |D|, and D is called an anti-derivation.
Examples of anti-derivations include the exterior derivative and the interior product acting on differential forms.
Graded derivations of superalgebras (i.e. Z2-graded algebras) are often called superderivations.
== Related notions ==
Hasse–Schmidt derivations are K-algebra homomorphisms
{\displaystyle A\to A[[t]].}
Composing further with the map that sends a formal power series {\displaystyle \sum a_{n}t^{n}} to the coefficient {\displaystyle a_{1}} gives a derivation.
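On a polynomial algebra, the Taylor map a(x) ↦ a(x + t) is a prototypical example of such a homomorphism A → A[[t]], and its t-coefficient is the usual derivative. A sketch with SymPy, using arbitrary example polynomials:

```python
import sympy as sp

x, t = sp.symbols('x t')

def taylor(a):
    """Hasse-Schmidt-style map: substitute x -> x + t (Taylor expansion in t)."""
    return sp.expand(a.subs(x, x + t))

a, b = x**2 + 1, x**3 - x

# It is an algebra homomorphism: taylor(a*b) = taylor(a)*taylor(b).
assert sp.expand(taylor(a*b) - taylor(a)*taylor(b)) == 0

# Extracting the coefficient of t recovers the derivation a -> da/dx.
assert taylor(a).coeff(t, 1) == sp.diff(a, x)
```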
== See also ==
In differential geometry derivations are tangent vectors
Kähler differential
Hasse derivative
p-derivation
Wirtinger derivatives
Derivative of the exponential map
== References ==
Bourbaki, Nicolas (1989), Algebra I, Elements of mathematics, Springer-Verlag, ISBN 3-540-64243-9.
Eisenbud, David (1999), Commutative algebra with a view toward algebraic geometry (3rd. ed.), Springer-Verlag, ISBN 978-0-387-94269-8.
Matsumura, Hideyuki (1970), Commutative algebra, Mathematics lecture note series, W. A. Benjamin, ISBN 978-0-8053-7025-6.
Kolář, Ivan; Slovák, Jan; Michor, Peter W. (1993), Natural operations in differential geometry, Springer-Verlag.
In algebra, the cofree coalgebra of a vector space or module is a coalgebra analog of the free algebra of a vector space. The cofree coalgebra of any vector space over a field exists, though it is more complicated than one might expect by analogy with the free algebra.
== Definition ==
If V is a vector space over a field F, then the cofree coalgebra C (V), of V, is a coalgebra together with a linear map C (V) → V, such that any linear map from a coalgebra X to V factors through a coalgebra homomorphism from X to C (V). In other words, the functor C is right adjoint to the forgetful functor from coalgebras to vector spaces.
The cofree coalgebra of a vector space always exists, and is unique up to canonical isomorphism.
Cofree cocommutative coalgebras are defined in a similar way, and can be constructed as the largest cocommutative coalgebra in the cofree coalgebra.
== Construction ==
C (V) may be constructed as a completion of the tensor coalgebra T(V) of V. For k ∈ N = {0, 1, 2, ...}, let TkV denote the k-fold tensor power of V:
{\displaystyle T^{k}V=V^{\otimes k}=V\otimes V\otimes \cdots \otimes V,}
with T0V = F, and T1V = V. Then T(V) is the direct sum of all TkV:
{\displaystyle T(V)=\bigoplus _{k\in \mathbb {N} }T^{k}V=\mathbb {F} \oplus V\oplus (V\otimes V)\oplus (V\otimes V\otimes V)\oplus \cdots .}
In addition to the graded algebra structure given by the tensor product isomorphisms TjV ⊗ TkV → Tj+kV for j, k ∈ N, T(V) has a graded coalgebra structure Δ : T(V) → T(V) ⊠ T(V) defined by extending
{\displaystyle \Delta (v_{1}\otimes \dots \otimes v_{k}):=\sum _{j=0}^{k}(v_{0}\otimes \dots \otimes v_{j})\boxtimes (v_{j+1}\otimes \dots \otimes v_{k+1})}
by linearity to all of T(V).
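On the basis of words, the coproduct above is deconcatenation. A minimal sketch (ours; words are tuples of basis vectors, with the empty tuple standing for 1 ∈ F):

```python
def coproduct(word):
    # Deconcatenation coproduct on T(V):
    # Delta(v1⊗...⊗vk) = sum_j (v1⊗...⊗vj) ⊠ (v_{j+1}⊗...⊗vk),
    # where an empty factor () plays the role of 1.
    return [(word[:j], word[j:]) for j in range(len(word) + 1)]

# Delta(v) = 1 ⊠ v + v ⊠ 1
assert coproduct(('v',)) == [((), ('v',)), (('v',), ())]
# Delta(v⊗w) = 1 ⊠ (v⊗w) + v ⊠ w + (v⊗w) ⊠ 1
assert coproduct(('v', 'w')) == [((), ('v', 'w')),
                                 (('v',), ('w',)),
                                 (('v', 'w'), ())]
```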
Here, the tensor product symbol ⊠ is used to indicate the tensor product used to define a coalgebra; it must not be confused with the tensor product ⊗, which is used to define the bilinear multiplication operator of the tensor algebra. The two act in different spaces, on different objects. Additional discussion of this point can be found in the tensor algebra article.
The sum above makes use of a short-hand trick, defining {\displaystyle v_{0}=v_{k+1}=1\in \mathbb {F} } to be the unit in the field {\displaystyle \mathbb {F} }. For example, this short-hand trick gives, for the case of {\displaystyle k=1} in the above sum, the result that
{\displaystyle \Delta (v)=1\boxtimes v+v\boxtimes 1}
for {\displaystyle v\in V}. Similarly, for {\displaystyle k=2} and {\displaystyle v,w\in V}, one gets
{\displaystyle \Delta (v\otimes w)=1\boxtimes (v\otimes w)+v\boxtimes w+(v\otimes w)\boxtimes 1.}
Note that there is no need to ever write {\displaystyle 1\otimes v}, as this is just plain scalar multiplication in the algebra; that is, one trivially has that {\displaystyle 1\otimes v=1\cdot v=v.}
With the usual product this coproduct does not make T(V) into a bialgebra, but is instead dual to the algebra structure on T(V∗), where V∗ denotes the dual vector space of linear maps V → F. It can be turned into a bialgebra with the product
{\displaystyle v_{i}\cdot v_{j}=(i,j)v_{i+j}}
where (i,j) denotes the binomial coefficient {\displaystyle {\tbinom {i+j}{i}}}. This bialgebra is known as the divided power Hopf algebra. The product is dual to the coalgebra structure on T(V∗) which makes the tensor algebra a bialgebra.
Here an element of T(V) defines a linear form on T(V∗) using the nondegenerate pairings
{\displaystyle T^{k}V\times T^{k}V^{*}\to \mathbb {F} }
induced by evaluation, and the duality between the coproduct on T(V) and the product on T(V∗) means that
{\displaystyle \Delta (f)(a\otimes b)=f(ab).}
This duality extends to a nondegenerate pairing
{\displaystyle {\hat {T}}(V)\times T(V^{*})\to \mathbb {F} ,}
where
{\displaystyle {\hat {T}}(V)=\prod _{k\in \mathbb {N} }T^{k}V}
is the direct product of the tensor powers of V. (The direct sum T(V) is the subspace of the direct product for which only finitely many components are nonzero.) However, the coproduct Δ on T(V) only extends to a linear map
{\displaystyle {\hat {\Delta }}\colon {\hat {T}}(V)\to {\hat {T}}(V){\hat {\otimes }}{\hat {T}}(V)}
with values in the completed tensor product, which in this case is
{\displaystyle {\hat {T}}(V){\hat {\otimes }}{\hat {T}}(V)=\prod _{j,k\in \mathbb {N} }T^{j}V\otimes T^{k}V,}
and contains the tensor product as a proper subspace:
{\displaystyle {\hat {T}}(V)\otimes {\hat {T}}(V)=\{X\in {\hat {T}}(V){\hat {\otimes }}{\hat {T}}(V):\exists \,k\in \mathbb {N} ,f_{j},g_{j}\in {\hat {T}}(V){\text{ s.t. }}X={\textstyle \sum }_{j=0}^{k}(f_{j}\otimes g_{j})\}.}
The completed tensor coalgebra C (V) is the largest subspace C satisfying
{\displaystyle T(V)\subseteq C\subseteq {\hat {T}}(V){\text{ and }}{\hat {\Delta }}(C)\subseteq C\otimes C\subseteq {\hat {T}}(V){\hat {\otimes }}{\hat {T}}(V),}
which exists because if C1 and C2 satisfy these conditions, then so does their sum C1 + C2.
It turns out that C (V) is the subspace of all representative elements:
{\displaystyle C(V)=\{f\in {\hat {T}}(V):{\hat {\Delta }}(f)\in {\hat {T}}(V)\otimes {\hat {T}}(V)\}.}
Furthermore, by the finiteness principle for coalgebras, any f ∈ C (V) must belong to a finite-dimensional subcoalgebra of C (V). Using the duality pairing with T(V∗), it follows that f ∈ C (V) if and only if the kernel of f on T(V∗) contains a two-sided ideal of finite codimension. Equivalently,
{\displaystyle C(V)=\bigcup \{I^{0}\subseteq {\hat {T}}(V):I\triangleleft T(V^{*}),\,\mathrm {codim} \,I<\infty \}}
is the union of annihilators I0 of finite-codimension ideals I in T(V∗), which are isomorphic to the duals of the finite-dimensional algebra quotients T(V∗)/I.
=== Example ===
When V = F, T(V∗) is the polynomial algebra F[t] in one variable t, and the direct product {\displaystyle {\hat {T}}(V)=\prod _{k\in \mathbb {N} }T^{k}V} may be identified with the vector space F[[τ]] of formal power series {\displaystyle \sum _{j\in \mathbb {N} }a_{j}\tau ^{j}} in an indeterminate τ. The coproduct Δ on the subspace F[τ] is determined by
{\displaystyle \Delta (\tau ^{k})=\sum _{i+j=k}\tau ^{i}\otimes \tau ^{j}}
and C (V) is the largest subspace of F[[τ]] on which this extends to a coalgebra structure.
The duality F[[τ]] × F[t] → F is determined by τj(tk) = δjk, so that
{\displaystyle {\biggl (}\sum _{j\in \mathbb {N} }a_{j}\tau ^{j}{\biggr )}{\biggl (}\sum _{k=0}^{N}b_{k}t^{k}{\biggr )}=\sum _{k=0}^{N}a_{k}b_{k}.}
Putting t = τ−1, this is the constant term in the product of two formal Laurent series. Thus, given a polynomial p(t) with leading term tN, the formal Laurent series
{\displaystyle {\frac {\tau ^{j-N}}{p(\tau ^{-1})}}={\frac {\tau ^{j}}{\tau ^{N}p(\tau ^{-1})}}}
is a formal power series for any j ∈ N, and annihilates the ideal I(p) generated by p for j < N. Since F[t]/I(p) has dimension N, these formal power series span the annihilator of I(p). Furthermore, they all belong to the localization of F[τ] at the ideal generated by τ, i.e., they have the form f(τ)/g(τ) where f and g are polynomials, and g has nonzero constant term. This is the space of rational functions in τ which are regular at zero. Conversely, any proper rational function annihilates an ideal of the form I(p).
Any nonzero ideal of F[t] is principal, with finite-dimensional quotient. Thus C (V) is the sum of the annihilators of the principal ideals I(p), i.e., the space of rational functions regular at zero.
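A hedged numerical check of this duality (our own example, not from the article): the rational function τ/(1 − τ − τ²) is regular at zero, its coefficients are the Fibonacci numbers, and under the pairing above it annihilates the ideal generated by p(t) = t² − t − 1; the helper names are ours.

```python
def series_coeffs(n):
    # coefficients of τ/(1 - τ - τ²), a rational function regular
    # at zero: the Fibonacci numbers a_0 = 0, a_1 = 1, ...
    a = [0, 1]
    while len(a) < n:
        a.append(a[-1] + a[-2])
    return a

def pairing(a, poly):
    # duality F[[τ]] × F[t] → F: ⟨Σ a_j τ^j, Σ b_k t^k⟩ = Σ a_k b_k;
    # polynomials are dicts {exponent: coefficient}
    return sum(a[k] * b for k, b in poly.items() if k < len(a))

a = series_coeffs(30)
p = {2: 1, 1: -1, 0: -1}                         # p(t) = t² - t - 1
for m in range(20):
    shifted = {k + m: b for k, b in p.items()}   # t^m · p(t)
    # a_{m+2} - a_{m+1} - a_m = 0: f kills the whole ideal I(p)
    assert pairing(a, shifted) == 0
```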
== References ==
Block, Richard E.; Leroux, Pierre (1985), "Generalized dual coalgebras of algebras, with applications to cofree coalgebras", Journal of Pure and Applied Algebra, 36 (1): 15–21, doi:10.1016/0022-4049(85)90060-X, ISSN 0022-4049, MR 0782637
Hazewinkel, Michiel (2003), "Cofree coalgebras and multivariable recursiveness", Journal of Pure and Applied Algebra, 183 (1): 61–103, doi:10.1016/S0022-4049(03)00013-6, ISSN 0022-4049, MR 1992043
cofree coalgebra at the nLab
In mathematics, a braided Hopf algebra is a Hopf algebra in a braided monoidal category. The most common braided Hopf algebras are objects in a Yetter–Drinfeld category of a Hopf algebra H, particularly the Nichols algebra of a braided vector space in that category.
The notion should not be confused with quasitriangular Hopf algebra.
== Definition ==
Let H be a Hopf algebra over a field k, and assume that the antipode of H is bijective. A Yetter–Drinfeld module R over H is called a braided bialgebra in the Yetter–Drinfeld category {\displaystyle {}_{H}^{H}{\mathcal {YD}}} if
{\displaystyle (R,\cdot ,\eta )} is a unital associative algebra, where the multiplication map {\displaystyle \cdot :R\times R\to R} and the unit {\displaystyle \eta :k\to R} are maps of Yetter–Drinfeld modules,
{\displaystyle (R,\Delta ,\varepsilon )} is a coassociative coalgebra with counit {\displaystyle \varepsilon }, and both {\displaystyle \Delta } and {\displaystyle \varepsilon } are maps of Yetter–Drinfeld modules,
the maps {\displaystyle \Delta :R\to R\otimes R} and {\displaystyle \varepsilon :R\to k} are algebra maps in the category {\displaystyle {}_{H}^{H}{\mathcal {YD}}}, where the algebra structure of {\displaystyle R\otimes R} is determined by the unit {\displaystyle \eta \otimes \eta (1):k\to R\otimes R} and the multiplication map
{\displaystyle (R\otimes R)\times (R\otimes R)\to R\otimes R,\quad (r\otimes s,t\otimes u)\mapsto \sum _{i}rt_{i}\otimes s_{i}u,\quad {\text{and}}\quad c(s\otimes t)=\sum _{i}t_{i}\otimes s_{i}.}
Here c is the canonical braiding in the Yetter–Drinfeld category {\displaystyle {}_{H}^{H}{\mathcal {YD}}}.
A braided bialgebra in {\displaystyle {}_{H}^{H}{\mathcal {YD}}} is called a braided Hopf algebra, if there is a morphism {\displaystyle S:R\to R} of Yetter–Drinfeld modules such that
{\displaystyle S(r^{(1)})r^{(2)}=r^{(1)}S(r^{(2)})=\eta (\varepsilon (r))}
for all {\displaystyle r\in R,} where {\displaystyle \Delta _{R}(r)=r^{(1)}\otimes r^{(2)}} in slightly modified Sweedler notation – a change of notation is performed in order to avoid confusion in Radford's biproduct below.
== Examples ==
Any Hopf algebra is also a braided Hopf algebra over {\displaystyle H=k}.
A super Hopf algebra is nothing but a braided Hopf algebra over the group algebra {\displaystyle H=k[\mathbb {Z} /2\mathbb {Z} ]}.
The tensor algebra {\displaystyle TV} of a Yetter–Drinfeld module {\displaystyle V\in {}_{H}^{H}{\mathcal {YD}}} is always a braided Hopf algebra. The coproduct {\displaystyle \Delta } of {\displaystyle TV} is defined in such a way that the elements of V are primitive, that is
{\displaystyle \Delta (v)=1\otimes v+v\otimes 1\quad {\text{for all}}\quad v\in V.}
The counit {\displaystyle \varepsilon :TV\to k} then satisfies the equation {\displaystyle \varepsilon (v)=0} for all {\displaystyle v\in V.}
The universal quotient of {\displaystyle TV} that is still a braided Hopf algebra containing {\displaystyle V} as primitive elements is called the Nichols algebra. Nichols algebras take the role of quantum Borel algebras in the classification of pointed Hopf algebras, analogously to the classical Lie algebra case.
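For the super case above (ε = −1), the coproduct on TV can be made concrete: multiplying Δ(v)Δ(w) in TV ⊗ TV with the Koszul sign rule gives Δ(vw) = 1⊗vw + v⊗w − w⊗v + vw⊗1 for odd generators v, w. A minimal sketch (ours; the dict-based representation and function names are assumptions of this sketch):

```python
def delta_generator(v):
    # Delta(v) = 1 ⊗ v + v ⊗ 1 for a primitive generator; elements of
    # TV ⊗ TV are dicts mapping (left word, right word) -> coefficient
    return {((), (v,)): 1, ((v,), ()): 1}

def mul_tensor_square(A, B, deg):
    # multiply in TV ⊗ TV with the Koszul sign of the braiding:
    # (a ⊗ b)(c ⊗ d) = (-1)^{|b||c|} (ac ⊗ bd)   (super case, ε = -1)
    out = {}
    for (a, b), ca in A.items():
        for (c, d), cb in B.items():
            sign = (-1) ** (deg(b) * deg(c))
            key = (a + c, b + d)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: c for k, c in out.items() if c}

deg = len   # every generator is odd, so a word's parity is its length
Dvw = mul_tensor_square(delta_generator('v'), delta_generator('w'), deg)
# Delta(vw) = 1⊗vw + v⊗w - w⊗v + vw⊗1: the sign appears on w⊗v
assert Dvw == {((), ('v', 'w')): 1, (('v',), ('w',)): 1,
               (('w',), ('v',)): -1, (('v', 'w'), ()): 1}
```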
== Radford's biproduct ==
For any braided Hopf algebra R in {\displaystyle {}_{H}^{H}{\mathcal {YD}}} there exists a natural Hopf algebra {\displaystyle R\#H} which contains R as a subalgebra and H as a Hopf subalgebra. It is called Radford's biproduct, named after its discoverer, the Hopf algebraist David Radford. It was rediscovered by Shahn Majid, who called it bosonization.
As a vector space, {\displaystyle R\#H} is just {\displaystyle R\otimes H}. The algebra structure of {\displaystyle R\#H} is given by
{\displaystyle (r\#h)(r'\#h')=r(h_{(1)}\cdot r')\#h_{(2)}h',}
where {\displaystyle r,r'\in R,h,h'\in H}, {\displaystyle \Delta (h)=h_{(1)}\otimes h_{(2)}} (Sweedler notation) is the coproduct of {\displaystyle h\in H}, and {\displaystyle \cdot :H\otimes R\to R} is the left action of H on R. Further, the coproduct of {\displaystyle R\#H} is determined by the formula
{\displaystyle \Delta (r\#h)=(r^{(1)}\#r^{(2)}{}_{(-1)}h_{(1)})\otimes (r^{(2)}{}_{(0)}\#h_{(2)}),\quad r\in R,h\in H.}
Here {\displaystyle \Delta _{R}(r)=r^{(1)}\otimes r^{(2)}} denotes the coproduct of r in R, and {\displaystyle \delta (r^{(2)})=r^{(2)}{}_{(-1)}\otimes r^{(2)}{}_{(0)}} is the left coaction of H on {\displaystyle r^{(2)}\in R.}
== References ==
Andruskiewitsch, Nicolás and Schneider, Hans-Jürgen, Pointed Hopf algebras, New directions in Hopf algebras, 1–68, Math. Sci. Res. Inst. Publ., 43, Cambridge Univ. Press, Cambridge, 2002.
In mathematics, a tensor is an algebraic object that describes a multilinear relationship between sets of algebraic objects associated with a vector space. Tensors may map between different objects such as vectors, scalars, and even other tensors. There are many types of tensors, including scalars and vectors (which are the simplest tensors), dual vectors, multilinear maps between vector spaces, and even some operations such as the dot product. Tensors are defined independent of any basis, although they are often referred to by their components in a basis related to a particular coordinate system; those components form an array, which can be thought of as a high-dimensional matrix.
Tensors have become important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as mechanics (stress, elasticity, quantum mechanics, fluid mechanics, moment of inertia, ...), electrodynamics (electromagnetic tensor, Maxwell tensor, permittivity, magnetic susceptibility, ...), and general relativity (stress–energy tensor, curvature tensor, ...). In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example the stress within an object may vary from one location to another. This leads to the concept of a tensor field. In some areas, tensor fields are so ubiquitous that they are often simply called "tensors".
Tullio Levi-Civita and Gregorio Ricci-Curbastro popularised tensors in 1900 – continuing the earlier work of Bernhard Riemann, Elwin Bruno Christoffel, and others – as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor.
== Definition ==
Although seemingly different, the various approaches to defining tensors describe the same geometric concept using different language and at different levels of abstraction.
=== As multidimensional arrays ===
A tensor may be represented as a (potentially multidimensional) array. Just as a vector in an n-dimensional space is represented by a one-dimensional array with n components with respect to a given basis, any tensor with respect to a basis is represented by a multidimensional array. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the multidimensional array are known as the components of the tensor. They are denoted by indices giving their position in the array, as subscripts and superscripts, following the symbolic name of the tensor. For example, the components of an order-2 tensor T could be denoted Tij , where i and j are indices running from 1 to n, or also by T ij. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. Thus while Tij and T ij can both be expressed as n-by-n matrices, and are numerically related via index juggling, the difference in their transformation laws indicates it would be improper to add them together.
The total number of indices (m) required to identify each component uniquely is equal to the dimension or the number of ways of an array, which is why a tensor is sometimes referred to as an m-dimensional array or an m-way array. The total number of indices is also called the order, degree or rank of a tensor, although the term "rank" generally has another meaning in the context of matrices and tensors.
Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each type of tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis (see Covariance and contravariance of vectors), where the new basis vectors {\displaystyle \mathbf {\hat {e}} _{i}} are expressed in terms of the old basis vectors {\displaystyle \mathbf {e} _{j}} as
{\displaystyle \mathbf {\hat {e}} _{i}=\sum _{j=1}^{n}\mathbf {e} _{j}R_{i}^{j}=\mathbf {e} _{j}R_{i}^{j}.}
Here R ji are the entries of the change of basis matrix, and in the rightmost expression the summation sign was suppressed: this is the Einstein summation convention, which will be used throughout this article. The components vi of a column vector v transform with the inverse of the matrix R,
{\displaystyle {\hat {v}}^{i}=\left(R^{-1}\right)_{j}^{i}v^{j},}
where the hat denotes the components in the new basis. This is called a contravariant transformation law, because the vector components transform by the inverse of the change of basis. In contrast, the components, wi, of a covector (or row vector), w, transform with the matrix R itself,
{\displaystyle {\hat {w}}_{i}=w_{j}R_{i}^{j}.}
This is called a covariant transformation law, because the covector components transform by the same matrix as the change of basis matrix. The components of a more general tensor are transformed by some combination of covariant and contravariant transformations, with one transformation law for each index. If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is called contravariant and is conventionally denoted with an upper index (superscript). If the transformation matrix of an index is the basis transformation itself, then the index is called covariant and is denoted with a lower index (subscript).
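These two transformation laws can be checked numerically; the sketch below (our own, using NumPy) verifies that the scalar w_i v^i is invariant under a change of basis:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(3, 3))      # change-of-basis matrix (invertible a.s.)
Rinv = np.linalg.inv(R)

v = rng.normal(size=3)           # contravariant components v^j
w = rng.normal(size=3)           # covariant components w_j

v_new = Rinv @ v                 # v̂^i = (R^{-1})^i_j v^j
w_new = w @ R                    # ŵ_i = w_j R^j_i

# the pairing w(v) = w_i v^i is basis independent
assert np.isclose(w @ v, w_new @ v_new)
```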
As a simple example, the matrix of a linear operator with respect to a basis is a rectangular array {\displaystyle T} that transforms under a change of basis matrix {\displaystyle R=\left(R_{i}^{j}\right)} by {\displaystyle {\hat {T}}=R^{-1}TR}. For the individual matrix entries, this transformation law has the form
{\displaystyle {\hat {T}}_{j'}^{i'}=\left(R^{-1}\right)_{i}^{i'}T_{j}^{i}R_{j'}^{j}}
so the tensor corresponding to the matrix of a linear operator has one covariant and one contravariant index: it is of type (1,1).
Combinations of covariant and contravariant components with the same index allow us to express geometric invariants. For example, the fact that a vector is the same object in different coordinate systems can be captured by the following equations, using the formulas defined above:
{\displaystyle \mathbf {v} ={\hat {v}}^{i}\,\mathbf {\hat {e}} _{i}=\left(\left(R^{-1}\right)_{j}^{i}{v}^{j}\right)\left(\mathbf {e} _{k}R_{i}^{k}\right)=\left(\left(R^{-1}\right)_{j}^{i}R_{i}^{k}\right){v}^{j}\mathbf {e} _{k}=\delta _{j}^{k}{v}^{j}\mathbf {e} _{k}={v}^{k}\,\mathbf {e} _{k}={v}^{i}\,\mathbf {e} _{i},}
where {\displaystyle \delta _{j}^{k}} is the Kronecker delta, which functions similarly to the identity matrix, and has the effect of renaming indices (j into k in this example). This shows several features of the component notation: the ability to re-arrange terms at will (commutativity), the need to use different indices when working with multiple objects in the same expression, the ability to rename indices, and the manner in which contravariant and covariant tensors combine so that all instances of the transformation matrix and its inverse cancel, so that expressions like {\displaystyle {v}^{i}\,\mathbf {e} _{i}} can immediately be seen to be geometrically identical in all coordinate systems.
Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations. That is, the components {\displaystyle (Tv)^{i}} are given by {\displaystyle (Tv)^{i}=T_{j}^{i}v^{j}}. These components transform contravariantly, since
{\displaystyle \left({\widehat {Tv}}\right)^{i'}={\hat {T}}_{j'}^{i'}{\hat {v}}^{j'}=\left[\left(R^{-1}\right)_{i}^{i'}T_{j}^{i}R_{j'}^{j}\right]\left[\left(R^{-1}\right)_{k}^{j'}v^{k}\right]=\left(R^{-1}\right)_{i}^{i'}(Tv)^{i}.}
The transformation law for an order p + q tensor with p contravariant indices and q covariant indices is thus given as,
{\displaystyle {\hat {T}}_{j'_{1},\ldots ,j'_{q}}^{i'_{1},\ldots ,i'_{p}}=\left(R^{-1}\right)_{i_{1}}^{i'_{1}}\cdots \left(R^{-1}\right)_{i_{p}}^{i'_{p}}\,T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots ,i_{p}}\,R_{j'_{1}}^{j_{1}}\cdots R_{j'_{q}}^{j_{q}}.}
Here the primed indices denote components in the new coordinates, and the unprimed indices denote the components in the old coordinates. Such a tensor is said to be of order or type (p, q). The terms "order", "type", "rank", "valence", and "degree" are all sometimes used for the same concept. Here, the term "order" or "total order" will be used for the total dimension of the array (or its generalization in other definitions), p + q in the preceding example, and the term "type" for the pair giving the number of contravariant and covariant indices. A tensor of type (p, q) is also called a (p, q)-tensor for short.
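A numerical sketch of the general law (our own, using `numpy.einsum`) for a type (1, 2) tensor; transforming back with the inverse matrices recovers the original components:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
R = rng.normal(size=(n, n))      # change-of-basis matrix
Rinv = np.linalg.inv(R)

T = rng.normal(size=(n, n, n))   # a type (1,2) tensor T^i_{jk}

# T̂^{i'}_{j'k'} = (R^{-1})^{i'}_i T^i_{jk} R^j_{j'} R^k_{k'}
T_new = np.einsum('ai,ijk,jb,kc->abc', Rinv, T, R, R)

# applying the law with R and R^{-1} swapped undoes the transformation
T_back = np.einsum('ai,ijk,jb,kc->abc', R, T_new, Rinv, Rinv)
assert np.allclose(T, T_back)
```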
This discussion motivates the following formal definition:
Definition. A tensor of type (p, q) is an assignment of a multidimensional array
{\displaystyle T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}[\mathbf {f} ]}
to each basis f = (e1, ..., en) of an n-dimensional vector space such that, if we apply the change of basis
{\displaystyle \mathbf {f} \mapsto \mathbf {f} \cdot R=\left(\mathbf {e} _{i}R_{1}^{i},\dots ,\mathbf {e} _{i}R_{n}^{i}\right)}
then the multidimensional array obeys the transformation law
{\displaystyle T_{j'_{1}\dots j'_{q}}^{i'_{1}\dots i'_{p}}[\mathbf {f} \cdot R]=\left(R^{-1}\right)_{i_{1}}^{i'_{1}}\cdots \left(R^{-1}\right)_{i_{p}}^{i'_{p}}\,T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots ,i_{p}}[\mathbf {f} ]\,R_{j'_{1}}^{j_{1}}\cdots R_{j'_{q}}^{j_{q}}.}
The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci.
An equivalent definition of a tensor uses the representations of the general linear group. There is an action of the general linear group on the set of all ordered bases of an n-dimensional vector space. If {\displaystyle \mathbf {f} =(\mathbf {f} _{1},\dots ,\mathbf {f} _{n})} is an ordered basis, and {\displaystyle R=\left(R_{j}^{i}\right)} is an invertible {\displaystyle n\times n} matrix, then the action is given by
{\displaystyle \mathbf {f} R=\left(\mathbf {f} _{i}R_{1}^{i},\dots ,\mathbf {f} _{i}R_{n}^{i}\right).}
Let F be the set of all ordered bases. Then F is a principal homogeneous space for GL(n). Let W be a vector space and let {\displaystyle \rho } be a representation of GL(n) on W (that is, a group homomorphism {\displaystyle \rho :{\text{GL}}(n)\to {\text{GL}}(W)}). Then a tensor of type {\displaystyle \rho } is an equivariant map {\displaystyle T:F\to W}. Equivariance here means that
{\displaystyle T(FR)=\rho \left(R^{-1}\right)T(F).}
When {\displaystyle \rho } is a tensor representation of the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds, and readily generalizes to other groups.
=== As multilinear maps ===
A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common in differential geometry is to define tensors relative to a fixed (finite-dimensional) vector space V, which is usually taken to be a particular vector space of some geometrical significance like the tangent space to a manifold. In this approach, a type (p, q) tensor T is defined as a multilinear map,
{\displaystyle T:\underbrace {V^{*}\times \dots \times V^{*}} _{p{\text{ copies}}}\times \underbrace {V\times \dots \times V} _{q{\text{ copies}}}\rightarrow \mathbf {R} ,}
where V∗ is the corresponding dual space of covectors, which is linear in each of its arguments. The above assumes V is a vector space over the real numbers, {\displaystyle \mathbb {R} }. More generally, V can be taken over any field F (e.g. the complex numbers), with F replacing {\displaystyle \mathbb {R} } as the codomain of the multilinear maps.
By applying a multilinear map T of type (p, q) to a basis {ej} for V and a canonical cobasis {εi} for V∗,
{\displaystyle T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\equiv T\left({\boldsymbol {\varepsilon }}^{i_{1}},\ldots ,{\boldsymbol {\varepsilon }}^{i_{p}},\mathbf {e} _{j_{1}},\ldots ,\mathbf {e} _{j_{q}}\right),}
a (p + q)-dimensional array of components can be obtained. A different choice of basis will yield different components. But, because T is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of T thus form a tensor according to that definition. Moreover, such an array can be realized as the components of some multilinear map T. This motivates viewing multilinear maps as the intrinsic objects underlying tensors.
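This extraction of components can be sketched numerically (our own example, using NumPy): a bilinear form is a type (0, 2) tensor, evaluating it on pairs of basis vectors yields its component array, and those components transform covariantly:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.normal(size=(n, n))

def T(u, v):
    # a bilinear map V × V → R, i.e. a type (0,2) tensor
    return u @ A @ v

I = np.eye(n)
# components T_{jk} = T(e_j, e_k) in the standard basis
comps = np.array([[T(I[j], I[k]) for k in range(n)] for j in range(n)])
assert np.allclose(comps, A)

# in a new basis ê_i = e_j R^j_i the components transform covariantly:
R = rng.normal(size=(n, n))
new = np.array([[T(R[:, j], R[:, k]) for k in range(n)] for j in range(n)])
assert np.allclose(new, R.T @ A @ R)   # T̂_{j'k'} = R^j_{j'} T_{jk} R^k_{k'}
```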
In viewing a tensor as a multilinear map, it is conventional to identify the double dual V∗∗ of the vector space V, i.e., the space of linear functionals on the dual vector space V∗, with the vector space V. There is always a natural linear map from V to its double dual, given by evaluating a linear form in V∗ against a vector in V. This linear mapping is an isomorphism in finite dimensions, and it is often then expedient to identify V with its double dual.
=== Using tensor products ===
For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements of tensor products of vector spaces, which in turn are defined through a universal property as explained here and here.
A type (p, q) tensor is defined in this context as an element of the tensor product of vector spaces,
{\displaystyle T\in \underbrace {V\otimes \dots \otimes V} _{p{\text{ copies}}}\otimes \underbrace {V^{*}\otimes \dots \otimes V^{*}} _{q{\text{ copies}}}.}
A basis vi of V and basis wj of W naturally induce a basis vi ⊗ wj of the tensor product V ⊗ W. The components of a tensor T are the coefficients of the tensor with respect to the basis obtained from a basis {ei} for V and its dual basis {εj}, i.e.
{\displaystyle T=T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\;\mathbf {e} _{i_{1}}\otimes \cdots \otimes \mathbf {e} _{i_{p}}\otimes {\boldsymbol {\varepsilon }}^{j_{1}}\otimes \cdots \otimes {\boldsymbol {\varepsilon }}^{j_{q}}.}
Using the properties of the tensor product, it can be shown that these components satisfy the transformation law for a type (p, q) tensor. Moreover, the universal property of the tensor product gives a one-to-one correspondence between tensors defined in this way and tensors defined as multilinear maps.
This one-to-one correspondence can be achieved in the following way, because in the finite-dimensional case there exists a canonical isomorphism between a vector space and its double dual:
{\displaystyle U\otimes V\cong \left(U^{**}\right)\otimes \left(V^{**}\right)\cong \left(U^{*}\otimes V^{*}\right)^{*}\cong \operatorname {Hom} ^{2}\left(U^{*}\times V^{*};\mathbb {F} \right)}
The last line uses the universal property of the tensor product: there is a one-to-one correspondence between {\displaystyle \operatorname {Hom} ^{2}\left(U^{*}\times V^{*};\mathbb {F} \right)} and {\displaystyle \operatorname {Hom} \left(U^{*}\otimes V^{*};\mathbb {F} \right)}.
Tensor products can be defined in great generality – for example, involving arbitrary modules over a ring. In principle, one could define a "tensor" simply to be an element of any tensor product. However, the mathematics literature usually reserves the term tensor for an element of a tensor product of any number of copies of a single vector space V and its dual, as above.
=== Tensors in infinite dimensions ===
This discussion of tensors so far assumes finite dimensionality of the spaces involved, where the spaces of tensors obtained by each of these constructions are naturally isomorphic. Constructions of spaces of tensors based on the tensor product and multilinear mappings can be generalized, essentially without modification, to vector bundles or coherent sheaves. For infinite-dimensional vector spaces, inequivalent topologies lead to inequivalent notions of tensor, and these various isomorphisms may or may not hold depending on what exactly is meant by a tensor (see topological tensor product). In some applications, it is the tensor product of Hilbert spaces that is intended, whose properties are the most similar to the finite-dimensional case. A more modern view is that it is the tensors' structure as a symmetric monoidal category that encodes their most important properties, rather than the specific models of those categories.
=== Tensor fields ===
In many applications, especially in differential geometry and physics, it is natural to consider a tensor with components that are functions of the point in a space. This was the setting of Ricci's original work. In modern mathematical terminology such an object is called a tensor field, often referred to simply as a tensor.
In this context, a coordinate basis is often chosen for the tangent vector space. The transformation law may then be expressed in terms of partial derivatives of the coordinate functions,
{\displaystyle {\bar {x}}^{i}\left(x^{1},\ldots ,x^{n}\right),}
defining a coordinate transformation,
{\displaystyle {\hat {T}}_{j'_{1}\dots j'_{q}}^{i'_{1}\dots i'_{p}}\left({\bar {x}}^{1},\ldots ,{\bar {x}}^{n}\right)={\frac {\partial {\bar {x}}^{i'_{1}}}{\partial x^{i_{1}}}}\cdots {\frac {\partial {\bar {x}}^{i'_{p}}}{\partial x^{i_{p}}}}{\frac {\partial x^{j_{1}}}{\partial {\bar {x}}^{j'_{1}}}}\cdots {\frac {\partial x^{j_{q}}}{\partial {\bar {x}}^{j'_{q}}}}T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\left(x^{1},\ldots ,x^{n}\right).}
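For a linear change of coordinates, the partial derivatives in this transformation law are just the entries of the transformation matrix and its inverse, so the law can be checked numerically. The following sketch (with made-up components, using NumPy) verifies it for a type (1, 1) tensor:

```python
import numpy as np

# For a linear coordinate change x̄ = R x, the Jacobians reduce to
# ∂x̄^i/∂x^j = R[i, j] and ∂x^j/∂x̄^k = inv(R)[j, k].
rng = np.random.default_rng(0)
R = rng.standard_normal((3, 3)) + 3 * np.eye(3)  # an invertible change of coordinates
R_inv = np.linalg.inv(R)

T = rng.standard_normal((3, 3))  # components T^i_j in the old coordinates

# Apply the transformation law index by index with einsum ...
T_new = np.einsum('Ii,jJ,ij->IJ', R, R_inv, T)

# ... which for a (1, 1)-tensor is exactly the similarity transform R T R^-1
# familiar from the change-of-basis rule for matrices of linear maps.
assert np.allclose(T_new, R @ T @ R_inv)
```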
== History ==
The concepts of later tensor analysis arose from the work of Carl Friedrich Gauss in differential geometry, and the formulation was much influenced by the theory of algebraic forms and invariants developed during the middle of the nineteenth century. The word "tensor" itself was introduced in 1846 by William Rowan Hamilton to describe something different from what is now meant by a tensor. Gibbs introduced dyadics and polyadic algebra, which are also tensors in the modern sense. The contemporary usage was introduced by Woldemar Voigt in 1898.
Tensor calculus was developed around 1890 by Gregorio Ricci-Curbastro under the title absolute differential calculus, and originally presented in 1892. It was made accessible to many mathematicians by the publication of Ricci-Curbastro and Tullio Levi-Civita's 1900 classic text Méthodes de calcul différentiel absolu et leurs applications (Methods of absolute differential calculus and their applications). In Ricci's notation, he refers to "systems" with covariant and contravariant components, which are known as tensor fields in the modern sense.
In the 20th century, the subject came to be known as tensor analysis, and achieved broader acceptance with the introduction of Albert Einstein's theory of general relativity, around 1915. General relativity is formulated completely in the language of tensors. Einstein had learned about them, with great difficulty, from the geometer Marcel Grossmann. Levi-Civita then initiated a correspondence with Einstein to correct mistakes Einstein had made in his use of tensor analysis. The correspondence lasted 1915–17, and was characterized by mutual respect:
I admire the elegance of your method of computation; it must be nice to ride through these fields upon the horse of true mathematics while the like of us have to make our way laboriously on foot.
Tensors and tensor fields were also found to be useful in other fields such as continuum mechanics. Some well-known examples of tensors in differential geometry are quadratic forms such as metric tensors, and the Riemann curvature tensor. The exterior algebra of Hermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory of differential forms, as naturally unified with tensor calculus. The work of Élie Cartan made differential forms one of the basic kinds of tensors used in mathematics, and Hassler Whitney popularized the tensor product.
From about the 1920s onwards, it was realised that tensors play a basic role in algebraic topology (for example in the Künneth theorem). Correspondingly there are types of tensors at work in many branches of abstract algebra, particularly in homological algebra and representation theory. Multilinear algebra can be developed in greater generality than for scalars coming from a field. For example, scalars can come from a ring. But the theory is then less geometric and computations more technical and less algorithmic. Tensors are generalized within category theory by means of the concept of monoidal category, from the 1960s.
== Examples ==
An elementary example of a mapping describable as a tensor is the dot product, which maps two vectors to a scalar. A more complex example is the Cauchy stress tensor T, which takes a directional unit vector v as input and maps it to the stress vector T(v), which is the force (per unit area) exerted by material on the negative side of the plane orthogonal to v against the material on the positive side of the plane, thus expressing a relationship between these two vectors, shown in the figure (right). The cross product, where two vectors are mapped to a third one, is strictly speaking not a tensor because it changes its sign under those transformations that change the orientation of the coordinate system. The totally anti-symmetric symbol
{\displaystyle \varepsilon _{ijk}}
nevertheless allows a convenient handling of the cross product in equally oriented three dimensional coordinate systems.
This table shows important examples of tensors on vector spaces and tensor fields on manifolds. The tensors are classified according to their type (n, m), where n is the number of contravariant indices, m is the number of covariant indices, and n + m gives the total order of the tensor. For example, a bilinear form is the same thing as a (0, 2)-tensor; an inner product is an example of a (0, 2)-tensor, but not all (0, 2)-tensors are inner products. In the (0, M)-entry of the table, M denotes the dimensionality of the underlying vector space or manifold because for each dimension of the space, a separate index is needed to select that dimension to get a maximally covariant antisymmetric tensor.
Raising an index on an (n, m)-tensor produces an (n + 1, m − 1)-tensor; this corresponds to moving diagonally down and to the left on the table. Symmetrically, lowering an index corresponds to moving diagonally up and to the right on the table. Contraction of an upper with a lower index of an (n, m)-tensor produces an (n − 1, m − 1)-tensor; this corresponds to moving diagonally up and to the left on the table.
== Properties ==
Assuming a basis of a real vector space, e.g., a coordinate frame in the ambient space, a tensor can be represented as an organized multidimensional array of numerical values with respect to this specific basis. Changing the basis transforms the values in the array in a characteristic way that allows tensors to be defined as objects adhering to this transformational behavior. For example, there are invariants of tensors that must be preserved under any change of basis, so that only certain multidimensional arrays of numbers represent a tensor. Compare this to the array representing
{\displaystyle \varepsilon _{ijk}}
which is not a tensor, because of the sign change under transformations that change the orientation.
Because the components of vectors and their duals transform differently under the change of their dual bases, there is a covariant and/or contravariant transformation law that relates the arrays, which represent the tensor with respect to one basis and that with respect to the other one. The numbers of, respectively, vectors: n (contravariant indices) and dual vectors: m (covariant indices) in the input and output of a tensor determine the type (or valence) of the tensor, a pair of natural numbers (n, m), which determine the precise form of the transformation law. The order of a tensor is the sum of these two numbers.
The order (also degree or rank) of a tensor is thus the sum of the orders of its arguments plus the order of the resulting tensor. This is also the dimensionality of the array of numbers needed to represent the tensor with respect to a specific basis, or equivalently, the number of indices needed to label each component in that array. For example, in a fixed basis, a standard linear map that maps a vector to a vector is represented by a matrix (a 2-dimensional array), and is therefore a 2nd-order tensor. A simple vector can be represented as a 1-dimensional array, and is therefore a 1st-order tensor. Scalars are simple numbers and are thus 0th-order tensors. Thus the tensor representing the scalar product, which takes two vectors and yields a scalar, has order 2 + 0 = 2, the same as the stress tensor, which takes one vector and returns another (1 + 1 = 2). The
{\displaystyle \varepsilon _{ijk}}
-symbol, mapping two vectors to one vector, would have order 2 + 1 = 3.
The collection of tensors on a vector space and its dual forms a tensor algebra, which allows products of arbitrary tensors. Simple applications of tensors of order 2, which can be represented as a square matrix, can be solved by clever arrangement of transposed vectors and by applying the rules of matrix multiplication, but the tensor product should not be confused with this.
== Notation ==
There are several notational systems that are used to describe tensors and perform calculations involving them.
=== Ricci calculus ===
Ricci calculus is the modern formalism and notation for tensor indices: indicating inner and outer products, covariance and contravariance, summations of tensor components, symmetry and antisymmetry, and partial and covariant derivatives.
=== Einstein summation convention ===
The Einstein summation convention dispenses with writing summation signs, leaving the summation implicit. Any repeated index symbol is summed over: if the index i is used twice in a given term of a tensor expression, it means that the term is to be summed for all i. Several distinct pairs of indices may be summed this way.
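The convention maps directly onto NumPy's `einsum`, whose subscript strings follow the same rule: a repeated index is summed. A small illustrative sketch:

```python
import numpy as np

# The Einstein convention "A^i_j v^j" sums over the repeated index j;
# numpy's einsum subscript strings use exactly this convention.
A = np.arange(9.0).reshape(3, 3)   # components A^i_j
v = np.array([1.0, 2.0, 3.0])      # components v^j

w = np.einsum('ij,j->i', A, v)     # w^i = A^i_j v^j, with j summed
assert np.allclose(w, A @ v)

# Several distinct repeated pairs may be summed at once:
# the full contraction A^i_j B^j_i is the trace of the matrix product.
B = A.T
s = np.einsum('ij,ji->', A, B)
assert np.isclose(s, np.trace(A @ B))
```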
=== Penrose graphical notation ===
Penrose graphical notation is a diagrammatic notation which replaces the symbols for tensors with shapes, and their indices by lines and curves. It is independent of basis elements, and requires no symbols for the indices.
=== Abstract index notation ===
The abstract index notation is a way to write tensors such that the indices are no longer thought of as numerical, but rather are indeterminates. This notation captures the expressiveness of indices and the basis-independence of index-free notation.
=== Component-free notation ===
A component-free treatment of tensors uses notation that emphasises that tensors do not rely on any basis, and is defined in terms of the tensor product of vector spaces.
== Operations ==
There are several operations on tensors that again produce a tensor. The linear nature of tensors implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the scaling of a vector. On components, these operations are simply performed component-wise. These operations do not change the type of the tensor, but there are also operations that produce a tensor of different type.
=== Tensor product ===
The tensor product takes two tensors, S and T, and produces a new tensor, S ⊗ T, whose order is the sum of the orders of the original tensors. When described as multilinear maps, the tensor product simply multiplies the two tensors, i.e.,
{\displaystyle (S\otimes T)(v_{1},\ldots ,v_{n},v_{n+1},\ldots ,v_{n+m})=S(v_{1},\ldots ,v_{n})T(v_{n+1},\ldots ,v_{n+m}),}
which again produces a map that is linear in all its arguments. On components, the effect is to multiply the components of the two input tensors pairwise, i.e.,
{\displaystyle (S\otimes T)_{j_{1}\ldots j_{k}j_{k+1}\ldots j_{k+m}}^{i_{1}\ldots i_{l}i_{l+1}\ldots i_{l+n}}=S_{j_{1}\ldots j_{k}}^{i_{1}\ldots i_{l}}T_{j_{k+1}\ldots j_{k+m}}^{i_{l+1}\ldots i_{l+n}}.}
If S is of type (l, k) and T is of type (n, m), then the tensor product S ⊗ T has type (l + n, k + m).
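On arrays of components, this pairwise multiplication is an outer product. A short sketch with made-up components, assuming a 3-dimensional space:

```python
import numpy as np

# Tensor product of a type (1,1) tensor S and a type (0,2) tensor T:
# the components (S⊗T)^i_{jkl} = S^i_j T_kl form a type (1,3) tensor
# of order 1 + 1 + 0 + 2 = 4.
rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))      # S^i_j   (type (1,1))
T = rng.standard_normal((3, 3))      # T_kl    (type (0,2))

ST = np.einsum('ij,kl->ijkl', S, T)  # pairwise products of components
assert ST.shape == (3, 3, 3, 3)      # order 4, as expected

# The same operation expressed as an outer product of the arrays.
assert np.allclose(ST, np.multiply.outer(S, T))
```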
=== Contraction ===
Tensor contraction is an operation that reduces a type (n, m) tensor to a type (n − 1, m − 1) tensor, of which the trace is a special case. It thereby reduces the total order of a tensor by two. The operation is achieved by summing components for which one specified contravariant index is the same as one specified covariant index to produce a new component. Components for which those two indices are different are discarded. For example, a (1, 1)-tensor
{\displaystyle T_{i}^{j}}
can be contracted to a scalar through
{\displaystyle T_{i}^{i}}
, where the summation is again implied. When the (1, 1)-tensor is interpreted as a linear map, this operation is known as the trace.
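In components this is simply a sum over the diagonal, as the following sketch shows for a small made-up (1, 1)-tensor:

```python
import numpy as np

# Contracting the (1,1)-tensor T^i_j over i = j gives the scalar T^i_i,
# i.e. the trace of the corresponding linear map.
T = np.array([[1.0, 2.0],
              [3.0, 4.0]])          # components T^i_j

scalar = np.einsum('ii->', T)       # sum over the repeated index
assert scalar == np.trace(T)        # 1 + 4 = 5
```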
The contraction is often used in conjunction with the tensor product to contract an index from each tensor.
The contraction can also be understood using the definition of a tensor as an element of a tensor product of copies of the space V with the space V∗ by first decomposing the tensor into a linear combination of simple tensors, and then applying a factor from V∗ to a factor from V. For example, a tensor
{\displaystyle T\in V\otimes V\otimes V^{*}}
can be written as a linear combination
{\displaystyle T=v_{1}\otimes w_{1}\otimes \alpha _{1}+v_{2}\otimes w_{2}\otimes \alpha _{2}+\cdots +v_{N}\otimes w_{N}\otimes \alpha _{N}.}
The contraction of T on the first and last slots is then the vector
{\displaystyle \alpha _{1}(v_{1})w_{1}+\alpha _{2}(v_{2})w_{2}+\cdots +\alpha _{N}(v_{N})w_{N}.}
In a vector space with an inner product (also known as a metric) g, the term contraction is used for removing two contravariant or two covariant indices by forming a trace with the metric tensor or its inverse. For example, a (2, 0)-tensor
{\displaystyle T^{ij}}
can be contracted to a scalar through
{\displaystyle T^{ij}g_{ij}}
(yet again assuming the summation convention).
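A numeric sketch of this metric contraction, with an illustrative diagonal metric (not any particular physical one):

```python
import numpy as np

# Contracting both indices of a (2,0)-tensor T^{ij} with the metric
# g_{ij} yields the scalar T^{ij} g_{ij}.
g = np.diag([1.0, 2.0, 3.0])        # made-up metric components g_ij
T = np.arange(9.0).reshape(3, 3)    # components T^ij

s = np.einsum('ij,ij->', T, g)      # T^{ij} g_{ij}, both pairs summed
assert np.isclose(s, np.sum(T * g)) # same as the elementwise-product sum
```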
=== Raising or lowering an index ===
When a vector space is equipped with a nondegenerate bilinear form (or metric tensor as it is often called in this context), operations can be defined that convert a contravariant (upper) index into a covariant (lower) index and vice versa. A metric tensor is a (symmetric) (0, 2)-tensor; it is thus possible to contract an upper index of a tensor with one of the lower indices of the metric tensor in the product. This produces a new tensor with the same index structure as the previous tensor, but with a lower index generally shown in the same position as the contracted upper index. This operation is graphically known as lowering an index.
Conversely, the inverse operation can be defined, and is called raising an index. This is equivalent to a similar contraction on the product with a (2, 0)-tensor. This inverse metric tensor has components that are the matrix inverse of those of the metric tensor.
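Since the two operations are inverse to each other, lowering an index and then raising it again recovers the original components. A sketch with a toy diagonal metric:

```python
import numpy as np

# Lowering and raising an index with a metric g and its inverse.
# The metric here is an arbitrary diagonal example for illustration.
g = np.diag([1.0, 4.0, 9.0])                 # metric g_ij
g_inv = np.linalg.inv(g)                     # inverse metric g^ij

v_up = np.array([1.0, 2.0, 3.0])             # contravariant components v^i
v_down = np.einsum('ij,j->i', g, v_up)       # lowering: v_i = g_ij v^j

# Raising with the inverse metric undoes the lowering.
v_up_again = np.einsum('ij,j->i', g_inv, v_down)
assert np.allclose(v_up_again, v_up)
```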
== Applications ==
=== Continuum mechanics ===
Important examples are provided by continuum mechanics. The stresses inside a solid body or fluid are described by a tensor field. The stress tensor and strain tensor are both second-order tensor fields, and are related in a general linear elastic material by a fourth-order elasticity tensor field. In detail, the tensor quantifying stress in a 3-dimensional solid object has components that can be conveniently represented as a 3 × 3 array. The three faces of a cube-shaped infinitesimal volume segment of the solid are each subject to some given force. The force's vector components are also three in number. Thus, 3 × 3, or 9 components are required to describe the stress at this cube-shaped infinitesimal segment. Within the bounds of this solid is a whole mass of varying stress quantities, each requiring 9 quantities to describe. Thus, a second-order tensor is needed.
If a particular surface element inside the material is singled out, the material on one side of the surface will apply a force on the other side. In general, this force will not be orthogonal to the surface, but it will depend on the orientation of the surface in a linear manner. This is described by a tensor of type (2, 0), in linear elasticity, or more precisely by a tensor field of type (2, 0), since the stresses may vary from point to point.
=== Other examples from physics ===
Common applications include:
Electromagnetic tensor (or Faraday tensor) in electromagnetism
Finite deformation tensors for describing deformations and strain tensor for strain in continuum mechanics
Permittivity and electric susceptibility are tensors in anisotropic media
Four-tensors in general relativity (e.g. stress–energy tensor), used to represent momentum fluxes
Spherical tensor operators are the eigenfunctions of the quantum angular momentum operator in spherical coordinates
Diffusion tensors, the basis of diffusion tensor imaging, represent rates of diffusion in biological environments
Quantum mechanics and quantum computing utilize tensor products for combination of quantum states
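The last item in the list above can be sketched concretely: the joint state of two qubits is the tensor (Kronecker) product of the single-qubit state vectors.

```python
import numpy as np

# Combining quantum states by a tensor product: the two-qubit state
# |0> ⊗ |+> lives in the 4-dimensional product space C^2 ⊗ C^2.
zero = np.array([1.0, 0.0])                   # |0>
plus = np.array([1.0, 1.0]) / np.sqrt(2)      # |+> = (|0> + |1>)/√2

state = np.kron(zero, plus)                   # tensor product of the states
assert state.shape == (4,)
assert np.isclose(np.linalg.norm(state), 1.0) # product of unit states is a unit state
```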
=== Computer vision and optics ===
The concept of a tensor of order two is often conflated with that of a matrix. Tensors of higher order do however capture ideas important in science and engineering, as has been shown successively in numerous areas as they develop. This happens, for instance, in the field of computer vision, with the trifocal tensor generalizing the fundamental matrix.
The field of nonlinear optics studies the changes to material polarization density under extreme electric fields. The polarization waves generated are related to the generating electric fields through the nonlinear susceptibility tensor. If the polarization P is not linearly proportional to the electric field E, the medium is termed nonlinear. To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is given by a Taylor series in E whose coefficients are the nonlinear susceptibilities:
{\displaystyle {\frac {P_{i}}{\varepsilon _{0}}}=\sum _{j}\chi _{ij}^{(1)}E_{j}+\sum _{jk}\chi _{ijk}^{(2)}E_{j}E_{k}+\sum _{jk\ell }\chi _{ijk\ell }^{(3)}E_{j}E_{k}E_{\ell }+\cdots .}
Here
{\displaystyle \chi ^{(1)}}
is the linear susceptibility,
{\displaystyle \chi ^{(2)}}
gives the Pockels effect and second harmonic generation, and
{\displaystyle \chi ^{(3)}}
gives the Kerr effect. This expansion shows the way higher-order tensors arise naturally in the subject matter.
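The truncated expansion is straightforward to evaluate on components. A sketch with made-up susceptibility components (no physical material is intended), keeping terms up to second order:

```python
import numpy as np

# P_i / ε0 = χ^(1)_ij E_j + χ^(2)_ijk E_j E_k + ...,  truncated at χ^(2).
rng = np.random.default_rng(2)
chi1 = rng.standard_normal((3, 3))       # linear susceptibility χ^(1), order 2
chi2 = rng.standard_normal((3, 3, 3))    # second-order susceptibility χ^(2), order 3
E = np.array([0.1, 0.0, 0.2])            # applied electric field

P_over_eps0 = (np.einsum('ij,j->i', chi1, E)        # linear response
               + np.einsum('ijk,j,k->i', chi2, E, E))  # second-harmonic term
assert P_over_eps0.shape == (3,)         # P is a vector, as it must be
```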
=== Machine learning ===
The properties of tensors, especially tensor decomposition, have enabled their use in machine learning to embed higher dimensional data in artificial neural networks. This notion of tensor differs significantly from that in other areas of mathematics and physics, in the sense that a tensor is usually regarded as a numerical quantity in a fixed basis, and the dimension of the spaces along the different axes of the tensor need not be the same.
== Generalizations ==
=== Tensor products of vector spaces ===
The vector spaces of a tensor product need not be the same, and sometimes the elements of such a more general tensor product are called "tensors". For example, an element of the tensor product space V ⊗ W is a second-order "tensor" in this more general sense, and an order-d tensor may likewise be defined as an element of a tensor product of d different vector spaces. A type (n, m) tensor, in the sense defined previously, is also a tensor of order n + m in this more general sense. The concept of tensor product can be extended to arbitrary modules over a ring.
=== Tensors in infinite dimensions ===
The notion of a tensor can be generalized in a variety of ways to infinite dimensions. One, for instance, is via the tensor product of Hilbert spaces. Another way of generalizing the idea of tensor, common in nonlinear analysis, is via the multilinear maps definition where instead of using finite-dimensional vector spaces and their algebraic duals, one uses infinite-dimensional Banach spaces and their continuous dual. Tensors thus live naturally on Banach manifolds and Fréchet manifolds.
=== Tensor densities ===
Suppose that a homogeneous medium fills R3, so that the density of the medium is described by a single scalar value ρ in kg⋅m−3. The mass, in kg, of a region Ω is obtained by multiplying ρ by the volume of the region Ω, or equivalently integrating the constant ρ over the region:
{\displaystyle m=\int _{\Omega }\rho \,dx\,dy\,dz,}
where the Cartesian coordinates x, y, z are measured in m. If the units of length are changed into cm, then the numerical values of the coordinate functions must be rescaled by a factor of 100:
{\displaystyle x'=100x,\quad y'=100y,\quad z'=100z.}
The numerical value of the density ρ must then also transform by 100−3 m3/cm3 to compensate, so that the numerical value of the mass in kg is still given by integral of
{\displaystyle \rho \,dx\,dy\,dz}
. Thus
{\displaystyle \rho '=100^{-3}\rho }
(in units of kg⋅cm−3).
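This compensation can be checked with a one-line computation, sketched here for the homogeneous case:

```python
import numpy as np

# Scalar-density rule: rescaling coordinates by 100 (m -> cm) multiplies
# the volume element by 100^3, so the density must pick up the
# reciprocal factor 100^-3 for the mass to stay the same.
rho = 2.5                 # density in kg/m^3 (homogeneous medium)
volume_m3 = 3.0           # volume of a region Ω in m^3
mass = rho * volume_m3    # mass in kg

factor = 100.0 ** 3                   # determinant of the coordinate change
rho_cm = rho / factor                 # transformed density, kg/cm^3
volume_cm3 = volume_m3 * factor       # same region measured in cm^3
assert np.isclose(rho_cm * volume_cm3, mass)  # the mass is invariant
```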
More generally, if the Cartesian coordinates x, y, z undergo a linear transformation, then the numerical value of the density ρ must change by a factor of the reciprocal of the absolute value of the determinant of the coordinate transformation, so that the integral remains invariant, by the change of variables formula for integration. Such a quantity that scales by the reciprocal of the absolute value of the determinant of the coordinate transition map is called a scalar density. To model a non-constant density, ρ is a function of the variables x, y, z (a scalar field), and under a curvilinear change of coordinates, it transforms by the reciprocal of the Jacobian of the coordinate change. For more on the intrinsic meaning, see Density on a manifold.
A tensor density transforms like a tensor under a coordinate change, except that it in addition picks up a factor of the absolute value of the determinant of the coordinate transition:
{\displaystyle T_{j'_{1}\dots j'_{q}}^{i'_{1}\dots i'_{p}}[\mathbf {f} \cdot R]=\left|\det R\right|^{-w}\left(R^{-1}\right)_{i_{1}}^{i'_{1}}\cdots \left(R^{-1}\right)_{i_{p}}^{i'_{p}}T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots ,i_{p}}[\mathbf {f} ]R_{j'_{1}}^{j_{1}}\cdots R_{j'_{q}}^{j_{q}}.}
Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor. An example of a tensor density is the current density of electromagnetism.
Under an affine transformation of the coordinates, a tensor transforms by the linear part of the transformation itself (or its inverse) on each index. These come from the rational representations of the general linear group. But this is not quite the most general linear transformation law that such an object may have: tensor densities are non-rational, but are still semisimple representations. A further class of transformations come from the logarithmic representation of the general linear group, a reducible but not semisimple representation, consisting of an (x, y) ∈ R2 with the transformation law
{\displaystyle (x,y)\mapsto (x+y\log \left|\det R\right|,y).}
=== Geometric objects ===
The transformation law for a tensor behaves as a functor on the category of admissible coordinate systems, under general linear transformations (or, other transformations within some class, such as local diffeomorphisms). This makes a tensor a special case of a geometrical object, in the technical sense that it is a function of the coordinate system transforming functorially under coordinate changes. Examples of objects obeying more general kinds of transformation laws are jets and, more generally still, natural bundles.
=== Spinors ===
When changing from one orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not simply connected (see orientation entanglement and plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1. A spinor is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant.
Spinors are elements of the spin representation of the rotation group, while tensors are elements of its tensor representations. Other classical groups have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well.
== See also ==
The dictionary definition of tensor at Wiktionary
Array data type, for tensor storage and manipulation
Bitensor
=== Foundational ===
=== Applications ===
== Explanatory notes ==
== References ==
=== Specific ===
=== General ===
This article incorporates material from tensor on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
== External links ==
In mathematics, an algebra over a field (often simply called an algebra) is a vector space equipped with a bilinear product. Thus, an algebra is an algebraic structure consisting of a set together with operations of multiplication and addition and scalar multiplication by elements of a field and satisfying the axioms implied by "vector space" and "bilinear".
The multiplication operation in an algebra may or may not be associative, leading to the notions of associative algebras where associativity of multiplication is assumed, and non-associative algebras, where associativity is not assumed (but not excluded, either). Given an integer n, the ring of real square matrices of order n is an example of an associative algebra over the field of real numbers under matrix addition and matrix multiplication since matrix multiplication is associative. Three-dimensional Euclidean space with multiplication given by the vector cross product is an example of a nonassociative algebra over the field of real numbers since the vector cross product is nonassociative, satisfying the Jacobi identity instead.
An algebra is unital or unitary if it has an identity element with respect to the multiplication. The ring of real square matrices of order n forms a unital algebra since the identity matrix of order n is the identity element with respect to matrix multiplication. It is an example of a unital associative algebra, a (unital) ring that is also a vector space.
Many authors use the term algebra to mean associative algebra, or unital associative algebra, or in some subjects such as algebraic geometry, unital associative commutative algebra.
Replacing the field of scalars by a commutative ring leads to the more general notion of an algebra over a ring. Algebras are not to be confused with vector spaces equipped with a bilinear form, like inner product spaces, as, for such a space, the result of a product is not in the space, but rather in the field of coefficients.
== Definition and motivation ==
=== Motivating examples ===
=== Definition ===
Let K be a field, and let A be a vector space over K equipped with an additional binary operation from A × A to A, denoted here by · (that is, if x and y are any two elements of A, then x · y is an element of A that is called the product of x and y). Then A is an algebra over K if the following identities hold for all elements x, y, z in A, and all elements (often called scalars) a and b in K:
Right distributivity: (x + y) · z = x · z + y · z
Left distributivity: z · (x + y) = z · x + z · y
Compatibility with scalars: (ax) · (by) = (ab) (x · y).
These three axioms are another way of saying that the binary operation is bilinear. An algebra over K is sometimes also called a K-algebra, and K is called the base field of A. The binary operation is often referred to as multiplication in A. The convention adopted in this article is that multiplication of elements of an algebra is not necessarily associative, although some authors use the term algebra to refer to an associative algebra.
When a binary operation on a vector space is commutative, left distributivity and right distributivity are equivalent, and, in this case, only one distributivity requires a proof. In general, for non-commutative operations left distributivity and right distributivity are not equivalent, and require separate proofs.
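The axioms can be spot-checked numerically for a concrete non-associative example, the cross product on R³ mentioned above (a check on random vectors, not a proof):

```python
import numpy as np

# R^3 with the cross product is an algebra over R: the product is bilinear.
rng = np.random.default_rng(3)
x, y, z = rng.standard_normal((3, 3))   # three random vectors in R^3
a, b = 2.0, -1.5                        # scalars

cross = np.cross
assert np.allclose(cross(x + y, z), cross(x, z) + cross(y, z))  # right distributivity
assert np.allclose(cross(z, x + y), cross(z, x) + cross(z, y))  # left distributivity
assert np.allclose(cross(a * x, b * y), a * b * cross(x, y))    # scalar compatibility

# The product is not associative; instead it satisfies the Jacobi identity.
jacobi = (cross(x, cross(y, z)) + cross(y, cross(z, x))
          + cross(z, cross(x, y)))
assert np.allclose(jacobi, np.zeros(3))
```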
== Basic concepts ==
=== Algebra homomorphisms ===
Given K-algebras A and B, a homomorphism of K-algebras or K-algebra homomorphism is a K-linear map f: A → B such that f(xy) = f(x) f(y) for all x, y in A. If A and B are unital, then a homomorphism satisfying f(1A) = 1B is said to be a unital homomorphism. The space of all K-algebra homomorphisms between A and B is frequently written as
{\displaystyle \mathbf {Hom} _{K{\text{-alg}}}(A,B).}
A K-algebra isomorphism is a bijective K-algebra homomorphism.
=== Subalgebras and ideals ===
A subalgebra of an algebra over a field K is a linear subspace that has the property that the product of any two of its elements is again in the subspace. In other words, a subalgebra of an algebra is a non-empty subset of elements that is closed under addition, multiplication, and scalar multiplication. In symbols, we say that a subset L of a K-algebra A is a subalgebra if for every x, y in L and c in K, we have that x · y, x + y, and cx are all in L.
In the above example of the complex numbers viewed as a two-dimensional algebra over the real numbers, the one-dimensional real line is a subalgebra.
A left ideal of a K-algebra is a linear subspace that has the property that any element of the subspace multiplied on the left by any element of the algebra produces an element of the subspace. In symbols, we say that a subset L of a K-algebra A is a left ideal if for every x and y in L, z in A and c in K, we have the following three statements.
x + y is in L (L is closed under addition),
cx is in L (L is closed under scalar multiplication),
z · x is in L (L is closed under left multiplication by arbitrary elements).
If (3) were replaced with x · z is in L, then this would define a right ideal. A two-sided ideal is a subset that is both a left and a right ideal. The term ideal on its own is usually taken to mean a two-sided ideal. Of course when the algebra is commutative, then all of these notions of ideal are equivalent. Conditions (1) and (2) together are equivalent to L being a linear subspace of A. It follows from condition (3) that every left or right ideal is a subalgebra.
This definition is different from the definition of an ideal of a ring, in that here we require the condition (2). Of course if the algebra is unital, then condition (3) implies condition (2).
=== Extension of scalars ===
If we have a field extension F/K, which is to say a bigger field F that contains K, then there is a natural way to construct an algebra over F from any algebra over K. It is the same construction one uses to make a vector space over a bigger field, namely the tensor product
{\displaystyle V_{F}:=V\otimes _{K}F}
. So if A is an algebra over K, then
{\displaystyle A_{F}}
is an algebra over F.
== Kinds of algebras and examples ==
Algebras over fields come in many different types. These types are specified by insisting on some further axioms, such as commutativity or associativity of the multiplication operation, which are not required in the broad definition of an algebra. The theories corresponding to the different types of algebras are often very different.
=== Unital algebra ===
An algebra is unital or unitary if it has a unit or identity element I with Ix = x = xI for all x in the algebra.
=== Zero algebra ===
An algebra is called a zero algebra if uv = 0 for all u, v in the algebra, not to be confused with the algebra with one element. It is inherently non-unital (except in the case of only one element), associative and commutative.
A unital zero algebra is the direct sum
{\displaystyle K\oplus V}
of a field
{\displaystyle K}
and a
{\displaystyle K}
-vector space
{\displaystyle V}
, equipped with the only multiplication that is zero on the vector space (or module) and makes it a unital algebra.
More precisely, every element of the algebra may be uniquely written as
{\displaystyle k+v}
with
{\displaystyle k\in K}
and
{\displaystyle v\in V}
, and the product is the only bilinear operation such that
{\displaystyle vw=0}
for every
{\displaystyle v}
and
{\displaystyle w}
in
{\displaystyle V}
. So, if
{\displaystyle k_{1},k_{2}\in K}
and
{\displaystyle v_{1},v_{2}\in V}
, one has
{\displaystyle (k_{1}+v_{1})(k_{2}+v_{2})=k_{1}k_{2}+(k_{1}v_{2}+k_{2}v_{1}).}
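The product rule above can be sketched in code. This is an illustrative implementation, not from the article: an element k + v of the unital zero algebra is represented as a pair (k, v), with v a tuple of vector coordinates.

```python
# Multiplication in the unital zero algebra K ⊕ V, with K = float and
# V = R^n represented as a tuple. Products of two vector parts vanish.

def uza_mul(a, b):
    """(k1 + v1)(k2 + v2) = k1*k2 + (k1*v2 + k2*v1)."""
    k1, v1 = a
    k2, v2 = b
    return (k1 * k2, tuple(k1 * y + k2 * x for x, y in zip(v1, v2)))

# Dual numbers: V is one-dimensional, so (1 + 2ε)(3 + 4ε) = 3 + 10ε
print(uza_mul((1.0, (2.0,)), (3.0, (4.0,))))  # (3.0, (10.0,))
```

With a one-dimensional V this is exactly the multiplication of dual numbers mentioned below, since the ε·ε term is discarded.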
A classical example of a unital zero algebra is the algebra of dual numbers, the unital zero R-algebra built from a one-dimensional real vector space.
This definition extends verbatim to the definition of a unital zero algebra over a commutative ring, with the replacement of "field" and "vector space" with "commutative ring" and "module".
Unital zero algebras allow the unification of the theory of submodules of a given module and the theory of ideals of a unital algebra. Indeed, the submodules of a module
{\displaystyle V}
correspond exactly to the ideals of
{\displaystyle K\oplus V}
that are contained in
{\displaystyle V}
.
For example, the theory of Gröbner bases was introduced by Bruno Buchberger for ideals in a polynomial ring R = K[x1, ..., xn] over a field. The construction of the unital zero algebra over a free R-module extends this theory to a Gröbner basis theory for submodules of a free module. This extension makes it possible to compute a Gröbner basis of a submodule using, without any modification, any algorithm or software for computing Gröbner bases of ideals.
Similarly, unital zero algebras allow one to deduce the Lasker–Noether theorem for modules (over a commutative ring) directly from the original Lasker–Noether theorem for ideals.
=== Associative algebra ===
Examples of associative algebras include
the algebra of all n-by-n matrices over a field (or commutative ring) K. Here the multiplication is ordinary matrix multiplication.
group algebras, where a group serves as a basis of the vector space and algebra multiplication extends group multiplication.
the commutative algebra K[x] of all polynomials over K (see polynomial ring).
algebras of functions, such as the R-algebra of all real-valued continuous functions defined on the interval [0,1], or the C-algebra of all holomorphic functions defined on some fixed open set in the complex plane. These are also commutative.
incidence algebras, which are built on certain partially ordered sets.
algebras of linear operators, for example on a Hilbert space. Here the algebra multiplication is given by the composition of operators. These algebras also carry a topology; many of them are defined on an underlying Banach space, which turns them into Banach algebras. If an involution is given as well, we obtain B*-algebras and C*-algebras. These are studied in functional analysis.
=== Non-associative algebra ===
A non-associative algebra (or distributive algebra) over a field K is a K-vector space A equipped with a K-bilinear map
{\displaystyle A\times A\rightarrow A}
. The usage of "non-associative" here is meant to convey that associativity is not assumed, but it does not mean it is prohibited – that is, it means "not necessarily associative".
Examples detailed in the main article include:
Euclidean space R3 with multiplication given by the vector cross product
Octonions
Lie algebras
Jordan algebras
Alternative algebras
Flexible algebras
Power-associative algebras
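The first example above can be checked directly: the cross product on R³ fails associativity. A small illustrative script, not from the article:

```python
# The cross product gives a non-associative algebra on R^3:
# (i × i) × j ≠ i × (i × j).

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

i, j = (1, 0, 0), (0, 1, 0)
left = cross(cross(i, i), j)   # (0, 0, 0) × j = (0, 0, 0)
right = cross(i, cross(i, j))  # i × k = (0, -1, 0)
print(left, right)             # the two products differ
```

The cross product is, however, anticommutative and satisfies the Jacobi identity, making R³ with this product a Lie algebra.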
== Algebras and rings ==
The definition of an associative K-algebra with unit is also frequently given in an alternative way. In this case, an algebra over a field K is a ring A together with a ring homomorphism
{\displaystyle \eta \colon K\to Z(A),}
where Z(A) is the center of A. Since η is a ring homomorphism, one must have either that A is the zero ring, or that η is injective. This definition is equivalent to that above, with scalar multiplication
{\displaystyle K\times A\to A}
given by
{\displaystyle (k,a)\mapsto \eta (k)a.}
Given two such associative unital K-algebras A and B, a unital K-algebra homomorphism f: A → B is a ring homomorphism that commutes with the scalar multiplication defined by η, which one may write as
{\displaystyle f(ka)=kf(a)}
for all
{\displaystyle k\in K}
and
{\displaystyle a\in A}
. In other words, the following diagram commutes:
{\displaystyle {\begin{matrix}&&K&&\\&\eta _{A}\swarrow &\,&\eta _{B}\searrow &\\A&&{\begin{matrix}f\\\longrightarrow \end{matrix}}&&B\end{matrix}}}
== Structure coefficients ==
For algebras over a field, the bilinear multiplication from A × A to A is completely determined by the multiplication of basis elements of A.
Conversely, once a basis for A has been chosen, the products of basis elements can be set arbitrarily, and then extended in a unique way to a bilinear operator on A, i.e., so the resulting multiplication satisfies the algebra laws.
Thus, given the field K, any finite-dimensional algebra can be specified up to isomorphism by giving its dimension (say n), and specifying n³ structure coefficients ci,j,k, which are scalars.
These structure coefficients determine the multiplication in A via the following rule:
{\displaystyle \mathbf {e} _{i}\mathbf {e} _{j}=\sum _{k=1}^{n}c_{i,j,k}\mathbf {e} _{k}}
where e1,...,en form a basis of A.
Note however that several different sets of structure coefficients can give rise to isomorphic algebras.
In mathematical physics, the structure coefficients are generally written with upper and lower indices, so as to distinguish their transformation properties under coordinate transformations. Specifically, lower indices are covariant indices, and transform via pullbacks, while upper indices are contravariant, transforming under pushforwards. Thus, the structure coefficients are often written ci,jk, and their defining rule is written using the Einstein notation as
{\displaystyle \mathbf {e} _{i}\mathbf {e} _{j}={c_{i,j}}^{k}\mathbf {e} _{k}.}
If this is applied to vectors written in index notation, it becomes
{\displaystyle (xy)^{k}={c_{i,j}}^{k}x^{i}y^{j}.}
If K is only a commutative ring and not a field, then the same process works if A is a free module over K. If it is not, then the multiplication is still completely determined by its action on a set that spans A; however, the structure constants cannot be specified arbitrarily in this case, and knowing only the structure constants does not specify the algebra up to isomorphism.
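The rule above can be made concrete with a small sketch (our own example, not from the article): taking C as a 2-dimensional algebra over R with basis (1, i), the structure coefficients recover complex multiplication.

```python
# Multiplication via structure coefficients c[i][j][k], for C viewed as
# a 2-dimensional R-algebra with basis (e1, e2) = (1, i), so e2*e2 = -e1.

c = [[[1, 0], [0, 1]],   # e1*e1 = e1,  e1*e2 = e2
     [[0, 1], [-1, 0]]]  # e2*e1 = e2,  e2*e2 = -e1

def mul(x, y):
    """(xy)_k = sum over i, j of c[i][j][k] * x_i * y_j."""
    return tuple(sum(c[i][j][k] * x[i] * y[j]
                     for i in range(2) for j in range(2))
                 for k in range(2))

# (1 + 2i)(3 + 4i) = -5 + 10i
print(mul((1, 2), (3, 4)))  # (-5, 10)
```

Changing the single coefficient for e2·e2 from -e1 to e1 or 0 yields the other two-dimensional algebras classified in the next section, illustrating that the structure constants determine the algebra.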
== Classification of low-dimensional unital associative algebras over the complex numbers ==
Two-dimensional, three-dimensional and four-dimensional unital associative algebras over the field of complex numbers were completely classified up to isomorphism by Eduard Study.
There exist two such two-dimensional algebras. Each algebra consists of linear combinations (with complex coefficients) of two basis elements, 1 (the identity element) and a. According to the definition of an identity element,
{\displaystyle \textstyle 1\cdot 1=1\,,\quad 1\cdot a=a\,,\quad a\cdot 1=a\,.}
It remains to specify
{\displaystyle \textstyle aa=1}
for the first algebra,
{\displaystyle \textstyle aa=0}
for the second algebra.
There exist five such three-dimensional algebras. Each algebra consists of linear combinations of three basis elements, 1 (the identity element), a and b. Taking into account the definition of an identity element, it is sufficient to specify
{\displaystyle \textstyle aa=a\,,\quad bb=b\,,\quad ab=ba=0}
for the first algebra,
{\displaystyle \textstyle aa=a\,,\quad bb=0\,,\quad ab=ba=0}
for the second algebra,
{\displaystyle \textstyle aa=b\,,\quad bb=0\,,\quad ab=ba=0}
for the third algebra,
{\displaystyle \textstyle aa=1\,,\quad bb=0\,,\quad ab=-ba=b}
for the fourth algebra,
{\displaystyle \textstyle aa=0\,,\quad bb=0\,,\quad ab=ba=0}
for the fifth algebra.
The fourth of these algebras is non-commutative, and the others are commutative.
== Generalization: algebra over a ring ==
In some areas of mathematics, such as commutative algebra, it is common to consider the more general concept of an algebra over a ring, where a commutative ring R replaces the field K. The only part of the definition that changes is that A is assumed to be an R-module (instead of a K-vector space).
=== Associative algebras over rings ===
A ring A is always an associative algebra over its center, and over the integers. A classical example of an algebra over its center is the split-biquaternion algebra, which is isomorphic to
{\displaystyle \mathbb {H} \times \mathbb {H} }
, the direct product of two quaternion algebras. The center of that ring is
{\displaystyle \mathbb {R} \times \mathbb {R} }
, and hence it has the structure of an algebra over its center, which is not a field. Note that the split-biquaternion algebra is also naturally an 8-dimensional
{\displaystyle \mathbb {R} }
-algebra.
In commutative algebra, if A is a commutative ring, then any unital ring homomorphism
{\displaystyle R\to A}
defines an R-module structure on A, and this is what is known as the R-algebra structure. So a ring comes with a natural
{\displaystyle \mathbb {Z} }
-module structure, since one can take the unique homomorphism
{\displaystyle \mathbb {Z} \to A}
. On the other hand, not all rings can be given the structure of an algebra over a field (for example the integers). See Field with one element for a description of an attempt to give every ring a structure that behaves like an algebra over a field.
== See also ==
Algebra over an operad
Alternative algebra
Clifford algebra
Composition algebra
Differential algebra
Free algebra
Geometric algebra
Max-plus algebra
Mutation (algebra)
Operator algebra
Zariski's lemma
== Notes ==
== References ==
Hazewinkel, Michiel; Gubareni, Nadiya; Kirichenko, Vladimir V. (2004). Algebras, rings and modules. Vol. 1. Springer. ISBN 1-4020-2690-0.
In mathematics, any vector space
{\displaystyle V}
has a corresponding dual vector space (or just dual space for short) consisting of all linear forms on
{\displaystyle V,}
together with the vector space structure of pointwise addition and scalar multiplication by constants.
The dual space as defined above is defined for all vector spaces, and to avoid ambiguity may also be called the algebraic dual space.
When defined for a topological vector space, there is a subspace of the dual space, corresponding to continuous linear functionals, called the continuous dual space.
Dual vector spaces find application in many branches of mathematics that use vector spaces, such as in tensor analysis with finite-dimensional vector spaces.
When applied to vector spaces of functions (which are typically infinite-dimensional), dual spaces are used to describe measures, distributions, and Hilbert spaces. Consequently, the dual space is an important concept in functional analysis.
Early terms for dual include polarer Raum [Hahn 1927], espace conjugué, adjoint space [Alaoglu 1940], and transponierter Raum [Schauder 1930] and [Banach 1932]. The term dual is due to Bourbaki 1938.
== Algebraic dual space ==
Given any vector space
{\displaystyle V}
over a field
{\displaystyle F}
, the (algebraic) dual space
{\displaystyle V^{*}}
(alternatively denoted by
{\displaystyle V^{\lor }}
or
{\displaystyle V'}
) is defined as the set of all linear maps
{\displaystyle \varphi :V\to F}
(linear functionals). Since linear maps are vector space homomorphisms, the dual space may be denoted
{\displaystyle \hom(V,F)}
.
The dual space
{\displaystyle V^{*}}
itself becomes a vector space over
{\displaystyle F}
when equipped with an addition and scalar multiplication satisfying:
{\displaystyle {\begin{aligned}(\varphi +\psi )(x)&=\varphi (x)+\psi (x)\\(a\varphi )(x)&=a\left(\varphi (x)\right)\end{aligned}}}
for all
{\displaystyle \varphi ,\psi \in V^{*}}
,
{\displaystyle x\in V}
, and
{\displaystyle a\in F}
.
Elements of the algebraic dual space
{\displaystyle V^{*}}
are sometimes called covectors, one-forms, or linear forms.
The pairing of a functional
{\displaystyle \varphi }
in the dual space
{\displaystyle V^{*}}
and an element
{\displaystyle x}
of
{\displaystyle V}
is sometimes denoted by a bracket:
{\displaystyle \varphi (x)=[x,\varphi ]}
or
{\displaystyle \varphi (x)=\langle x,\varphi \rangle }
. This pairing defines a nondegenerate bilinear mapping
{\displaystyle \langle \cdot ,\cdot \rangle :V\times V^{*}\to F}
called the natural pairing.
=== Finite-dimensional case ===
If
{\displaystyle V}
is finite-dimensional, then
{\displaystyle V^{*}}
has the same dimension as
{\displaystyle V}
. Given a basis
{\displaystyle \{\mathbf {e} _{1},\dots ,\mathbf {e} _{n}\}}
in
{\displaystyle V}
, it is possible to construct a specific basis in
{\displaystyle V^{*}}
, called the dual basis. This dual basis is a set
{\displaystyle \{\mathbf {e} ^{1},\dots ,\mathbf {e} ^{n}\}}
of linear functionals on
{\displaystyle V}
, defined by the relation
{\displaystyle \mathbf {e} ^{i}(c^{1}\mathbf {e} _{1}+\cdots +c^{n}\mathbf {e} _{n})=c^{i},\quad i=1,\ldots ,n}
for any choice of coefficients
{\displaystyle c^{i}\in F}
. In particular, letting in turn each one of those coefficients be equal to one and the other coefficients zero gives the system of equations
{\displaystyle \mathbf {e} ^{i}(\mathbf {e} _{j})=\delta _{j}^{i}}
where
{\displaystyle \delta _{j}^{i}}
is the Kronecker delta symbol. This property is referred to as the bi-orthogonality property.
For example, if
{\displaystyle V}
is
{\displaystyle \mathbb {R} ^{2}}
, let its basis be chosen as
{\displaystyle \{\mathbf {e} _{1}=(1/2,1/2),\mathbf {e} _{2}=(0,1)\}}
. The basis vectors are not orthogonal to each other. Then,
{\displaystyle \mathbf {e} ^{1}}
and
{\displaystyle \mathbf {e} ^{2}}
are one-forms (functions that map a vector to a scalar) such that
{\displaystyle \mathbf {e} ^{1}(\mathbf {e} _{1})=1}
,
{\displaystyle \mathbf {e} ^{1}(\mathbf {e} _{2})=0}
,
{\displaystyle \mathbf {e} ^{2}(\mathbf {e} _{1})=0}
, and
{\displaystyle \mathbf {e} ^{2}(\mathbf {e} _{2})=1}
. (Note: The superscript here is the index, not an exponent.) This system of equations can be expressed using matrix notation as
{\displaystyle {\begin{bmatrix}e^{11}&e^{12}\\e^{21}&e^{22}\end{bmatrix}}{\begin{bmatrix}e_{11}&e_{21}\\e_{12}&e_{22}\end{bmatrix}}={\begin{bmatrix}1&0\\0&1\end{bmatrix}}.}
Solving for the unknown values in the first matrix shows the dual basis to be
{\displaystyle \{\mathbf {e} ^{1}=(2,0),\mathbf {e} ^{2}=(-1,1)\}}
. Because
{\displaystyle \mathbf {e} ^{1}}
and
{\displaystyle \mathbf {e} ^{2}}
are functionals, they can be rewritten as
{\displaystyle \mathbf {e} ^{1}(x,y)=2x}
and
{\displaystyle \mathbf {e} ^{2}(x,y)=-x+y}
.
In general, when
{\displaystyle V}
is
{\displaystyle \mathbb {R} ^{n}}
, if
{\displaystyle E=[\mathbf {e} _{1}|\cdots |\mathbf {e} _{n}]}
is a matrix whose columns are the basis vectors and
{\displaystyle {\hat {E}}=[\mathbf {e} ^{1}|\cdots |\mathbf {e} ^{n}]}
is a matrix whose columns are the dual basis vectors, then
{\displaystyle {\hat {E}}^{\textrm {T}}\cdot E=I_{n},}
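This relation gives a direct way to compute a dual basis numerically: the rows of E⁻¹ are the dual basis vectors. A sketch using NumPy with the R² example above (the variable names are ours):

```python
# Recover the dual basis from the relation E_hat^T . E = I_n:
# the rows of inv(E) are the dual basis vectors, so E_hat = inv(E).T.
import numpy as np

E = np.array([[0.5, 0.0],
              [0.5, 1.0]])       # columns: e_1 = (1/2, 1/2), e_2 = (0, 1)
E_hat = np.linalg.inv(E).T       # columns: dual basis vectors e^1, e^2

print(E_hat.T)                   # rows: e^1 = (2, 0), e^2 = (-1, 1)
```

This reproduces the dual basis found by hand in the example, and works for any invertible basis matrix E.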
where
{\displaystyle I_{n}}
is the identity matrix of order
{\displaystyle n}
. The biorthogonality property of these two basis sets allows any point
{\displaystyle \mathbf {x} \in V}
to be represented as
{\displaystyle \mathbf {x} =\sum _{i}\langle \mathbf {x} ,\mathbf {e} ^{i}\rangle \mathbf {e} _{i}=\sum _{i}\langle \mathbf {x} ,\mathbf {e} _{i}\rangle \mathbf {e} ^{i},}
even when the basis vectors are not orthogonal to each other. Strictly speaking, the above statement only makes sense once the inner product
{\displaystyle \langle \cdot ,\cdot \rangle }
and the corresponding duality pairing are introduced, as described below in § Bilinear products and dual spaces.
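The biorthogonal expansion can be verified numerically for the non-orthogonal basis from the earlier example; this is an illustrative sketch, not part of the article.

```python
# Check x = sum_i <x, e^i> e_i for a non-orthogonal basis of R^2.
import numpy as np

e = [np.array([0.5, 0.5]), np.array([0.0, 1.0])]    # basis e_1, e_2
f = [np.array([2.0, 0.0]), np.array([-1.0, 1.0])]   # dual basis e^1, e^2
x = np.array([3.0, 7.0])

recon = sum((f[i] @ x) * e[i] for i in range(2))    # sum_i e^i(x) e_i
print(recon)  # [3. 7.]
```

The coefficients e^i(x) are exactly the coordinates of x in the basis (e_1, e_2), which is what the expansion formula asserts.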
In particular,
{\displaystyle \mathbb {R} ^{n}}
can be interpreted as the space of columns of
{\displaystyle n}
real numbers; its dual space is typically written as the space of rows of
{\displaystyle n}
real numbers. Such a row acts on
{\displaystyle \mathbb {R} ^{n}}
as a linear functional by ordinary matrix multiplication. This is because a functional maps every
{\displaystyle n}
-vector
{\displaystyle x}
into a real number
{\displaystyle y}
. Then, seeing this functional as a matrix
{\displaystyle M}
, and
{\displaystyle x}
as an
{\displaystyle n\times 1}
matrix, and
{\displaystyle y}
as a
{\displaystyle 1\times 1}
matrix (trivially, a real number), if
{\displaystyle Mx=y}
then, by dimension reasons,
{\displaystyle M}
must be a
{\displaystyle 1\times n}
matrix; that is,
{\displaystyle M}
must be a row vector.
If
{\displaystyle V}
consists of the space of geometrical vectors in the plane, then the level curves of an element of
{\displaystyle V^{*}}
form a family of parallel lines in
{\displaystyle V}
, because the range is 1-dimensional, so that every point in the range is a multiple of any one nonzero element.
So an element of
{\displaystyle V^{*}}
can be intuitively thought of as a particular family of parallel lines covering the plane. To compute the value of a functional on a given vector, it suffices to determine which of the lines the vector lies on. Informally, this "counts" how many lines the vector crosses.
More generally, if
{\displaystyle V}
is a vector space of any dimension, then the level sets of a linear functional in
{\displaystyle V^{*}}
are parallel hyperplanes in
{\displaystyle V}
, and the action of a linear functional on a vector can be visualized in terms of these hyperplanes.
=== Infinite-dimensional case ===
If
{\displaystyle V}
is not finite-dimensional but has a basis
{\displaystyle \mathbf {e} _{\alpha }}
indexed by an infinite set
{\displaystyle A}
, then the same construction as in the finite-dimensional case yields linearly independent elements
{\displaystyle \mathbf {e} ^{\alpha }}
(
{\displaystyle \alpha \in A}
) of the dual space, but they will not form a basis.
For instance, consider the space
{\displaystyle \mathbb {R} ^{\infty }}
, whose elements are those sequences of real numbers that contain only finitely many non-zero entries, which has a basis indexed by the natural numbers
{\displaystyle \mathbb {N} }
. For
{\displaystyle i\in \mathbb {N} }
,
{\displaystyle \mathbf {e} _{i}}
is the sequence consisting of all zeroes except in the
{\displaystyle i}
-th position, which is 1.
The dual space of
{\displaystyle \mathbb {R} ^{\infty }}
is (isomorphic to)
{\displaystyle \mathbb {R} ^{\mathbb {N} }}
, the space of all sequences of real numbers: each real sequence
{\displaystyle (a_{n})}
defines a function where the element
{\displaystyle (x_{n})}
of
{\displaystyle \mathbb {R} ^{\infty }}
is sent to the number
{\displaystyle \sum _{n}a_{n}x_{n},}
which is a finite sum because there are only finitely many nonzero
{\displaystyle x_{n}}
. The dimension of
{\displaystyle \mathbb {R} ^{\infty }}
is countably infinite, whereas
{\displaystyle \mathbb {R} ^{\mathbb {N} }}
does not have a countable basis.
This observation generalizes to any infinite-dimensional vector space
{\displaystyle V}
over any field
{\displaystyle F}
: a choice of basis
{\displaystyle \{\mathbf {e} _{\alpha }:\alpha \in A\}}
identifies
{\displaystyle V}
with the space
{\displaystyle (F^{A})_{0}}
of functions
{\displaystyle f:A\to F}
such that
{\displaystyle f_{\alpha }=f(\alpha )}
is nonzero for only finitely many
{\displaystyle \alpha \in A}
, where such a function
{\displaystyle f}
is identified with the vector
{\displaystyle \sum _{\alpha \in A}f_{\alpha }\mathbf {e} _{\alpha }}
in
{\displaystyle V}
(the sum is finite by the assumption on
{\displaystyle f}
, and any
{\displaystyle v\in V}
may be written uniquely in this way by the definition of the basis).
The dual space of
{\displaystyle V}
may then be identified with the space
{\displaystyle F^{A}}
of all functions from
{\displaystyle A}
to
{\displaystyle F}
: a linear functional
{\displaystyle T}
on
{\displaystyle V}
is uniquely determined by the values
{\displaystyle \theta _{\alpha }=T(\mathbf {e} _{\alpha })}
it takes on the basis of
{\displaystyle V}
, and any function
{\displaystyle \theta :A\to F}
(with
{\displaystyle \theta (\alpha )=\theta _{\alpha }}
) defines a linear functional
{\displaystyle T}
on
{\displaystyle V}
by
{\displaystyle T\left(\sum _{\alpha \in A}f_{\alpha }\mathbf {e} _{\alpha }\right)=\sum _{\alpha \in A}f_{\alpha }T(\mathbf {e} _{\alpha })=\sum _{\alpha \in A}f_{\alpha }\theta _{\alpha }.}
Again, the sum is finite because
{\displaystyle f_{\alpha }}
is nonzero for only finitely many
{\displaystyle \alpha }
.
The set
{\displaystyle (F^{A})_{0}}
may be identified (essentially by definition) with the direct sum of infinitely many copies of
{\displaystyle F}
(viewed as a 1-dimensional vector space over itself) indexed by
{\displaystyle A}
, i.e. there are linear isomorphisms
{\displaystyle V\cong (F^{A})_{0}\cong \bigoplus _{\alpha \in A}F.}
On the other hand,
{\displaystyle F^{A}}
is (again by definition) the direct product of infinitely many copies of
{\displaystyle F}
indexed by
{\displaystyle A}
, and so the identification
{\displaystyle V^{*}\cong \left(\bigoplus _{\alpha \in A}F\right)^{*}\cong \prod _{\alpha \in A}F^{*}\cong \prod _{\alpha \in A}F\cong F^{A}}
is a special case of a general result relating direct sums (of modules) to direct products.
If a vector space is not finite-dimensional, then its (algebraic) dual space is always of larger dimension (as a cardinal number) than the original vector space. This is in contrast to the case of the continuous dual space, discussed below, which may be isomorphic to the original vector space even if the latter is infinite-dimensional.
The proof of this inequality between dimensions results from the following. If
{\displaystyle V}
is an infinite-dimensional
{\displaystyle F}
-vector space, the arithmetical properties of cardinal numbers imply that
{\displaystyle \mathrm {dim} (V)=|A|<|F|^{|A|}=|V^{\ast }|=\mathrm {max} (|\mathrm {dim} (V^{\ast })|,|F|),}
where cardinalities are denoted as absolute values. For proving that
{\displaystyle \mathrm {dim} (V)<\mathrm {dim} (V^{*}),}
it suffices to prove that
{\displaystyle |F|\leq |\mathrm {dim} (V^{\ast })|,}
which can be done with an argument similar to Cantor's diagonal argument. The exact dimension of the dual is given by the Erdős–Kaplansky theorem.
=== Bilinear products and dual spaces ===
If V is finite-dimensional, then V is isomorphic to V∗. But there is in general no natural isomorphism between these two spaces. Any bilinear form ⟨·,·⟩ on V gives a mapping of V into its dual space via
{\displaystyle v\mapsto \langle v,\cdot \rangle }
where the right hand side is defined as the functional on V taking each w ∈ V to ⟨v, w⟩. In other words, the bilinear form determines a linear mapping
{\displaystyle \Phi _{\langle \cdot ,\cdot \rangle }:V\to V^{*}}
defined by
{\displaystyle \left[\Phi _{\langle \cdot ,\cdot \rangle }(v),w\right]=\langle v,w\rangle .}
If the bilinear form is nondegenerate, then this is an isomorphism onto a subspace of V∗.
If V is finite-dimensional, then this is an isomorphism onto all of V∗. Conversely, any isomorphism
{\displaystyle \Phi }
from V to a subspace of V∗ (resp., all of V∗ if V is finite dimensional) defines a unique nondegenerate bilinear form
{\displaystyle \langle \cdot ,\cdot \rangle _{\Phi }}
on V by
{\displaystyle \langle v,w\rangle _{\Phi }=(\Phi (v))(w)=[\Phi (v),w].}
Thus there is a one-to-one correspondence between isomorphisms of V to a subspace of (resp., all of) V∗ and nondegenerate bilinear forms on V.
If the vector space V is over the complex field, then sometimes it is more natural to consider sesquilinear forms instead of bilinear forms.
In that case, a given sesquilinear form ⟨·,·⟩ determines an isomorphism of V with the complex conjugate of the dual space
{\displaystyle \Phi _{\langle \cdot ,\cdot \rangle }:V\to {\overline {V^{*}}}.}
The conjugate of the dual space
{\displaystyle {\overline {V^{*}}}}
can be identified with the set of all additive complex-valued functionals f : V → C such that
{\displaystyle f(\alpha v)={\overline {\alpha }}f(v).}
=== Injection into the double-dual ===
There is a natural homomorphism
{\displaystyle \Psi }
from
{\displaystyle V}
into the double dual
{\displaystyle V^{**}=\hom(V^{*},F)}
, defined by
{\displaystyle (\Psi (v))(\varphi )=\varphi (v)}
for all
{\displaystyle v\in V,\varphi \in V^{*}}
. In other words, if
{\displaystyle \mathrm {ev} _{v}:V^{*}\to F}
is the evaluation map defined by
{\displaystyle \varphi \mapsto \varphi (v)}
, then
{\displaystyle \Psi :V\to V^{**}}
is defined as the map
{\displaystyle v\mapsto \mathrm {ev} _{v}}
. This map
{\displaystyle \Psi }
is always injective; and it is always an isomorphism if
{\displaystyle V}
is finite-dimensional.
Indeed, the isomorphism of a finite-dimensional vector space with its double dual is an archetypal example of a natural isomorphism.
Infinite-dimensional Hilbert spaces are not isomorphic to their algebraic double duals, but instead to their continuous double duals.
=== Transpose of a linear map ===
If f : V → W is a linear map, then the transpose (or dual) f∗ : W∗ → V∗ is defined by
{\displaystyle f^{*}(\varphi )=\varphi \circ f}
for every
{\displaystyle \varphi \in W^{*}}
. The resulting functional
{\displaystyle f^{*}(\varphi )}
in
{\displaystyle V^{*}}
is called the pullback of
{\displaystyle \varphi }
along
{\displaystyle f}
.
The following identity holds for all
{\displaystyle \varphi \in W^{*}}
and
{\displaystyle v\in V}
:
{\displaystyle [f^{*}(\varphi ),\,v]=[\varphi ,\,f(v)],}
where the bracket [·,·] on the left is the natural pairing of V with its dual space, and that on the right is the natural pairing of W with its dual. This identity characterizes the transpose, and is formally similar to the definition of the adjoint.
The assignment f ↦ f∗ produces an injective linear map between the space of linear operators from V to W and the space of linear operators from W∗ to V∗; this homomorphism is an isomorphism if and only if W is finite-dimensional.
If V = W then the space of linear maps is actually an algebra under composition of maps, and the assignment is then an antihomomorphism of algebras, meaning that (fg)∗ = g∗f∗.
In the language of category theory, taking the dual of vector spaces and the transpose of linear maps is therefore a contravariant functor from the category of vector spaces over F to itself.
It is possible to identify (f∗)∗ with f using the natural injection into the double dual.
If the linear map f is represented by the matrix A with respect to two bases of V and W, then f∗ is represented by the transpose matrix AT with respect to the dual bases of W∗ and V∗, hence the name.
Alternatively, as f is represented by A acting on the left on column vectors, f∗ is represented by the same matrix acting on the right on row vectors.
These points of view are related by the canonical inner product on Rn, which identifies the space of column vectors with the dual space of row vectors.
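These two points of view can be checked numerically; the following sketch uses made-up data, not from the article. If f acts on column vectors as the matrix A, then f∗ acts on row vectors by right multiplication with A, and both sides of the defining identity agree.

```python
# Numeric check of [f*(phi), v] = [phi, f(v)] in matrix form.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])        # f : R^2 -> R^3, acting as v -> A v
phi = np.array([1.0, -1.0, 2.0])  # functional on R^3, as a row vector
v = np.array([2.0, 1.0])

lhs = (phi @ A) @ v               # pullback f*(phi) applied to v
rhs = phi @ (A @ v)               # phi applied to f(v)
print(lhs, rhs)                   # equal, as the identity requires
```

The equality holds for any A, phi, and v by associativity of matrix multiplication, which is exactly why the transpose matrix represents f∗ with respect to the dual bases.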
=== Quotient spaces and annihilators ===
Let
{\displaystyle S}
be a subset of
{\displaystyle V}
. The annihilator of
{\displaystyle S}
in
{\displaystyle V^{*}}
, denoted here
{\displaystyle S^{0}}
, is the collection of linear functionals
{\displaystyle f\in V^{*}}
such that
{\displaystyle [f,s]=0}
for all
{\displaystyle s\in S}
. That is,
{\displaystyle S^{0}}
consists of all linear functionals
{\displaystyle f:V\to F}
such that the restriction to
{\displaystyle S}
vanishes:
{\displaystyle f|_{S}=0}
.
Within finite-dimensional vector spaces, the annihilator is dual to (isomorphic to) the orthogonal complement.
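Concretely, the annihilator of a span can be computed as a null space: a functional, written as a row vector, annihilates S exactly when it annihilates every spanning vector. A sketch (the subset S is our own example, not from the article):

```python
# Annihilator of S = span{(1, 1, 0)} in (R^3)*: the null space of the
# matrix whose rows span S, computed via the SVD.
import numpy as np

M = np.array([[1.0, 1.0, 0.0]])        # rows span S
_, sing, Vt = np.linalg.svd(M)
rank = int(np.sum(sing > 1e-10))
annihilator_basis = Vt[rank:]          # rows: functionals vanishing on S

print(annihilator_basis @ np.array([1.0, 1.0, 0.0]))  # ~ [0. 0.]
```

Here S has dimension 1 in R³, so its annihilator has dimension 3 − 1 = 2, matching the two rows returned.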
The annihilator of a subset is itself a vector space.
The annihilator of the zero vector is the whole dual space: {0}⁰ = V∗, and the annihilator of the whole space is just the zero covector: V⁰ = {0} ⊆ V∗.
Furthermore, the assignment of an annihilator to a subset of V reverses inclusions, so that if {0} ⊆ S ⊆ T ⊆ V, then
{\displaystyle \{0\}\subseteq T^{0}\subseteq S^{0}\subseteq V^{*}.}
If A and B are two subsets of V, then
{\displaystyle A^{0}+B^{0}\subseteq (A\cap B)^{0}.}
If (Ai)i∈I is any family of subsets of V indexed by i belonging to some index set I, then
{\displaystyle \left(\bigcup _{i\in I}A_{i}\right)^{0}=\bigcap _{i\in I}A_{i}^{0}.}
In particular, if A and B are subspaces of V, then
{\displaystyle (A+B)^{0}=A^{0}\cap B^{0}}
and
{\displaystyle (A\cap B)^{0}=A^{0}+B^{0}.}
If V is finite-dimensional and W is a vector subspace, then W⁰⁰ = W after identifying W with its image in the second dual space under the double duality isomorphism V ≈ V∗∗. In particular, forming the annihilator is a Galois connection on the lattice of subsets of a finite-dimensional vector space.
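Under the assumption V = Rⁿ with the standard identification V ≅ V∗ (so annihilators become orthogonal complements of row spans), the identity W⁰⁰ = W can be checked numerically. A sketch using NumPy; the helper names are illustrative:

```python
import numpy as np

def annihilator(basis_rows):
    """Basis (rows) of the annihilator of span(basis_rows) in R^n,
    computed as the null space of the matrix via SVD."""
    A = np.atleast_2d(basis_rows).astype(float)
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-10))
    return Vt[rank:]                  # rows spanning the orthogonal complement

# W = span{(1, 0, 1), (0, 1, 0)} inside R^3
W = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

W0 = annihilator(W)                   # one-dimensional annihilator
W00 = annihilator(W0)                 # should span the same subspace as W

# Compare the two subspaces via their orthogonal projectors.
def projector(rows):
    Q, _ = np.linalg.qr(rows.T)
    return Q @ Q.T

same = np.allclose(projector(W), projector(W00))
```

Applying the annihilator twice recovers W, mirroring the Galois-connection statement above.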
If W is a subspace of V, then the quotient space V/W is a vector space in its own right, and so has a dual. By the first isomorphism theorem, a functional f : V → F factors through V/W if and only if W is contained in the kernel of f. There is thus an isomorphism
{\displaystyle (V/W)^{*}\cong W^{0}.}
As a particular consequence, if V is a direct sum of two subspaces A and B, then V∗ is a direct sum of A⁰ and B⁰.
=== Dimensional analysis ===
The dual space is analogous to a "negative"-dimensional space. Most simply, since a vector v ∈ V can be paired with a covector φ ∈ V∗ by the natural pairing ⟨x, φ⟩ := φ(x) ∈ F to obtain a scalar, a covector can "cancel" the dimension of a vector, similar to reducing a fraction. Thus while the direct sum V ⊕ V∗ is a 2n-dimensional space (if V is n-dimensional), V∗ behaves as a (−n)-dimensional space, in the sense that its dimensions can be canceled against the dimensions of V. This is formalized by tensor contraction.
This arises in physics via dimensional analysis, where the dual space has inverse units. Under the natural pairing, these units cancel, and the resulting scalar value is dimensionless, as expected. For example, in (continuous) Fourier analysis, or more broadly time–frequency analysis: given a one-dimensional vector space with a unit of time t, the dual space has units of frequency: occurrences per unit of time (units of 1/t). For example, if time is measured in seconds, the corresponding dual unit is the inverse second: over the course of 3 seconds, an event that occurs 2 times per second occurs a total of 6 times, corresponding to 3 s · 2 s⁻¹ = 6. Similarly, if the primal space measures length, the dual space measures inverse length.
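The unit cancellation can be sketched with a hypothetical Quantity class that tracks the exponent of the time unit (positive for the primal space, negative for the dual); the class and its names are illustrative, not from the article:

```python
from dataclasses import dataclass

@dataclass
class Quantity:
    value: float
    time_power: int          # exponent of the time unit: s^time_power

    def __mul__(self, other):
        # pairing a quantity with its dual adds the unit exponents
        return Quantity(self.value * other.value,
                        self.time_power + other.time_power)

duration = Quantity(3.0, +1)     # 3 s      (primal: units of time)
frequency = Quantity(2.0, -1)    # 2 s^-1   (dual: occurrences per unit time)

count = duration * frequency     # the pairing cancels the units
```

The product carries time_power 0: the scalar 6 is dimensionless, as the text describes.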
== Continuous dual space ==
When dealing with topological vector spaces, the continuous linear functionals from the space into the base field F = ℂ (or ℝ) are particularly important.
This gives rise to the notion of the "continuous dual space" or "topological dual", which is a linear subspace of the algebraic dual space V∗, denoted by V′.
For any finite-dimensional normed vector space or topological vector space, such as Euclidean n-space, the continuous dual and the algebraic dual coincide.
This is however false for any infinite-dimensional normed space, as shown by the example of discontinuous linear maps.
Nevertheless, in the theory of topological vector spaces the terms "continuous dual space" and "topological dual space" are often replaced by "dual space".
For a topological vector space V, its continuous dual space, or topological dual space, or just dual space (in the sense of the theory of topological vector spaces) V′ is defined as the space of all continuous linear functionals φ : V → F.
Important examples of continuous dual spaces are the space of compactly supported test functions 𝒟 and its dual 𝒟′, the space of arbitrary distributions (generalized functions); the space of arbitrary test functions ℰ and its dual ℰ′, the space of compactly supported distributions; and the space of rapidly decreasing test functions 𝒮, the Schwartz space, and its dual 𝒮′, the space of tempered distributions (slowly growing distributions) in the theory of generalized functions.
=== Properties ===
If X is a Hausdorff topological vector space (TVS), then the continuous dual space of X is identical to the continuous dual space of the completion of X.
=== Topologies on the dual ===
There is a standard construction for introducing a topology on the continuous dual V′ of a topological vector space V. Fix a collection 𝒜 of bounded subsets of V.
This gives the topology on V′ of uniform convergence on sets from 𝒜, or what is the same thing, the topology generated by seminorms of the form
{\displaystyle \|\varphi \|_{A}=\sup _{x\in A}|\varphi (x)|,}
where φ is a continuous linear functional on V, and A runs over the class 𝒜.
This means that a net of functionals φi tends to a functional φ in V′ if and only if
{\displaystyle {\text{ for all }}A\in {\mathcal {A}}\qquad \|\varphi _{i}-\varphi \|_{A}=\sup _{x\in A}|\varphi _{i}(x)-\varphi (x)|{\underset {i\to \infty }{\longrightarrow }}0.}
Usually (but not necessarily) the class 𝒜 is supposed to satisfy the following conditions:
Each point x of V belongs to some set A ∈ 𝒜:
{\displaystyle {\text{ for all }}x\in V\quad {\text{ there exists some }}A\in {\mathcal {A}}\quad {\text{ such that }}x\in A.}
Each two sets A ∈ 𝒜 and B ∈ 𝒜 are contained in some set C ∈ 𝒜:
{\displaystyle {\text{ for all }}A,B\in {\mathcal {A}}\quad {\text{ there exists some }}C\in {\mathcal {A}}\quad {\text{ such that }}A\cup B\subseteq C.}
𝒜 is closed under the operation of multiplication by scalars:
{\displaystyle {\text{ for all }}A\in {\mathcal {A}}\quad {\text{ and all }}\lambda \in {\mathbb {F}},\quad \lambda \cdot A\in {\mathcal {A}}.}
If these requirements are fulfilled, then the corresponding topology on V′ is Hausdorff and the sets
{\displaystyle U_{A}~=~\left\{\varphi \in V'~:~\|\varphi \|_{A}<1\right\},\qquad {\text{ for }}A\in {\mathcal {A}}}
form its local base.
Here are the three most important special cases.
The strong topology on V′ is the topology of uniform convergence on bounded subsets of V (so here 𝒜 can be chosen as the class of all bounded subsets of V).
If V is a normed vector space (for example, a Banach space or a Hilbert space), then the strong topology on V′ is normed (in fact a Banach space if the field of scalars is complete), with the norm
{\displaystyle \|\varphi \|=\sup _{\|x\|\leq 1}|\varphi (x)|.}
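For a functional φ(x) = a · x on Euclidean Rⁿ this dual norm is just the Euclidean norm of a, and a crude numerical supremum over sampled unit vectors approaches it from below. A sketch with NumPy (the vector a and sample count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([3.0, -4.0])        # phi(x) = a . x on (R^2, ||.||_2)

# Dual norm: sup of |phi(x)| over the closed unit ball.
samples = rng.normal(size=(100_000, 2))
unit = samples / np.linalg.norm(samples, axis=1, keepdims=True)
empirical = np.max(np.abs(unit @ a))   # always <= the true dual norm

exact = np.linalg.norm(a)              # sup attained at x = a / ||a||
```

The sampled supremum never exceeds the exact value and gets arbitrarily close to it, illustrating that the supremum in the norm formula is attained on the unit sphere.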
The stereotype topology on V′ is the topology of uniform convergence on totally bounded sets in V (so here 𝒜 can be chosen as the class of all totally bounded subsets of V).
The weak topology on V′ is the topology of uniform convergence on finite subsets of V (so here 𝒜 can be chosen as the class of all finite subsets of V).
Each of these three choices of topology on V′ leads to a variant of the reflexivity property for topological vector spaces:
If V′ is endowed with the strong topology, then the corresponding notion of reflexivity is the standard one: the spaces reflexive in this sense are just called reflexive.
If V′ is endowed with the stereotype dual topology, then the corresponding reflexivity is presented in the theory of stereotype spaces: the spaces reflexive in this sense are called stereotype.
If V′ is endowed with the weak topology, then the corresponding reflexivity is presented in the theory of dual pairs: the spaces reflexive in this sense are arbitrary (Hausdorff) locally convex spaces with the weak topology.
=== Examples ===
Let 1 < p < ∞ be a real number and consider the Banach space ℓ p of all sequences a = (an) for which
{\displaystyle \|\mathbf {a} \|_{p}=\left(\sum _{n=0}^{\infty }|a_{n}|^{p}\right)^{\frac {1}{p}}<\infty .}
Define the number q by 1/p + 1/q = 1. Then the continuous dual of ℓ p is naturally identified with ℓ q: given an element φ ∈ (ℓ p)′, the corresponding element of ℓ q is the sequence (φ(en)), where en denotes the sequence whose n-th term is 1 and all others are zero. Conversely, given an element a = (an) ∈ ℓ q, the corresponding continuous linear functional φ on ℓ p is defined by
{\displaystyle \varphi (\mathbf {b} )=\sum _{n}a_{n}b_{n}}
for all b = (bn) ∈ ℓ p (see Hölder's inequality).
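Hölder's inequality |Σ an bn| ≤ ‖a‖q ‖b‖p is what makes this pairing a continuous functional, and it is easy to spot-check numerically on truncated (finitely supported) sequences. A sketch with NumPy, assuming the illustrative choice p = 3, q = 3/2:

```python
import numpy as np

p = 3.0
q = p / (p - 1)                   # conjugate exponent: 1/p + 1/q = 1

rng = np.random.default_rng(1)
a = rng.normal(size=1000)         # finitely supported element of l^q
b = rng.normal(size=1000)         # finitely supported element of l^p

pairing = np.sum(a * b)           # phi_a(b)
norm_a_q = np.sum(np.abs(a) ** q) ** (1 / q)
norm_b_p = np.sum(np.abs(b) ** p) ** (1 / p)

# Hoelder: the pairing is bounded by the product of the dual norms
holder_ok = abs(pairing) <= norm_a_q * norm_b_p
```

The bound shows |φ_a(b)| ≤ ‖a‖q ‖b‖p, so φ_a is continuous on ℓ p with operator norm at most ‖a‖q.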
In a similar manner, the continuous dual of ℓ 1 is naturally identified with ℓ ∞ (the space of bounded sequences).
Furthermore, the continuous duals of the Banach spaces c (consisting of all convergent sequences, with the supremum norm) and c0 (the sequences converging to zero) are both naturally identified with ℓ 1.
By the Riesz representation theorem, the continuous dual of a Hilbert space is again a Hilbert space which is anti-isomorphic to the original space.
This gives rise to the bra–ket notation used by physicists in the mathematical formulation of quantum mechanics.
By the Riesz–Markov–Kakutani representation theorem, the continuous dual of certain spaces of continuous functions can be described using measures.
=== Transpose of a continuous linear map ===
If T : V → W is a continuous linear map between two topological vector spaces, then the (continuous) transpose T′ : W′ → V′ is defined by the same formula as before:
{\displaystyle T'(\varphi )=\varphi \circ T,\quad \varphi \in W'.}
The resulting functional T′(φ) is in V′. The assignment T ↦ T′ produces a linear map between the space of continuous linear maps from V to W and the space of linear maps from W′ to V′.
When T and U are composable continuous linear maps, then
{\displaystyle (U\circ T)'=T'\circ U'.}
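In finite dimensions this contravariance rule is the familiar matrix identity (UT)ᵀ = TᵀUᵀ. A quick NumPy check on random illustrative matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.normal(size=(4, 3))     # T : R^3 -> R^4
U = rng.normal(size=(2, 4))     # U : R^4 -> R^2

lhs = (U @ T).T                 # transpose of the composition
rhs = T.T @ U.T                 # composition of the transposes, reversed

contravariant = np.allclose(lhs, rhs)
```

The order reversal is exactly what makes transposition a contravariant functor, as noted earlier for the algebraic transpose.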
When V and W are normed spaces, the norm of the transpose in L(W′, V′) is equal to that of T in L(V, W).
Several properties of transposition depend upon the Hahn–Banach theorem.
For example, the bounded linear map T has dense range if and only if the transpose T′ is injective.
When T is a compact linear map between two Banach spaces V and W, then the transpose T′ is compact.
This can be proved using the Arzelà–Ascoli theorem.
When V is a Hilbert space, there is an antilinear isomorphism iV from V onto its continuous dual V′.
For every bounded linear map T on V, the transpose and the adjoint operators are linked by
{\displaystyle i_{V}\circ T^{*}=T'\circ i_{V}.}
When T is a continuous linear map between two topological vector spaces V and W, then the transpose T′ is continuous when W′ and V′ are equipped with "compatible" topologies: for example, when for X = V and X = W, both duals X′ have the strong topology β(X′, X) of uniform convergence on bounded sets of X, or both have the weak-∗ topology σ(X′, X) of pointwise convergence on X.
The transpose T′ is continuous from β(W′, W) to β(V′, V), or from σ(W′, W) to σ(V′, V).
=== Annihilators ===
Assume that W is a closed linear subspace of a normed space V, and consider the annihilator of W in V′,
{\displaystyle W^{\perp }=\{\varphi \in V':W\subseteq \ker \varphi \}.}
Then, the dual of the quotient V / W can be identified with W⊥, and the dual of W can be identified with the quotient V′ / W⊥.
Indeed, let P denote the canonical surjection from V onto the quotient V / W ; then, the transpose P′ is an isometric isomorphism from (V / W )′ into V′, with range equal to W⊥.
If j denotes the injection map from W into V, then the kernel of the transpose j′ is the annihilator of W:
{\displaystyle \ker(j')=W^{\perp }}
and it follows from the Hahn–Banach theorem that j′ induces an isometric isomorphism
V′ / W⊥ → W′.
=== Further properties ===
If the dual of a normed space V is separable, then so is the space V itself.
The converse is not true: for example, the space ℓ 1 is separable, but its dual ℓ ∞ is not.
=== Double dual ===
In analogy with the case of the algebraic double dual, there is always a naturally defined continuous linear operator Ψ : V → V′′ from a normed space V into its continuous double dual V′′, defined by
{\displaystyle \Psi (x)(\varphi )=\varphi (x),\quad x\in V,\ \varphi \in V'.}
As a consequence of the Hahn–Banach theorem, this map is in fact an isometry, meaning ‖ Ψ(x) ‖ = ‖ x ‖ for all x ∈ V.
Normed spaces for which the map Ψ is a bijection are called reflexive.
When V is a topological vector space then Ψ(x) can still be defined by the same formula, for every x ∈ V, however several difficulties arise.
First, when V is not locally convex, the continuous dual may be equal to { 0 } and the map Ψ trivial.
However, if V is Hausdorff and locally convex, the map Ψ is injective from V to the algebraic dual V′∗ of the continuous dual, again as a consequence of the Hahn–Banach theorem.
Second, even in the locally convex setting, several natural vector space topologies can be defined on the continuous dual V′, so that the continuous double dual V′′ is not uniquely defined as a set. Saying that Ψ maps from V to V′′, or in other words, that Ψ(x) is continuous on V′ for every x ∈ V, is a reasonable minimal requirement on the topology of V′, namely that the evaluation mappings
{\displaystyle \varphi \in V'\mapsto \varphi (x),\quad x\in V,}
be continuous for the chosen topology on V′. Further, there is still a choice of a topology on V′′, and continuity of Ψ depends upon this choice.
As a consequence, defining reflexivity in this framework is more involved than in the normed case.
== See also ==
Covariance and contravariance of vectors
Dual module
Dual norm
Duality (mathematics)
Duality (projective geometry)
Pontryagin duality
Reciprocal lattice – dual space basis, in crystallography
== Notes ==
== References ==
== Bibliography ==
Axler, Sheldon Jay (2015). Linear Algebra Done Right (3rd ed.). Springer. ISBN 978-3-319-11079-0.
Bourbaki, Nicolas (1989). Elements of mathematics, Algebra I. Springer-Verlag. ISBN 3-540-64243-9.
Bourbaki, Nicolas (2003). Elements of mathematics, Topological vector spaces. Springer-Verlag.
Halmos, Paul Richard (1974) [1958]. Finite-Dimensional Vector Spaces (2nd ed.). Springer. ISBN 0-387-90093-4.
Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9.
Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556, Zbl 0984.00001
Tu, Loring W. (2011). An Introduction to Manifolds (2nd ed.). Springer. ISBN 978-1-4419-7400-6.
Mac Lane, Saunders; Birkhoff, Garrett (1999). Algebra (3rd ed.). AMS Chelsea Publishing. ISBN 0-8218-1646-2.
Misner, Charles W.; Thorne, Kip S.; Wheeler, John A. (1973). Gravitation. W. H. Freeman. ISBN 0-7167-0344-0.
Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
Rudin, Walter (1973). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 25 (First ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 9780070542259.
Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
Robertson, A.P.; Robertson, W. (1964). Topological vector spaces. Cambridge University Press.
Schaefer, Helmut H. (1966). Topological vector spaces. New York: The Macmillan Company.
Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
== External links ==
Weisstein, Eric W. "Dual Vector Space". MathWorld. | Wikipedia/Duality_(linear_algebra) |
In mathematics, coalgebras or cogebras are structures that are dual (in the category-theoretic sense of reversing arrows) to unital associative algebras. The axioms of unital associative algebras can be formulated in terms of commutative diagrams. Turning all arrows around, one obtains the axioms of coalgebras.
Every coalgebra, by (vector space) duality, gives rise to an algebra, but not in general the other way. In finite dimensions, this duality goes in both directions (see below).
Coalgebras occur naturally in a number of contexts (for example, representation theory, universal enveloping algebras and group schemes).
There are also F-coalgebras, with important applications in computer science.
== Informal discussion ==
One frequently recurring example of coalgebras occurs in representation theory, and in particular, in the representation theory of the rotation group. A primary task, of practical use in physics, is to obtain combinations of systems with different states of angular momentum and spin. For this purpose, one uses the Clebsch–Gordan coefficients. Given two systems
A, B with angular momenta jA and jB, a particularly important task is to find the total angular momentum jA + jB given the combined state |A⟩ ⊗ |B⟩. This is provided by the total angular momentum operator, which extracts the needed quantity from each side of the tensor product. It can be written as an "external" tensor product
{\displaystyle \mathbf {J} \equiv \mathbf {j} \otimes 1+1\otimes \mathbf {j} }
The word "external" appears here, in contrast to the "internal" tensor product of a tensor algebra. A tensor algebra comes with a tensor product (the internal one); it can also be equipped with a second tensor product, the "external" one, or the coproduct, having the form above. That they are two different products is emphasized by recalling that the internal tensor product of a vector and a scalar is just simple scalar multiplication. The external product keeps them separated. In this setting, the coproduct is the map
Δ : J → J ⊗ J that takes
{\displaystyle \Delta :\mathbf {j} \mapsto \mathbf {j} \otimes 1+1\otimes \mathbf {j} }
For this example, J can be taken to be one of the spin representations of the rotation group, with the fundamental representation being the common-sense choice. This coproduct can be lifted to all of the tensor algebra, by a simple lemma that applies to free objects: the tensor algebra is a free algebra; therefore, any homomorphism defined on a subset can be extended to the entire algebra. Examining the lifting in detail, one observes that the coproduct behaves as the shuffle product, essentially because the two factors above, the left and right j, must be kept in sequential order during products of multiple angular momenta (rotations are not commutative).
The peculiar form of having the j appear only once in the coproduct, rather than (for example) defining j ↦ j ⊗ j, is in order to maintain linearity: for this example (and for representation theory in general), the coproduct must be linear. As a general rule, the coproduct in representation theory is reducible; the factors are given by the Littlewood–Richardson rule. (The Littlewood–Richardson rule conveys the same idea as the Clebsch–Gordan coefficients, but in a more general setting.)
The formal definition of the coalgebra, below, abstracts away this particular special case, and its requisite properties, into a general setting.
== Formal definition ==
Formally, a coalgebra over a field K is a vector space C over K together with K-linear maps Δ: C → C ⊗ C and ε: C → K such that
{\displaystyle (\mathrm {id} _{C}\otimes \Delta )\circ \Delta =(\Delta \otimes \mathrm {id} _{C})\circ \Delta }
{\displaystyle (\mathrm {id} _{C}\otimes \varepsilon )\circ \Delta =\mathrm {id} _{C}=(\varepsilon \otimes \mathrm {id} _{C})\circ \Delta .}
(Here ⊗ refers to the tensor product over K and id is the identity function.)
Equivalently, the following two diagrams commute:
In the first diagram, C ⊗ (C ⊗ C) is identified with (C ⊗ C) ⊗ C; the two are naturally isomorphic. Similarly, in the second diagram the naturally isomorphic spaces C, C ⊗ K and K ⊗ C are identified.
The first diagram is the dual of the one expressing associativity of algebra multiplication (called the coassociativity of the comultiplication); the second diagram is the dual of the one expressing the existence of a multiplicative identity. Accordingly, the map Δ is called the comultiplication (or coproduct) of C and ε is the counit of C.
== Examples ==
Take an arbitrary set S and form the K-vector space C = K(S) with basis S, as follows. The elements of this vector space C are those functions from S to K that map all but finitely many elements of S to zero; identify the element s of S with the function that maps s to 1 and all other elements of S to 0. Define
Δ(s) = s ⊗ s and ε(s) = 1 for all s in S.
By linearity, both Δ and ε can then uniquely be extended to all of C. The vector space C becomes a coalgebra with comultiplication Δ and counit ε.
As a second example, consider the polynomial ring K[X] in one indeterminate X. This becomes a coalgebra (the divided power coalgebra) if for all n ≥ 0 one defines:
{\displaystyle \Delta (X^{n})=\sum _{k=0}^{n}{\dbinom {n}{k}}X^{k}\otimes X^{n-k},}
{\displaystyle \varepsilon (X^{n})={\begin{cases}1&{\mbox{if }}n=0\\0&{\mbox{if }}n>0\end{cases}}}
Again, because of linearity, this suffices to define Δ and ε uniquely on all of K[X]. Now K[X] is both a unital associative algebra and a coalgebra, and the two structures are compatible. Objects like this are called bialgebras, and in fact most of the important coalgebras considered in practice are bialgebras.
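The counit axiom for the divided power coalgebra can be verified mechanically: applying ε to the second tensor factor of Δ(Xⁿ) kills every term except k = n, whose binomial coefficient is 1. A sketch in plain Python, representing the tensor Xᵏ ⊗ Xᵐ as a dict keyed by (k, m):

```python
from math import comb

def delta(n):
    """Delta(X^n) = sum_k C(n, k) X^k (x) X^(n-k), as {(k, n-k): C(n, k)}."""
    return {(k, n - k): comb(n, k) for k in range(n + 1)}

def counit(m):
    """epsilon(X^m): 1 if m == 0, else 0."""
    return 1 if m == 0 else 0

def id_tensor_counit(t):
    """(id (x) epsilon): collapse X^k (x) X^m to counit(m) * X^k."""
    out = {}
    for (k, m), coeff in t.items():
        if counit(m):
            out[k] = out.get(k, 0) + coeff
    return out

n = 5
result = id_tensor_counit(delta(n))   # only the k = n term survives
```

The result is the single monomial Xⁿ with coefficient 1, confirming (id ⊗ ε)∘Δ = id on basis elements, hence everywhere by linearity.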
Examples of coalgebras include the tensor algebra, the exterior algebra, Hopf algebras and Lie bialgebras. Unlike the polynomial case above, none of these are commutative. Therefore, the coproduct becomes the shuffle product, rather than the divided power structure given above. The shuffle product is appropriate, because it preserves the order of the terms appearing in the product, as is needed by non-commutative algebras.
The singular homology of a topological space forms a graded coalgebra whenever the Künneth isomorphism holds, e.g. if the coefficients are taken to be a field.
If C is the K-vector space with basis {s, c}, define Δ : C → C ⊗ C by
Δ(s) = s ⊗ c + c ⊗ s
Δ(c) = c ⊗ c − s ⊗ s
and ε: C → K is given by
ε(s) = 0
ε(c) = 1
In this situation, (C, Δ, ε) is a coalgebra known as the trigonometric coalgebra.
For a locally finite poset P with set of intervals J, define the incidence coalgebra C with J as basis. The comultiplication and counit are defined as
{\displaystyle \Delta [x,z]=\sum _{y\in [x,z]}[x,y]\otimes [y,z]{\text{ for }}x\leq z.}
{\displaystyle \varepsilon [x,y]={\begin{cases}1&{\text{if }}x=y,\\0&{\text{if }}x\neq y.\end{cases}}}
The intervals of length zero correspond to points of P and are group-like elements.
== Finite dimensions ==
In finite dimensions, the duality between algebras and coalgebras is closer: the dual of a finite-dimensional (unital associative) algebra is a coalgebra, while the dual of a finite-dimensional coalgebra is a (unital associative) algebra. In general, the dual of an algebra may not be a coalgebra.
The key point is that in finite dimensions, (A ⊗ A)∗ and A∗ ⊗ A∗ are isomorphic.
To distinguish these: in general, algebra and coalgebra are dual notions (meaning that their axioms are dual: reverse the arrows), while for finite dimensions, they are also dual objects (meaning that a coalgebra is the dual object of an algebra and conversely).
If A is a finite-dimensional unital associative K-algebra, then its K-dual A∗ consisting of all K-linear maps from A to K is a coalgebra. The multiplication of A can be viewed as a linear map A ⊗ A → A, which when dualized yields a linear map A∗ → (A ⊗ A)∗. In the finite-dimensional case, (A ⊗ A)∗ is naturally isomorphic to A∗ ⊗ A∗, so this defines a comultiplication on A∗. The counit of A∗ is given by evaluating linear functionals at 1.
== Sweedler notation ==
When working with coalgebras, a certain notation for the comultiplication simplifies the formulas considerably and has become quite popular. Given an element c of the coalgebra (C, Δ, ε), there exist elements c(1)(i) and c(2)(i) in C such that
{\displaystyle \Delta (c)=\sum _{i}c_{(1)}^{(i)}\otimes c_{(2)}^{(i)}}
Note that neither the number of terms in this sum, nor the exact values of each c(1)(i) or c(2)(i), is uniquely determined by c; there is only a promise that there are finitely many terms, and that the full sum of all these terms c(1)(i) ⊗ c(2)(i) has the right value Δ(c).
In Sweedler's notation (so named after Moss Sweedler), this is abbreviated to
{\displaystyle \Delta (c)=\sum _{(c)}c_{(1)}\otimes c_{(2)}.}
The fact that ε is a counit can then be expressed with the following formula
{\displaystyle c=\sum _{(c)}\varepsilon (c_{(1)})c_{(2)}=\sum _{(c)}c_{(1)}\varepsilon (c_{(2)}).}
Here it is understood that the sums have the same number of terms, and the same lists of values for c(1) and c(2), as in the previous sum for Δ(c).
The coassociativity of Δ can be expressed as
{\displaystyle \sum _{(c)}c_{(1)}\otimes \left(\sum _{(c_{(2)})}(c_{(2)})_{(1)}\otimes (c_{(2)})_{(2)}\right)=\sum _{(c)}\left(\sum _{(c_{(1)})}(c_{(1)})_{(1)}\otimes (c_{(1)})_{(2)}\right)\otimes c_{(2)}.}
In Sweedler's notation, both of these expressions are written as
{\displaystyle \sum _{(c)}c_{(1)}\otimes c_{(2)}\otimes c_{(3)}.}
Some authors omit the summation symbols as well; in this sumless Sweedler notation, one writes
{\displaystyle \Delta (c)=c_{(1)}\otimes c_{(2)}}
and
{\displaystyle c=\varepsilon (c_{(1)})c_{(2)}=c_{(1)}\varepsilon (c_{(2)}).}
Whenever a variable with lowered and parenthesized index is encountered in an expression of this kind, a summation symbol for that variable is implied.
== Further concepts and facts ==
A coalgebra (C, Δ, ε) is called cocommutative if σ ∘ Δ = Δ, where σ : C ⊗ C → C ⊗ C is the K-linear map defined by σ(c ⊗ d) = d ⊗ c for all c, d in C. In Sweedler's sumless notation, C is cocommutative if and only if
{\displaystyle c_{(1)}\otimes c_{(2)}=c_{(2)}\otimes c_{(1)}}
for all c in C. (It's important to understand that the implied summation is significant here: it is not required that all the summands are pairwise equal, only that the sums are equal, a much weaker requirement.)
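The divided power coalgebra from the examples above is cocommutative, since swapping the factors of Δ(Xⁿ) only reindexes the sum (because C(n, k) = C(n, n−k)). A quick check on the coefficient dicts (plain Python; a sketch):

```python
from math import comb

def delta(n):
    """Delta(X^n) as the coefficient dict {(k, n-k): C(n, k)}."""
    return {(k, n - k): comb(n, k) for k in range(n + 1)}

def swap(t):
    """sigma: exchange the two tensor factors."""
    return {(m, k): coeff for (k, m), coeff in t.items()}

# sigma . Delta == Delta on every X^n checked
cocommutative = all(swap(delta(n)) == delta(n) for n in range(8))
```

By contrast, the trigonometric coalgebra above is also cocommutative, while coalgebras built from shuffle products on non-commutative algebras generally are not.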
A group-like element (or set-like element) is an element x such that Δ(x) = x ⊗ x and ε(x) = 1. Contrary to what this naming convention suggests, the group-like elements do not always form a group; in general they only form a set. The group-like elements of a Hopf algebra do form a group. A primitive element is an element x that satisfies Δ(x) = x ⊗ 1 + 1 ⊗ x. The primitive elements of a Hopf algebra form a Lie algebra.
If (C1, Δ1, ε1) and (C2, Δ2, ε2) are two coalgebras over the same field K, then a coalgebra morphism from C1 to C2 is a K-linear map f : C1 → C2 such that
{\displaystyle (f\otimes f)\circ \Delta _{1}=\Delta _{2}\circ f}
and
{\displaystyle \varepsilon _{2}\circ f=\varepsilon _{1}.}
In Sweedler's sumless notation, the first of these properties may be written as:
{\displaystyle f(c_{(1)})\otimes f(c_{(2)})=f(c)_{(1)}\otimes f(c)_{(2)}.}
The composition of two coalgebra morphisms is again a coalgebra morphism, and the coalgebras over K together with this notion of morphism form a category.
A linear subspace I in C is called a coideal if I ⊆ ker(ε) and Δ(I) ⊆ I ⊗ C + C ⊗ I. In that case, the quotient space C/I becomes a coalgebra in a natural fashion.
A subspace D of C is called a subcoalgebra if Δ(D) ⊆ D ⊗ D; in that case, D is itself a coalgebra, with the restriction of ε to D as counit.
The kernel of every coalgebra morphism f : C1 → C2 is a coideal in C1, and the image is a subcoalgebra of C2. The common isomorphism theorems are valid for coalgebras, so for instance C1/ker(f) is isomorphic to im(f).
If A is a finite-dimensional unital associative K-algebra, then A∗ is a finite-dimensional coalgebra, and indeed every finite-dimensional coalgebra arises in this fashion from some finite-dimensional algebra (namely from the coalgebra's K-dual). Under this correspondence, the commutative finite-dimensional algebras correspond to the cocommutative finite-dimensional coalgebras. So in the finite-dimensional case, the theories of algebras and of coalgebras are dual; studying one is equivalent to studying the other. However, relations diverge in the infinite-dimensional case: while the K-dual of every coalgebra is an algebra, the K-dual of an infinite-dimensional algebra need not be a coalgebra.
Every coalgebra is the sum of its finite-dimensional subcoalgebras, something that is not true for algebras. Abstractly, coalgebras are generalizations, or duals, of finite-dimensional unital associative algebras.
Corresponding to the concept of representation for algebras is a corepresentation or comodule.
== See also ==
Cofree coalgebra
Measuring coalgebra
Dialgebra
== References ==
== Further reading ==
Block, Richard E.; Leroux, Pierre (1985), "Generalized dual coalgebras of algebras, with applications to cofree coalgebras", Journal of Pure and Applied Algebra, 36 (1): 15–21, doi:10.1016/0022-4049(85)90060-X, ISSN 0022-4049, MR 0782637, Zbl 0556.16005
Dăscălescu, Sorin; Năstăsescu, Constantin; Raianu, Șerban (2001), Hopf Algebras: An introduction, Pure and Applied Mathematics, vol. 235 (1st ed.), New York, NY: Marcel Dekker, ISBN 0-8247-0481-9, Zbl 0962.16026.
Gómez-Torrecillas, José (1998), "Coalgebras and comodules over a commutative ring", Revue Roumaine de Mathématiques Pures et Appliquées, 43: 591–603
Hazewinkel, Michiel (2003), "Cofree coalgebras and multivariable recursiveness", Journal of Pure and Applied Algebra, 183 (1): 61–103, doi:10.1016/S0022-4049(03)00013-6, ISSN 0022-4049, MR 1992043, Zbl 1048.16022
Montgomery, Susan (1993), Hopf algebras and their actions on rings, Regional Conference Series in Mathematics, vol. 82, Providence, RI: American Mathematical Society, ISBN 0-8218-0738-2, Zbl 0793.16029
Underwood, Robert G. (2011), An introduction to Hopf algebras, Berlin: Springer-Verlag, ISBN 978-0-387-72765-3, Zbl 1234.16022
Yokonuma, Takeo (1992), Tensor spaces and exterior algebra, Translations of mathematical monographs, vol. 108, American Mathematical Society, ISBN 0-8218-4564-0, Zbl 0754.15028
Chapter III, section 11 in Bourbaki, Nicolas (1989). Algebra. Springer-Verlag. ISBN 0-387-19373-1.
== External links ==
William Chin: A brief introduction to coalgebra representation theory
In category theory, a branch of mathematics, a natural transformation provides a way of transforming one functor into another while respecting the internal structure (i.e., the composition of morphisms) of the categories involved. Hence, a natural transformation can be considered to be a "morphism of functors". Informally, the notion of a natural transformation states that a particular map between functors can be done consistently over an entire category.
Indeed, this intuition can be formalized to define so-called functor categories. Natural transformations are, after categories and functors, one of the most fundamental notions of category theory and consequently appear in the majority of its applications.
== Definition ==
If {\displaystyle F} and {\displaystyle G} are functors between the categories {\displaystyle C} and {\displaystyle D} (both from {\displaystyle C} to {\displaystyle D}), then a natural transformation {\displaystyle \eta } from {\displaystyle F} to {\displaystyle G} is a family of morphisms that satisfies two requirements.
The natural transformation must associate, to every object {\displaystyle X} in {\displaystyle C}, a morphism {\displaystyle \eta _{X}:F(X)\to G(X)} between objects of {\displaystyle D}. The morphism {\displaystyle \eta _{X}} is called "the component of {\displaystyle \eta } at {\displaystyle X}" or "the {\displaystyle X} component of {\displaystyle \eta }".
Components must be such that for every morphism {\displaystyle f:X\to Y} in {\displaystyle C} we have:
{\displaystyle \eta _{Y}\circ F(f)=G(f)\circ \eta _{X}}
The last equation can conveniently be expressed by the commutative diagram.
If both {\displaystyle F} and {\displaystyle G} are contravariant, the vertical arrows in the right diagram are reversed. If {\displaystyle \eta } is a natural transformation from {\displaystyle F} to {\displaystyle G}, we also write {\displaystyle \eta :F\to G} or {\displaystyle \eta :F\Rightarrow G}. This is also expressed by saying the family of morphisms {\displaystyle \eta _{X}:F(X)\to G(X)} is natural in {\displaystyle X}.
If, for every object {\displaystyle X} in {\displaystyle C}, the morphism {\displaystyle \eta _{X}} is an isomorphism in {\displaystyle D}, then {\displaystyle \eta } is said to be a natural isomorphism (or sometimes natural equivalence or isomorphism of functors). In other words, {\displaystyle \eta } is a natural isomorphism if and only if every component {\displaystyle \eta _{X}:F(X)\to G(X)} is an isomorphism. Two functors {\displaystyle F} and {\displaystyle G} are called naturally isomorphic or simply isomorphic if there exists a natural isomorphism from {\displaystyle F} to {\displaystyle G}.
An infranatural transformation {\displaystyle \eta :F\Rightarrow G} is simply a family of morphisms {\displaystyle \eta _{X}:F(X)\to G(X)} for all {\displaystyle X} in {\displaystyle C}. Thus, a natural transformation is a special case of an infranatural transformation, namely one for which {\displaystyle \eta _{Y}\circ F(f)=G(f)\circ \eta _{X}} for every morphism {\displaystyle f:X\to Y} in {\displaystyle C}. The naturalizer of {\displaystyle \eta }, denoted {\displaystyle \mathrm {nat} (\eta )}, is the largest subcategory of {\displaystyle C} containing all the objects of {\displaystyle C} on which {\displaystyle \eta } restricts to a natural transformation.
== Examples ==
=== Opposite group ===
Statements such as
"Every group is naturally isomorphic to its opposite group"
abound in modern mathematics. We will now give the precise meaning of this statement as well as its proof. Consider the category {\displaystyle {\textbf {Grp}}} of all groups with group homomorphisms as morphisms. If {\displaystyle (G,*)} is a group, we define its opposite group {\displaystyle (G^{\text{op}},{*}^{\text{op}})} as follows: {\displaystyle G^{\text{op}}} is the same set as {\displaystyle G}, and the operation {\displaystyle *^{\text{op}}} is defined by {\displaystyle a*^{\text{op}}b=b*a}. All multiplications in {\displaystyle G^{\text{op}}} are thus "turned around". Forming the opposite group becomes a (covariant) functor from {\displaystyle {\textbf {Grp}}} to {\displaystyle {\textbf {Grp}}} if we define {\displaystyle f^{\text{op}}=f} for any group homomorphism {\displaystyle f:G\to H}. Note that {\displaystyle f^{\text{op}}} is indeed a group homomorphism from {\displaystyle G^{\text{op}}} to {\displaystyle H^{\text{op}}}:
{\displaystyle f^{\text{op}}(a*^{\text{op}}b)=f(b*a)=f(b)*f(a)=f^{\text{op}}(a)*^{\text{op}}f^{\text{op}}(b).}
The content of the above statement is:
"The identity functor {\displaystyle {\text{Id}}_{\textbf {Grp}}:{\textbf {Grp}}\to {\textbf {Grp}}} is naturally isomorphic to the opposite functor {\displaystyle {\text{op}}:{\textbf {Grp}}\to {\textbf {Grp}}}."
To prove this, we need to provide isomorphisms {\displaystyle \eta _{G}:G\to G^{\text{op}}} for every group {\displaystyle G}, such that the above diagram commutes. Set {\displaystyle \eta _{G}(a)=a^{-1}}.
The formulas {\displaystyle (a*b)^{-1}=b^{-1}*a^{-1}=a^{-1}*^{\text{op}}b^{-1}} and {\displaystyle (a^{-1})^{-1}=a} show that {\displaystyle \eta _{G}} is a group homomorphism with inverse {\displaystyle \eta _{G^{\text{op}}}}. To prove the naturality, we start with a group homomorphism {\displaystyle f:G\to H} and show {\displaystyle \eta _{H}\circ f=f^{\text{op}}\circ \eta _{G}}, i.e. {\displaystyle (f(a))^{-1}=f^{\text{op}}(a^{-1})} for all {\displaystyle a} in {\displaystyle G}. This is true since {\displaystyle f^{\text{op}}=f} and every group homomorphism has the property {\displaystyle (f(a))^{-1}=f(a^{-1})}.
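The componentwise check above can be run mechanically on a small example. The sketch below is ours, not part of the article: it takes {\displaystyle G} to be Z/6Z under addition, {\displaystyle H} the multiplicative group (Z/7Z)^*, and the homomorphism f(k) = 3^k mod 7 (3 generates that group), and verifies the naturality square for inversion.

```python
# Verify eta_H(f(a)) == f^op(eta_G(a)) for inversion, with f^op = f as a map.
# Illustrative sketch; the specific groups and homomorphism are our choices.

G = range(6)                 # Z/6Z under addition

def eta_G(a):
    return (-a) % 6          # the inverse in additive notation is negation

def eta_H(b):
    return pow(b, 5, 7)      # b^{-1} in (Z/7Z)^*, since b^6 = 1

def f(k):
    return pow(3, k, 7)      # group homomorphism Z/6Z -> (Z/7Z)^*

# Naturality square of eta: Id_Grp => op at the morphism f.
assert all(eta_H(f(a)) == f(eta_G(a)) for a in G)
```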
=== Modules ===
Let {\displaystyle \varphi :M\longrightarrow M^{\prime }} be a homomorphism of right {\displaystyle R}-modules. For every left module {\displaystyle N} there is a natural map {\displaystyle \varphi \otimes N:M\otimes _{R}N\longrightarrow M^{\prime }\otimes _{R}N}; these maps form a natural transformation {\displaystyle \eta :M\otimes _{R}-\implies M'\otimes _{R}-}. For every right module {\displaystyle N} there is a natural map {\displaystyle \eta _{N}:{\text{Hom}}_{R}(M',N)\longrightarrow {\text{Hom}}_{R}(M,N)} defined by {\displaystyle \eta _{N}(f)=f\varphi }; these maps form a natural transformation {\displaystyle \eta :{\text{Hom}}_{R}(M',-)\implies {\text{Hom}}_{R}(M,-)}.
=== Abelianization ===
Given a group {\displaystyle G}, we can define its abelianization {\displaystyle G^{\text{ab}}=G/[G,G]}. Let {\displaystyle \pi _{G}:G\to G^{\text{ab}}} denote the projection map onto the cosets of {\displaystyle [G,G]}. This homomorphism is "natural in {\displaystyle G}", i.e., it defines a natural transformation, which we now check. Let {\displaystyle H} be a group. For any homomorphism {\displaystyle f:G\to H}, we have that {\displaystyle [G,G]} is contained in the kernel of {\displaystyle \pi _{H}\circ f}, because any homomorphism into an abelian group kills the commutator subgroup. Then {\displaystyle \pi _{H}\circ f} factors through {\displaystyle G^{\text{ab}}} as {\displaystyle f^{\text{ab}}\circ \pi _{G}=\pi _{H}\circ f} for the unique homomorphism {\displaystyle f^{\text{ab}}:G^{\text{ab}}\to H^{\text{ab}}}. This makes {\displaystyle {\text{ab}}:{\textbf {Grp}}\to {\textbf {Grp}}} a functor and {\displaystyle \pi } a natural transformation, but not a natural isomorphism, from the identity functor to {\displaystyle {\text{ab}}}.
=== Hurewicz homomorphism ===
Functors and natural transformations abound in algebraic topology, with the Hurewicz homomorphisms serving as examples. For any pointed topological space {\displaystyle (X,x)} and positive integer {\displaystyle n} there exists a group homomorphism {\displaystyle h_{n}\colon \pi _{n}(X,x)\to H_{n}(X)} from the {\displaystyle n}-th homotopy group of {\displaystyle (X,x)} to the {\displaystyle n}-th homology group of {\displaystyle X}. Both {\displaystyle \pi _{n}} and {\displaystyle H_{n}} are functors from the category Top* of pointed topological spaces to the category Grp of groups, and {\displaystyle h_{n}} is a natural transformation from {\displaystyle \pi _{n}} to {\displaystyle H_{n}}.
=== Determinant ===
Given commutative rings {\displaystyle R} and {\displaystyle S} with a ring homomorphism {\displaystyle f:R\to S}, the respective groups of invertible {\displaystyle n\times n} matrices {\displaystyle {\text{GL}}_{n}(R)} and {\displaystyle {\text{GL}}_{n}(S)} inherit a homomorphism which we denote by {\displaystyle {\text{GL}}_{n}(f)}, obtained by applying {\displaystyle f} to each matrix entry. Similarly, {\displaystyle f} restricts to a group homomorphism {\displaystyle f^{*}:R^{*}\to S^{*}}, where {\displaystyle R^{*}} denotes the group of units of {\displaystyle R}. In fact, {\displaystyle {\text{GL}}_{n}} and {\displaystyle *} are functors from the category of commutative rings {\displaystyle {\textbf {CRing}}} to {\displaystyle {\textbf {Grp}}}.
The determinant on the group {\displaystyle {\text{GL}}_{n}(R)}, denoted by {\displaystyle {\text{det}}_{R}}, is a group homomorphism {\displaystyle {\mbox{det}}_{R}\colon {\mbox{GL}}_{n}(R)\to R^{*}} which is natural in {\displaystyle R}: because the determinant is defined by the same formula for every ring, {\displaystyle f^{*}\circ {\text{det}}_{R}={\text{det}}_{S}\circ {\text{GL}}_{n}(f)} holds. This makes the determinant a natural transformation from {\displaystyle {\text{GL}}_{n}} to {\displaystyle *}.
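Since the determinant is given by the same polynomial formula over every commutative ring, the naturality square can be tested concretely. A minimal sketch (our choice of ring homomorphism, reduction mod 5, and a sample 2×2 matrix):

```python
# Check f^* . det_R == det_S . GL_2(f) for f: Z -> Z/5Z, x |-> x mod 5.
# Illustrative sketch; matrix and modulus are our choices.

def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def f(x):
    return x % 5                         # ring homomorphism Z -> Z/5Z

def GL2_f(m):
    return [[f(x) for x in row] for row in m]   # GL_2(f): apply f entrywise

A = [[3, 7], [2, 9]]                     # det = 13, a unit mod 5
assert f(det2(A)) == det2(GL2_f(A)) % 5  # both sides equal 3
```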
=== Double dual of a vector space ===
For example, if {\displaystyle K} is a field, then for every vector space {\displaystyle V} over {\displaystyle K} we have a "natural" injective linear map {\displaystyle V\to V^{**}} from the vector space into its double dual. These maps are "natural" in the following sense: the double dual operation is a functor, and the maps are the components of a natural transformation from the identity functor to the double dual functor.
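The canonical map into the double dual sends a vector v to the evaluation functional φ ↦ φ(v). A small sketch of ours, representing functionals on R^2 as plain Python functions:

```python
# The component of the natural transformation Id => (double dual) at V,
# sketched set-theoretically (an illustration; names are ours).

def ev(v):
    """Send v to the evaluation functional phi |-> phi(v) in V**."""
    return lambda phi: phi(v)

phi = lambda v: 3 * v[0] - v[1]   # a linear functional on R^2
v = (2, 5)
assert ev(v)(phi) == phi(v) == 1
```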
=== Finite calculus ===
For every abelian group {\displaystyle G}, the set {\displaystyle {\text{Hom}}_{\textbf {Set}}(\mathbb {Z} ,U(G))} of functions from the integers to the underlying set of {\displaystyle G} forms an abelian group {\displaystyle V_{\mathbb {Z} }(G)} under pointwise addition. (Here {\displaystyle U} is the standard forgetful functor {\displaystyle U:{\textbf {Ab}}\to {\textbf {Set}}}.) Given an {\displaystyle {\textbf {Ab}}} morphism {\displaystyle \varphi :G\to G'}, the map {\displaystyle V_{\mathbb {Z} }(\varphi ):V_{\mathbb {Z} }(G)\to V_{\mathbb {Z} }(G')} given by left composing {\displaystyle \varphi } with the elements of the former is itself a homomorphism of abelian groups; in this way we obtain a functor {\displaystyle V_{\mathbb {Z} }:{\textbf {Ab}}\to {\textbf {Ab}}}. The finite difference operator {\displaystyle \Delta _{G}} taking each function {\displaystyle f:\mathbb {Z} \to U(G)} to {\displaystyle \Delta (f):n\mapsto f(n+1)-f(n)} is a map from {\displaystyle V_{\mathbb {Z} }(G)} to itself, and the collection {\displaystyle \Delta } of such maps gives a natural transformation {\displaystyle \Delta :V_{\mathbb {Z} }\to V_{\mathbb {Z} }}.
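Naturality here amounts to Δ(φ ∘ f) = φ ∘ Δ(f) for every homomorphism φ, since φ(f(n+1)) − φ(f(n)) = φ(f(n+1) − f(n)). A quick numerical sketch of ours, with G = G′ = Z and φ the doubling map:

```python
# Naturality square of the finite difference operator Delta, checked on a
# sample window of integers (illustrative; the choices of phi and f are ours).

def delta(f):
    return lambda n: f(n + 1) - f(n)

phi = lambda x: 2 * x          # a homomorphism of (Z, +)
f = lambda n: n * n            # an arbitrary element of V_Z(Z)

lhs = delta(lambda n: phi(f(n)))   # V_Z(phi) first, then Delta
rhs = lambda n: phi(delta(f)(n))   # Delta first, then V_Z(phi)
assert all(lhs(n) == rhs(n) for n in range(-5, 5))
```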
=== Tensor-hom adjunction ===
Consider the category {\displaystyle {\textbf {Ab}}} of abelian groups and group homomorphisms. For all abelian groups {\displaystyle X}, {\displaystyle Y} and {\displaystyle Z} we have a group isomorphism {\displaystyle {\text{Hom}}(X\otimes Y,Z)\to {\text{Hom}}(X,{\text{Hom}}(Y,Z))}.
These isomorphisms are "natural" in the sense that they define a natural transformation between the two involved functors {\displaystyle {\textbf {Ab}}^{\text{op}}\times {\textbf {Ab}}^{\text{op}}\times {\textbf {Ab}}\to {\textbf {Ab}}}. (Here "op" is the opposite category of {\displaystyle {\textbf {Ab}}}, not to be confused with the trivial opposite group functor on {\displaystyle {\textbf {Ab}}}!)
This is formally the tensor-hom adjunction, and is an archetypal example of a pair of adjoint functors. Natural transformations arise frequently in conjunction with adjoint functors, and indeed, adjoint functors are defined by a certain natural isomorphism. Additionally, every pair of adjoint functors comes equipped with two natural transformations (generally not isomorphisms) called the unit and counit.
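In the category of sets, the analogous bijection Hom(X × Y, Z) ≅ Hom(X, Hom(Y, Z)) is ordinary currying. The sketch below illustrates the two directions of that correspondence (an analogy only, not the Ab-enriched statement itself; function names are ours):

```python
# Currying as the Set-level analogue of the tensor-hom adjunction.

def curry(f):
    """Hom(X x Y, Z) -> Hom(X, Hom(Y, Z))."""
    return lambda x: lambda y: f(x, y)

def uncurry(g):
    """Hom(X, Hom(Y, Z)) -> Hom(X x Y, Z)."""
    return lambda x, y: g(x)(y)

add = lambda x, y: x + y
assert curry(add)(2)(3) == add(2, 3) == 5
assert uncurry(curry(add))(2, 3) == 5    # the two directions are inverse
```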
== Unnatural isomorphism ==
The notion of a natural transformation is categorical, and states (informally) that a particular map between functors can be done consistently over an entire category. Informally, a particular map (esp. an isomorphism) between individual objects (not entire categories) is referred to as a "natural isomorphism", meaning implicitly that it is actually defined on the entire category, and defines a natural transformation of functors; formalizing this intuition was a motivating factor in the development of category theory.
Conversely, a particular map between particular objects may be called an unnatural isomorphism (or "an isomorphism that is not natural") if the map cannot be extended to a natural transformation on the entire category. Given an object {\displaystyle X,} a functor {\displaystyle G} (taking for simplicity the first functor to be the identity) and an isomorphism {\displaystyle \eta \colon X\to G(X),} unnaturality is most easily shown by giving an automorphism {\displaystyle A\colon X\to X} that does not commute with this isomorphism (so {\displaystyle \eta \circ A\neq G(A)\circ \eta }). More strongly, if one wishes to prove that {\displaystyle X} and {\displaystyle G(X)} are not naturally isomorphic, without reference to a particular isomorphism, this requires showing that for any isomorphism {\displaystyle \eta }, there is some {\displaystyle A} with which it does not commute; in some cases a single automorphism {\displaystyle A} works for all candidate isomorphisms {\displaystyle \eta }, while in other cases one must show how to construct a different {\displaystyle A_{\eta }} for each isomorphism. The maps of the category play a crucial role: for instance, any infranatural transformation is natural if the only maps are the identity maps.
This is similar (but more categorical) to concepts in group theory or module theory, where a given decomposition of an object into a direct sum is "not natural", or rather "not unique", as automorphisms exist that do not preserve the direct sum decomposition – see Structure theorem for finitely generated modules over a principal ideal domain § Uniqueness for example.
Some authors distinguish notationally, using {\displaystyle \cong } for a natural isomorphism and {\displaystyle \approx } for an unnatural isomorphism, reserving {\displaystyle =} for equality (usually equality of maps).
=== Example: fundamental group of torus ===
As an example of the distinction between the functorial statement and individual objects, consider homotopy groups of a product space, specifically the fundamental group of the torus.
The homotopy groups of a product space are naturally the product of the homotopy groups of the components,
{\displaystyle \pi _{n}((X,x_{0})\times (Y,y_{0}))\cong \pi _{n}((X,x_{0}))\times \pi _{n}((Y,y_{0})),}
with the isomorphism given by projection onto the two factors, fundamentally because maps into a product space are exactly products of maps into the components; this is a functorial statement.
However, the torus (which is abstractly a product of two circles) has fundamental group isomorphic to {\displaystyle \mathbf {Z} ^{2}}, but the splitting {\displaystyle \pi _{1}(T,t_{0})\approx \mathbf {Z} \times \mathbf {Z} } is not natural. Note the use of {\displaystyle \approx }, {\displaystyle \cong }, and {\displaystyle =}:
{\displaystyle \pi _{1}(T,t_{0})\approx \pi _{1}(S^{1},x_{0})\times \pi _{1}(S^{1},y_{0})\cong \mathbf {Z} \times \mathbf {Z} =\mathbf {Z} ^{2}.}
This abstract isomorphism with a product is not natural, as some isomorphisms of {\displaystyle T} do not preserve the product: the self-homeomorphism of {\displaystyle T} (thought of as the quotient space {\displaystyle \mathbb {R} ^{2}/\mathbb {Z} ^{2}}) given by {\displaystyle \left({\begin{smallmatrix}1&1\\0&1\end{smallmatrix}}\right)} (geometrically a Dehn twist about one of the generating curves) acts as this matrix on {\displaystyle \mathbb {Z} ^{2}} (it lies in the general linear group {\displaystyle {\text{GL}}_{2}(\mathbb {Z} )} of invertible integer matrices), which does not preserve the decomposition as a product because it is not diagonal. However, if one is given the torus as a product {\displaystyle (T,t_{0})=(S^{1},x_{0})\times (S^{1},y_{0})} (equivalently, given a decomposition of the space), then the splitting of the group follows from the general statement earlier. In categorical terms, the relevant category (preserving the structure of a product space) is "maps of product spaces, namely a pair of maps between the respective components".
Naturality is a categorical notion, and requires being very precise about exactly what data is given – the torus as a space that happens to be a product (in the category of spaces and continuous maps) is different from the torus presented as a product (in the category of products of two spaces and continuous maps between the respective components).
=== Example: dual of a finite-dimensional vector space ===
Every finite-dimensional vector space is isomorphic to its dual space, but there may be many different isomorphisms between the two spaces. There is in general no natural isomorphism between a finite-dimensional vector space and its dual space. However, related categories (with additional structure and restrictions on the maps) do have a natural isomorphism, as described below.
The dual space of a finite-dimensional vector space is again a finite-dimensional vector space of the same dimension, and these are thus isomorphic, since dimension is the only invariant of finite-dimensional vector spaces over a given field. However, in the absence of additional constraints (such as a requirement that maps preserve the chosen basis), the map from a space to its dual is not unique, and thus such an isomorphism requires a choice, and is "not natural". On the category of finite-dimensional vector spaces and linear maps, one can define an infranatural isomorphism from vector spaces to their dual by choosing an isomorphism for each space (say, by choosing a basis for every vector space and taking the corresponding isomorphism), but this will not define a natural transformation. Intuitively this is because it required a choice, rigorously because any such choice of isomorphisms will not commute with, say, the zero map; see (Mac Lane & Birkhoff 1999, §VI.4) for detailed discussion.
Starting from finite-dimensional vector spaces (as objects) and the identity and dual functors, one can define a natural isomorphism, but this requires first adding additional structure, then restricting the maps from "all linear maps" to "linear maps that respect this structure". Explicitly, for each vector space, require that it comes with the data of an isomorphism to its dual, {\displaystyle \eta _{V}\colon V\to V^{*}}. In other words, take as objects vector spaces with a nondegenerate bilinear form {\displaystyle b_{V}\colon V\times V\to K}. This defines an infranatural isomorphism (an isomorphism for each object). One then restricts the maps to only those maps {\displaystyle T\colon V\to U} that commute with the isomorphisms, {\displaystyle T^{*}(\eta _{U}(T(v)))=\eta _{V}(v)}, or in other words, that preserve the bilinear form: {\displaystyle b_{U}(T(v),T(w))=b_{V}(v,w)}. (These maps define the naturalizer of the isomorphisms.) The resulting category, with objects finite-dimensional vector spaces with a nondegenerate bilinear form and maps linear transforms that respect the bilinear form, by construction has a natural isomorphism from the identity to the dual (each space has an isomorphism to its dual, and the maps in the category are required to commute). Viewed in this light, this construction (add transforms for each object, restrict maps to commute with these) is completely general, and does not depend on any particular properties of vector spaces.
In this category (finite-dimensional vector spaces with a nondegenerate bilinear form, maps linear transforms that respect the bilinear form), the dual of a map between vector spaces can be identified as a transpose. Often for reasons of geometric interest this is specialized to a subcategory, by requiring that the nondegenerate bilinear forms have additional properties, such as being symmetric (orthogonal matrices), symmetric and positive definite (inner product space), symmetric sesquilinear (Hermitian spaces), skew-symmetric and totally isotropic (symplectic vector space), etc. – in all these categories a vector space is naturally identified with its dual, by the nondegenerate bilinear form.
== Operations with natural transformations ==
=== Vertical composition ===
If {\displaystyle \eta :F\Rightarrow G} and {\displaystyle \epsilon :G\Rightarrow H} are natural transformations between functors {\displaystyle F,G,H:C\to D}, then we can compose them to get a natural transformation {\displaystyle \epsilon \circ \eta :F\Rightarrow H}. This is done component-wise:
{\displaystyle (\epsilon \circ \eta )_{X}=\epsilon _{X}\circ \eta _{X}}.
This vertical composition of natural transformations is associative and has an identity, and allows one to consider the collection of all functors {\displaystyle C\to D} itself as a category (see below under Functor categories). The identity natural transformation {\displaystyle \mathrm {id} _{F}} on a functor {\displaystyle F} has components {\displaystyle (\mathrm {id} _{F})_{X}=\mathrm {id} _{F(X)}}. For {\displaystyle \eta :F\Rightarrow G}, {\displaystyle \mathrm {id} _{G}\circ \eta =\eta =\eta \circ \mathrm {id} _{F}}.
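Componentwise composition can be illustrated with the list functor on Python types, taking η = reverse and ε = duplicate (both natural; these choices are ours). The composite is again natural, i.e., it commutes with the functorial action of any map:

```python
# Vertical composition of two natural transformations on the list functor
# (an illustrative sketch; eta, eps, and the sample data are our choices).

fmap = lambda f: lambda xs: [f(x) for x in xs]   # the list functor on maps

eta = lambda xs: xs[::-1]      # reverse: natural in the element type
eps = lambda xs: xs + xs       # duplicate: also natural

comp = lambda xs: eps(eta(xs)) # (eps . eta)_X = eps_X . eta_X

# Naturality of the composite at the morphism str: int -> str.
xs = [1, 2, 3]
assert fmap(str)(comp(xs)) == comp(fmap(str)(xs))
```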
=== Horizontal composition ===
If {\displaystyle \eta :F\Rightarrow G} is a natural transformation between functors {\displaystyle F,G:C\to D} and {\displaystyle \epsilon :J\Rightarrow K} is a natural transformation between functors {\displaystyle J,K:D\to E}, then the composition of functors allows a composition of natural transformations {\displaystyle \epsilon *\eta :J\circ F\Rightarrow K\circ G} with components
{\displaystyle (\epsilon *\eta )_{X}=\epsilon _{G(X)}\circ J(\eta _{X})=K(\eta _{X})\circ \epsilon _{F(X)}}.
By using whiskering (see below), we can write
{\displaystyle (\epsilon *\eta )_{X}=(\epsilon G)_{X}\circ (J\eta )_{X}=(K\eta )_{X}\circ (\epsilon F)_{X}},
hence
{\displaystyle \epsilon *\eta =\epsilon G\circ J\eta =K\eta \circ \epsilon F}.
This horizontal composition of natural transformations is also associative with identity. This identity is the identity natural transformation on the identity functor, i.e., the natural transformation that associates to each object its identity morphism: for an object {\displaystyle X} in a category {\displaystyle C},
{\displaystyle (\mathrm {id} _{\mathrm {id} _{C}})_{X}=\mathrm {id} _{\mathrm {id} _{C}(X)}=\mathrm {id} _{X}}.
For {\displaystyle \eta :F\Rightarrow G} with {\displaystyle F,G:C\to D},
{\displaystyle \mathrm {id} _{\mathrm {id} _{D}}*\eta =\eta =\eta *\mathrm {id} _{\mathrm {id} _{C}}}.
As the identity functors {\displaystyle \mathrm {id} _{C}} and {\displaystyle \mathrm {id} _{D}} are functors, the identity for horizontal composition is also the identity for vertical composition, but not vice versa.
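The two component formulas for {\displaystyle \epsilon *\eta } can be compared on a concrete case. Taking all four functors to be the list functor, with η = reverse and ε = duplicate (our illustrative choices), both composites agree:

```python
# Horizontal composition: eps_{G(X)} . J(eta_X) versus K(eta_X) . eps_{F(X)},
# with F = G = J = K = List (an illustrative sketch; data is our choice).

fmap = lambda f: lambda xs: [f(x) for x in xs]   # the list functor on maps

eta = lambda xs: xs[::-1]     # F => G (reverse)
eps = lambda xs: xs + xs      # J => K (duplicate)

xss = [[1, 2], [3]]           # an element of J(F(X)) = List(List(int))
left = eps(fmap(eta)(xss))    # eps_{G(X)} after J(eta_X)
right = fmap(eta)(eps(xss))   # K(eta_X) after eps_{F(X)}
assert left == right == [[2, 1], [3], [2, 1], [3]]
```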
=== Whiskering ===
Whiskering is an external binary operation between a functor and a natural transformation.
If {\displaystyle \eta :F\Rightarrow G} is a natural transformation between functors {\displaystyle F,G:C\to D}, and {\displaystyle H:D\to E} is another functor, then we can form the natural transformation {\displaystyle H\eta :H\circ F\Rightarrow H\circ G} by defining {\displaystyle (H\eta )_{X}=H(\eta _{X})}. If on the other hand {\displaystyle K:B\to C} is a functor, the natural transformation {\displaystyle \eta K:F\circ K\Rightarrow G\circ K} is defined by {\displaystyle (\eta K)_{X}=\eta _{K(X)}}.
Whiskering is also a horizontal composition where one of the natural transformations is an identity natural transformation: {\displaystyle H\eta =\mathrm {id} _{H}*\eta } and {\displaystyle \eta K=\eta *\mathrm {id} _{K}}. Note that {\displaystyle \mathrm {id} _{H}} (resp. {\displaystyle \mathrm {id} _{K}}) is generally not the left (resp. right) identity of horizontal composition {\displaystyle *} ({\displaystyle H\eta \neq \eta } and {\displaystyle \eta K\neq \eta } in general), except if {\displaystyle H} (resp. {\displaystyle K}) is the identity functor of the category {\displaystyle D} (resp. {\displaystyle C}).
=== Interchange law ===
The two operations are related by an identity which exchanges vertical composition with horizontal composition: if we have four natural transformations α, α′, β, β′ as shown on the image to the right, then the following identity holds:
(β′ ∘ α′) ∗ (β ∘ α) = (β′ ∗ β) ∘ (α′ ∗ α).
Vertical and horizontal compositions are also linked through identity natural transformations: for F : C → D and G : D → E, id_G ∗ id_F = id_{G ∘ F}.
As whiskering is horizontal composition with an identity, the interchange law immediately gives the compact formulas for the horizontal composition of η : F ⇒ G and ϵ : J ⇒ K, without having to analyze components or the commutative diagram:
ϵ ∗ η = (ϵ ∘ id_J) ∗ (id_G ∘ η) = (ϵ ∗ id_G) ∘ (id_J ∗ η) = ϵG ∘ Jη
      = (id_K ∘ ϵ) ∗ (η ∘ id_F) = (id_K ∗ η) ∘ (ϵ ∗ id_F) = Kη ∘ ϵF.
== Functor categories ==
If C is any category and I is a small category, we can form the functor category C^I having as objects all functors from I to C and as morphisms the natural transformations between those functors. This forms a category since for any functor F there is an identity natural transformation 1_F : F → F (which assigns to every object X the identity morphism on F(X)) and the composition of two natural transformations (the "vertical composition" above) is again a natural transformation.
The isomorphisms in C^I are precisely the natural isomorphisms. That is, a natural transformation η : F → G is a natural isomorphism if and only if there exists a natural transformation ϵ : G → F such that ηϵ = 1_G and ϵη = 1_F.
The functor category C^I is especially useful if I arises from a directed graph. For instance, if I is the category of the directed graph • → •, then C^I has as objects the morphisms of C, and a morphism between ϕ : U → V and ψ : X → Y in C^I is a pair of morphisms f : U → X and g : V → Y in C such that the "square commutes", i.e. ψ ∘ f = g ∘ ϕ.
More generally, one can build the 2-category Cat whose
0-cells (objects) are the small categories,
1-cells (arrows) between two objects C and D are the functors from C to D,
2-cells between two 1-cells (functors) F : C → D and G : C → D are the natural transformations from F to G.
The horizontal and vertical compositions are the compositions between natural transformations described previously. A functor category C^I is then simply a hom-category in this category (smallness issues aside).
=== More examples ===
Every limit and colimit provides an example for a simple natural transformation, as a cone amounts to a natural transformation with the diagonal functor as domain. Indeed, if limits and colimits are defined directly in terms of their universal property, they are universal morphisms in a functor category.
== Yoneda lemma ==
If X is an object of a locally small category C, then the assignment Y ↦ Hom_C(X, Y) defines a covariant functor F_X : C → Set. This functor is called representable (more generally, a representable functor is any functor naturally isomorphic to this functor for an appropriate choice of X). The natural transformations from a representable functor to an arbitrary functor F : C → Set are completely known and easy to describe; this is the content of the Yoneda lemma.
== Historical notes ==
Saunders Mac Lane, one of the founders of category theory, is said to have remarked, "I didn't invent categories to study functors; I invented them to study natural transformations." Just as the study of groups is not complete without a study of homomorphisms, so the study of categories is not complete without the study of functors. The reason for Mac Lane's comment is that the study of functors is itself not complete without the study of natural transformations.
The context of Mac Lane's remark was the axiomatic theory of homology. Different ways of constructing homology could be shown to coincide: for example in the case of a simplicial complex the groups defined directly would be isomorphic to those of the singular theory. What cannot easily be expressed without the language of natural transformations is how homology groups are compatible with morphisms between objects, and how two equivalent homology theories not only have the same homology groups, but also the same morphisms between those groups.
== See also ==
Extranatural transformation
Universal property
Higher category theory
Modification (mathematics)
== Notes ==
== References ==
== External links ==
nLab, a wiki project on mathematics, physics and philosophy with emphasis on the n-categorical point of view
J. Adamek, H. Herrlich, G. Strecker, Abstract and Concrete Categories-The Joy of Cats
Stanford Encyclopedia of Philosophy: "Category Theory"—by Jean-Pierre Marquis. Extensive bibliography.
Baez, John, 1996, "The Tale of n-categories." An informal introduction to higher categories.
In mathematics, the order of a finite group is the number of its elements. If a group is not finite, one says that its order is infinite. The order of an element of a group (also called period length or period) is the order of the subgroup generated by the element. If the group operation is denoted as a multiplication, the order of an element a of a group is thus the smallest positive integer m such that a^m = e, where e denotes the identity element of the group, and a^m denotes the product of m copies of a. If no such m exists, the order of a is infinite.
The order of a group G is denoted by ord(G) or |G|, and the order of an element a is denoted by ord(a) or |a|, instead of ord(⟨a⟩), where the brackets denote the generated group.
Lagrange's theorem states that for any subgroup H of a finite group G, the order of the subgroup divides the order of the group; that is, |H| is a divisor of |G|. In particular, the order |a| of any element is a divisor of |G|.
== Example ==
The symmetric group S3 has the following multiplication table.
This group has six elements, so ord(S3) = 6. By definition, the order of the identity, e, is one, since e^1 = e. Each of s, t, and w squares to e, so these group elements have order two: |s| = |t| = |w| = 2. Finally, u and v have order 3, since u^3 = vu = e, and v^3 = uv = e.
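These orders can be checked mechanically. The sketch below (illustrative, not from the article) represents S3 as the permutations of (0, 1, 2) and computes each element's order by repeated composition:

```python
# Illustrative sketch: element orders in S3, with permutations of
# (0, 1, 2) as tuples and composition (p * q)(i) = p[q[i]].
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def order(p):
    # Smallest m >= 1 such that p composed with itself m times is the identity.
    e = tuple(range(len(p)))
    m, q = 1, p
    while q != e:
        q = compose(q, p)
        m += 1
    return m

S3 = list(permutations(range(3)))
print(sorted(order(p) for p in S3))  # [1, 2, 2, 2, 3, 3]
```

One identity, three elements of order two, and two of order three, matching |s| = |t| = |w| = 2 and the order-3 elements u and v above.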
== Order and structure ==
The order of a group G and the orders of its elements give much information about the structure of the group. Roughly speaking, the more complicated the factorization of |G|, the more complicated the structure of G.
For |G| = 1, the group is trivial. In any group, only the identity element a = e has ord(a) = 1. If every non-identity element in G is equal to its inverse (so that a^2 = e), then ord(a) = 2; this implies G is abelian since ab = (ab)^{-1} = b^{-1}a^{-1} = ba. The converse is not true; for example, the (additive) cyclic group Z6 of integers modulo 6 is abelian, but the number 2 has order 3: 2 + 2 + 2 = 6 ≡ 0 (mod 6).
The relationship between the two concepts of order is the following: if we write ⟨a⟩ = {a^k : k ∈ Z} for the subgroup generated by a, then ord(a) = ord(⟨a⟩).
For any integer k, we have
a^k = e if and only if ord(a) divides k.
In general, the order of any subgroup of G divides the order of G. More precisely: if H is a subgroup of G, then
ord(G) / ord(H) = [G : H], where [G : H] is called the index of H in G, an integer. This is Lagrange's theorem. (This is, however, only true when G has finite order. If ord(G) = ∞, the quotient ord(G) / ord(H) does not make sense.)
As an immediate consequence of the above, we see that the order of every element of a group divides the order of the group. For example, in the symmetric group shown above, where ord(S3) = 6, the possible orders of the elements are 1, 2, 3 or 6.
The following partial converse is true for finite groups: if d divides the order of a group G and d is a prime number, then there exists an element of order d in G (this is sometimes called Cauchy's theorem); it can be shown by an inductive proof. The statement does not hold for composite orders; e.g., the Klein four-group does not have an element of order four. Among the consequences of the theorem: the order of a group G is a power of a prime p if and only if ord(a) is some power of p for every a in G.
If a has infinite order, then all non-zero powers of a have infinite order as well. If a has finite order, we have the following formula for the order of the powers of a:
ord(a^k) = ord(a) / gcd(ord(a), k)
for every integer k. In particular, a and its inverse a^{-1} have the same order.
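The formula can be spot-checked exhaustively in a small group. The sketch below (illustrative) uses the additive group Z/12, where the "k-th power" of a is k·a mod 12:

```python
# Illustrative sketch: checking ord(a^k) = ord(a) / gcd(ord(a), k)
# in the additive group Z/12, where the k-th power of a is k*a mod 12.
from math import gcd

n = 12

def ord_add(a):
    # Order of a in (Z/n, +): smallest m >= 1 with m*a = 0 mod n.
    m, s = 1, a % n
    while s != 0:
        s = (s + a) % n
        m += 1
    return m

for a in range(n):
    for k in range(1, 2 * n):
        assert ord_add((k * a) % n) == ord_add(a) // gcd(ord_add(a), k)
print("formula verified for Z/12")
```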
In any group, ord(ab) = ord(ba).
There is no general formula relating the order of a product ab to the orders of a and b. In fact, it is possible that both a and b have finite order while ab has infinite order, or that both a and b have infinite order while ab has finite order. An example of the former is a(x) = 2 − x, b(x) = 1 − x with ab(x) = x − 1 in the group Sym(Z). An example of the latter is a(x) = x + 1, b(x) = x − 1 with ab(x) = x. If ab = ba, we can at least say that ord(ab) divides lcm(ord(a), ord(b)). As a consequence, one can prove that in a finite abelian group, if m denotes the maximum of all the orders of the group's elements, then every element's order divides m.
== Counting by order of elements ==
Suppose G is a finite group of order n, and d is a divisor of n. The number of order d elements in G is a multiple of φ(d) (possibly zero), where φ is Euler's totient function, giving the number of positive integers no larger than d and coprime to it. For example, in the case of S3, φ(3) = 2, and we have exactly two elements of order 3. The theorem provides no useful information about elements of order 2, because φ(2) = 1, and is only of limited utility for composite d such as d = 6, since φ(6) = 2, and there are zero elements of order 6 in S3.
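This count can be verified directly for S3. The sketch below (illustrative) tallies elements by order and checks each tally against φ(d):

```python
# Illustrative sketch: counting elements of S3 by order and checking
# that each count is a multiple of Euler's totient phi(d).
from collections import Counter
from itertools import permutations
from math import gcd

def phi(d):
    # Euler's totient: integers in 1..d coprime to d.
    return sum(1 for k in range(1, d + 1) if gcd(k, d) == 1)

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def order(p):
    e = tuple(range(len(p)))
    m, q = 1, p
    while q != e:
        q = compose(q, p)
        m += 1
    return m

counts = Counter(order(p) for p in permutations(range(3)))
print(dict(counts))  # {1: 1, 2: 3, 3: 2}
assert all(c % phi(d) == 0 for d, c in counts.items())
```

Here φ(3) = 2 and there are exactly two elements of order 3, while the three elements of order 2 are a (trivial) multiple of φ(2) = 1, as the theorem requires.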
== In relation to homomorphisms ==
Group homomorphisms tend to reduce the orders of elements: if f: G → H is a homomorphism, and a is an element of G of finite order, then ord(f(a)) divides ord(a). If f is injective, then ord(f(a)) = ord(a). This can often be used to prove that there are no homomorphisms, or no injective homomorphisms, between two explicitly given groups. (For example, there can be no nontrivial homomorphism h: S3 → Z5, because every number except zero in Z5 has order 5, which does not divide the orders 1, 2, and 3 of elements in S3.) A further consequence is that conjugate elements have the same order.
== Class equation ==
An important result about orders is the class equation; it relates the order of a finite group G to the order of its center Z(G) and the sizes of its non-trivial conjugacy classes:
|G| = |Z(G)| + Σ_i d_i
where the d_i are the sizes of the non-trivial conjugacy classes; these are proper divisors of |G| bigger than one, and they are also equal to the indices of the centralizers in G of the representatives of the non-trivial conjugacy classes. For example, the center of S3 is just the trivial group with the single element e, and the equation reads |S3| = 1 + 2 + 3.
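The class equation of S3 can be recovered by brute force. In the sketch below (illustrative), conjugacy classes are found by conjugating each element by every group element:

```python
# Illustrative sketch: the class equation of S3 by brute-force conjugacy.
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))

def conj_class(a):
    # All conjugates g * a * g^{-1} of a.
    return {compose(compose(g, a), inverse(g)) for g in G}

sizes, seen = [], set()
for a in G:
    if a not in seen:
        c = conj_class(a)
        sizes.append(len(c))
        seen |= c

print(sorted(sizes), "sum =", sum(sizes))  # [1, 2, 3] sum = 6
```

The singleton class is {e}, the center, recovering |S3| = 1 + 2 + 3.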
== See also ==
Torsion subgroup
== Notes ==
== References ==
Dummit, David; Foote, Richard. Abstract Algebra, ISBN 978-0471433347, pp. 20, 54–59, 90
Artin, Michael. Algebra, ISBN 0-13-004763-5, pp. 46–47
In mathematics, more specifically algebra, abstract algebra or modern algebra is the study of algebraic structures, which are sets with specific operations acting on their elements. Algebraic structures include groups, rings, fields, modules, vector spaces, lattices, and algebras over a field. The term abstract algebra was coined in the early 20th century to distinguish it from older parts of algebra, and more specifically from elementary algebra, the use of variables to represent numbers in computation and reasoning. The abstract perspective on algebra has become so fundamental to advanced mathematics that it is simply called "algebra", while the term "abstract algebra" is seldom used except in pedagogy.
Algebraic structures, with their associated homomorphisms, form mathematical categories. Category theory gives a unified framework to study properties and constructions that are similar for various structures.
Universal algebra is a related subject that studies types of algebraic structures as single objects. For example, the structure of groups is a single object in universal algebra, which is called the variety of groups.
== History ==
Before the nineteenth century, algebra was defined as the study of polynomials. Abstract algebra came into existence during the nineteenth century as more complex problems and solution methods developed. Concrete problems and examples came from number theory, geometry, analysis, and the solutions of algebraic equations. Most theories that are now recognized as parts of abstract algebra started as collections of disparate facts from various branches of mathematics, acquired a common theme that served as a core around which various results were grouped, and finally became unified on a basis of a common set of concepts. This unification occurred in the early decades of the 20th century and resulted in the formal axiomatic definitions of various algebraic structures such as groups, rings, and fields. This historical development is almost the opposite of the treatment found in popular textbooks, such as van der Waerden's Moderne Algebra, which start each chapter with a formal definition of a structure and then follow it with concrete examples.
=== Elementary algebra ===
The study of polynomial equations or algebraic equations has a long history. Around 1700 BC, the Babylonians were able to solve quadratic equations specified as word problems. This word-problem stage is classified as rhetorical algebra and was the dominant approach up to the 16th century. Al-Khwarizmi originated the word "algebra" in 830 AD, but his work was entirely rhetorical algebra. Fully symbolic algebra did not appear until François Viète's 1591 New Algebra, and even this had some spelled-out words that were given symbols in Descartes's 1637 La Géométrie. The formal study of solving symbolic equations led Leonhard Euler to accept what were then considered "nonsense" roots such as negative numbers and imaginary numbers, in the late 18th century. However, European mathematicians, for the most part, resisted these concepts until the middle of the 19th century.
George Peacock's 1830 Treatise of Algebra was the first attempt to place algebra on a strictly symbolic basis. He distinguished a new symbolical algebra, distinct from the old arithmetical algebra. Whereas in arithmetical algebra a − b is restricted to a ≥ b, in symbolical algebra all rules of operations hold with no restrictions. Using this, Peacock could show laws such as (−a)(−b) = ab, by letting a = 0, c = 0 in (a − b)(c − d) = ac + bd − ad − bc. Peacock used what he termed the principle of the permanence of equivalent forms to justify his argument, but his reasoning suffered from the problem of induction. For example, √a · √b = √(ab) holds for the nonnegative real numbers, but not for general complex numbers.
=== Early group theory ===
Several areas of mathematics led to the study of groups. Lagrange's 1770 study of the solutions of the quintic equation led to the Galois group of a polynomial. Gauss's 1801 study of Fermat's little theorem led to the ring of integers modulo n, the multiplicative group of integers modulo n, and the more general concepts of cyclic groups and abelian groups. Klein's 1872 Erlangen program studied geometry and led to symmetry groups such as the Euclidean group and the group of projective transformations. In 1874 Lie introduced the theory of Lie groups, aiming for "the Galois theory of differential equations". In 1876 Poincaré and Klein introduced the group of Möbius transformations, and its subgroups such as the modular group and Fuchsian group, based on work on automorphic functions in analysis.
The abstract concept of group emerged slowly over the middle of the nineteenth century. Galois in 1832 was the first to use the term "group", signifying a collection of permutations closed under composition. Arthur Cayley's 1854 paper On the theory of groups defined a group as a set with an associative composition operation and the identity 1, today called a monoid. In 1870 Kronecker defined an abstract binary operation that was closed, commutative, associative, and had the left cancellation property b ≠ c → a · b ≠ a · c, similar to the modern laws for a finite abelian group. Weber's 1882 definition of a group was a closed binary operation that was associative and had left and right cancellation. Walther von Dyck in 1882 was the first to require inverse elements as part of the definition of a group.
Once this abstract group concept emerged, results were reformulated in this abstract setting. For example, Sylow's theorem was reproven by Frobenius in 1887 directly from the laws of a finite group, although Frobenius remarked that the theorem followed from Cauchy's theorem on permutation groups and the fact that every finite group is a subgroup of a permutation group. Otto Hölder was particularly prolific in this area, defining quotient groups in 1889, group automorphisms in 1893, as well as simple groups. He also completed the Jordan–Hölder theorem. Dedekind and Miller independently characterized Hamiltonian groups and introduced the notion of the commutator of two elements. Burnside, Frobenius, and Molien created the representation theory of finite groups at the end of the nineteenth century. J. A. de Séguier's 1905 monograph Elements of the Theory of Abstract Groups presented many of these results in an abstract, general form, relegating "concrete" groups to an appendix, although it was limited to finite groups. The first monograph on both finite and infinite abstract groups was O. K. Schmidt's 1916 Abstract Theory of Groups.
=== Early ring theory ===
Noncommutative ring theory began with extensions of the complex numbers to hypercomplex numbers, specifically William Rowan Hamilton's quaternions in 1843. Many other number systems followed shortly. In 1844, Hamilton presented biquaternions, Cayley introduced octonions, and Grassmann introduced exterior algebras. James Cockle presented tessarines in 1848 and coquaternions in 1849. William Kingdon Clifford introduced split-biquaternions in 1873. In addition Cayley introduced group algebras over the real and complex numbers in 1854 and square matrices in two papers of 1855 and 1858.
Once there were sufficient examples, it remained to classify them. In an 1870 monograph, Benjamin Peirce classified the more than 150 hypercomplex number systems of dimension below 6, and gave an explicit definition of an associative algebra. He defined nilpotent and idempotent elements and proved that any algebra contains one or the other. He also defined the Peirce decomposition. Frobenius in 1878 and Charles Sanders Peirce in 1881 independently proved that the only finite-dimensional division algebras over R were the real numbers, the complex numbers, and the quaternions. In the 1880s Killing and Cartan showed that semisimple Lie algebras could be decomposed into simple ones, and classified all simple Lie algebras. Inspired by this, in the 1890s Cartan, Frobenius, and Molien proved (independently) that a finite-dimensional associative algebra over R or C decomposes uniquely into the direct sum of a nilpotent algebra and a semisimple algebra that is the product of some number of simple algebras, square matrices over division algebras. Cartan was the first to define concepts such as direct sum and simple algebra, and these concepts proved quite influential. In 1907 Wedderburn extended Cartan's results to an arbitrary field, in what are now called the Wedderburn principal theorem and Artin–Wedderburn theorem.
For commutative rings, several areas together led to commutative ring theory. In two papers in 1828 and 1832, Gauss formulated the Gaussian integers, showed that they form a unique factorization domain (UFD), and proved the biquadratic reciprocity law. Jacobi and Eisenstein at around the same time proved a cubic reciprocity law for the Eisenstein integers. The study of Fermat's last theorem led to the algebraic integers. In 1847, Gabriel Lamé thought he had proven FLT, but his proof was faulty as he assumed all the cyclotomic fields were UFDs, yet as Kummer pointed out, Q(ζ_23) was not a UFD. In 1846 and 1847 Kummer introduced ideal numbers and proved unique factorization into ideal primes for cyclotomic fields. Dedekind extended this in 1871 to show that every nonzero ideal in the domain of integers of an algebraic number field is a unique product of prime ideals, a precursor of the theory of Dedekind domains. Overall, Dedekind's work created the subject of algebraic number theory.
In the 1850s, Riemann introduced the fundamental concept of a Riemann surface. Riemann's methods relied on an assumption he called Dirichlet's principle, which in 1870 was questioned by Weierstrass. Much later, in 1900, Hilbert justified Riemann's approach by developing the direct method in the calculus of variations. In the 1860s and 1870s, Clebsch, Gordan, Brill, and especially M. Noether studied algebraic functions and curves. In particular, Noether studied what conditions were required for a polynomial to be an element of the ideal generated by two algebraic curves in the polynomial ring R[x, y], although Noether did not use this modern language. In 1882 Dedekind and Weber, in analogy with Dedekind's earlier work on algebraic number theory, created a theory of algebraic function fields which allowed the first rigorous definition of a Riemann surface and a rigorous proof of the Riemann–Roch theorem. Kronecker in the 1880s, Hilbert in 1890, Lasker in 1905, and Macaulay in 1913 further investigated the ideals of polynomial rings implicit in E. Noether's work. Lasker proved a special case of the Lasker–Noether theorem, namely that every ideal in a polynomial ring is a finite intersection of primary ideals. Macaulay proved the uniqueness of this decomposition. Overall, this work led to the development of algebraic geometry.
In 1801 Gauss introduced binary quadratic forms over the integers and defined their equivalence. He further defined the discriminant of these forms, which is an invariant of a binary form. Between the 1860s and 1890s invariant theory developed and became a major field of algebra. Cayley, Sylvester, Gordan and others found the Jacobian and the Hessian for binary quartic forms and cubic forms. In 1868 Gordan proved that the graded algebra of invariants of a binary form over the complex numbers was finitely generated, i.e., has a basis. Hilbert wrote a thesis on invariants in 1885 and in 1890 showed that any form of any degree or number of variables has a basis. He extended this further in 1890 to Hilbert's basis theorem.
Once these theories had been developed, it was still several decades until an abstract ring concept emerged. The first axiomatic definition was given by Abraham Fraenkel in 1914. His definition was mainly the standard axioms: a set with two operations addition, which forms a group (not necessarily commutative), and multiplication, which is associative, distributes over addition, and has an identity element. In addition, he had two axioms on "regular elements" inspired by work on the p-adic numbers, which excluded now-common rings such as the ring of integers. These allowed Fraenkel to prove that addition was commutative. Fraenkel's work aimed to transfer Steinitz's 1910 definition of fields over to rings, but it was not connected with the existing work on concrete systems. Masazo Sono's 1917 definition was the first equivalent to the present one.
In 1920, Emmy Noether, in collaboration with W. Schmeidler, published a paper about the theory of ideals in which they defined left and right ideals in a ring. The following year she published a landmark paper called Idealtheorie in Ringbereichen (Ideal theory in rings), analyzing ascending chain conditions with regard to (mathematical) ideals. The publication gave rise to the term "Noetherian ring", and to several other mathematical objects being called Noetherian. Noted algebraist Irving Kaplansky called this work "revolutionary"; results which seemed inextricably connected to properties of polynomial rings were shown to follow from a single axiom. Artin, inspired by Noether's work, came up with the descending chain condition. These definitions marked the birth of abstract ring theory.
=== Early field theory ===
In 1801 Gauss introduced the integers mod p, where p is a prime number. Galois extended this in 1830 to finite fields with p^n elements. In 1871 Richard Dedekind introduced, for a set of real or complex numbers that is closed under the four arithmetic operations, the German word Körper, which means "body" or "corpus" (to suggest an organically closed entity). The English term "field" was introduced by Moore in 1893. In 1881 Leopold Kronecker defined what he called a domain of rationality, which is a field of rational fractions in modern terms. The first clear definition of an abstract field was due to Heinrich Martin Weber in 1893. It was missing the associative law for multiplication, but covered finite fields and the fields of algebraic number theory and algebraic geometry. In 1910 Steinitz synthesized the knowledge of abstract field theory accumulated so far. He axiomatically defined fields with the modern definition, classified them by their characteristic, and proved many theorems commonly seen today.
=== Other major areas ===
Solving systems of linear equations, which led to linear algebra
=== Modern algebra ===
The end of the 19th and the beginning of the 20th century saw a shift in the methodology of mathematics. Abstract algebra emerged around the start of the 20th century, under the name modern algebra. Its study was part of the drive for more intellectual rigor in mathematics. Initially, the assumptions in classical algebra, on which the whole of mathematics (and major parts of the natural sciences) depend, took the form of axiomatic systems. No longer satisfied with establishing properties of concrete objects, mathematicians started to turn their attention to general theory. Formal definitions of certain algebraic structures began to emerge in the 19th century. For example, results about various groups of permutations came to be seen as instances of general theorems that concern a general notion of an abstract group. Questions of structure and classification of various mathematical objects came to the forefront.
These processes were occurring throughout all of mathematics but became especially pronounced in algebra. Formal definitions through primitive operations and axioms were proposed for many basic algebraic structures, such as groups, rings, and fields. Hence such things as group theory and ring theory took their places in pure mathematics. The algebraic investigations of general fields by Ernst Steinitz and of commutative and then general rings by David Hilbert, Emil Artin and Emmy Noether, building on the work of Ernst Kummer, Leopold Kronecker and Richard Dedekind, who had considered ideals in commutative rings, and of Georg Frobenius and Issai Schur, concerning representation theory of groups, came to define abstract algebra. These developments of the last quarter of the 19th century and the first quarter of the 20th century were systematically exposed in Bartel van der Waerden's Moderne Algebra, the two-volume monograph published in 1930–1931 that reoriented the idea of algebra from the theory of equations to the theory of algebraic structures.
== Basic concepts ==
By abstracting away various amounts of detail, mathematicians have defined various algebraic structures that are used in many areas of mathematics. For instance, almost all systems studied are sets, to which the theorems of set theory apply. Those sets that have a certain binary operation defined on them form magmas, to which the concepts concerning magmas, as well as those concerning sets, apply. We can add additional constraints on the algebraic structure, such as associativity (to form semigroups); identity and inverses (to form groups); and other more complex structures. With additional structure, more theorems can be proved, but the generality is reduced. The "hierarchy" of algebraic objects (in terms of generality) creates a hierarchy of the corresponding theories: for instance, the theorems of group theory may be used when studying rings (algebraic objects that have two binary operations with certain axioms) since a ring is a group over one of its operations. In general there is a balance between the amount of generality and the richness of the theory: more general structures usually have fewer nontrivial theorems and fewer applications.
Examples of algebraic structures with a single binary operation are:
Magma
Quasigroup
Monoid
Semigroup
Group
Examples involving several operations include:
== Branches of abstract algebra ==
=== Group theory ===
A group is a set {\displaystyle G} together with a "group product", a binary operation {\displaystyle \cdot :G\times G\rightarrow G}. The group satisfies the following defining axioms (cf. Group (mathematics) § Definition):
Identity: there exists an element {\displaystyle e} such that, for each element {\displaystyle a} in {\displaystyle G}, it holds that {\displaystyle e\cdot a=a\cdot e=a}.
Inverse: for each element {\displaystyle a} of {\displaystyle G}, there exists an element {\displaystyle b} such that {\displaystyle a\cdot b=b\cdot a=e}.
Associativity: for each triplet of elements {\displaystyle a,b,c} in {\displaystyle G}, it holds that {\displaystyle (a\cdot b)\cdot c=a\cdot (b\cdot c)}.
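For a finite example, the three axioms can be checked mechanically. The following brute-force sketch (an illustration added here, not part of the article) verifies them for the integers modulo 6 under addition, and shows that the nonzero residues modulo 6 fail to form a group under multiplication:

```python
# Sketch: brute-force check of the three group axioms for a finite set.
def is_group(elements, op):
    """Return True if (elements, op) satisfies identity, inverse, associativity."""
    # Identity: find e with op(e, a) == a == op(a, e) for all a.
    identity = next((e for e in elements
                     if all(op(e, a) == a == op(a, e) for a in elements)), None)
    if identity is None:
        return False
    # Inverse: every a must have some b with op(a, b) == identity == op(b, a).
    if not all(any(op(a, b) == identity == op(b, a) for b in elements)
               for a in elements):
        return False
    # Associativity: (a*b)*c == a*(b*c) for all triples.
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a in elements for b in elements for c in elements)

n = 6
print(is_group(range(n), lambda a, b: (a + b) % n))       # Z/6Z under addition
print(is_group(range(1, n), lambda a, b: (a * b) % n))    # fails: 2 has no inverse mod 6
```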
=== Ring theory ===
A ring is a set {\displaystyle R} with two binary operations, addition: {\displaystyle (x,y)\mapsto x+y,} and multiplication: {\displaystyle (x,y)\mapsto xy} satisfying the following axioms.
{\displaystyle R} is a commutative group under addition.
{\displaystyle R} is a monoid under multiplication.
Multiplication is distributive with respect to addition.
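These axioms can likewise be verified exhaustively for a small ring. A minimal sketch (an added illustration, assuming Z/6Z with modular arithmetic as the example):

```python
# Sketch: verify the ring axioms for Z/6Z with modular addition and multiplication.
n = 6
R = range(n)
add = lambda a, b: (a + b) % n
mul = lambda a, b: (a * b) % n

# Commutative group under addition: associativity, identity 0, inverses, commutativity.
assert all(add(add(a, b), c) == add(a, add(b, c)) for a in R for b in R for c in R)
assert all(add(a, 0) == a and add(a, (n - a) % n) == 0 for a in R)
assert all(add(a, b) == add(b, a) for a in R for b in R)
# Monoid under multiplication: associativity and identity 1.
assert all(mul(mul(a, b), c) == mul(a, mul(b, c)) for a in R for b in R for c in R)
assert all(mul(a, 1) == a == mul(1, a) for a in R)
# Distributivity on both sides.
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c)) for a in R for b in R for c in R)
assert all(mul(add(a, b), c) == add(mul(a, c), mul(b, c)) for a in R for b in R for c in R)
print("Z/6Z satisfies the ring axioms")
```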
== Applications ==
Because of its generality, abstract algebra is used in many fields of mathematics and science. For instance, algebraic topology uses algebraic objects to study topologies. The Poincaré conjecture, proved in 2003, asserts that the fundamental group of a manifold, which encodes information about connectedness, can be used to determine whether a manifold is a sphere or not. Algebraic number theory studies various number rings that generalize the set of integers. Using tools of algebraic number theory, Andrew Wiles proved Fermat's Last Theorem.
In physics, groups are used to represent symmetry operations, and the usage of group theory could simplify differential equations. In gauge theory, the requirement of local symmetry can be used to deduce the equations describing a system. The groups that describe those symmetries are Lie groups, and the study of Lie groups and Lie algebras reveals much about the physical system; for instance, the number of force carriers in a theory is equal to the dimension of the Lie algebra, and these bosons interact with the force they mediate if the Lie algebra is nonabelian.
== See also ==
Coding theory
Group theory
List of publications in abstract algebra
== References ==
=== Bibliography ===
== Further reading ==
Allenby, R. B. J. T. (1991), Rings, Fields and Groups, Butterworth-Heinemann, ISBN 978-0-340-54440-2
Artin, Michael (1991), Algebra, Prentice Hall, ISBN 978-0-89871-510-1
Burris, Stanley N.; Sankappanavar, H. P. (1999) [1981], A Course in Universal Algebra
Gilbert, Jimmie; Gilbert, Linda (2005), Elements of Modern Algebra, Thomson Brooks/Cole, ISBN 978-0-534-40264-8
Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556
Sethuraman, B. A. (1996), Rings, Fields, Vector Spaces, and Group Theory: An Introduction to Abstract Algebra via Geometric Constructibility, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94848-5
Whitehead, C. (2002), Guide to Abstract Algebra (2nd ed.), Houndmills: Palgrave, ISBN 978-0-333-79447-0
Nicholson, W. Keith (2012), Introduction to Abstract Algebra (4th ed.), John Wiley & Sons, ISBN 978-1-118-13535-8
Durbin, John R. (1992), Modern Algebra: An Introduction, John Wiley & Sons
== External links ==
Charles C. Pinter (1990) [1982] A Book of Abstract Algebra, second edition, from University of Maryland
In mathematics, an integral is the continuous analog of a sum, which is used to calculate areas, volumes, and their generalizations. Integration, the process of computing an integral, is one of the two fundamental operations of calculus, the other being differentiation. Integration was initially used to solve problems in mathematics and physics, such as finding the area under a curve, or determining displacement from velocity. Usage of integration expanded to a wide variety of scientific fields thereafter.
A definite integral computes the signed area of the region in the plane that is bounded by the graph of a given function between two points in the real line. Conventionally, areas above the horizontal axis of the plane are positive while areas below are negative. Integrals also refer to the concept of an antiderivative, a function whose derivative is the given function; in this case, they are also called indefinite integrals. The fundamental theorem of calculus relates definite integration to differentiation and provides a method to compute the definite integral of a function when its antiderivative is known; differentiation and integration are inverse operations.
Although methods of calculating areas and volumes dated from ancient Greek mathematics, the principles of integration were formulated independently by Isaac Newton and Gottfried Wilhelm Leibniz in the late 17th century, who thought of the area under a curve as an infinite sum of rectangles of infinitesimal width. Bernhard Riemann later gave a rigorous definition of integrals, which is based on a limiting procedure that approximates the area of a curvilinear region by breaking the region into infinitesimally thin vertical slabs. In the early 20th century, Henri Lebesgue generalized Riemann's formulation by introducing what is now referred to as the Lebesgue integral; it is more general than Riemann's in the sense that a wider class of functions are Lebesgue-integrable.
Integrals may be generalized depending on the type of the function as well as the domain over which the integration is performed. For example, a line integral is defined for functions of two or more variables, and the interval of integration is replaced by a curve connecting two points in space. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space.
== History ==
=== Pre-calculus integration ===
The first documented systematic technique capable of determining integrals is the method of exhaustion of the ancient Greek astronomer Eudoxus and philosopher Democritus (ca. 370 BC), which sought to find areas and volumes by breaking them up into an infinite number of divisions for which the area or volume was known. This method was further developed and employed by Archimedes in the 3rd century BC and used to calculate the area of a circle, the surface area and volume of a sphere, area of an ellipse, the area under a parabola, the volume of a segment of a paraboloid of revolution, the volume of a segment of a hyperboloid of revolution, and the area of a spiral.
A similar method was independently developed in China around the 3rd century AD by Liu Hui, who used it to find the area of the circle. This method was later used in the 5th century by Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng to find the volume of a sphere.
In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965 – c. 1040 AD) derived a formula for the sum of fourth powers. Alhazen determined the equations to calculate the area enclosed by the curve represented by {\displaystyle y=x^{k}} (which translates to the integral {\displaystyle \int x^{k}\,dx} in contemporary notation), for any given non-negative integer value of {\displaystyle k}. He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid.
The next significant advances in integral calculus did not begin to appear until the 17th century. At this time, the work of Cavalieri with his method of indivisibles, and work by Fermat, began to lay the foundations of modern calculus, with Cavalieri computing the integrals of xn up to degree n = 9 in Cavalieri's quadrature formula. The case n = −1 required the invention of a function, the hyperbolic logarithm, achieved by quadrature of the hyperbola in 1647.
Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers and fractional powers.
=== Leibniz and Newton ===
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Leibniz and Newton. The theorem demonstrates a connection between integration and differentiation. This connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Leibniz and Newton developed. Given the name infinitesimal calculus, it allowed for precise analysis of functions with continuous domains. This framework eventually became modern calculus, whose notation for integrals is drawn directly from the work of Leibniz.
=== Formalization ===
While Newton and Leibniz provided a systematic approach to integration, their work lacked a degree of rigour. Bishop Berkeley memorably attacked the vanishing increments used by Newton, calling them "ghosts of departed quantities". Calculus acquired a firmer footing with the development of limits. Integration was first rigorously formalized, using limits, by Riemann. Although all bounded piecewise continuous functions are Riemann-integrable on a bounded interval, subsequently more general functions were considered—particularly in the context of Fourier analysis—to which Riemann's definition does not apply, and Lebesgue formulated a different definition of integral, founded in measure theory (a subfield of real analysis). Other definitions of integral, extending Riemann's and Lebesgue's approaches, were proposed. These approaches based on the real number system are the ones most common today, but alternative approaches exist, such as a definition of integral as the standard part of an infinite Riemann sum, based on the hyperreal number system.
=== Historical notation ===
The notation for the indefinite integral was introduced by Gottfried Wilhelm Leibniz in 1675. He adapted the integral symbol, ∫, from the letter ſ (long s), standing for summa (written as ſumma; Latin for "sum" or "total"). The modern notation for the definite integral, with limits above and below the integral sign, was first used by Joseph Fourier in Mémoires of the French Academy around 1819–1820, reprinted in his book of 1822.
Isaac Newton used a small vertical bar above a variable to indicate integration, or placed the variable inside a box. The vertical bar was easily confused with ẋ or x′, which are used to indicate differentiation, and the box notation was difficult for printers to reproduce, so these notations were not widely adopted.
=== First use of the term ===
The term was first printed in Latin by Jacob Bernoulli in 1690: "Ergo et horum Integralia aequantur".
== Terminology and notation ==
In general, the integral of a real-valued function f(x) with respect to a real variable x on an interval [a, b] is written as
{\displaystyle \int _{a}^{b}f(x)\,dx.}
The integral sign ∫ represents integration. The symbol dx, called the differential of the variable x, indicates that the variable of integration is x. The function f(x) is called the integrand, the points a and b are called the limits (or bounds) of integration, and the integral is said to be over the interval [a, b], called the interval of integration.
A function is said to be integrable if its integral over its domain is finite. If limits are specified, the integral is called a definite integral.
When the limits are omitted, as in
{\displaystyle \int f(x)\,dx,}
the integral is called an indefinite integral, which represents a class of functions (the antiderivative) whose derivative is the integrand. The fundamental theorem of calculus relates the evaluation of definite integrals to indefinite integrals. There are several extensions of the notation for integrals to encompass integration on unbounded domains and/or in multiple dimensions (see later sections of this article).
In advanced settings, it is not uncommon to leave out dx when only the simple Riemann integral is being used, or the exact type of integral is immaterial. For instance, one might write
{\textstyle \int _{a}^{b}(c_{1}f+c_{2}g)=c_{1}\int _{a}^{b}f+c_{2}\int _{a}^{b}g}
to express the linearity of the integral, a property shared by the Riemann integral and all generalizations thereof.
== Interpretations ==
Integrals appear in many practical situations. For instance, from the length, width and depth of a swimming pool which is rectangular with a flat bottom, one can determine the volume of water it can contain, the area of its surface, and the length of its edge. But if it is oval with a rounded bottom, integrals are required to find exact and rigorous values for these quantities. In each case, one may divide the sought quantity into infinitely many infinitesimal pieces, then sum the pieces to achieve an accurate approximation.
As another example, to find the area of the region bounded by the graph of the function f(x) = {\textstyle {\sqrt {x}}} between x = 0 and x = 1, one can divide the interval into five pieces (0, 1/5, 2/5, ..., 1), then construct rectangles using the right end height of each piece (thus √0, √1/5, √2/5, ..., √1) and sum their areas to get the approximation
{\displaystyle \textstyle {\sqrt {\frac {1}{5}}}\left({\frac {1}{5}}-0\right)+{\sqrt {\frac {2}{5}}}\left({\frac {2}{5}}-{\frac {1}{5}}\right)+\cdots +{\sqrt {\frac {5}{5}}}\left({\frac {5}{5}}-{\frac {4}{5}}\right)\approx 0.7497,}
which is larger than the exact value. Alternatively, when replacing these subintervals by ones with the left end height of each piece, the approximation one gets is too low: with twelve such subintervals the approximated area is only 0.6203. However, when the number of pieces increases to infinity, it will reach a limit which is the exact value of the area sought (in this case, 2/3). One writes
{\displaystyle \int _{0}^{1}{\sqrt {x}}\,dx={\frac {2}{3}},}
which means 2/3 is the result of a weighted sum of function values, √x, multiplied by the infinitesimal step widths, denoted by dx, on the interval [0, 1].
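The rectangle sums described above are easy to reproduce. The following sketch (an added illustration; the function `riemann_sum` is an assumed helper, not from the article) computes the right-endpoint sum with five pieces, the left-endpoint sum with twelve pieces, and a fine right-endpoint sum approaching 2/3:

```python
import math

# Right- and left-endpoint Riemann sums for f on [a, b] with n equal pieces.
def riemann_sum(f, a, b, n, endpoint="right"):
    width = (b - a) / n
    if endpoint == "right":
        xs = (a + i * width for i in range(1, n + 1))
    else:
        xs = (a + i * width for i in range(n))
    return sum(f(x) * width for x in xs)

f = math.sqrt
print(round(riemann_sum(f, 0, 1, 5, "right"), 4))       # 0.7497, an overestimate
print(round(riemann_sum(f, 0, 1, 12, "left"), 4))       # 0.6203, an underestimate
print(round(riemann_sum(f, 0, 1, 100000, "right"), 4))  # approaches 2/3
```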
== Formal definitions ==
There are many ways of formally defining an integral, not all of which are equivalent. The differences exist mostly to deal with differing special cases which may not be integrable under other definitions, but are also occasionally for pedagogical reasons. The most commonly used definitions are Riemann integrals and Lebesgue integrals.
=== Riemann integral ===
The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. A tagged partition of a closed interval [a, b] on the real line is a finite sequence
{\displaystyle a=x_{0}\leq t_{1}\leq x_{1}\leq t_{2}\leq x_{2}\leq \cdots \leq x_{n-1}\leq t_{n}\leq x_{n}=b.}
This partitions the interval [a, b] into n sub-intervals [xi−1, xi] indexed by i, each of which is "tagged" with a specific point ti ∈ [xi−1, xi]. A Riemann sum of a function f with respect to such a tagged partition is defined as
{\displaystyle \sum _{i=1}^{n}f(t_{i})\,\Delta _{i};}
thus each term of the sum is the area of a rectangle with height equal to the function value at the chosen point of the given sub-interval, and width the same as the width of the sub-interval, Δi = xi−xi−1. The mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, maxi=1...n Δi. The Riemann integral of a function f over the interval [a, b] is equal to S if:
For all {\displaystyle \varepsilon >0} there exists {\displaystyle \delta >0} such that, for any tagged partition of {\displaystyle [a,b]} with mesh less than {\displaystyle \delta },
{\displaystyle \left|S-\sum _{i=1}^{n}f(t_{i})\,\Delta _{i}\right|<\varepsilon .}
When the chosen tags are the maximum (respectively, minimum) value of the function in each interval, the Riemann sum becomes an upper (respectively, lower) Darboux sum, suggesting the close connection between the Riemann integral and the Darboux integral.
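For a monotone function, the upper and lower Darboux tag choices are simply the right and left endpoints of each sub-interval. A short sketch (an added illustration, assuming a uniform partition and the increasing function √x on [0, 1]) shows the exact integral squeezed between the two Darboux sums, with the gap shrinking as the mesh goes to zero:

```python
import math

# Riemann sum over a tagged partition: sum f(t_i) * (x_i - x_{i-1}).
def tagged_riemann_sum(f, points, tags):
    return sum(f(t) * (x1 - x0) for x0, x1, t in zip(points, points[1:], tags))

f = math.sqrt                       # increasing on [0, 1]
n = 1000
points = [i / n for i in range(n + 1)]
upper = tagged_riemann_sum(f, points, points[1:])    # max of f on each sub-interval
lower = tagged_riemann_sum(f, points, points[:-1])   # min of f on each sub-interval
print(lower <= 2 / 3 <= upper)    # the exact integral lies between the Darboux sums
print(round(upper - lower, 6))    # gap = (f(1) - f(0)) / n, shrinking with the mesh
```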
=== Lebesgue integral ===
It is often of interest, both in theory and applications, to be able to pass to the limit under the integral. For instance, a sequence of functions can frequently be constructed that approximate, in a suitable sense, the solution to a problem. Then the integral of the solution function should be the limit of the integrals of the approximations. However, many functions that can be obtained as limits are not Riemann-integrable, and so such limit theorems do not hold with the Riemann integral. Therefore, it is of great importance to have a definition of the integral that allows a wider class of functions to be integrated.
Such an integral is the Lebesgue integral, that exploits the following fact to enlarge the class of integrable functions: if the values of a function are rearranged over the domain, the integral of a function should remain the same. Thus Henri Lebesgue introduced the integral bearing his name, explaining this integral thus in a letter to Paul Montel:
I have to pay a certain sum, which I have collected in my pocket. I take the bills and coins out of my pocket and give them to the creditor in the order I find them until I have reached the total sum. This is the Riemann integral. But I can proceed differently. After I have taken all the money out of my pocket I order the bills and coins according to identical values and then I pay the several heaps one after the other to the creditor. This is my integral.
As Folland puts it, "To compute the Riemann integral of f, one partitions the domain [a, b] into subintervals", while in the Lebesgue integral, "one is in effect partitioning the range of f ". The definition of the Lebesgue integral thus begins with a measure, μ. In the simplest case, the Lebesgue measure μ(A) of an interval A = [a, b] is its width, b − a, so that the Lebesgue integral agrees with the (proper) Riemann integral when both exist. In more complicated cases, the sets being measured can be highly fragmented, with no continuity and no resemblance to intervals.
Using the "partitioning the range of f " philosophy, the integral of a non-negative function f : R → R should be the sum over t of the areas of the thin horizontal strips between y = t and y = t + dt. This area is just μ{ x : f(x) > t} dt. Let f∗(t) = μ{ x : f(x) > t }. The Lebesgue integral of f is then defined by
{\displaystyle \int f=\int _{0}^{\infty }f^{*}(t)\,dt}
where the integral on the right is an ordinary improper Riemann integral (f∗ is a non-negative, non-increasing function, and therefore has a well-defined improper Riemann integral). For a suitable class of functions (the measurable functions) this defines the Lebesgue integral.
A general measurable function f is Lebesgue-integrable if the sum of the absolute values of the areas of the regions between the graph of f and the x-axis is finite:
{\displaystyle \int _{E}|f|\,d\mu <+\infty .}
In that case, the integral is, as in the Riemannian case, the difference between the area above the x-axis and the area below the x-axis:
{\displaystyle \int _{E}f\,d\mu =\int _{E}f^{+}\,d\mu -\int _{E}f^{-}\,d\mu }
where
{\displaystyle {\begin{alignedat}{3}&f^{+}(x)&&{}={}\max\{f(x),0\}&&{}={}{\begin{cases}f(x),&{\text{if }}f(x)>0,\\0,&{\text{otherwise,}}\end{cases}}\\&f^{-}(x)&&{}={}\max\{-f(x),0\}&&{}={}{\begin{cases}-f(x),&{\text{if }}f(x)<0,\\0,&{\text{otherwise.}}\end{cases}}\end{alignedat}}}
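The "partitioning the range" formula can be illustrated numerically. In the sketch below (an added illustration under assumed choices), f(x) = √x on [0, 1], so the layer function is μ{x ∈ [0,1] : √x > t} = 1 − t² for 0 ≤ t < 1 and 0 afterwards; integrating it over t recovers the same value 2/3 as the Riemann integral of f:

```python
# Layer-cake sketch: the Lebesgue integral of a non-negative f equals the
# ordinary integral of f*(t) = mu{x : f(x) > t} over t in [0, infinity).
# Here f(x) = sqrt(x) on [0, 1], so f*(t) = 1 - t**2 for 0 <= t < 1, else 0.
def f_star(t):
    return max(0.0, 1.0 - t * t)

# Midpoint rule for the Riemann integral of f* over [0, 1] (f* vanishes beyond 1).
n = 100000
h = 1.0 / n
lebesgue = sum(f_star((i + 0.5) * h) * h for i in range(n))
print(round(lebesgue, 6))   # agrees with the Riemann value 2/3
```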
=== Other integrals ===
Although the Riemann and Lebesgue integrals are the most widely used definitions of the integral, a number of others exist, including:
The Darboux integral, which is defined by Darboux sums (restricted Riemann sums) yet is equivalent to the Riemann integral. A function is Darboux-integrable if and only if it is Riemann-integrable. Darboux integrals have the advantage of being easier to define than Riemann integrals.
The Riemann–Stieltjes integral, an extension of the Riemann integral which integrates with respect to a function as opposed to a variable.
The Lebesgue–Stieltjes integral, further developed by Johann Radon, which generalizes both the Riemann–Stieltjes and Lebesgue integrals.
The Daniell integral, which subsumes the Lebesgue integral and Lebesgue–Stieltjes integral without depending on measures.
The Haar integral, used for integration on locally compact topological groups, introduced by Alfréd Haar in 1933.
The Henstock–Kurzweil integral, variously defined by Arnaud Denjoy, Oskar Perron, and (most elegantly, as the gauge integral) Jaroslav Kurzweil, and developed by Ralph Henstock.
The Khinchin integral, named after Aleksandr Khinchin.
The Itô integral and Stratonovich integral, which define integration with respect to semimartingales such as Brownian motion.
The Young integral, which is a kind of Riemann–Stieltjes integral with respect to certain functions of unbounded variation.
The rough path integral, which is defined for functions equipped with some additional "rough path" structure and generalizes stochastic integration against both semimartingales and processes such as the fractional Brownian motion.
The Choquet integral, a subadditive or superadditive integral created by the French mathematician Gustave Choquet in 1953.
The Bochner integral, a generalization of the Lebesgue integral to functions that take values in a Banach space.
== Properties ==
=== Linearity ===
The collection of Riemann-integrable functions on a closed interval [a, b] forms a vector space under the operations of pointwise addition and multiplication by a scalar, and the operation of integration
{\displaystyle f\mapsto \int _{a}^{b}f(x)\;dx}
is a linear functional on this vector space. Thus, the collection of integrable functions is closed under taking linear combinations, and the integral of a linear combination is the linear combination of the integrals:
{\displaystyle \int _{a}^{b}(\alpha f+\beta g)(x)\,dx=\alpha \int _{a}^{b}f(x)\,dx+\beta \int _{a}^{b}g(x)\,dx.}
Similarly, the set of real-valued Lebesgue-integrable functions on a given measure space E with measure μ is closed under taking linear combinations and hence form a vector space, and the Lebesgue integral
{\displaystyle f\mapsto \int _{E}f\,d\mu }
is a linear functional on this vector space, so that:
{\displaystyle \int _{E}(\alpha f+\beta g)\,d\mu =\alpha \int _{E}f\,d\mu +\beta \int _{E}g\,d\mu .}
More generally, consider the vector space of all measurable functions on a measure space (E,μ), taking values in a locally compact complete topological vector space V over a locally compact topological field K, f : E → V. Then one may define an abstract integration map assigning to each function f an element of V or the symbol ∞,
{\displaystyle f\mapsto \int _{E}f\,d\mu ,}
that is compatible with linear combinations. In this situation, the linearity holds for the subspace of functions whose integral is an element of V (i.e. "finite"). The most important special cases arise when K is R, C, or a finite extension of the field Qp of p-adic numbers, and V is a finite-dimensional vector space over K, and when K = C and V is a complex Hilbert space.
Linearity, together with some natural continuity properties and normalization for a certain class of "simple" functions, may be used to give an alternative definition of the integral. This is the approach of Daniell for the case of real-valued functions on a set X, generalized by Nicolas Bourbaki to functions with values in a locally compact topological vector space. See Hildebrandt 1953 for an axiomatic characterization of the integral.
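Linearity is easy to observe numerically. The sketch below (an added illustration; the midpoint-rule `integrate` is an assumed helper, not from the text) checks that the integral of αf + βg equals α∫f + β∫g:

```python
import math

# Midpoint-rule approximation of the integral of f over [a, b].
def integrate(f, a, b, n=100000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * h for i in range(n))

f, g = math.sin, math.cos
alpha, beta = 2.0, -3.0
lhs = integrate(lambda x: alpha * f(x) + beta * g(x), 0, 1)
rhs = alpha * integrate(f, 0, 1) + beta * integrate(g, 0, 1)
print(abs(lhs - rhs) < 1e-9)   # integration acts as a linear functional
```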
=== Inequalities ===
A number of general inequalities hold for Riemann-integrable functions defined on a closed and bounded interval [a, b] and can be generalized to other notions of integral (Lebesgue and Daniell).
Upper and lower bounds. An integrable function f on [a, b] is necessarily bounded on that interval. Thus there are real numbers m and M so that m ≤ f (x) ≤ M for all x in [a, b]. Since the lower and upper sums of f over [a, b] are therefore bounded by, respectively, m(b − a) and M(b − a), it follows that
{\displaystyle m(b-a)\leq \int _{a}^{b}f(x)\,dx\leq M(b-a).}
Inequalities between functions. If f(x) ≤ g(x) for each x in [a, b] then each of the upper and lower sums of f is bounded above by the upper and lower sums, respectively, of g. Thus
{\displaystyle \int _{a}^{b}f(x)\,dx\leq \int _{a}^{b}g(x)\,dx.}
This is a generalization of the above inequalities, as M(b − a) is the integral of the constant function with value M over [a, b]. In addition, if the inequality between functions is strict, then the inequality between integrals is also strict. That is, if f(x) < g(x) for each x in [a, b], then
{\displaystyle \int _{a}^{b}f(x)\,dx<\int _{a}^{b}g(x)\,dx.}
Subintervals. If [c, d] is a subinterval of [a, b] and f (x) is non-negative for all x, then
{\displaystyle \int _{c}^{d}f(x)\,dx\leq \int _{a}^{b}f(x)\,dx.}
Products and absolute values of functions. If f and g are two functions, then we may consider their pointwise products and powers, and absolute values:
{\displaystyle (fg)(x)=f(x)g(x),\;f^{2}(x)=(f(x))^{2},\;|f|(x)=|f(x)|.}
If f is Riemann-integrable on [a, b] then the same is true for |f|, and
{\displaystyle \left|\int _{a}^{b}f(x)\,dx\right|\leq \int _{a}^{b}|f(x)|\,dx.}
Moreover, if f and g are both Riemann-integrable then fg is also Riemann-integrable, and
{\displaystyle \left(\int _{a}^{b}(fg)(x)\,dx\right)^{2}\leq \left(\int _{a}^{b}f(x)^{2}\,dx\right)\left(\int _{a}^{b}g(x)^{2}\,dx\right).}
This inequality, known as the Cauchy–Schwarz inequality, plays a prominent role in Hilbert space theory, where the left hand side is interpreted as the inner product of two square-integrable functions f and g on the interval [a, b].
Hölder's inequality. Suppose that p and q are two real numbers, 1 ≤ p, q ≤ ∞ with 1/p + 1/q = 1, and f and g are two Riemann-integrable functions. Then the functions |f|p and |g|q are also integrable and the following Hölder's inequality holds:
{\displaystyle \left|\int f(x)g(x)\,dx\right|\leq \left(\int \left|f(x)\right|^{p}\,dx\right)^{1/p}\left(\int \left|g(x)\right|^{q}\,dx\right)^{1/q}.}
For p = q = 2, Hölder's inequality becomes the Cauchy–Schwarz inequality.
Minkowski inequality. Suppose that p ≥ 1 is a real number and f and g are Riemann-integrable functions. Then | f |p, | g |p and | f + g |p are also Riemann-integrable and the following Minkowski inequality holds:
{\displaystyle \left(\int \left|f(x)+g(x)\right|^{p}\,dx\right)^{1/p}\leq \left(\int \left|f(x)\right|^{p}\,dx\right)^{1/p}+\left(\int \left|g(x)\right|^{p}\,dx\right)^{1/p}.}
An analogue of this inequality for the Lebesgue integral is used in the construction of Lp spaces.
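These inequalities can be spot-checked numerically. The sketch below (an added illustration; the midpoint-rule `integrate` and the particular functions are arbitrary assumed choices) verifies Cauchy–Schwarz, Hölder with p = 3 and q = 3/2, and Minkowski with p = 2 on [0, 1]:

```python
import math

# Midpoint-rule approximation of the integral of f over [a, b].
def integrate(f, a=0.0, b=1.0, n=20000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * h for i in range(n))

f = lambda x: math.sin(3 * x) + 0.5
g = lambda x: math.exp(-x)

# Cauchy-Schwarz: (int fg)^2 <= (int f^2)(int g^2)
assert integrate(lambda x: f(x) * g(x)) ** 2 <= \
       integrate(lambda x: f(x) ** 2) * integrate(lambda x: g(x) ** 2)

# Hoelder with p = 3, q = 3/2 (so 1/p + 1/q = 1)
p, q = 3.0, 1.5
assert abs(integrate(lambda x: f(x) * g(x))) <= \
       integrate(lambda x: abs(f(x)) ** p) ** (1 / p) * \
       integrate(lambda x: abs(g(x)) ** q) ** (1 / q)

# Minkowski with p = 2: ||f + g|| <= ||f|| + ||g||
norm = lambda h_: integrate(lambda x: abs(h_(x)) ** 2) ** 0.5
assert norm(lambda x: f(x) + g(x)) <= norm(f) + norm(g)
print("all three inequalities hold numerically")
```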
=== Conventions ===
In this section, f is a real-valued Riemann-integrable function. The integral
{\displaystyle \int _{a}^{b}f(x)\,dx}
over an interval [a, b] is defined if a < b. This means that the upper and lower sums of the function f are evaluated on a partition a = x0 ≤ x1 ≤ . . . ≤ xn = b whose values xi are increasing. Geometrically, this signifies that integration takes place "left to right", evaluating f within intervals [xi, xi+1] where an interval with a higher index lies to the right of one with a lower index. The values a and b, the end-points of the interval, are called the limits of integration of f. Integrals can also be defined if a > b:
{\displaystyle \int _{a}^{b}f(x)\,dx=-\int _{b}^{a}f(x)\,dx.}
With a = b, this implies:
{\displaystyle \int _{a}^{a}f(x)\,dx=0.}
The first convention is necessary in consideration of taking integrals over subintervals of [a, b]; the second says that an integral taken over a degenerate interval, or a point, should be zero. One reason for the first convention is that the integrability of f on an interval [a, b] implies that f is integrable on any subinterval [c, d], but in particular integrals have the property that if c is any element of [a, b], then:
{\displaystyle \int _{a}^{b}f(x)\,dx=\int _{a}^{c}f(x)\,dx+\int _{c}^{b}f(x)\,dx.}
With the first convention, the resulting relation
{\displaystyle {\begin{aligned}\int _{a}^{c}f(x)\,dx&{}=\int _{a}^{b}f(x)\,dx-\int _{c}^{b}f(x)\,dx\\&{}=\int _{a}^{b}f(x)\,dx+\int _{b}^{c}f(x)\,dx\end{aligned}}}
is then well-defined for any cyclic permutation of a, b, and c.
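These conventions can be confirmed numerically. The sketch below (an added illustration; the midpoint-rule `integrate` is an assumed helper whose step h is simply negative when a > b) checks sign reversal, the degenerate interval, and additivity over subintervals:

```python
import math

# Midpoint rule; h is negative when a > b, which flips the sign automatically.
def integrate(f, a, b, n=50000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * h for i in range(n))

f = math.cos
assert abs(integrate(f, 0, 2) + integrate(f, 2, 0)) < 1e-9   # reversed limits flip the sign
assert integrate(f, 1, 1) == 0                               # degenerate interval
assert abs(integrate(f, 0, 2) -
           (integrate(f, 0, 0.7) + integrate(f, 0.7, 2))) < 1e-6
print("orientation and additivity conventions verified")
```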
== Fundamental theorem of calculus ==
The fundamental theorem of calculus is the statement that differentiation and integration are inverse operations: if a continuous function is first integrated and then differentiated, the original function is retrieved. An important consequence, sometimes called the second fundamental theorem of calculus, allows one to compute integrals by using an antiderivative of the function to be integrated.
=== First theorem ===
Let f be a continuous real-valued function defined on a closed interval [a, b]. Let F be the function defined, for all x in [a, b], by
{\displaystyle F(x)=\int _{a}^{x}f(t)\,dt.}
Then, F is continuous on [a, b], differentiable on the open interval (a, b), and
{\displaystyle F'(x)=f(x)}
for all x in (a, b).
=== Second theorem ===
Let f be a real-valued function defined on a closed interval [a, b] that admits an antiderivative F on [a, b]. That is, f and F are functions such that for all x in [a, b],
{\displaystyle f(x)=F'(x).}
If f is integrable on [a, b] then
{\displaystyle \int _{a}^{b}f(x)\,dx=F(b)-F(a).}
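Both theorems can be checked numerically for f = cos, whose antiderivative is sin. The sketch below (an added illustration; `integrate` is an assumed midpoint-rule helper, and the derivative is estimated by a central difference) verifies the second theorem directly and the first theorem at x = 1:

```python
import math

# Midpoint-rule approximation of the integral of f over [a, b].
def integrate(f, a, b, n=100000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * h for i in range(n))

a, b = 0.0, 2.0

# Second theorem: int_a^b f = F(b) - F(a) with F = sin.
assert abs(integrate(math.cos, a, b) - (math.sin(b) - math.sin(a))) < 1e-8

# First theorem: d/dx int_a^x f(t) dt = f(x), checked by central difference at x = 1.
x, eps = 1.0, 1e-4
F = lambda u: integrate(math.cos, a, u)
numeric_derivative = (F(x + eps) - F(x - eps)) / (2 * eps)
assert abs(numeric_derivative - math.cos(x)) < 1e-4
print("fundamental theorem verified numerically")
```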
== Extensions ==
=== Improper integrals ===
A "proper" Riemann integral assumes the integrand is defined and finite on a closed and bounded interval, bracketed by the limits of integration. An improper integral occurs when one or more of these conditions is not satisfied. In some cases such integrals may be defined by considering the limit of a sequence of proper Riemann integrals on progressively larger intervals.
If the interval is unbounded, for instance at its upper end, then the improper integral is the limit as that endpoint goes to infinity:
{\displaystyle \int _{a}^{\infty }f(x)\,dx=\lim _{b\to \infty }\int _{a}^{b}f(x)\,dx.}
If the integrand is only defined or finite on a half-open interval, for instance (a, b], then again a limit may provide a finite result:
{\displaystyle \int _{a}^{b}f(x)\,dx=\lim _{\varepsilon \to 0}\int _{a+\varepsilon }^{b}f(x)\,dx.}
That is, the improper integral is the limit of proper integrals as one endpoint of the interval of integration approaches either a specified real number, or ∞, or −∞. In more complicated cases, limits are required at both endpoints, or at interior points.
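The half-open case can be illustrated numerically with f(x) = 1/√x on (0, 1], whose improper integral equals 2. A sketch (the proper piece over [ε, 1] has exact value 2(1 − √ε), which tends to 2 as ε → 0):

```python
import math

# The improper integral of 1/sqrt(x) over (0, 1] as a limit of proper
# integrals over [eps, 1]; the exact value of each proper piece is
# 2*(1 - sqrt(eps)), which tends to 2 as eps -> 0.
def proper_integral(eps, n=100_000):
    h = (1.0 - eps) / n
    # midpoint rule, which avoids evaluating at the singular endpoint
    return sum(1.0 / math.sqrt(eps + (i + 0.5) * h) for i in range(n)) * h

for eps in (1e-2, 1e-3, 1e-4):
    print(eps, proper_integral(eps))   # values approach 2
```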
=== Multiple integration ===
Just as the definite integral of a positive function of one variable represents the area of the region between the graph of the function and the x-axis, the double integral of a positive function of two variables represents the volume of the region between the surface defined by the function and the plane that contains its domain. For example, a function in two dimensions depends on two real variables, x and y, and the integral of a function f over the rectangle R given as the Cartesian product of two intervals
{\displaystyle R=[a,b]\times [c,d]}
can be written
{\displaystyle \int _{R}f(x,y)\,dA}
where the differential dA indicates that integration is taken with respect to area. This double integral can be defined using Riemann sums, and represents the (signed) volume under the graph of z = f(x,y) over the domain R. Under suitable conditions (e.g., if f is continuous), Fubini's theorem states that this integral can be expressed as an equivalent iterated integral
{\displaystyle \int _{a}^{b}\left[\int _{c}^{d}f(x,y)\,dy\right]\,dx.}
This reduces the problem of computing a double integral to computing one-dimensional integrals. Because of this, another notation for the integral over R uses a double integral sign:
{\displaystyle \iint _{R}f(x,y)\,dA.}
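The reduction of a double integral to iterated one-dimensional integrals can be sketched directly. Here is a minimal Python example (the function f(x, y) = xy over R = [0, 1] × [0, 2] is a hypothetical choice, with exact value 1):

```python
# Iterated integral sketch: integrate f(x, y) = x*y over R = [0,1] x [0,2]
# as the outer integral in x of the inner integral in y (Fubini's theorem).
def integrate(g, a, b, n=200):
    """Midpoint rule for a one-dimensional integral."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x, y: x * y
inner = lambda x: integrate(lambda y: f(x, y), 0.0, 2.0)
double = integrate(inner, 0.0, 1.0)

print(double)  # exact value is (1/2) * (4/2) = 1
```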
Integration over more general domains is possible. The integral of a function f, with respect to volume, over an n-dimensional region D of
{\displaystyle \mathbb {R} ^{n}}
is denoted by symbols such as:
{\displaystyle \int _{D}f(\mathbf {x} )d^{n}\mathbf {x} \ =\int _{D}f\,dV.}
=== Line integrals and surface integrals ===
The concept of an integral can be extended to more general domains of integration, such as curved lines and surfaces inside higher-dimensional spaces. Such integrals are known as line integrals and surface integrals respectively. These have important applications in physics, as when dealing with vector fields.
A line integral (sometimes called a path integral) is an integral where the function to be integrated is evaluated along a curve. Various different line integrals are in use. In the case of a closed curve it is also called a contour integral.
The function to be integrated may be a scalar field or a vector field. The value of the line integral is the sum of values of the field at all points on the curve, weighted by some scalar function on the curve (commonly arc length or, for a vector field, the scalar product of the vector field with a differential vector in the curve). This weighting distinguishes the line integral from simpler integrals defined on intervals. Many simple formulas in physics have natural continuous analogs in terms of line integrals; for example, the fact that work is equal to force, F, multiplied by displacement, s, may be expressed (in terms of vector quantities) as:
{\displaystyle W=\mathbf {F} \cdot \mathbf {s} .}
For an object moving along a path C in a vector field F such as an electric field or gravitational field, the total work done by the field on the object is obtained by summing up the differential work done in moving from s to s + ds. This gives the line integral
{\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {s} .}
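A work line integral can be sketched by parametrizing the curve and summing F · ds over small steps. A minimal Python example with the hypothetical field F(x, y) = (−y, x) around the unit circle, for which the integrand is identically 1 and W = 2π:

```python
import math

# Work line integral W = integral over C of F . ds for F(x, y) = (-y, x)
# along the unit circle r(t) = (cos t, sin t), t in [0, 2*pi].
# Analytically F(r(t)) . r'(t) = sin^2 t + cos^2 t = 1, so W = 2*pi.
def work(n=100_000):
    dt = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        x, y = math.cos(t), math.sin(t)
        dx, dy = -math.sin(t) * dt, math.cos(t) * dt   # r'(t) dt
        total += (-y) * dx + x * dy                    # F . ds
    return total

print(work(), 2 * math.pi)
```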
A surface integral generalizes double integrals to integration over a surface (which may be a curved set in space); it can be thought of as the double integral analog of the line integral. The function to be integrated may be a scalar field or a vector field. The value of the surface integral is the sum of the field at all points on the surface. This can be achieved by splitting the surface into surface elements, which provide the partitioning for Riemann sums.
For an example of applications of surface integrals, consider a vector field v on a surface S; that is, for each point x in S, v(x) is a vector. Imagine that a fluid flows through S, such that v(x) determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through S in unit amount of time. To find the flux, one needs to take the dot product of v with the unit surface normal to S at each point, which gives a scalar field that is then integrated over the surface:
{\displaystyle \int _{S}{\mathbf {v} }\cdot \,d{\mathbf {S} }.}
The fluid flux in this example may be from a physical fluid such as water or air, or from electrical or magnetic flux. Thus surface integrals have applications in physics, particularly with the classical theory of electromagnetism.
=== Contour integrals ===
In complex analysis, the integrand is a complex-valued function of a complex variable z instead of a real function of a real variable x. When a complex function is integrated along a curve
{\displaystyle \gamma }
in the complex plane, the integral is denoted as follows
{\displaystyle \int _{\gamma }f(z)\,dz.}
This is known as a contour integral.
=== Integrals of differential forms ===
A differential form is a mathematical concept in the fields of multivariable calculus, differential topology, and tensors. Differential forms are organized by degree. For example, a one-form is a weighted sum of the differentials of the coordinates, such as:
{\displaystyle E(x,y,z)\,dx+F(x,y,z)\,dy+G(x,y,z)\,dz}
where E, F, G are functions in three dimensions. A differential one-form can be integrated over an oriented path, and the resulting integral is just another way of writing a line integral. Here the basic differentials dx, dy, dz measure infinitesimal oriented lengths parallel to the three coordinate axes.
A differential two-form is a sum of the form
{\displaystyle G(x,y,z)\,dx\wedge dy+E(x,y,z)\,dy\wedge dz+F(x,y,z)\,dz\wedge dx.}
Here the basic two-forms
{\displaystyle dx\wedge dy,dz\wedge dx,dy\wedge dz}
measure oriented areas parallel to the coordinate two-planes. The symbol
{\displaystyle \wedge }
denotes the wedge product, which is similar to the cross product in the sense that the wedge product of two forms representing oriented lengths represents an oriented area. A two-form can be integrated over an oriented surface, and the resulting integral is equivalent to the surface integral giving the flux of
{\displaystyle E\mathbf {i} +F\mathbf {j} +G\mathbf {k} }.
Unlike the cross product and three-dimensional vector calculus, the wedge product and the calculus of differential forms make sense in arbitrary dimension and on more general manifolds (curves, surfaces, and their higher-dimensional analogs). The exterior derivative plays the role of the gradient and curl of vector calculus, and Stokes' theorem simultaneously generalizes the three theorems of vector calculus: the divergence theorem, Green's theorem, and the Kelvin–Stokes theorem.
=== Summations ===
The discrete equivalent of integration is summation. Summations and integrals can be put on the same foundations using the theory of Lebesgue integrals or time-scale calculus.
=== Functional integrals ===
An integration that is performed not over a variable (or, in physics, over a space or time dimension), but over a space of functions, is referred to as a functional integral.
== Applications ==
Integrals are used extensively in many areas. For example, in probability theory, integrals are used to determine the probability of some random variable falling within a certain range. Moreover, the integral under an entire probability density function must equal 1, which provides a test of whether a function with no negative values could be a density function or not.
Integrals can be used for computing the area of a two-dimensional region that has a curved boundary, as well as computing the volume of a three-dimensional object that has a curved boundary. The area of a two-dimensional region can be calculated using the aforementioned definite integral. The volume of a three-dimensional object such as a disc or washer can be computed by disc integration using the equation for the volume of a cylinder,
{\displaystyle \pi r^{2}h}, where {\displaystyle r} is the radius. In the case of a simple disc created by rotating a curve about the x-axis, the radius is given by f(x), and its height is the differential dx. Using an integral with bounds a and b, the volume of the disc is equal to:
{\displaystyle \pi \int _{a}^{b}f^{2}(x)\,dx.}
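Disc integration can be checked on a shape with a known volume. A minimal sketch: rotating f(x) = √(1 − x²) about the x-axis sweeps out the unit sphere, so the formula above should give 4π/3:

```python
import math

# Disc integration: volume = pi * integral of f(x)^2 dx over [a, b],
# evaluated by the midpoint rule.
def disc_volume(f, a, b, n=100_000):
    h = (b - a) / n
    return math.pi * sum(f(a + (i + 0.5) * h) ** 2 for i in range(n)) * h

f = lambda x: math.sqrt(1 - x * x)   # profile of the unit sphere
vol = disc_volume(f, -1.0, 1.0)
print(vol, 4 * math.pi / 3)
```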
Integrals are also used in physics, in areas like kinematics to find quantities like displacement, time, and velocity. For example, in rectilinear motion, the displacement of an object over the time interval
{\displaystyle [a,b]}
is given by
{\displaystyle x(b)-x(a)=\int _{a}^{b}v(t)\,dt,}
where {\displaystyle v(t)} is the velocity expressed as a function of time. The work done by a force {\displaystyle F(x)} (given as a function of position) from an initial position {\displaystyle A} to a final position {\displaystyle B} is:
{\displaystyle W_{A\rightarrow B}=\int _{A}^{B}F(x)\,dx.}
Integrals are also used in thermodynamics, where thermodynamic integration is used to calculate the difference in free energy between two given states.
== Computation ==
=== Analytical ===
The most basic technique for computing definite integrals of one real variable is based on the fundamental theorem of calculus. Let f(x) be the function of x to be integrated over a given interval [a, b]. Then, find an antiderivative of f; that is, a function F such that F′ = f on the interval. Provided the integrand and integral have no singularities on the path of integration, by the fundamental theorem of calculus,
{\displaystyle \int _{a}^{b}f(x)\,dx=F(b)-F(a).}
Sometimes it is necessary to use one of the many techniques that have been developed to evaluate integrals. Most of these techniques rewrite one integral as a different one which is hopefully more tractable. Techniques include integration by substitution, integration by parts, integration by trigonometric substitution, and integration by partial fractions.
Alternative methods exist to compute more complex integrals. Many nonelementary integrals can be expanded in a Taylor series and integrated term by term. Occasionally, the resulting infinite series can be summed analytically. The method of convolution using Meijer G-functions can also be used, assuming that the integrand can be written as a product of Meijer G-functions. There are also many less common ways of calculating definite integrals; for instance, Parseval's identity can be used to transform an integral over a rectangular region into an infinite sum. Occasionally, an integral can be evaluated by a trick; for an example of this, see Gaussian integral.
Computations of volumes of solids of revolution can usually be done with disk integration or shell integration.
Specific results which have been worked out by various techniques are collected in the list of integrals.
=== Symbolic ===
Many problems in mathematics, physics, and engineering involve integration where an explicit formula for the integral is desired. Extensive tables of integrals have been compiled and published over the years for this purpose. With the spread of computers, many professionals, educators, and students have turned to computer algebra systems that are specifically designed to perform difficult or tedious tasks, including integration. Symbolic integration has been one of the motivations for the development of the first such systems, like Macsyma and Maple.
A major mathematical difficulty in symbolic integration is that, in many cases, a relatively simple function does not have an integral that can be expressed in closed form involving only elementary functions, including rational and exponential functions, the logarithm, trigonometric and inverse trigonometric functions, and the operations of multiplication and composition. The Risch algorithm provides a general criterion to determine whether the antiderivative of an elementary function is elementary and to compute the integral if it is. However, functions with closed-form antiderivatives are the exception, and consequently computer algebra systems have no hope of being able to find an antiderivative for a randomly constructed elementary function. On the positive side, if the 'building blocks' for antiderivatives are fixed in advance, it may still be possible to decide whether the antiderivative of a given function can be expressed using these blocks and the operations of multiplication and composition, and to find the symbolic answer whenever it exists. The Risch algorithm, implemented in Mathematica, Maple and other computer algebra systems, does just that for functions and antiderivatives built from rational functions, radicals, logarithms, and exponential functions.
Some special integrands occur often enough to warrant special study. In particular, it may be useful to have, in the set of antiderivatives, the special functions (like the Legendre functions, the hypergeometric function, the gamma function, the incomplete gamma function and so on). Extending Risch's algorithm to include such functions is possible but challenging and has been an active research subject.
More recently a new approach has emerged, using D-finite functions, which are the solutions of linear differential equations with polynomial coefficients. Most of the elementary and special functions are D-finite, and the integral of a D-finite function is also a D-finite function. This provides an algorithm to express the antiderivative of a D-finite function as the solution of a differential equation. This theory also allows one to compute the definite integral of a D-function as the sum of a series given by the first coefficients and provides an algorithm to compute any coefficient.
Rule-based integration systems facilitate integration. Rubi, a rule-based computer algebra system integrator, pattern-matches an extensive system of symbolic integration rules to integrate a wide variety of integrands. This system uses over 6600 integration rules to compute integrals. The method of brackets is a generalization of Ramanujan's master theorem that can be applied to a wide range of univariate and multivariate integrals. A set of rules is applied to the coefficients and exponential terms of the integrand's power series expansion to determine the integral. The method is closely related to the Mellin transform.
=== Numerical ===
Definite integrals may be approximated using several methods of numerical integration. The rectangle method relies on dividing the region under the function into a series of rectangles corresponding to function values and multiplies by the step width to find the sum. A better approach, the trapezoidal rule, replaces the rectangles used in a Riemann sum with trapezoids. The trapezoidal rule weights the first and last values by one half, then multiplies by the step width to obtain a better approximation. The idea behind the trapezoidal rule, that more accurate approximations to the function yield better approximations to the integral, can be carried further: Simpson's rule approximates the integrand by a piecewise quadratic function.
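The rules described above can be sketched in a few lines. A minimal Python comparison of the trapezoidal rule and Simpson's rule on ∫₀^π sin x dx = 2, showing Simpson's faster convergence:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule: endpoints weighted by one half."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def simpson(f, a, b, n):
    """Composite Simpson's rule (piecewise quadratic); n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

exact = 2.0                    # integral of sin over [0, pi]
for n in (8, 32, 128):
    t = trapezoid(math.sin, 0, math.pi, n)
    s = simpson(math.sin, 0, math.pi, n)
    print(n, exact - t, exact - s)   # Simpson's error shrinks much faster
```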
Riemann sums, the trapezoidal rule, and Simpson's rule are examples of a family of quadrature rules called the Newton–Cotes formulas. The degree n Newton–Cotes quadrature rule approximates the function on each subinterval by a degree n polynomial. This polynomial is chosen to interpolate the values of the function on the interval. Higher degree Newton–Cotes approximations can be more accurate, but they require more function evaluations, and they can suffer from numerical inaccuracy due to Runge's phenomenon. One solution to this problem is Clenshaw–Curtis quadrature, in which the integrand is approximated by expanding it in terms of Chebyshev polynomials.
Romberg's method halves the step widths incrementally, giving trapezoid approximations denoted by T(h0), T(h1), and so on, where hk+1 is half of hk. For each new step size, only half the new function values need to be computed; the others carry over from the previous size. It then interpolates a polynomial through the approximations and extrapolates to T(0). Gaussian quadrature evaluates the function at the roots of a set of orthogonal polynomials. An n-point Gaussian method is exact for polynomials of degree up to 2n − 1.
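The exactness claim for Gaussian quadrature can be illustrated with the smallest nontrivial rule. A sketch of the two-point Gauss–Legendre rule (nodes ±1/√3, weights 1 on [−1, 1]), which is exact for polynomials up to degree 2n − 1 = 3:

```python
import math

# Two-point Gauss-Legendre rule, mapped from [-1, 1] to [a, b].
# Exact for polynomials of degree up to 3 using only two evaluations.
def gauss2(f, a, b):
    mid, half = (a + b) / 2, (b - a) / 2
    x = 1 / math.sqrt(3)
    return half * (f(mid - half * x) + f(mid + half * x))

# Exactness check on a cubic: integral of x^3 over [0, 1] is 1/4.
val = gauss2(lambda t: t**3, 0.0, 1.0)
print(val)
```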
The computation of higher-dimensional integrals (for example, volume calculations) makes important use of such alternatives as Monte Carlo integration.
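A minimal Monte Carlo sketch: estimating the volume of the unit ball in R³ (exact value 4π/3 ≈ 4.18879) by sampling uniform points in the cube [−1, 1]³ and counting hits; the seed and sample count are arbitrary illustrative choices:

```python
import random

# Monte Carlo integration: volume = (cube volume) * (fraction of random
# points in the cube that land inside the ball).
def ball_volume(n=200_000, seed=0):
    rng = random.Random(seed)   # seeded for reproducibility
    hits = 0
    for _ in range(n):
        x, y, z = (rng.uniform(-1, 1) for _ in range(3))
        if x * x + y * y + z * z <= 1.0:
            hits += 1
    return 8.0 * hits / n       # cube volume (8) times hit fraction

print(ball_volume())            # close to 4*pi/3
```

The error of such an estimate shrinks like 1/√n regardless of dimension, which is why Monte Carlo methods become attractive for high-dimensional integrals where grid-based rules are infeasible.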
=== Mechanical ===
The area of an arbitrary two-dimensional shape can be determined using a measuring instrument called a planimeter. The volume of irregular objects can be measured with precision by the fluid displaced as the object is submerged.
=== Geometrical ===
Area can sometimes be found via geometrical compass-and-straightedge constructions of an equivalent square.
=== Integration by differentiation ===
Kempf, Jackson and Morales demonstrated mathematical relations that allow an integral to be calculated by means of differentiation. Their calculus involves the Dirac delta function and the partial derivative operator
{\displaystyle \partial _{x}}
. This can also be applied to functional integrals, allowing them to be computed by functional differentiation.
== Examples ==
=== Using the fundamental theorem of calculus ===
The fundamental theorem of calculus allows straightforward calculations of basic functions:
{\displaystyle \int _{0}^{\pi }\sin(x)\,dx=-\cos(x){\big |}_{x=0}^{x=\pi }=-\cos(\pi )-{\big (}-\cos(0){\big )}=2.}
== See also ==
Integral equation – Equations with an unknown function under an integral sign
Integral symbol – Mathematical symbol used to denote integrals and antiderivatives
Lists of integrals
== Notes ==
== References ==
== Bibliography ==
== External links ==
"Integral", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Online Integral Calculator, Wolfram Alpha.
=== Online books ===
Keisler, H. Jerome, Elementary Calculus: An Approach Using Infinitesimals, University of Wisconsin
Stroyan, K. D., A Brief Introduction to Infinitesimal Calculus, University of Iowa
Mauch, Sean, Sean's Applied Math Book, CIT, an online textbook that includes a complete introduction to calculus
Crowell, Benjamin, Calculus, Fullerton College, an online textbook
Garrett, Paul, Notes on First-Year Calculus
Hussain, Faraz, Understanding Calculus, an online textbook
Johnson, William Woolsey (1909) Elementary Treatise on Integral Calculus, link from HathiTrust.
Kowalk, W. P., Integration Theory, University of Oldenburg. A new concept to an old problem. Online textbook
Sloughter, Dan, Difference Equations to Differential Equations, an introduction to calculus
Numerical Methods of Integration at Holistic Numerical Methods Institute
P. S. Wang, Evaluation of Definite Integrals by Symbolic Manipulation (1972), a cookbook of definite integral techniques
In complex analysis, an entire function, also called an integral function, is a complex-valued function that is holomorphic on the whole complex plane. Typical examples of entire functions are polynomials and the exponential function, and any finite sums, products and compositions of these, such as the trigonometric functions sine and cosine and their hyperbolic counterparts sinh and cosh, as well as derivatives and integrals of entire functions such as the error function. If an entire function
{\displaystyle f(z)} has a root at {\displaystyle w}, then {\displaystyle f(z)/(z-w)}, taking the limit value at {\displaystyle w}
, is an entire function. On the other hand, the natural logarithm, the reciprocal function, and the square root are all not entire functions, nor can they be continued analytically to an entire function.
A transcendental entire function is an entire function that is not a polynomial.
Just as meromorphic functions can be viewed as a generalization of rational fractions, entire functions can be viewed as a generalization of polynomials. In particular, if for meromorphic functions one can generalize the factorization into simple fractions (the Mittag-Leffler theorem on the decomposition of a meromorphic function), then for entire functions there is a generalization of the factorization — the Weierstrass theorem on entire functions.
== Properties ==
Every entire function
{\displaystyle f(z)}
can be represented as a single power series:
{\displaystyle \ f(z)=\sum _{n=0}^{\infty }a_{n}z^{n}\ }
that converges everywhere in the complex plane, hence uniformly on compact sets. The radius of convergence is infinite, which implies that
{\displaystyle \ \lim _{n\to \infty }|a_{n}|^{\frac {1}{n}}=0\ }
or, equivalently,
{\displaystyle \ \lim _{n\to \infty }{\frac {\ln |a_{n}|}{n}}=-\infty ~.}
Any power series satisfying this criterion will represent an entire function.
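The coefficient criterion can be observed numerically. A sketch using exp(z), whose coefficients are a_n = 1/n!, computed through logarithms to avoid overflow:

```python
import math

# For exp(z) the power-series coefficients are a_n = 1/n!.
# An entire function requires |a_n|^(1/n) -> 0; check the decay.
def root_coeff(n):
    # |a_n|^(1/n) = exp(-ln(n!)/n), with ln(n!) from math.lgamma
    return math.exp(-math.lgamma(n + 1) / n)

for n in (10, 100, 1000):
    print(n, root_coeff(n))    # tends to 0 (roughly like e/n by Stirling)
```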
If (and only if) the coefficients of the power series are all real then the function evidently takes real values for real arguments, and the value of the function at the complex conjugate of
{\displaystyle z} will be the complex conjugate of the value at {\displaystyle z}. Such functions are sometimes called self-conjugate (the conjugate function, {\displaystyle F^{*}(z)}, being given by {\displaystyle {\bar {F}}({\bar {z}})}).
If the real part of an entire function is known in a neighborhood of a point then both the real and imaginary parts are known for the whole complex plane, up to an imaginary constant. For instance, if the real part is known in a neighborhood of zero, then we can find the coefficients for
{\displaystyle n>0} from the following derivatives with respect to a real variable {\displaystyle r}:
{\displaystyle {\begin{aligned}\operatorname {\mathcal {Re}} \left\{\ a_{n}\ \right\}&={\frac {1}{n!}}{\frac {d^{n}}{dr^{n}}}\ \operatorname {\mathcal {Re}} \left\{\ f(r)\ \right\}&&\quad \mathrm {at} \quad r=0\\\operatorname {\mathcal {Im}} \left\{\ a_{n}\ \right\}&={\frac {1}{n!}}{\frac {d^{n}}{dr^{n}}}\ \operatorname {\mathcal {Re}} \left\{\ f\left(r\ e^{-{\frac {i\pi }{2n}}}\right)\ \right\}&&\quad \mathrm {at} \quad r=0\end{aligned}}}
(Likewise, if the imaginary part is known in a neighborhood then the function is determined up to a real constant.) In fact, if the real part is known just on an arc of a circle, then the function is determined up to an imaginary constant.
Note however that an entire function is not determined by its real part on all curves. In particular, if the real part is given on any curve in the complex plane where the real part of some other entire function is zero, then any multiple of that function can be added to the function we are trying to determine. For example, if the curve where the real part is known is the real line, then we can add
{\displaystyle i}
times any self-conjugate function. If the curve forms a loop, then it is determined by the real part of the function on the loop since the only functions whose real part is zero on the curve are those that are everywhere equal to some imaginary number.
The Weierstrass factorization theorem asserts that any entire function can be represented by a product involving its zeroes (or "roots").
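A classical illustration of factorization by zeros is sin(πz), whose zeros are exactly the integers; pairing the Weierstrass elementary factors for ±n gives the product sin(πz) = πz ∏ₙ(1 − z²/n²). A numerical sketch of the truncated product:

```python
import math

# Truncated Weierstrass-style product for sin(pi*z): the zeros are the
# integers, and combining the factors for +n and -n gives (1 - z^2/n^2).
def sin_product(z, terms=10_000):
    p = math.pi * z
    for n in range(1, terms + 1):
        p *= 1.0 - (z * z) / (n * n)
    return p

z = 0.5
print(sin_product(z), math.sin(math.pi * z))   # both close to 1
```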
The entire functions on the complex plane form an integral domain (in fact a Prüfer domain). They also form a commutative unital associative algebra over the complex numbers.
Liouville's theorem states that any bounded entire function must be constant.
As a consequence of Liouville's theorem, any function that is entire on the whole Riemann sphere
is constant. Thus any non-constant entire function must have a singularity at the complex point at infinity, either a pole for a polynomial or an essential singularity for a transcendental entire function. Specifically, by the Casorati–Weierstrass theorem, for any transcendental entire function
{\displaystyle f} and any complex {\displaystyle w} there is a sequence {\displaystyle (z_{m})_{m\in \mathbb {N} }}
such that
{\displaystyle \ \lim _{m\to \infty }|z_{m}|=\infty ,\qquad {\text{and}}\qquad \lim _{m\to \infty }f(z_{m})=w~.}
Picard's little theorem is a much stronger result: Any non-constant entire function takes on every complex number as value, possibly with a single exception. When an exception exists, it is called a lacunary value of the function. The possibility of a lacunary value is illustrated by the exponential function, which never takes on the value
{\displaystyle 0}. One can take a suitable branch of the logarithm of an entire function that never hits {\displaystyle 0}, so that this will also be an entire function (according to the Weierstrass factorization theorem). The logarithm hits every complex number except possibly one number, which implies that the first function will hit any value other than {\displaystyle 0}
an infinite number of times. Similarly, a non-constant, entire function that does not hit a particular value will hit every other value an infinite number of times.
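The behaviour described by Casorati–Weierstrass and Picard can be made concrete for f = exp. A sketch: for any target w ≠ 0, the points z_m = ln|w| + i(arg w + 2πm) satisfy |z_m| → ∞ while exp(z_m) = w for every m:

```python
import cmath
import math

# For the transcendental entire function exp and any target w != 0,
# z_m = ln|w| + i*(arg(w) + 2*pi*m) gives exp(z_m) = w while |z_m| -> oo.
def z_m(w, m):
    return complex(math.log(abs(w)), cmath.phase(w) + 2 * math.pi * m)

w = 2 + 3j   # arbitrary nonzero target value
for m in (1, 10, 100):
    z = z_m(w, m)
    print(m, abs(z), cmath.exp(z))   # |z_m| grows, exp(z_m) stays at w
```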
Liouville's theorem is a special case of the following statement: if {\displaystyle |f(z)|\leq M|z|^{n}} for some constant {\displaystyle M} and all sufficiently large {\displaystyle |z|}, then {\displaystyle f} is a polynomial of degree at most {\displaystyle n}.
== Growth ==
Entire functions may grow as fast as any increasing function: for any increasing function
{\displaystyle g:[0,\infty )\to [0,\infty )} there exists an entire function {\displaystyle f} such that {\displaystyle f(x)>g(|x|)} for all real {\displaystyle x}. Such a function {\displaystyle f} may be easily found of the form:
{\displaystyle f(z)=c+\sum _{k=1}^{\infty }\left({\frac {z}{k}}\right)^{n_{k}}}
for a constant {\displaystyle c} and a strictly increasing sequence of positive integers {\displaystyle n_{k}}. Any such sequence defines an entire function {\displaystyle f(z)}, and if the powers are chosen appropriately we may satisfy the inequality {\displaystyle f(x)>g(|x|)} for all real {\displaystyle x}. (For instance, it certainly holds if one chooses {\displaystyle c:=g(2)} and, for any integer {\displaystyle k\geq 1}, one chooses an even exponent {\displaystyle n_{k}} such that {\displaystyle \left({\frac {k+1}{k}}\right)^{n_{k}}\geq g(k+2)}.)
== Order and type ==
The order (at infinity) of an entire function
{\displaystyle f(z)}
is defined using the limit superior as:
{\displaystyle \rho =\limsup _{r\to \infty }{\frac {\ln \left(\ln \|f\|_{\infty ,B_{r}}\right)}{\ln r}},}
where
{\displaystyle B_{r}} is the disk of radius {\displaystyle r} and {\displaystyle \|f\|_{\infty ,B_{r}}} denotes the supremum norm of {\displaystyle f(z)} on {\displaystyle B_{r}}. The order is a non-negative real number or infinity (except when {\displaystyle f(z)=0} for all {\displaystyle z}). In other words, the order of {\displaystyle f(z)} is the infimum of all {\displaystyle m} such that:
{\displaystyle f(z)=O\left(\exp \left(|z|^{m}\right)\right),\quad {\text{as }}z\to \infty .}
The example of
{\displaystyle f(z)=\exp(2z^{2})} shows that this does not mean {\displaystyle f(z)=O(\exp(|z|^{m}))} if {\displaystyle f(z)} is of order {\displaystyle m}.
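The defining limit for the order can be read off numerically. A sketch for f(z) = exp(z), whose maximum modulus on |z| = r is M(r) = e^r, so ln(ln M(r))/ln r should approach the order ρ = 1:

```python
import cmath
import math

# Estimate the order of an entire function by sampling the maximum modulus
# M(r) on the circle |z| = r and forming ln(ln M(r)) / ln(r).
def order_estimate(f, r, samples=720):
    M = max(abs(f(r * cmath.exp(1j * 2 * math.pi * k / samples)))
            for k in range(samples))
    return math.log(math.log(M)) / math.log(r)

for r in (10.0, 50.0, 200.0):
    print(r, order_estimate(cmath.exp, r))   # approaches rho = 1
```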
If
{\displaystyle 0<\rho <\infty ,}
one can also define the type:
{\displaystyle \sigma =\limsup _{r\to \infty }{\frac {\ln \|f\|_{\infty ,B_{r}}}{r^{\rho }}}.}
If the order is 1 and the type is
{\displaystyle \sigma }, the function is said to be "of exponential type {\displaystyle \sigma }
". If it is of order less than 1 it is said to be of exponential type 0.
If
{\displaystyle f(z)=\sum _{n=0}^{\infty }a_{n}z^{n},}
then the order and type can be found by the formulas
{\displaystyle {\begin{aligned}\rho &=\limsup _{n\to \infty }{\frac {n\ln n}{-\ln |a_{n}|}}\\[6pt](e\rho \sigma )^{\frac {1}{\rho }}&=\limsup _{n\to \infty }n^{\frac {1}{\rho }}|a_{n}|^{\frac {1}{n}}\end{aligned}}}
Let
{\displaystyle f^{(n)}} denote the {\displaystyle n}-th derivative of {\displaystyle f}. Then we may restate these formulas in terms of the derivatives at an arbitrary point {\displaystyle z_{0}}:
{\displaystyle {\begin{aligned}\rho &=\limsup _{n\to \infty }{\frac {n\ln n}{n\ln n-\ln |f^{(n)}(z_{0})|}}=\left(1-\limsup _{n\to \infty }{\frac {\ln |f^{(n)}(z_{0})|}{n\ln n}}\right)^{-1}\\[6pt](\rho \sigma )^{\frac {1}{\rho }}&=e^{1-{\frac {1}{\rho }}}\limsup _{n\to \infty }{\frac {|f^{(n)}(z_{0})|^{\frac {1}{n}}}{n^{1-{\frac {1}{\rho }}}}}\end{aligned}}}
The type may be infinite, as in the case of the reciprocal gamma function, or zero (see example below under § Order 1).
Another way to find out the order and type is Matsaev's theorem.
=== Examples ===
Here are some examples of functions of various orders:
==== Order ρ ====
For arbitrary positive numbers
{\displaystyle \rho } and {\displaystyle \sigma } one can construct an example of an entire function of order {\displaystyle \rho } and type {\displaystyle \sigma } using:
{\displaystyle f(z)=\sum _{n=1}^{\infty }\left({\frac {e\rho \sigma }{n}}\right)^{\frac {n}{\rho }}z^{n}}
==== Order 0 ====
Non-zero polynomials
{\displaystyle \sum _{n=0}^{\infty }2^{-n^{2}}z^{n}}
==== Order 1/4 ====
{\displaystyle f({\sqrt[{4}]{z}})} where {\displaystyle f(u)=\cos(u)+\cosh(u)}
==== Order 1/3 ====
{\displaystyle f({\sqrt[{3}]{z}})} where {\displaystyle f(u)=e^{u}+e^{\omega u}+e^{\omega ^{2}u}=e^{u}+2e^{-{\frac {u}{2}}}\cos \left({\frac {{\sqrt {3}}u}{2}}\right),\quad {\text{with }}\omega {\text{ a complex cube root of 1}}.}
==== Order 1/2 ====
{\displaystyle \cos \left(a{\sqrt {z}}\right)} with {\displaystyle a\neq 0} (for which the type is given by {\displaystyle \sigma =|a|})
==== Order 1 ====
{\displaystyle \exp(az)} with {\displaystyle a\neq 0} ({\displaystyle \sigma =|a|})
{\displaystyle \sin(z)}
{\displaystyle \cosh(z)}
the Bessel functions {\displaystyle J_{n}(z)} and spherical Bessel functions {\displaystyle j_{n}(z)} for integer values of {\displaystyle n}
the reciprocal gamma function {\displaystyle 1/\Gamma (z)} ({\displaystyle \sigma } is infinite)
{\displaystyle \sum _{n=2}^{\infty }{\frac {z^{n}}{(n\ln n)^{n}}}.\quad (\sigma =0)}
==== Order 3/2 ====
the Airy function {\displaystyle Ai(z)}
==== Order 2 ====
{\displaystyle \exp(az^{2})} with {\displaystyle a\neq 0} ({\displaystyle \sigma =|a|})
The Barnes G-function ({\displaystyle \sigma } is infinite).
==== Order infinity ====
{\displaystyle \exp(\exp(z))}
== Genus ==
Entire functions of finite order have Hadamard's canonical representation (Hadamard factorization theorem):
{\displaystyle f(z)=z^{m}e^{P(z)}\prod _{n=1}^{\infty }\left(1-{\frac {z}{z_{n}}}\right)\exp \left({\frac {z}{z_{n}}}+\cdots +{\frac {1}{p}}\left({\frac {z}{z_{n}}}\right)^{p}\right),}
where {\displaystyle z_{k}} are those roots of {\displaystyle f} that are not zero ({\displaystyle z_{k}\neq 0}), {\displaystyle m} is the order of the zero of {\displaystyle f} at {\displaystyle z=0} (the case {\displaystyle m=0} being taken to mean {\displaystyle f(0)\neq 0}), {\displaystyle P} is a polynomial (whose degree we shall call {\displaystyle q}), and {\displaystyle p} is the smallest non-negative integer such that the series
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{|z_{n}|^{p+1}}}}
converges. The non-negative integer {\displaystyle g=\max\{p,q\}} is called the genus of the entire function {\displaystyle f}.
If the order {\displaystyle \rho } is not an integer, then {\displaystyle g=[\rho ]} is the integer part of {\displaystyle \rho }. If the order is a positive integer, then there are two possibilities: {\displaystyle g=\rho -1} or {\displaystyle g=\rho }.
For example, {\displaystyle \sin }, {\displaystyle \cos } and {\displaystyle \exp } are entire functions of genus {\displaystyle g=\rho =1}.
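For sine, the genus-1 factors over the paired roots ±nπ combine into the classical product sin(z) = z ∏ (1 − z²/(nπ)²); the exponential convergence factors exp(±z/(nπ)) cancel pairwise. A small numerical check (a sketch using Python's standard library; the sine product itself is standard but not stated above):

```python
import math

def sin_partial_product(z, N):
    # Partial Hadamard (Euler) product for sin over the paired roots +-(n*pi):
    # sin(z) = z * prod_{n>=1} (1 - z^2 / (n*pi)^2)
    p = z
    for n in range(1, N + 1):
        p *= 1.0 - (z / (n * math.pi)) ** 2
    return p

# The partial products converge (slowly) to sin
assert abs(sin_partial_product(1.0, 20000) - math.sin(1.0)) < 1e-4
```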
== Other examples ==
According to J. E. Littlewood, the Weierstrass sigma function is a 'typical' entire function. This statement can be made precise in the theory of random entire functions: the asymptotic behavior of almost all entire functions is similar to that of the sigma function. Other examples include the Fresnel integrals, the Jacobi theta function, and the reciprocal Gamma function. The exponential function and the error function are special cases of the Mittag-Leffler function. According to the fundamental theorem of Paley and Wiener, Fourier transforms of functions (or distributions) with bounded support are entire functions of order
{\displaystyle 1} and finite type.
Other examples are solutions of linear differential equations with polynomial coefficients. If the coefficient at the highest derivative is constant, then all solutions of such equations are entire functions. For example, the exponential function, sine, cosine, Airy functions and Parabolic cylinder functions arise in this way. The class of entire functions is closed with respect to compositions. This makes it possible to study dynamics of entire functions.
An entire function of the square root of a complex number is entire if the original function is even, for example {\displaystyle \cos({\sqrt {z}})}.
If a sequence of polynomials all of whose roots are real converges in a neighborhood of the origin to a limit which is not identically equal to zero, then this limit is an entire function. Such entire functions form the Laguerre–Pólya class, which can also be characterized in terms of the Hadamard product, namely,
{\displaystyle f} belongs to this class if and only if in the Hadamard representation all {\displaystyle z_{n}} are real, {\displaystyle \rho \leq 1}, and {\displaystyle P(z)=a+bz+cz^{2}}, where {\displaystyle b} and {\displaystyle c} are real, and {\displaystyle c\leq 0}. For example, the sequence of polynomials
{\displaystyle \left(1-{\frac {(z-d)^{2}}{n}}\right)^{n}}
converges, as {\displaystyle n} increases, to {\displaystyle \exp(-(z-d)^{2})}. The polynomials
{\displaystyle {\frac {1}{2}}\left(\left(1+{\frac {iz}{n}}\right)^{n}+\left(1-{\frac {iz}{n}}\right)^{n}\right)}
have all real roots, and converge to {\displaystyle \cos(z)}. The polynomials
{\displaystyle \prod _{m=1}^{n}\left(1-{\frac {z^{2}}{\left(\left(m-{\frac {1}{2}}\right)\pi \right)^{2}}}\right)}
also converge to {\displaystyle \cos(z)}, showing the buildup of the Hadamard product for cosine.
== See also ==
Jensen's formula
Carlson's theorem
Exponential type
Paley–Wiener theorem
Wiman–Valiron theory
== Notes ==
== References ==
== Sources ==
In linear algebra, an eigenvector (EYE-gən-) or characteristic vector is a vector that has its direction unchanged (or reversed) by a given linear transformation. More precisely, an eigenvector {\displaystyle \mathbf {v} } of a linear transformation {\displaystyle T} is scaled by a constant factor {\displaystyle \lambda } when the linear transformation is applied to it: {\displaystyle T\mathbf {v} =\lambda \mathbf {v} }. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor {\displaystyle \lambda } (possibly negative).
Geometrically, vectors are multi-dimensional quantities with magnitude and direction, often pictured as arrows. A linear transformation rotates, stretches, or shears the vectors upon which it acts. A linear transformation's eigenvectors are those vectors that are only stretched or shrunk, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or shrunk. If the eigenvalue is negative, the eigenvector's direction is reversed.
The eigenvectors and eigenvalues of a linear transformation serve to characterize it, and so they play important roles in all areas where linear algebra is applied, from geology to quantum mechanics. In particular, it is often the case that a system is represented by a linear transformation whose outputs are fed as inputs to the same transformation (feedback). In such an application, the largest eigenvalue is of particular importance, because it governs the long-term behavior of the system after many applications of the linear transformation, and the associated eigenvector is the steady state of the system.
== Matrices ==
For an {\displaystyle n{\times }n} matrix A and a nonzero vector {\displaystyle \mathbf {v} } of length {\displaystyle n}, if multiplying A by {\displaystyle \mathbf {v} } (denoted {\displaystyle A\mathbf {v} }) simply scales {\displaystyle \mathbf {v} } by a factor λ, where λ is a scalar, then {\displaystyle \mathbf {v} } is called an eigenvector of A, and λ is the corresponding eigenvalue. This relationship can be expressed as:
{\displaystyle A\mathbf {v} =\lambda \mathbf {v} }.
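The defining relation can be verified numerically; this minimal sketch uses NumPy (an illustrative addition, not part of the article):

```python
import numpy as np

# Check A v = lambda v for each eigenpair returned by NumPy.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, eigvecs = np.linalg.eig(A)      # columns of eigvecs are eigenvectors
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)   # the defining relation A v = lambda v
print(np.sort(eigvals))                  # eigenvalues 2 and 5
```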
Given an n-dimensional vector space and a choice of basis, there is a direct correspondence between linear transformations from the vector space into itself and n-by-n square matrices. Hence, in a finite-dimensional vector space, it is equivalent to define eigenvalues and eigenvectors using either the language of linear transformations, or the language of matrices.
== Overview ==
Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen (cognate with the English word own) for 'proper', 'characteristic', 'own'. Originally used to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization.
In essence, an eigenvector v of a linear transformation T is a nonzero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue. This condition can be written as the equation
{\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} ,}
referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar. For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex.
The example here, based on the Mona Lisa, provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping. Points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter by the transformation. Points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either.
Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like
{\displaystyle {\tfrac {d}{dx}}}, in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as
{\displaystyle {\frac {d}{dx}}e^{\lambda x}=\lambda e^{\lambda x}.}
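The eigenfunction relation above can be sanity-checked with a finite difference (a hypothetical numerical sketch, not part of the article):

```python
import math

# Central-difference check that e^{lambda x} is an eigenfunction of d/dx
# with eigenvalue lambda (lam and x chosen arbitrarily for illustration).
lam, x, h = 0.7, 1.3, 1e-6
f = lambda t: math.exp(lam * t)
deriv = (f(x + h) - f(x - h)) / (2 * h)   # numerical d/dx at x
assert abs(deriv - lam * f(x)) < 1e-6     # matches lambda * f(x)
```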
Alternatively, the linear transformation could take the form of an n by n matrix, in which case the eigenvectors are n by 1 matrices. If the linear transformation is expressed in the form of an n by n matrix A, then the eigenvalue equation for a linear transformation above can be rewritten as the matrix multiplication
{\displaystyle A\mathbf {v} =\lambda \mathbf {v} ,}
where the eigenvector v is an n by 1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix—for example by diagonalizing it.
Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefix eigen- is applied liberally when naming them:
The set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation.
The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace, or the characteristic space of T associated with that eigenvalue.
If a set of eigenvectors of T forms a basis of the domain of T, then this basis is called an eigenbasis.
== History ==
Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations.
In the 18th century, Leonhard Euler studied the rotational motion of a rigid body, and discovered the importance of the principal axes. Joseph-Louis Lagrange realized that the principal axes are the eigenvectors of the inertia matrix.
In the early 19th century, Augustin-Louis Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions. Cauchy also coined the term racine caractéristique (characteristic root), for what is now called eigenvalue; his term survives in characteristic equation.
Later, Joseph Fourier used the work of Lagrange and Pierre-Simon Laplace to solve the heat equation by separation of variables in his 1822 treatise The Analytic Theory of Heat (Théorie analytique de la chaleur). Charles-François Sturm elaborated on Fourier's ideas further, and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues. This was extended by Charles Hermite in 1855 to what are now called Hermitian matrices.
Around the same time, Francesco Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle, and Alfred Clebsch found the corresponding result for skew-symmetric matrices. Finally, Karl Weierstrass clarified an important aspect in the stability theory started by Laplace, by realizing that defective matrices can cause instability.
In the meantime, Joseph Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory. Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.
At the start of the 20th century, David Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices. He was the first to use the German word eigen, which means "own", to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Hermann von Helmholtz. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is the standard today.
The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Richard von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G. F. Francis and Vera Kublanovskaya in 1961.
== Eigenvalues and eigenvectors of matrices ==
Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices.
Furthermore, linear transformations over a finite-dimensional vector space can be represented using matrices, which is especially common in numerical and computational applications.
Consider n-dimensional vectors that are formed as a list of n scalars, such as the three-dimensional vectors
{\displaystyle \mathbf {x} ={\begin{bmatrix}1\\-3\\4\end{bmatrix}}\quad {\mbox{and}}\quad \mathbf {y} ={\begin{bmatrix}-20\\60\\-80\end{bmatrix}}.}
These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar λ such that
{\displaystyle \mathbf {x} =\lambda \mathbf {y} .}
In this case, {\displaystyle \lambda =-{\frac {1}{20}}}.
Now consider the linear transformation of n-dimensional vectors defined by an n by n matrix A,
{\displaystyle A\mathbf {v} =\mathbf {w} ,}
or
{\displaystyle {\begin{bmatrix}A_{11}&A_{12}&\cdots &A_{1n}\\A_{21}&A_{22}&\cdots &A_{2n}\\\vdots &\vdots &\ddots &\vdots \\A_{n1}&A_{n2}&\cdots &A_{nn}\\\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\\\vdots \\v_{n}\end{bmatrix}}={\begin{bmatrix}w_{1}\\w_{2}\\\vdots \\w_{n}\end{bmatrix}}}
where, for each row,
{\displaystyle w_{i}=A_{i1}v_{1}+A_{i2}v_{2}+\cdots +A_{in}v_{n}=\sum _{j=1}^{n}A_{ij}v_{j}.}
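The row formula amounts to a plain matrix–vector product; a minimal Python sketch (an illustrative addition using only built-ins):

```python
# Row-by-row matrix-vector product: w_i = sum_j A[i][j] * v[j].
def matvec(A, v):
    return [sum(A_ij * v_j for A_ij, v_j in zip(row, v)) for row in A]

A = [[1, 2],
     [3, 4]]
v = [5, 6]
print(matvec(A, v))  # → [17, 39]
```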
If it occurs that v and w are scalar multiples, that is if
{\displaystyle A\mathbf {v} =\mathbf {w} =\lambda \mathbf {v} ,} (1)
then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Equation (1) is the eigenvalue equation for the matrix A.
Equation (1) can be stated equivalently as
{\displaystyle \left(A-\lambda I\right)\mathbf {v} =\mathbf {0} ,} (2)
where I is the n by n identity matrix and 0 is the zero vector.
=== Eigenvalues and the characteristic polynomial ===
Equation (2) has a nonzero solution v if and only if the determinant of the matrix (A − λI) is zero. Therefore, the eigenvalues of A are values of λ that satisfy the equation
{\displaystyle \det(A-\lambda I)=0.} (3)
Using the Leibniz formula for determinants, the left-hand side of equation (3) is a polynomial function of the variable λ and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always (−1)nλn. This polynomial is called the characteristic polynomial of A. Equation (3) is called the characteristic equation or the secular equation of A.
The fundamental theorem of algebra implies that the characteristic polynomial of an n-by-n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms,
{\displaystyle \det(A-\lambda I)=\left(\lambda _{1}-\lambda \right)\left(\lambda _{2}-\lambda \right)\cdots \left(\lambda _{n}-\lambda \right),} (4)
where each λi may be real but in general is a complex number. The numbers λ1, λ2, ..., λn, which may not all have distinct values, are roots of the polynomial and are the eigenvalues of A.
As a brief example, which is described in more detail in the examples section later, consider the matrix
{\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.}
Taking the determinant of (A − λI), the characteristic polynomial of A is
{\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}=3-4\lambda +\lambda ^{2}.}
Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of v in the equation
{\displaystyle \left(A-\lambda I\right)\mathbf {v} =\mathbf {0} }. In this example, the eigenvectors are any nonzero scalar multiples of
{\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}1\\-1\end{bmatrix}},\quad \mathbf {v} _{\lambda =3}={\begin{bmatrix}1\\1\end{bmatrix}}.}
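This worked example can be reproduced with NumPy (an illustrative sketch, not part of the article):

```python
import numpy as np

# A = [[2, 1], [1, 2]] has eigenvalues 1 and 3, with eigenvectors
# proportional to (1, -1) and (1, 1) respectively.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)
order = np.argsort(eigvals)
assert np.allclose(eigvals[order], [1.0, 3.0])
v1 = eigvecs[:, order[0]]                # eigenvector for lambda = 1
assert np.isclose(v1[0] + v1[1], 0.0)    # proportional to (1, -1)
```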
If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have nonzero imaginary parts. The entries of the corresponding eigenvectors therefore may also have nonzero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues must also be algebraic numbers.
The non-real roots of a real polynomial with real coefficients can be grouped into pairs of complex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs.
=== Spectrum of a matrix ===
The spectrum of a matrix is the list of eigenvalues, repeated according to multiplicity; in an alternative notation the set of eigenvalues with their multiplicities.
An important quantity associated with the spectrum is the maximum absolute value of any eigenvalue. This is known as the spectral radius of the matrix.
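A small sketch computing the spectral radius (the helper function is hypothetical, added here for illustration):

```python
import numpy as np

# Spectral radius = maximum absolute value over the eigenvalues,
# shown for a matrix with purely imaginary eigenvalues.
def spectral_radius(A):
    return max(abs(lam) for lam in np.linalg.eigvals(A))

A = np.array([[0.0, -2.0],
              [2.0,  0.0]])    # eigenvalues +-2i, so the radius is 2
print(spectral_radius(A))
```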
=== Algebraic multiplicity ===
Let λi be an eigenvalue of an n by n matrix A. The algebraic multiplicity μA(λi) of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that (λ − λi)k divides evenly that polynomial.
Suppose a matrix A has dimension n and d ≤ n distinct eigenvalues. Whereas equation (4) factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can also be written as the product of d terms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity,
{\displaystyle \det(A-\lambda I)=(\lambda _{1}-\lambda )^{\mu _{A}(\lambda _{1})}(\lambda _{2}-\lambda )^{\mu _{A}(\lambda _{2})}\cdots (\lambda _{d}-\lambda )^{\mu _{A}(\lambda _{d})}.}
If d = n then the right-hand side is the product of n linear terms and this is the same as equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimension n as
{\displaystyle {\begin{aligned}1&\leq \mu _{A}(\lambda _{i})\leq n,\\\mu _{A}&=\sum _{i=1}^{d}\mu _{A}\left(\lambda _{i}\right)=n.\end{aligned}}}
If μA(λi) = 1, then λi is said to be a simple eigenvalue. If μA(λi) equals the geometric multiplicity of λi, γA(λi), defined in the next section, then λi is said to be a semisimple eigenvalue.
=== Eigenspaces, geometric multiplicity, and the eigenbasis for matrices ===
Given a particular eigenvalue λ of the n by n matrix A, define the set E to be all vectors v that satisfy equation (2),
{\displaystyle E=\left\{\mathbf {v} :\left(A-\lambda I\right)\mathbf {v} =\mathbf {0} \right\}.}
On one hand, this set is precisely the kernel or nullspace of the matrix (A − λI). On the other hand, by definition, any nonzero vector that satisfies this condition is an eigenvector of A associated with λ. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with λ, and E equals the nullspace of (A − λI). E is called the eigenspace or characteristic space of A associated with λ. In general λ is a complex number and the eigenvectors are complex n by 1 matrices. A property of the nullspace is that it is a linear subspace, so E is a linear subspace of
{\displaystyle \mathbb {C} ^{n}}.
Because the eigenspace E is a linear subspace, it is closed under addition. That is, if two vectors u and v belong to the set E, written u, v ∈ E, then (u + v) ∈ E or equivalently A(u + v) = λ(u + v). This can be checked using the distributive property of matrix multiplication. Similarly, because E is a linear subspace, it is closed under scalar multiplication. That is, if v ∈ E and α is a complex number, (αv) ∈ E or equivalently A(αv) = λ(αv). This can be checked by noting that multiplication of complex matrices by complex numbers is commutative. As long as u + v and αv are not zero, they are also eigenvectors of A associated with λ.
The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity
{\displaystyle \gamma _{A}(\lambda )}. Because E is also the nullspace of (A − λI), the geometric multiplicity of λ is the dimension of the nullspace of (A − λI), also called the nullity of (A − λI), which relates to the dimension and rank of (A − λI) as
{\displaystyle \gamma _{A}(\lambda )=n-\operatorname {rank} (A-\lambda I).}
Because of the definition of eigenvalues and eigenvectors, an eigenvalue's geometric multiplicity must be at least one, that is, each eigenvalue has at least one associated eigenvector. Furthermore, an eigenvalue's geometric multiplicity cannot exceed its algebraic multiplicity. Additionally, recall that an eigenvalue's algebraic multiplicity cannot exceed n.
{\displaystyle 1\leq \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )\leq n}
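The inequality is strict for defective matrices; a NumPy sketch (an illustrative addition) checks it on a shear matrix whose single eigenvalue has algebraic multiplicity 2 but geometric multiplicity 1:

```python
import numpy as np

# Defective (shear) matrix: det(A - x I) = (x - 2)^2, so the eigenvalue 2
# has algebraic multiplicity mu = 2 ...
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0
mu = 2
# ... but geometric multiplicity gamma = n - rank(A - lam*I) = 1.
gamma = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2))
assert gamma == 1 and gamma <= mu
```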
To prove the inequality {\displaystyle \gamma _{A}(\lambda )\leq \mu _{A}(\lambda )}, consider how the definition of geometric multiplicity implies the existence of {\displaystyle \gamma _{A}(\lambda )} orthonormal eigenvectors {\displaystyle {\boldsymbol {v}}_{1},\,\ldots ,\,{\boldsymbol {v}}_{\gamma _{A}(\lambda )}}, such that {\displaystyle A{\boldsymbol {v}}_{k}=\lambda {\boldsymbol {v}}_{k}}. We can therefore find a (unitary) matrix V whose first {\displaystyle \gamma _{A}(\lambda )} columns are these eigenvectors, and whose remaining columns can be any orthonormal set of {\displaystyle n-\gamma _{A}(\lambda )} vectors orthogonal to these eigenvectors of A. Then V has full rank and is therefore invertible. Evaluating {\displaystyle D:=V^{T}AV}, we get a matrix whose top left block is the diagonal matrix {\displaystyle \lambda I_{\gamma _{A}(\lambda )}}. This can be seen by evaluating what the left-hand side does to the first column basis vectors. By reorganizing and adding {\displaystyle -\xi V} on both sides, we get {\displaystyle (A-\xi I)V=V(D-\xi I)} since I commutes with V. In other words, {\displaystyle A-\xi I} is similar to {\displaystyle D-\xi I}, and {\displaystyle \det(A-\xi I)=\det(D-\xi I)}. But from the definition of D, we know that {\displaystyle \det(D-\xi I)} contains a factor {\displaystyle (\xi -\lambda )^{\gamma _{A}(\lambda )}}, which means that the algebraic multiplicity of {\displaystyle \lambda } must satisfy {\displaystyle \mu _{A}(\lambda )\geq \gamma _{A}(\lambda )}.
Suppose A has {\displaystyle d\leq n} distinct eigenvalues {\displaystyle \lambda _{1},\ldots ,\lambda _{d}}, where the geometric multiplicity of {\displaystyle \lambda _{i}} is {\displaystyle \gamma _{A}(\lambda _{i})}. The total geometric multiplicity of A,
{\displaystyle {\begin{aligned}\gamma _{A}&=\sum _{i=1}^{d}\gamma _{A}(\lambda _{i}),\\d&\leq \gamma _{A}\leq n,\end{aligned}}}
is the dimension of the sum of all the eigenspaces of A's eigenvalues, or equivalently the maximum number of linearly independent eigenvectors of A. If
{\displaystyle \gamma _{A}=n}, then
The direct sum of the eigenspaces of all of A's eigenvalues is the entire vector space {\displaystyle \mathbb {C} ^{n}}.
A basis of {\displaystyle \mathbb {C} ^{n}} can be formed from n linearly independent eigenvectors of A; such a basis is called an eigenbasis.
Any vector in {\displaystyle \mathbb {C} ^{n}} can be written as a linear combination of eigenvectors of A.
=== Additional properties ===
Let {\displaystyle A} be an arbitrary {\displaystyle n\times n} matrix of complex numbers with eigenvalues {\displaystyle \lambda _{1},\ldots ,\lambda _{n}}. Each eigenvalue appears {\displaystyle \mu _{A}(\lambda _{i})} times in this list, where {\displaystyle \mu _{A}(\lambda _{i})} is the eigenvalue's algebraic multiplicity. The following are properties of this matrix and its eigenvalues:
The trace of {\displaystyle A}, defined as the sum of its diagonal elements, is also the sum of all eigenvalues,
{\displaystyle \operatorname {tr} (A)=\sum _{i=1}^{n}a_{ii}=\sum _{i=1}^{n}\lambda _{i}=\lambda _{1}+\lambda _{2}+\cdots +\lambda _{n}.}
The determinant of {\displaystyle A} is the product of all its eigenvalues,
{\displaystyle \det(A)=\prod _{i=1}^{n}\lambda _{i}=\lambda _{1}\lambda _{2}\cdots \lambda _{n}.}
The eigenvalues of the {\displaystyle k}th power of {\displaystyle A}, i.e., the eigenvalues of {\displaystyle A^{k}}, for any positive integer {\displaystyle k}, are {\displaystyle \lambda _{1}^{k},\ldots ,\lambda _{n}^{k}}.
The matrix {\displaystyle A} is invertible if and only if every eigenvalue is nonzero.
If {\displaystyle A} is invertible, then the eigenvalues of {\displaystyle A^{-1}} are {\textstyle {\frac {1}{\lambda _{1}}},\ldots ,{\frac {1}{\lambda _{n}}}} and each eigenvalue's geometric multiplicity coincides. Moreover, since the characteristic polynomial of the inverse is the reciprocal polynomial of the original, the eigenvalues share the same algebraic multiplicity.
If {\displaystyle A} is equal to its conjugate transpose {\displaystyle A^{*}}, or equivalently if {\displaystyle A} is Hermitian, then every eigenvalue is real. The same is true of any symmetric real matrix.
If {\displaystyle A} is not only Hermitian but also positive-definite, positive-semidefinite, negative-definite, or negative-semidefinite, then every eigenvalue is positive, non-negative, negative, or non-positive, respectively.
If {\displaystyle A} is unitary, every eigenvalue has absolute value {\displaystyle |\lambda _{i}|=1}.
If {\displaystyle A} is an {\displaystyle n\times n} matrix and {\displaystyle \{\lambda _{1},\ldots ,\lambda _{k}\}} are its eigenvalues, then the eigenvalues of the matrix {\displaystyle I+A} (where {\displaystyle I} is the identity matrix) are {\displaystyle \{\lambda _{1}+1,\ldots ,\lambda _{k}+1\}}. Moreover, if {\displaystyle \alpha \in \mathbb {C} }, the eigenvalues of {\displaystyle \alpha I+A} are {\displaystyle \{\lambda _{1}+\alpha ,\ldots ,\lambda _{k}+\alpha \}}. More generally, for a polynomial {\displaystyle P} the eigenvalues of the matrix {\displaystyle P(A)} are {\displaystyle \{P(\lambda _{1}),\ldots ,P(\lambda _{k})\}}.
=== Left and right eigenvectors ===
Many disciplines traditionally represent vectors as matrices with a single column rather than as matrices with a single row. For that reason, the word "eigenvector" in the context of matrices almost always refers to a right eigenvector, namely a column vector that right multiplies the
{\displaystyle n\times n} matrix {\displaystyle A} in the defining equation, equation (1),
{\displaystyle A\mathbf {v} =\lambda \mathbf {v} .}
The eigenvalue and eigenvector problem can also be defined for row vectors that left multiply matrix
{\displaystyle A}. In this formulation, the defining equation is
{\displaystyle \mathbf {u} A=\kappa \mathbf {u} ,}
where {\displaystyle \kappa } is a scalar and {\displaystyle \mathbf {u} } is a {\displaystyle 1\times n} matrix. Any row vector {\displaystyle \mathbf {u} } satisfying this equation is called a left eigenvector of {\displaystyle A} and {\displaystyle \kappa } is its associated eigenvalue. Taking the transpose of this equation,
{\displaystyle A^{\textsf {T}}\mathbf {u} ^{\textsf {T}}=\kappa \mathbf {u} ^{\textsf {T}}.}
Comparing this equation to equation (1), it follows immediately that a left eigenvector of {\displaystyle A} is the same as the transpose of a right eigenvector of {\displaystyle A^{\textsf {T}}}, with the same eigenvalue. Furthermore, since the characteristic polynomial of {\displaystyle A^{\textsf {T}}} is the same as the characteristic polynomial of {\displaystyle A}, the left and right eigenvectors of {\displaystyle A} are associated with the same eigenvalues.
=== Diagonalization and the eigendecomposition ===
Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v1, v2, ..., vn with associated eigenvalues λ1, λ2, ..., λn. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A,
{\displaystyle Q={\begin{bmatrix}\mathbf {v} _{1}&\mathbf {v} _{2}&\cdots &\mathbf {v} _{n}\end{bmatrix}}.}
Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue,
{\displaystyle AQ={\begin{bmatrix}\lambda _{1}\mathbf {v} _{1}&\lambda _{2}\mathbf {v} _{2}&\cdots &\lambda _{n}\mathbf {v} _{n}\end{bmatrix}}.}
With this in mind, define a diagonal matrix Λ where each diagonal element Λii is the eigenvalue associated with the ith column of Q. Then
{\displaystyle AQ=Q\Lambda .}
Because the columns of Q are linearly independent, Q is invertible. Right multiplying both sides of the equation by Q−1,
{\displaystyle A=Q\Lambda Q^{-1},}
or by instead left multiplying both sides by Q−1,
{\displaystyle Q^{-1}AQ=\Lambda .}
A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix Λ or diagonalizable. The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ.
Conversely, suppose a matrix A is diagonalizable. Let P be a non-singular square matrix such that P−1AP is some diagonal matrix D. Left multiplying both by P, AP = PD. Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable.
A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces.
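The eigendecomposition described above can be checked numerically. The following sketch (assuming NumPy is available; the symmetric test matrix is an arbitrary choice) builds Q and Λ for a small matrix and verifies both A = QΛQ⁻¹ and Q⁻¹AQ = Λ:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # small symmetric example
evals, Q = np.linalg.eig(A)              # columns of Q are eigenvectors of A
Lam = np.diag(evals)                     # Λ: eigenvalues along the diagonal

# A = Q Λ Q^{-1} (eigendecomposition) and Q^{-1} A Q = Λ (diagonalization)
assert np.allclose(A, Q @ Lam @ np.linalg.inv(Q))
assert np.allclose(np.linalg.inv(Q) @ A @ Q, Lam)
```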
=== Variational characterization ===
In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of {\displaystyle H} is the maximum value of the quadratic form {\displaystyle \mathbf {x} ^{\textsf {T}}H\mathbf {x} /\mathbf {x} ^{\textsf {T}}\mathbf {x} }. A value of {\displaystyle \mathbf {x} } that realizes that maximum is an eigenvector.
=== Matrix examples ===
==== Two-dimensional matrix example ====
Consider the matrix
{\displaystyle A={\begin{bmatrix}2&1\\1&2\end{bmatrix}}.}
The figure on the right shows the effect of this transformation on point coordinates in the plane. The eigenvectors v of this transformation satisfy equation (1), and the values of λ for which the determinant of the matrix (A − λI) equals zero are the eigenvalues.
Taking the determinant to find the characteristic polynomial of A,
{\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&1\\1&2\end{bmatrix}}-\lambda {\begin{bmatrix}1&0\\0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &1\\1&2-\lambda \end{vmatrix}}\\[6pt]&=3-4\lambda +\lambda ^{2}\\[6pt]&=(\lambda -3)(\lambda -1).\end{aligned}}}
Setting the characteristic polynomial equal to zero, it has roots at λ=1 and λ=3, which are the two eigenvalues of A.
For λ=1, equation (2) becomes,
{\displaystyle (A-I)\mathbf {v} _{\lambda =1}={\begin{bmatrix}1&1\\1&1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}}
{\displaystyle 1v_{1}+1v_{2}=0}
Any nonzero vector with v1 = −v2 solves this equation. Therefore,
{\displaystyle \mathbf {v} _{\lambda =1}={\begin{bmatrix}v_{1}\\-v_{1}\end{bmatrix}}={\begin{bmatrix}1\\-1\end{bmatrix}}}
is an eigenvector of A corresponding to λ = 1, as is any scalar multiple of this vector.
For λ=3, equation (2) becomes
{\displaystyle {\begin{aligned}(A-3I)\mathbf {v} _{\lambda =3}&={\begin{bmatrix}-1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}}\\-1v_{1}+1v_{2}&=0;\\1v_{1}-1v_{2}&=0\end{aligned}}}
Any nonzero vector with v1 = v2 solves this equation. Therefore,
{\displaystyle \mathbf {v} _{\lambda =3}={\begin{bmatrix}v_{1}\\v_{1}\end{bmatrix}}={\begin{bmatrix}1\\1\end{bmatrix}}}
is an eigenvector of A corresponding to λ = 3, as is any scalar multiple of this vector.
Thus, the vectors vλ=1 and vλ=3 are eigenvectors of A associated with the eigenvalues λ=1 and λ=3, respectively.
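These hand-computed eigenpairs can be confirmed directly; the sketch below (assuming NumPy is available) checks that Av = λv holds for both:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
v1 = np.array([1.0, -1.0])   # eigenvector for λ = 1, the form [v1, -v1]^T
v3 = np.array([1.0, 1.0])    # eigenvector for λ = 3, the form [v1, v1]^T

assert np.allclose(A @ v1, 1.0 * v1)
assert np.allclose(A @ v3, 3.0 * v3)
```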
==== Three-dimensional matrix example ====
Consider the matrix
{\displaystyle A={\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}.}
The characteristic polynomial of A is
{\displaystyle {\begin{aligned}\det(A-\lambda I)&=\left|{\begin{bmatrix}2&0&0\\0&3&4\\0&4&9\end{bmatrix}}-\lambda {\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}\right|={\begin{vmatrix}2-\lambda &0&0\\0&3-\lambda &4\\0&4&9-\lambda \end{vmatrix}},\\[6pt]&=(2-\lambda ){\bigl [}(3-\lambda )(9-\lambda )-16{\bigr ]}=-\lambda ^{3}+14\lambda ^{2}-35\lambda +22.\end{aligned}}}
The roots of the characteristic polynomial are 2, 1, and 11, which are the only three eigenvalues of A. These eigenvalues correspond to the eigenvectors
{\displaystyle {\begin{bmatrix}1&0&0\end{bmatrix}}^{\textsf {T}}}, {\displaystyle {\begin{bmatrix}0&-2&1\end{bmatrix}}^{\textsf {T}}}, and {\displaystyle {\begin{bmatrix}0&1&2\end{bmatrix}}^{\textsf {T}}}, or any nonzero multiple thereof.
==== Three-dimensional matrix example with complex eigenvalues ====
Consider the cyclic permutation matrix
{\displaystyle A={\begin{bmatrix}0&1&0\\0&0&1\\1&0&0\end{bmatrix}}.}
This matrix shifts the coordinates of the vector up by one position and moves the first coordinate to the bottom. Its characteristic polynomial is 1 − λ3, whose roots are
{\displaystyle {\begin{aligned}\lambda _{1}&=1\\\lambda _{2}&=-{\frac {1}{2}}+i{\frac {\sqrt {3}}{2}}\\\lambda _{3}&=\lambda _{2}^{*}=-{\frac {1}{2}}-i{\frac {\sqrt {3}}{2}}\end{aligned}}}
where {\displaystyle i} is an imaginary unit with {\displaystyle i^{2}=-1}.
For the real eigenvalue λ1 = 1, any vector with three equal nonzero entries is an eigenvector. For example,
{\displaystyle A{\begin{bmatrix}5\\5\\5\end{bmatrix}}={\begin{bmatrix}5\\5\\5\end{bmatrix}}=1\cdot {\begin{bmatrix}5\\5\\5\end{bmatrix}}.}
For the complex conjugate pair of eigenvalues,
{\displaystyle \lambda _{2}\lambda _{3}=1,\quad \lambda _{2}^{2}=\lambda _{3},\quad \lambda _{3}^{2}=\lambda _{2}.}
Then
{\displaystyle A{\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}}={\begin{bmatrix}\lambda _{2}\\\lambda _{3}\\1\end{bmatrix}}=\lambda _{2}\cdot {\begin{bmatrix}1\\\lambda _{2}\\\lambda _{3}\end{bmatrix}},}
and
{\displaystyle A{\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}={\begin{bmatrix}\lambda _{3}\\\lambda _{2}\\1\end{bmatrix}}=\lambda _{3}\cdot {\begin{bmatrix}1\\\lambda _{3}\\\lambda _{2}\end{bmatrix}}.}
Therefore, the other two eigenvectors of A are complex and are
{\displaystyle \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}1&\lambda _{2}&\lambda _{3}\end{bmatrix}}^{\textsf {T}}}
and
{\displaystyle \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}1&\lambda _{3}&\lambda _{2}\end{bmatrix}}^{\textsf {T}}}
with eigenvalues λ2 and λ3, respectively. The two complex eigenvectors also appear in a complex conjugate pair,
{\displaystyle \mathbf {v} _{\lambda _{2}}=\mathbf {v} _{\lambda _{3}}^{*}.}
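The complex eigenpair can be verified numerically as well; in this sketch (assuming NumPy is available), λ2 is entered explicitly as −1/2 + i√3/2:

```python
import numpy as np

A = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)  # cyclic permutation
lam2 = -0.5 + 1j * np.sqrt(3) / 2          # a primitive cube root of unity
v2 = np.array([1, lam2, lam2.conjugate()])  # [1, λ2, λ3]^T with λ3 = λ2*

# A v2 = λ2 v2, using λ2^2 = λ3 and λ2 λ3 = 1
assert np.allclose(A @ v2, lam2 * v2)
```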
==== Diagonal matrix example ====
Matrices with entries only along the main diagonal are called diagonal matrices. The eigenvalues of a diagonal matrix are the diagonal elements themselves. Consider the matrix
{\displaystyle A={\begin{bmatrix}1&0&0\\0&2&0\\0&0&3\end{bmatrix}}.}
The characteristic polynomial of A is
{\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),}
which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A.
Each diagonal element corresponds to an eigenvector whose only nonzero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors,
{\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\0\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\0\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},}
respectively, as well as scalar multiples of these vectors.
==== Triangular matrix example ====
A matrix whose elements above the main diagonal are all zero is called a lower triangular matrix, while a matrix whose elements below the main diagonal are all zero is called an upper triangular matrix. As with diagonal matrices, the eigenvalues of triangular matrices are the elements of the main diagonal.
Consider the lower triangular matrix,
{\displaystyle A={\begin{bmatrix}1&0&0\\1&2&0\\2&3&3\end{bmatrix}}.}
The characteristic polynomial of A is
{\displaystyle \det(A-\lambda I)=(1-\lambda )(2-\lambda )(3-\lambda ),}
which has the roots λ1 = 1, λ2 = 2, and λ3 = 3. These roots are the diagonal elements as well as the eigenvalues of A.
These eigenvalues correspond to the eigenvectors,
{\displaystyle \mathbf {v} _{\lambda _{1}}={\begin{bmatrix}1\\-1\\{\frac {1}{2}}\end{bmatrix}},\quad \mathbf {v} _{\lambda _{2}}={\begin{bmatrix}0\\1\\-3\end{bmatrix}},\quad \mathbf {v} _{\lambda _{3}}={\begin{bmatrix}0\\0\\1\end{bmatrix}},}
respectively, as well as scalar multiples of these vectors.
==== Matrix with repeated eigenvalues example ====
As in the previous example, the lower triangular matrix
{\displaystyle A={\begin{bmatrix}2&0&0&0\\1&2&0&0\\0&1&3&0\\0&0&1&3\end{bmatrix}},}
has a characteristic polynomial that is the product of its diagonal elements,
{\displaystyle \det(A-\lambda I)={\begin{vmatrix}2-\lambda &0&0&0\\1&2-\lambda &0&0\\0&1&3-\lambda &0\\0&0&1&3-\lambda \end{vmatrix}}=(2-\lambda )^{2}(3-\lambda )^{2}.}
The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words they are both double roots. The sum of the algebraic multiplicities of all distinct eigenvalues is μA = 4 = n, the order of the characteristic polynomial and the dimension of A.
On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector
{\displaystyle {\begin{bmatrix}0&1&-1&1\end{bmatrix}}^{\textsf {T}}}
and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector
{\displaystyle {\begin{bmatrix}0&0&0&1\end{bmatrix}}^{\textsf {T}}}.
The total geometric multiplicity γA is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in a later section.
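The geometric multiplicities quoted above can be computed as the dimension of the nullspace of A − λI; a sketch assuming NumPy is available:

```python
import numpy as np

A = np.array([[2, 0, 0, 0],
              [1, 2, 0, 0],
              [0, 1, 3, 0],
              [0, 0, 1, 3]], dtype=float)

# geometric multiplicity of λ = dim null(A - λI) = n - rank(A - λI)
n = A.shape[0]
gm2 = n - np.linalg.matrix_rank(A - 2 * np.eye(n))
gm3 = n - np.linalg.matrix_rank(A - 3 * np.eye(n))

assert gm2 == 1 and gm3 == 1  # each double root has a 1-dimensional eigenspace
```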
=== Eigenvector-eigenvalue identity ===
For a Hermitian matrix, the norm squared of the jth component of a normalized eigenvector can be calculated using only the matrix eigenvalues and the eigenvalues of the corresponding minor matrix,
{\displaystyle |v_{i,j}|^{2}={\frac {\prod _{k}{(\lambda _{i}-\lambda _{k}(M_{j}))}}{\prod _{k\neq i}{(\lambda _{i}-\lambda _{k})}}},}
where {\textstyle M_{j}} is the submatrix formed by removing the jth row and column from the original matrix. This identity also extends to diagonalizable matrices, and has been rediscovered many times in the literature.
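A small numerical experiment (assuming NumPy is available; the random symmetric matrix and the choice of indices are purely illustrative) confirms the identity for one component of one eigenvector:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
H = B + B.T                      # real symmetric, hence Hermitian
lam, V = np.linalg.eigh(H)       # lam[i] pairs with unit eigenvector V[:, i]

i, j = 1, 2
Mj = np.delete(np.delete(H, j, axis=0), j, axis=1)  # drop jth row and column
mu = np.linalg.eigvalsh(Mj)      # eigenvalues of the minor M_j

lhs = abs(V[j, i]) ** 2
rhs = np.prod(lam[i] - mu) / np.prod(np.delete(lam[i] - lam, i))
assert np.isclose(lhs, rhs)
```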
== Eigenvalues and eigenfunctions of differential operators ==
The definitions of eigenvalue and eigenvectors of a linear transformation T remains valid even if the underlying vector space is an infinite-dimensional Hilbert or Banach space. A widely used class of linear transformations acting on infinite-dimensional spaces are the differential operators on function spaces. Let D be a linear differential operator on the space C∞ of infinitely differentiable real functions of a real argument t. The eigenvalue equation for D is the differential equation
{\displaystyle Df(t)=\lambda f(t)}
The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions.
=== Derivative operator example ===
Consider the derivative operator {\displaystyle {\tfrac {d}{dt}}} with eigenvalue equation
{\displaystyle {\frac {d}{dt}}f(t)=\lambda f(t).}
This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function
{\displaystyle f(t)=f(0)e^{\lambda t},}
is the eigenfunction of the derivative operator. In this case the eigenfunction is itself a function of its associated eigenvalue. In particular, for λ = 0 the eigenfunction f(t) is a constant.
The main eigenfunction article gives other examples.
== General definition ==
The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces. Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V,
{\displaystyle T:V\to V.}
We say that a nonzero vector v ∈ V is an eigenvector of T if and only if there exists a scalar λ ∈ K such that
{\displaystyle T(\mathbf {v} )=\lambda \mathbf {v} .}
This equation is called the eigenvalue equation for T, and the scalar λ is the eigenvalue of T corresponding to the eigenvector v. T(v) is the result of applying the transformation T to the vector v, while λv is the product of the scalar λ with v.
=== Eigenspaces, geometric multiplicity, and the eigenbasis ===
Given an eigenvalue λ, consider the set
{\displaystyle E=\left\{\mathbf {v} :T(\mathbf {v} )=\lambda \mathbf {v} \right\},}
which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ.
By definition of a linear transformation,
{\displaystyle {\begin{aligned}T(\mathbf {x} +\mathbf {y} )&=T(\mathbf {x} )+T(\mathbf {y} ),\\T(\alpha \mathbf {x} )&=\alpha T(\mathbf {x} ),\end{aligned}}}
for x, y ∈ V and α ∈ K. Therefore, if u and v are eigenvectors of T associated with eigenvalue λ, namely u, v ∈ E, then
{\displaystyle {\begin{aligned}T(\mathbf {u} +\mathbf {v} )&=\lambda (\mathbf {u} +\mathbf {v} ),\\T(\alpha \mathbf {v} )&=\lambda (\alpha \mathbf {v} ).\end{aligned}}}
So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely u + v, αv ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V.
If that subspace has dimension 1, it is sometimes called an eigenline.
The geometric multiplicity γT(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue. By the definition of eigenvalues and eigenvectors, γT(λ) ≥ 1 because every eigenvalue has at least one eigenvector.
The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues.
Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V called an eigenbasis can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable.
=== Spectral theory ===
If λ is an eigenvalue of T, then the operator (T − λI) is not one-to-one, and therefore its inverse (T − λI)−1 does not exist. The converse is true for finite-dimensional vector spaces, but not for infinite-dimensional vector spaces. In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue.
For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them.
=== Associative algebras and representation theory ===
One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation – an associative algebra acting on a module. The study of such actions is the field of representation theory.
The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively.
Hecke eigensheaf is a tensor-multiple of itself and is considered in Langlands correspondence.
== Dynamic equations ==
The simplest difference equations have the form
{\displaystyle x_{t}=a_{1}x_{t-1}+a_{2}x_{t-2}+\cdots +a_{k}x_{t-k}.}
The solution of this equation for x in terms of t is found by using its characteristic equation
{\displaystyle \lambda ^{k}-a_{1}\lambda ^{k-1}-a_{2}\lambda ^{k-2}-\cdots -a_{k-1}\lambda -a_{k}=0,}
which can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k – 1 equations
{\displaystyle x_{t-1}=x_{t-1},\ \dots ,\ x_{t-k+1}=x_{t-k+1},}
giving a k-dimensional system of the first order in the stacked variable vector
{\displaystyle {\begin{bmatrix}x_{t}&\cdots &x_{t-k+1}\end{bmatrix}}}
in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots
{\displaystyle \lambda _{1},\,\ldots ,\,\lambda _{k},}
for use in the solution equation
{\displaystyle x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{k}\lambda _{k}^{t}.}
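As an illustration of this procedure (the Fibonacci example is not from the original text), the recurrence x_t = x_{t−1} + x_{t−2} can be solved through the eigenvalues of its companion matrix; a sketch assuming NumPy is available:

```python
import numpy as np

# Fibonacci: x_t = 1·x_{t-1} + 1·x_{t-2}, stacked into a first-order system
C = np.array([[1.0, 1.0],
              [1.0, 0.0]])          # companion matrix; characteristic eq. λ² − λ − 1 = 0
lam1, lam2 = (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2
assert np.allclose(np.sort(np.linalg.eigvals(C)), np.sort([lam1, lam2]))

# fit c1, c2 to the initial values x_0 = 0, x_1 = 1
c1, c2 = np.linalg.solve([[1.0, 1.0], [lam1, lam2]], [0.0, 1.0])
x10 = c1 * lam1**10 + c2 * lam2**10  # solution formula x_t = c1 λ1^t + c2 λ2^t
```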
A similar procedure is used for solving a differential equation of the form
{\displaystyle {\frac {d^{k}x}{dt^{k}}}+a_{k-1}{\frac {d^{k-1}x}{dt^{k-1}}}+\cdots +a_{1}{\frac {dx}{dt}}+a_{0}x=0.}
== Calculation ==
The calculation of eigenvalues and eigenvectors is a topic where theory, as presented in elementary linear algebra textbooks, is often very far from practice.
=== Classical method ===
The classical method is to first find the eigenvalues, and then calculate the eigenvectors for each eigenvalue. It is in several ways poorly suited for non-exact arithmetics such as floating-point.
==== Eigenvalues ====
The eigenvalues of a matrix {\displaystyle A} can be determined by finding the roots of the characteristic polynomial. This is easy for {\displaystyle 2\times 2} matrices, but the difficulty increases rapidly with the size of the matrix.
In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any required accuracy. However, this approach is not viable in practice because the coefficients would be contaminated by unavoidable round-off errors, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified by Wilkinson's polynomial). Even for matrices whose elements are integers the calculation becomes nontrivial, because the sums are very long; the constant term is the determinant, which for an
{\displaystyle n\times n} matrix is a sum of {\displaystyle n!} different products.
Explicit algebraic formulas for the roots of a polynomial exist only if the degree
{\displaystyle n} is 4 or less. According to the Abel–Ruffini theorem there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more. (Generality matters because any polynomial with degree {\displaystyle n} is the characteristic polynomial of some companion matrix of order {\displaystyle n}.) Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods. Even the exact formula for the roots of a degree 3 polynomial is numerically impractical.
==== Eigenvectors ====
Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding nonzero solutions of the eigenvalue equation, that becomes a system of linear equations with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix
{\displaystyle A={\begin{bmatrix}4&1\\6&3\end{bmatrix}}}
we can find its eigenvectors by solving the equation {\displaystyle Av=6v}, that is
{\displaystyle {\begin{bmatrix}4&1\\6&3\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}=6\cdot {\begin{bmatrix}x\\y\end{bmatrix}}}
This matrix equation is equivalent to two linear equations
{\displaystyle \left\{{\begin{aligned}4x+y&=6x\\6x+3y&=6y\end{aligned}}\right.}
that is
{\displaystyle \left\{{\begin{aligned}-2x+y&=0\\6x-3y&=0\end{aligned}}\right.}
Both equations reduce to the single linear equation
{\displaystyle y=2x}. Therefore, any vector of the form {\displaystyle {\begin{bmatrix}a&2a\end{bmatrix}}^{\textsf {T}}}, for any nonzero real number {\displaystyle a}, is an eigenvector of {\displaystyle A} with eigenvalue {\displaystyle \lambda =6}.
The matrix {\displaystyle A} above has another eigenvalue {\displaystyle \lambda =1}. A similar calculation shows that the corresponding eigenvectors are the nonzero solutions of {\displaystyle 3x+y=0}, that is, any vector of the form {\displaystyle {\begin{bmatrix}b&-3b\end{bmatrix}}^{\textsf {T}}}, for any nonzero real number {\displaystyle b}.
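Both eigenpairs of this 2×2 example can be confirmed numerically (a sketch assuming NumPy is available):

```python
import numpy as np

A = np.array([[4.0, 1.0], [6.0, 3.0]])
assert np.allclose(sorted(np.linalg.eigvals(A)), [1.0, 6.0])

v6 = np.array([1.0, 2.0])    # the form [a, 2a]^T with a = 1
v1 = np.array([1.0, -3.0])   # the form [b, -3b]^T with b = 1
assert np.allclose(A @ v6, 6 * v6)
assert np.allclose(A @ v1, 1 * v1)
```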
=== Simple iterative methods ===
The converse approach, of first seeking the eigenvectors and then determining each eigenvalue from its eigenvector, turns out to be far more tractable for computers. The easiest algorithm here consists of picking an arbitrary starting vector and then repeatedly multiplying it with the matrix (optionally normalizing the vector to keep its elements of reasonable size); this makes the vector converge towards an eigenvector. A variation is to instead multiply the vector by
{\displaystyle (A-\mu I)^{-1}}; this causes it to converge to an eigenvector of the eigenvalue closest to {\displaystyle \mu \in \mathbb {C} }.
If {\displaystyle \mathbf {v} } is (a good approximation of) an eigenvector of {\displaystyle A}, then the corresponding eigenvalue can be computed as
{\displaystyle \lambda ={\frac {\mathbf {v} ^{*}A\mathbf {v} }{\mathbf {v} ^{*}\mathbf {v} }}}
where {\displaystyle \mathbf {v} ^{*}} denotes the conjugate transpose of {\displaystyle \mathbf {v} }.
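The iteration just described, repeated multiplication with normalization followed by a Rayleigh quotient, can be sketched as follows (assuming NumPy is available; the iteration count and random start are arbitrary choices):

```python
import numpy as np

def power_iteration(A, iters=200, seed=0):
    """Repeatedly multiply a random start vector by A, normalizing each step."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)        # keep the elements of reasonable size
    lam = v @ A @ v / (v @ v)         # Rayleigh quotient estimate of the eigenvalue
    return lam, v

A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, v = power_iteration(A)
assert np.isclose(lam, 3.0)           # converges to the dominant eigenvalue
assert np.allclose(A @ v, lam * v)
```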
=== Modern methods ===
Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the QR algorithm was designed in 1961. Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm. For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.
Most numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation, although sometimes implementors choose to discard the eigenvector information as soon as it is no longer needed.
== Applications ==
=== Geometric transformations ===
Eigenvectors and eigenvalues can be useful for understanding linear transformations of geometric shapes.
The following table presents some example transformations in the plane along with their 2×2 matrices, eigenvalues, and eigenvectors.
The characteristic equation for a rotation is a quadratic equation with discriminant
{\displaystyle D=-4(\sin \theta )^{2}}, which is a negative number whenever θ is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are complex numbers, {\displaystyle \cos \theta \pm i\sin \theta }; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane.
A linear transformation that takes a square to a rectangle of the same area (a squeeze mapping) has reciprocal eigenvalues.
=== Principal component analysis ===
The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal component analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthogonal basis for the space of the observed data: In this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data.
Principal component analysis is used as a means of dimensionality reduction in the study of large data sets, such as those encountered in bioinformatics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors). More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling.
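A minimal PCA sketch along these lines, assuming NumPy is available and using synthetic two-variable data (illustrative only): the eigendecomposition of the sample covariance matrix yields the principal components and the variance each explains.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)
# second variable is almost a linear function of the first
data = np.column_stack([x, 2 * x + 0.1 * rng.normal(size=500)])

C = np.cov(data, rowvar=False)        # sample covariance matrix (PSD)
evals, evecs = np.linalg.eigh(C)      # ascending eigenvalues, orthonormal eigenvectors

# the largest eigenvalue should carry almost all of the variance here
explained = evals[-1] / evals.sum()
assert explained > 0.99
```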
=== Graphs ===
In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix {\displaystyle A}, or (increasingly) of the graph's Laplacian matrix due to its discrete Laplace operator, which is either {\displaystyle D-A} (sometimes called the combinatorial Laplacian) or {\displaystyle I-D^{-1/2}AD^{-1/2}} (sometimes called the normalized Laplacian), where {\displaystyle D} is a diagonal matrix with {\displaystyle D_{ii}} equal to the degree of vertex {\displaystyle v_{i}}, and in {\displaystyle D^{-1/2}}, the {\displaystyle i}th diagonal entry is {\textstyle 1/{\sqrt {\deg(v_{i})}}}. The {\displaystyle k}th principal eigenvector of a graph is defined as either the eigenvector corresponding to the {\displaystyle k}th largest or {\displaystyle k}th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.
The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The second smallest eigenvector can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering.
=== Markov chains ===
A Markov chain is represented by a matrix whose entries are the transition probabilities between states of a system. In particular the entries are non-negative, and every row of the matrix sums to one, being the sum of probabilities of transitions from one state to some other state of the system. The Perron–Frobenius theorem gives sufficient conditions for a Markov chain to have a unique dominant eigenvalue, which governs the convergence of the system to a steady state.
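For a concrete sketch (assuming NumPy is available; the transition matrix is illustrative), the steady state is a left eigenvector of the transition matrix with eigenvalue 1, i.e. a right eigenvector of its transpose:

```python
import numpy as np

# a small row-stochastic transition matrix
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

evals, evecs = np.linalg.eig(P.T)   # left eigenvectors of P
k = np.argmin(abs(evals - 1.0))     # pick the eigenvalue closest to 1
pi = np.real(evecs[:, k])
pi /= pi.sum()                      # normalize to a probability vector

assert np.allclose(pi @ P, pi)      # π P = π: the stationary distribution
```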
=== Vibration analysis ===
Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors are the shapes of these vibrational modes. In particular, undamped vibration is governed by
{\displaystyle m{\ddot {x}}+kx=0}
or
{\displaystyle m{\ddot {x}}=-kx}
That is, acceleration is proportional to position (i.e., we expect {\displaystyle x} to be sinusoidal in time).
In {\displaystyle n} dimensions, {\displaystyle m} becomes a mass matrix and {\displaystyle k} a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem
{\displaystyle kx=\omega ^{2}mx}
where {\displaystyle \omega ^{2}} is the eigenvalue and {\displaystyle \omega } is the (imaginary) angular frequency. The principal vibration modes are different from the principal compliance modes, which are the eigenvectors of {\displaystyle k} alone. Furthermore, damped vibration, governed by
{\displaystyle m{\ddot {x}}+c{\dot {x}}+kx=0}
leads to a so-called quadratic eigenvalue problem,
{\displaystyle \left(\omega ^{2}m+\omega c+k\right)x=0.}
This can be reduced to a generalized eigenvalue problem by algebraic manipulation at the cost of solving a larger system.
The orthogonality properties of the eigenvectors allow the differential equations to be decoupled, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis, which neatly generalizes the solution to scalar-valued vibration problems.
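The generalized eigenvalue problem kx = ω²mx can be sketched for a toy two-degree-of-freedom system (the mass and stiffness values are made up). One standard reduction, used below, substitutes y = m^(1/2)x to obtain an ordinary symmetric eigenproblem:

```python
import numpy as np

# Illustrative two-degree-of-freedom system (values are made up).
m = np.diag([2.0, 1.0])                      # mass matrix
k = np.array([[2.0, -1.0],
              [-1.0, 2.0]])                  # stiffness matrix

# Reduce k x = w^2 m x to the standard symmetric problem A y = w^2 y
# with A = m^(-1/2) k m^(-1/2) and y = m^(1/2) x.
m_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(m)))
A = m_inv_sqrt @ k @ m_inv_sqrt
w2, Y = np.linalg.eigh(A)                    # eigenvalues w^2, ascending
freqs = np.sqrt(w2)                          # natural angular frequencies
modes = m_inv_sqrt @ Y                       # recover the physical mode shapes x
print(np.round(w2, 4))                       # here (3 -/+ sqrt(3)) / 2
```

The recovered mode shapes are mass-orthogonal (modesᵀ m modes is the identity), which is exactly the orthogonality property that decouples the equations of motion.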
=== Tensor of moment of inertia ===
In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass.
=== Stress tensor ===
In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components.
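The diagonalization of a symmetric stress tensor can be shown directly with an eigendecomposition; the stress values below are made up for illustration:

```python
import numpy as np

# A made-up symmetric Cauchy stress tensor (units arbitrary).
sigma = np.array([
    [50.0, 30.0,  0.0],
    [30.0, -20.0, 0.0],
    [ 0.0,  0.0, 10.0],
])

# Principal stresses are the eigenvalues; principal directions the eigenvectors.
principal, directions = np.linalg.eigh(sigma)

# In the principal basis the tensor is diagonal: no shear components remain.
rotated = directions.T @ sigma @ directions
print(np.round(principal, 2))
print(np.allclose(rotated, np.diag(principal)))  # True
```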
=== Schrödinger equation ===
An example of an eigenvalue equation where the transformation {\displaystyle T} is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:
{\displaystyle H\psi _{E}=E\psi _{E}}
where {\displaystyle H}, the Hamiltonian, is a second-order differential operator and {\displaystyle \psi _{E}}, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue {\displaystyle E}, interpreted as its energy.
However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for {\displaystyle \psi _{E}} within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which {\displaystyle \psi _{E}} and {\displaystyle H} can be represented as a one-dimensional array (i.e., a vector) and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form.
The bra–ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by {\displaystyle |\Psi _{E}\rangle }. In this notation, the Schrödinger equation is:
{\displaystyle H|\Psi _{E}\rangle =E|\Psi _{E}\rangle }
where {\displaystyle |\Psi _{E}\rangle } is an eigenstate of {\displaystyle H} and {\displaystyle E} represents the eigenvalue.
{\displaystyle H} is an observable self-adjoint operator, the infinite-dimensional analog of Hermitian matrices. As in the matrix case, in the equation above
{\displaystyle H|\Psi _{E}\rangle } is understood to be the vector obtained by application of the transformation {\displaystyle H} to {\displaystyle |\Psi _{E}\rangle }.
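The matrix form mentioned above can be sketched numerically. Assuming a finite-difference discretization of a particle in a box on [0, 1] with ħ = m = 1 (the grid size is an illustrative choice), the lowest eigenvalues of the Hamiltonian matrix approximate the exact energies Eₙ = n²π²/2:

```python
import numpy as np

N = 200                                   # number of interior grid points (illustrative)
h = 1.0 / (N + 1)                         # grid spacing
# Finite-difference matrix for -d^2/dx^2 with Dirichlet boundary conditions
lap = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
H = 0.5 * lap                             # Hamiltonian matrix, H = -(1/2) d^2/dx^2

E, psi = np.linalg.eigh(H)                # energies E_n and discretized wavefunctions
print(E[:3])                              # approximations of n^2 pi^2 / 2 = 4.93, 19.74, 44.41
```

The eigenvectors returned alongside the eigenvalues are discretized versions of the bound-state wavefunctions, here the familiar sine modes of the box.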
=== Wave transport ===
Light, acoustic waves, and microwaves are randomly scattered numerous times when traversing a static disordered system. Even though multiple scattering repeatedly randomizes the waves, ultimately coherent wave transport through the system is a deterministic process which can be described by a field transmission matrix
{\displaystyle \mathbf {t} }. The eigenvectors of the transmission operator {\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} }
form a set of disorder-specific input wavefronts which enable waves to couple into the disordered system's eigenchannels: the independent pathways waves can travel through the system. The eigenvalues,
{\displaystyle \tau }, of {\displaystyle \mathbf {t} ^{\dagger }\mathbf {t} }
correspond to the intensity transmittance associated with each eigenchannel. One of the remarkable properties of the transmission operator of diffusive systems is their bimodal eigenvalue distribution with
{\displaystyle \tau _{\max }=1} and {\displaystyle \tau _{\min }=0}
. Furthermore, one of the striking properties of open eigenchannels, beyond the perfect transmittance, is the statistically robust spatial profile of the eigenchannels.
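The eigenchannel construction can be sketched with a random complex matrix standing in for a measured transmission matrix (this is a toy stand-in, not a physical model of a diffusive medium; the rescaling so that the best channel has unit transmittance is an assumption made for illustration):

```python
import numpy as np

# Hypothetical 8x8 complex field transmission matrix t, rescaled so the
# largest singular value is 1 (so no channel exceeds unit transmittance).
rng = np.random.default_rng(0)
t = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
t /= np.linalg.norm(t, 2)                    # spectral norm -> 1

# Eigenchannels: eigenvectors of t^dagger t; the (real, non-negative)
# eigenvalues tau are the transmittances of the channels.
tau, channels = np.linalg.eigh(t.conj().T @ t)
print(np.round(np.sort(tau)[::-1], 3))       # tau sorted from open to closed
```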
=== Molecular orbitals ===
In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. Thus, if one wants to underline this aspect, one speaks of nonlinear eigenvalue problems. Such equations are usually solved by an iteration procedure, called in this case self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called Roothaan equations.
=== Geology and glaciology ===
In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information about a clast's fabric can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can be compared graphically or as a stereographic projection. Graphically, many geologists use a Tri-Plot (Sneed and Folk) diagram. A stereographic projection projects three-dimensional space onto a two-dimensional plane. One type of stereographic projection is the Wulff net, which is commonly used in crystallography to create stereograms.
The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. The three eigenvectors are ordered
{\displaystyle \mathbf {v} _{1},\mathbf {v} _{2},\mathbf {v} _{3}}
by their eigenvalues
{\displaystyle E_{1}\geq E_{2}\geq E_{3}}
; {\displaystyle \mathbf {v} _{1}} then is the primary orientation/dip of clast, {\displaystyle \mathbf {v} _{2}} is the secondary and {\displaystyle \mathbf {v} _{3}} is the tertiary, in terms of strength. The clast orientation is defined as the direction of the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of
{\displaystyle E_{1}}, {\displaystyle E_{2}}, and {\displaystyle E_{3}} are dictated by the nature of the sediment's fabric. If
{\displaystyle E_{1}=E_{2}=E_{3}}
, the fabric is said to be isotropic. If
{\displaystyle E_{1}=E_{2}>E_{3}}
, the fabric is said to be planar. If
{\displaystyle E_{1}>E_{2}>E_{3}}
, the fabric is said to be linear.
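The ordering of eigenvalues described above can be sketched for a made-up symmetric orientation tensor (numpy returns eigenvalues in ascending order, so the fabric convention E₁ ≥ E₂ ≥ E₃ requires a reordering):

```python
import numpy as np

# A made-up symmetric orientation tensor for a clast fabric (illustrative values).
T = np.array([
    [0.6, 0.1, 0.0],
    [0.1, 0.3, 0.0],
    [0.0, 0.0, 0.1],
])

vals, vecs = np.linalg.eigh(T)          # eigenvalues in ascending order
order = np.argsort(vals)[::-1]          # reorder so that E1 >= E2 >= E3
E = vals[order]
V = vecs[:, order]                      # columns v1, v2, v3
print(np.round(E, 3))
print(bool(E[0] >= E[1] >= E[2]))       # True: ordered fabric eigenvalues
```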
=== Basic reproduction number ===
The basic reproduction number (
{\displaystyle R_{0}}
) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then
{\displaystyle R_{0}}
is the average number of people that one typical infectious person will infect. The generation time of an infection is the time,
{\displaystyle t_{G}}
, from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time
{\displaystyle t_{G}}
has passed. The value
{\displaystyle R_{0}}
is then the largest eigenvalue of the next generation matrix.
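The spectral-radius computation is straightforward; the two-group next generation matrix below is made up for illustration:

```python
import numpy as np

# A made-up 2-group next generation matrix: entry K[i, j] is the expected
# number of new infections in group i caused by one infected person in group j.
K = np.array([
    [1.2, 0.4],
    [0.6, 0.8],
])

# R0 is the largest eigenvalue (spectral radius) of the next generation matrix.
eigenvalues = np.linalg.eigvals(K)
R0 = max(abs(eigenvalues))
print(round(float(R0), 3))   # here about 1.529, so the infection spreads
```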
=== Eigenfaces ===
In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal component analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research related to eigen vision systems determining hand gestures has also been made.
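The principal-component construction behind eigenfaces can be sketched on toy data (the "images" below are random vectors standing in for pixel brightnesses; real eigenface pipelines work the same way on much larger vectors):

```python
import numpy as np

# Toy "images": 6 samples of 4-pixel faces (one row per image, values made up).
rng = np.random.default_rng(1)
faces = rng.normal(size=(6, 4))

mean_face = faces.mean(axis=0)
centered = faces - mean_face
cov = centered.T @ centered / (len(faces) - 1)   # pixel covariance matrix

vals, vecs = np.linalg.eigh(cov)
eigenfaces = vecs[:, ::-1].T                     # rows, largest variance first

# Any centered face is a linear combination of the eigenfaces; keeping only
# the leading rows would give the compressed representation, while keeping
# all of them reconstructs the faces exactly.
weights = centered @ eigenfaces.T
reconstructed = weights @ eigenfaces + mean_face
print(np.allclose(reconstructed, faces))  # True
```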
Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation.
== See also ==
Antieigenvalue theory
Eigenoperator
Eigenplane
Eigenmoments
Eigenvalue algorithm
Quantum states
Jordan normal form
List of numerical-analysis software
Nonlinear eigenproblem
Normal eigenvalue
Quadratic eigenvalue problem
Singular value
Spectrum of a matrix
== Notes ==
=== Citations ===
== Sources ==
== Further reading ==
== External links ==
What are Eigen Values? – non-technical introduction from PhysLink.com's "Ask the Experts"
Eigen Values and Eigen Vectors Numerical Examples – Tutorial and Interactive Program from Revoledu.
Introduction to Eigen Vectors and Eigen Values – lecture from Khan Academy
Eigenvectors and eigenvalues | Essence of linear algebra, chapter 10 – A visual explanation with 3Blue1Brown
Matrix Eigenvectors Calculator from Symbolab (Click on the bottom right button of the 2×12 grid to select a matrix size. Select an {\displaystyle n\times n} size (for a square matrix), then fill out the entries numerically and click on the Go button. It can accept complex numbers as well.)
Wikiversity uses introductory physics to introduce Eigenvalues and eigenvectors
=== Theory ===
Computation of Eigenvalues
Numerical solution of eigenvalue problems Edited by Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst | Wikipedia/Eigenvalue |
In the mathematical field of complex analysis, a meromorphic function on an open subset D of the complex plane is a function that is holomorphic on all of D except for a set of isolated points, which are poles of the function. The term comes from the Greek meros (μέρος), meaning "part".
Every meromorphic function on D can be expressed as the ratio between two holomorphic functions (with the denominator not constant 0) defined on D: any pole must coincide with a zero of the denominator.
== Heuristic description ==
Intuitively, a meromorphic function is a ratio of two well-behaved (holomorphic) functions. Such a function will still be well-behaved, except possibly at the points where the denominator of the fraction is zero. If the denominator has a zero at z and the numerator does not, then the value of the function will approach infinity; if both parts have a zero at z, then one must compare the multiplicity of these zeros.
From an algebraic point of view, if the function's domain is connected, then the set of meromorphic functions is the field of fractions of the integral domain of the set of holomorphic functions. This is analogous to the relationship between the rational numbers and the integers.
== Prior, alternate use ==
Both the field of study wherein the term is used and the precise meaning of the term changed in the 20th century. In the 1930s, in group theory, a meromorphic function (or meromorph) was a function from a group G into itself that preserved the product on the group. The image of this function was called an automorphism of G. Similarly, a homomorphic function (or homomorph) was a function between groups that preserved the product, while a homomorphism was the image of a homomorph. This form of the term is now obsolete, and the related term meromorph is no longer used in group theory.
The term endomorphism is now used for the function itself, with no special name given to the image of the function.
A meromorphic function is not necessarily an endomorphism, since the complex points at its poles are not in its domain, but may be in its range.
== Properties ==
Since poles are isolated, there are at most countably many for a meromorphic function. The set of poles can be infinite, as exemplified by the function
{\displaystyle f(z)=\csc z={\frac {1}{\sin z}}.}
By using analytic continuation to eliminate removable singularities, meromorphic functions can be added, subtracted, multiplied, and the quotient
{\displaystyle f/g}
can be formed unless
{\displaystyle g(z)=0}
on a connected component of D. Thus, if D is connected, the meromorphic functions form a field, in fact a field extension of the complex numbers.
=== Higher dimensions ===
In several complex variables, a meromorphic function is defined to be locally a quotient of two holomorphic functions. For example,
{\displaystyle f(z_{1},z_{2})=z_{1}/z_{2}}
is a meromorphic function on the two-dimensional complex affine space. Here it is no longer true that every meromorphic function can be regarded as a holomorphic function with values in the Riemann sphere: There is a set of "indeterminacy" of codimension two (in the given example this set consists of the origin
{\displaystyle (0,0)}).
Unlike in dimension one, in higher dimensions there do exist compact complex manifolds on which there are no non-constant meromorphic functions, for example, most complex tori.
== Examples ==
All rational functions, for example
{\displaystyle f(z)={\frac {z^{3}-2z+10}{z^{5}+3z-1}},}
are meromorphic on the whole complex plane. Furthermore, they are the only meromorphic functions on the extended complex plane.
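For a rational function, the poles are simply the roots of the denominator (at points where the numerator does not also vanish, which one can check separately for the example above). They can be located numerically:

```python
import numpy as np

# Poles of f(z) = (z^3 - 2z + 10) / (z^5 + 3z - 1) are the roots of the
# denominator; each is an isolated singularity, so f is meromorphic on C.
denominator = [1, 0, 0, 0, 3, -1]       # coefficients of z^5 + 3z - 1
poles = np.roots(denominator)
print(len(poles))                        # 5 poles, counted with multiplicity
print(np.allclose(np.polyval(denominator, poles), 0))  # True (numerically)
```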
The functions
{\displaystyle f(z)={\frac {e^{z}}{z}}\quad {\text{and}}\quad f(z)={\frac {\sin {z}}{(z-1)^{2}}}}
as well as the gamma function and the Riemann zeta function are meromorphic on the whole complex plane.
The function
{\displaystyle f(z)=e^{\frac {1}{z}}}
is defined in the whole complex plane except for the origin, 0. However, 0 is not a pole of this function, rather an essential singularity. Thus, this function is not meromorphic in the whole complex plane. However, it is meromorphic (even holomorphic) on
{\displaystyle \mathbb {C} \setminus \{0\}}.
The complex logarithm function
{\displaystyle f(z)=\ln(z)}
is not meromorphic on the whole complex plane, as it cannot be defined on the whole complex plane while only excluding a set of isolated points.
The function
{\displaystyle f(z)=\csc {\frac {1}{z}}={\frac {1}{\sin \left({\frac {1}{z}}\right)}}}
is not meromorphic in the whole plane, since the point
{\displaystyle z=0}
is an accumulation point of poles and is thus not an isolated singularity.
The function
{\displaystyle f(z)=\sin {\frac {1}{z}}}
is not meromorphic either, as it has an essential singularity at 0.
== On Riemann surfaces ==
On a Riemann surface, every point admits an open neighborhood
which is biholomorphic to an open subset of the complex plane. Thereby the notion of a meromorphic function can be defined for every Riemann surface.
When D is the entire Riemann sphere, the field of meromorphic functions is simply the field of rational functions in one variable over the complex field, since one can prove that any meromorphic function on the sphere is rational. (This is a special case of the so-called GAGA principle.)
For every Riemann surface, a meromorphic function is the same as a holomorphic function that maps to the Riemann sphere and which is not the constant function equal to ∞. The poles correspond to those complex numbers which are mapped to ∞.
On a non-compact Riemann surface, every meromorphic function can be realized as a quotient of two (globally defined) holomorphic functions. In contrast, on a compact Riemann surface, every holomorphic function is constant, while there always exist non-constant meromorphic functions.
== See also ==
Cousin problems
Mittag-Leffler's theorem
Weierstrass factorization theorem
== Footnotes ==
== References == | Wikipedia/Meromorphic_function |
In mathematics, an analytic function is a function that is locally given by a convergent power series. There exist both real analytic functions and complex analytic functions. Functions of each type are infinitely differentiable, but complex analytic functions exhibit properties that do not generally hold for real analytic functions.
A function is analytic if and only if for every {\displaystyle x_{0}} in its domain, its Taylor series about {\displaystyle x_{0}} converges to the function in some neighborhood of {\displaystyle x_{0}}. This is stronger than merely being infinitely differentiable at {\displaystyle x_{0}}, and therefore having a well-defined Taylor series; the Fabius function provides an example of a function that is infinitely differentiable but not analytic.
== Definitions ==
Formally, a function {\displaystyle f} is real analytic on an open set {\displaystyle D} in the real line if for any {\displaystyle x_{0}\in D} one can write
{\displaystyle f(x)=\sum _{n=0}^{\infty }a_{n}\left(x-x_{0}\right)^{n}=a_{0}+a_{1}(x-x_{0})+a_{2}(x-x_{0})^{2}+\cdots }
in which the coefficients {\displaystyle a_{0},a_{1},\dots } are real numbers and the series is convergent to {\displaystyle f(x)} for {\displaystyle x} in a neighborhood of {\displaystyle x_{0}}.
Alternatively, a real analytic function is an infinitely differentiable function such that the Taylor series at any point {\displaystyle x_{0}} in its domain
{\displaystyle T(x)=\sum _{n=0}^{\infty }{\frac {f^{(n)}(x_{0})}{n!}}(x-x_{0})^{n}}
converges to {\displaystyle f(x)} for {\displaystyle x} in a neighborhood of {\displaystyle x_{0}} pointwise. The set of all real analytic functions on a given set {\displaystyle D} is often denoted by {\displaystyle {\mathcal {C}}^{\,\omega }(D)}, or just by {\displaystyle {\mathcal {C}}^{\,\omega }} if the domain is understood.
A function {\displaystyle f} defined on some subset of the real line is said to be real analytic at a point {\displaystyle x} if there is a neighborhood {\displaystyle D} of {\displaystyle x} on which {\displaystyle f} is real analytic.
The definition of a complex analytic function is obtained by replacing, in the definitions above, "real" with "complex" and "real line" with "complex plane". A function is complex analytic if and only if it is holomorphic i.e. it is complex differentiable. For this reason the terms "holomorphic" and "analytic" are often used interchangeably for such functions.
In complex analysis, a function is called analytic in an open set "U" if it is (complex) differentiable at each point in "U" and its complex derivative is continuous on "U".
== Examples ==
Typical examples of analytic functions are
The following elementary functions:
All polynomials: if a polynomial has degree n, all terms of degree greater than n in its Taylor series expansion are zero, so the series is trivially convergent. Furthermore, every polynomial is its own Maclaurin series.
The exponential function is analytic. Any Taylor series for this function converges not only for x close enough to x0 (as in the definition) but for all values of x (real or complex).
The trigonometric functions, logarithm, and the power functions are analytic on any open set of their domain.
Most special functions (at least in some range of the complex plane):
hypergeometric functions
Bessel functions
gamma functions
Typical examples of functions that are not analytic are
The absolute value function when defined on the set of real numbers or complex numbers is not everywhere analytic because it is not differentiable at 0.
Piecewise defined functions (functions given by different formulae in different regions) are typically not analytic where the pieces meet.
The complex conjugate function z → z* is not complex analytic, although its restriction to the real line is the identity function and therefore real analytic, and it is real analytic as a function from {\displaystyle \mathbb {R} ^{2}} to {\displaystyle \mathbb {R} ^{2}}.
Other non-analytic smooth functions, and in particular any smooth function {\displaystyle f} with compact support, i.e. {\displaystyle f\in {\mathcal {C}}_{0}^{\infty }(\mathbb {R} ^{n})}, cannot be analytic on {\displaystyle \mathbb {R} ^{n}}.
== Alternative characterizations ==
The following conditions are equivalent:
{\displaystyle f} is real analytic on an open set {\displaystyle D}.
There is a complex analytic extension of {\displaystyle f} to an open set {\displaystyle G\subset \mathbb {C} } which contains {\displaystyle D}.
{\displaystyle f} is smooth and for every compact set {\displaystyle K\subset D} there exists a constant {\displaystyle C} such that for every {\displaystyle x\in K} and every non-negative integer {\displaystyle k} the following bound holds
{\displaystyle \left|{\frac {d^{k}f}{dx^{k}}}(x)\right|\leq C^{k+1}k!}
Complex analytic functions are exactly equivalent to holomorphic functions, and are thus much more easily characterized.
For the case of an analytic function with several variables (see below), the real analyticity can be characterized using the Fourier–Bros–Iagolnitzer transform.
In the multivariable case, real analytic functions satisfy a direct generalization of the third characterization. Let {\displaystyle U\subset \mathbb {R} ^{n}} be an open set, and let {\displaystyle f:U\to \mathbb {R} }.
Then {\displaystyle f} is real analytic on {\displaystyle U} if and only if {\displaystyle f\in C^{\infty }(U)} and for every compact {\displaystyle K\subseteq U} there exists a constant {\displaystyle C} such that for every multi-index {\displaystyle \alpha \in \mathbb {Z} _{\geq 0}^{n}} the following bound holds
{\displaystyle \sup _{x\in K}\left|{\frac {\partial ^{\alpha }f}{\partial x^{\alpha }}}(x)\right|\leq C^{|\alpha |+1}\alpha !}
== Properties of analytic functions ==
The sums, products, and compositions of analytic functions are analytic.
The reciprocal of an analytic function that is nowhere zero is analytic, as is the inverse of an invertible analytic function whose derivative is nowhere zero. (See also the Lagrange inversion theorem.)
Any analytic function is smooth, that is, infinitely differentiable. The converse is not true for real functions; in fact, in a certain sense, the real analytic functions are sparse compared to all real infinitely differentiable functions. For the complex numbers, the converse does hold, and in fact any function differentiable once on an open set is analytic on that set (see "analyticity and differentiability" below).
For any open set {\displaystyle \Omega \subseteq \mathbb {C} }, the set A(Ω) of all analytic functions {\displaystyle u:\Omega \to \mathbb {C} } is a Fréchet space with respect to the uniform convergence on compact sets. The fact that uniform limits on compact sets of analytic functions are analytic is an easy consequence of Morera's theorem. The set {\displaystyle A_{\infty }(\Omega )} of all bounded analytic functions with the supremum norm is a Banach space.
A polynomial cannot be zero at too many points unless it is the zero polynomial (more precisely, the number of zeros is at most the degree of the polynomial). A similar but weaker statement holds for analytic functions. If the set of zeros of an analytic function ƒ has an accumulation point inside its domain, then ƒ is zero everywhere on the connected component containing the accumulation point. In other words, if (rn) is a sequence of distinct numbers such that ƒ(rn) = 0 for all n and this sequence converges to a point r in the domain of D, then ƒ is identically zero on the connected component of D containing r. This is known as the identity theorem.
Also, if all the derivatives of an analytic function at a point are zero, the function is constant on the corresponding connected component.
These statements imply that while analytic functions do have more degrees of freedom than polynomials, they are still quite rigid.
== Analyticity and differentiability ==
As noted above, any analytic function (real or complex) is infinitely differentiable (also known as smooth, or {\displaystyle {\mathcal {C}}^{\infty }}). (Note that this differentiability is in the sense of real variables; compare complex derivatives below.) There exist smooth real functions that are not analytic: see non-analytic smooth function. In fact there are many such functions.
The situation is quite different when one considers complex analytic functions and complex derivatives. It can be proved that any complex function differentiable (in the complex sense) in an open set is analytic. Consequently, in complex analysis, the term analytic function is synonymous with holomorphic function.
== Real versus complex analytic functions ==
Real and complex analytic functions have important differences (as can already be seen from their different relationships with differentiability). Analyticity of complex functions is a more restrictive property, as it has more restrictive necessary conditions, and complex analytic functions have more structure than their real-line counterparts.
According to Liouville's theorem, any bounded complex analytic function defined on the whole complex plane is constant. The corresponding statement for real analytic functions, with the complex plane replaced by the real line, is clearly false; this is illustrated by
{\displaystyle f(x)={\frac {1}{x^{2}+1}}.}
Also, if a complex analytic function is defined in an open ball around a point x0, its power series expansion at x0 is convergent in the whole open ball (holomorphic functions are analytic). This statement for real analytic functions (with open ball meaning an open interval of the real line rather than an open disk of the complex plane) is not true in general; the function of the example above gives an example for x0 = 0 and a ball of radius exceeding 1, since the power series 1 − x2 + x4 − x6... diverges for |x| ≥ 1.
Any real analytic function on some open set on the real line can be extended to a complex analytic function on some open set of the complex plane. However, not every real analytic function defined on the whole real line can be extended to a complex function defined on the whole complex plane. The function f(x) defined in the paragraph above is a counterexample, as it is not defined for x = ±i. This explains why the Taylor series of f(x) diverges for |x| > 1, i.e., the radius of convergence is 1 because the complexified function has a pole at distance 1 from the evaluation point 0 and no further poles within the open disc of radius 1 around the evaluation point.
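The radius-of-convergence behavior described above can be checked numerically for the series 1 − x² + x⁴ − x⁶ + ⋯ of 1/(1 + x²) at 0: inside |x| < 1 the partial sums converge to the function, while outside the terms do not even tend to zero.

```python
def partial_sum(x, n_terms):
    """Partial sum of the Taylor series 1 - x^2 + x^4 - x^6 + ... about 0."""
    return sum((-1) ** n * x ** (2 * n) for n in range(n_terms))

# Inside the radius of convergence (|x| < 1) the series converges to 1/(1+x^2).
print(abs(partial_sum(0.5, 60) - 1 / (1 + 0.25)) < 1e-12)    # True

# Outside (|x| >= 1) the partial sums grow without bound.
print(abs(partial_sum(2.0, 10)) < abs(partial_sum(2.0, 20)))  # True: diverging
```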
== Analytic functions of several variables ==
One can define analytic functions in several variables by means of power series in those variables (see power series). Analytic functions of several variables have some of the same properties as analytic functions of one variable. However, especially for complex analytic functions, new and interesting phenomena show up in 2 or more complex dimensions:
Zero sets of complex analytic functions in more than one variable are never discrete. This can be proved by Hartogs's extension theorem.
Domains of holomorphy for single-valued functions consist of arbitrary (connected) open sets. In several complex variables, however, only some connected open sets are domains of holomorphy. The characterization of domains of holomorphy leads to the notion of pseudoconvexity.
== See also ==
Cauchy–Riemann equations
Holomorphic function
Paley–Wiener theorem
Quasi-analytic function
Infinite compositions of analytic functions
Non-analytic smooth function
== Notes ==
== References ==
Conway, John B. (1978). Functions of One Complex Variable I. Graduate Texts in Mathematics 11 (2nd ed.). Springer-Verlag. ISBN 978-0-387-90328-6.
Krantz, Steven; Parks, Harold R. (2002). A Primer of Real Analytic Functions (2nd ed.). Birkhäuser. ISBN 0-8176-4264-1.
Gamelin, Theodore W. (2004). Complex Analysis. Springer. ISBN 9788181281142.
== External links ==
"Analytic function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Analytic Function". MathWorld.
Solver for all zeros of a complex analytic function that lie within a rectangular region by Ivan B. Ivanov | Wikipedia/Analytic_function |
In mathematics, an algebraic extension is a field extension L/K such that every element of the larger field L is algebraic over the smaller field K; that is, every element of L is a root of a non-zero polynomial with coefficients in K. A field extension that is not algebraic is said to be transcendental, and must contain transcendental elements, that is, elements that are not algebraic.
The algebraic extensions of the field {\displaystyle \mathbb {Q} } of the rational numbers are called algebraic number fields and are the main objects of study of algebraic number theory. Another example of a common algebraic extension is the extension {\displaystyle \mathbb {C} /\mathbb {R} } of the real numbers by the complex numbers.
== Some properties ==
All transcendental extensions are of infinite degree. This in turn implies that all finite extensions are algebraic. The converse is not true however: there are infinite extensions which are algebraic. For instance, the field of all algebraic numbers is an infinite algebraic extension of the rational numbers.
Let E be an extension field of K, and a ∈ E. The smallest subfield of E that contains K and a is commonly denoted {\displaystyle K(a).}
If a is algebraic over K, then the elements of K(a) can be expressed as polynomials in a with coefficients in K; that is, {\displaystyle K(a)=K[a]}, the smallest ring containing K and a. In this case, {\displaystyle K(a)} is a finite extension of K and all its elements are algebraic over K. In particular, {\displaystyle K(a)} is a K-vector space with basis {\displaystyle \{1,a,...,a^{d-1}\}}, where d is the degree of the minimal polynomial of a. These properties do not hold if a is not algebraic. For example, {\displaystyle \mathbb {Q} (\pi )\neq \mathbb {Q} [\pi ],} and they are both infinite dimensional vector spaces over {\displaystyle \mathbb {Q} .}
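The claim that K(a) = K[a] with basis {1, a, ..., a^(d−1)} can be made concrete for K = Q and a = √2, where d = 2. In the self-contained sketch below, a pair (p, q) of rationals stands for the element p + q√2; the key point is that both products and inverses stay expressible on the basis {1, √2}:

```python
from fractions import Fraction

# Elements of Q(sqrt(2)) on the basis {1, sqrt(2)}: (p, q) means p + q*sqrt(2).
def mul(u, v):
    (p, q), (r, s) = u, v
    return (p * r + 2 * q * s, p * s + q * r)    # uses sqrt(2)^2 = 2

def inv(u):
    # 1/(p + q*sqrt(2)) = (p - q*sqrt(2)) / (p^2 - 2*q^2) is again of the
    # form c0 + c1*sqrt(2): this is why K(a) = K[a] for algebraic a.
    p, q = u
    n = Fraction(p * p - 2 * q * q)
    return (Fraction(p) / n, Fraction(-q) / n)

a = (Fraction(1), Fraction(1))                   # the element 1 + sqrt(2)
cube = mul(a, mul(a, a))                         # (1 + sqrt(2))^3 = 7 + 5*sqrt(2)
print(cube)
print(mul(a, inv(a)))                            # the identity (1, 0): inverses stay in the field
```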
An algebraically closed field F has no proper algebraic extensions, that is, no algebraic extensions E with F < E. An example is the field of complex numbers. Every field has an algebraic extension which is algebraically closed (called its algebraic closure), but proving this in general requires some form of the axiom of choice.
An extension L/K is algebraic if and only if every sub K-algebra of L is a field.
== Properties ==
The following three properties hold:
If E is an algebraic extension of F and F is an algebraic extension of K then E is an algebraic extension of K.
If E and F are algebraic extensions of K in a common overfield C, then the compositum EF is an algebraic extension of K.
If E is an algebraic extension of F and E > K > F then E is an algebraic extension of K.
These finitary results can be generalized using transfinite induction:
This fact, together with Zorn's lemma (applied to an appropriately chosen poset), establishes the existence of algebraic closures.
== Generalizations ==
Model theory generalizes the notion of algebraic extension to arbitrary theories: an embedding of M into N is called an algebraic extension if for every x in N there is a formula p with parameters in M, such that p(x) is true and the set $\{y \in N \mid p(y)\}$ is finite. It turns out that applying this definition to the theory of fields gives the usual definition of algebraic extension. The Galois group of N over M can again be defined as the group of automorphisms, and it turns out that most of the theory of Galois groups can be developed for the general case.
== Relative algebraic closures ==
Given a field k and a field K containing k, one defines the relative algebraic closure of k in K to be the subfield of K consisting of all elements of K that are algebraic over k, that is, all elements of K that are a root of some nonzero polynomial with coefficients in k.
== See also ==
Integral element
Lüroth's theorem
Galois extension
Separable extension
Normal extension
== References ==
Fraleigh, John B. (2014), A First Course in Abstract Algebra, Pearson, ISBN 978-1-292-02496-7
Hazewinkel, Michiel; Gubareni, Nadiya; Gubareni, Nadezhda Mikhaĭlovna; Kirichenko, Vladimir V. (2004), Algebras, rings and modules, vol. 1, Springer, ISBN 1-4020-2690-0
Lang, Serge (1993), "V.1:Algebraic Extensions", Algebra (Third ed.), Reading, Mass.: Addison-Wesley, pp. 223ff, ISBN 978-0-201-55540-0, Zbl 0848.13001
Malik, D. B.; Mordeson, John N.; Sen, M. K. (1997), Fundamentals of Abstract Algebra, McGraw-Hill, ISBN 0-07-040035-0
McCarthy, Paul J. (1991) [corrected reprint of 2nd edition, 1976], Algebraic extensions of fields, New York: Dover Publications, ISBN 0-486-66651-4, Zbl 0768.12001
Roman, Steven (1995), Field Theory, GTM 158, Springer-Verlag, ISBN 9780387944081
Rotman, Joseph J. (2002), Advanced Modern Algebra, Prentice Hall, ISBN 9780130878687 | Wikipedia/Algebraic_extension |
In mathematics, an elementary function is a function of a single variable (typically real or complex) that is defined as taking sums, products, roots and compositions of finitely many polynomial, rational, trigonometric, hyperbolic, and exponential functions, and their inverses (e.g., arcsin, log, or $x^{1/n}$).
All elementary functions are continuous on their domains.
Elementary functions were introduced by Joseph Liouville in a series of papers from 1833 to 1841. An algebraic treatment of elementary functions was started by Joseph Fels Ritt in the 1930s. Many textbooks and dictionaries do not give a precise definition of the elementary functions, and mathematicians differ on it.
== Examples ==
=== Basic examples ===
Elementary functions of a single variable x include:
Constant functions: $2,\ \pi,\ e$, etc.
Rational powers of x: $x,\ x^{2},\ \sqrt{x}\ (x^{1/2}),\ x^{2/3}$, etc.
Exponential functions: $e^{x},\ a^{x}$
Logarithms: $\log x,\ \log_{a} x$
Trigonometric functions: $\sin x,\ \cos x,\ \tan x$, etc.
Inverse trigonometric functions: $\arcsin x,\ \arccos x$, etc.
Hyperbolic functions: $\sinh x,\ \cosh x$, etc.
Inverse hyperbolic functions: $\operatorname{arsinh} x,\ \operatorname{arcosh} x$, etc.
All functions obtained by adding, subtracting, multiplying or dividing a finite number of any of the previous functions
All functions obtained by root extraction of a polynomial with coefficients in elementary functions
All functions obtained by composing a finite number of any of the previously listed functions
Certain elementary functions of a single complex variable z, such as $\sqrt{z}$ and $\log z$, may be multivalued. Additionally, certain classes of functions may be obtained from others using the final two rules. For example, the exponential function $e^{z}$ composed with addition, subtraction, and division provides the hyperbolic functions, while initial composition with $iz$ instead provides the trigonometric functions.
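That last remark can be checked numerically: the hyperbolic and trigonometric functions are compositions of the exponential with arithmetic (and, for the trigonometric case, with multiplication by i). A minimal sketch, with an arbitrary test point:

```python
import cmath
import math

z = 0.7  # an arbitrary test point

# Hyperbolic functions from the real exponential:
cosh_z = (math.exp(z) + math.exp(-z)) / 2
sinh_z = (math.exp(z) - math.exp(-z)) / 2

# Trigonometric functions from exp composed with multiplication by i:
cos_z = (cmath.exp(1j * z) + cmath.exp(-1j * z)) / 2
sin_z = (cmath.exp(1j * z) - cmath.exp(-1j * z)) / (2j)

assert math.isclose(cosh_z, math.cosh(z))
assert math.isclose(sinh_z, math.sinh(z))
assert math.isclose(cos_z.real, math.cos(z))
assert math.isclose(sin_z.real, math.sin(z))
```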
=== Composite examples ===
Examples of elementary functions include:
Addition, e.g. (x + 1)
Multiplication, e.g. (2x)
Polynomial functions
$\dfrac{e^{\tan x}}{1+x^{2}} \sin\left(\sqrt{1+(\log x)^{2}}\right)$
$-i \log\left(x+i\sqrt{1-x^{2}}\right)$
The last function is equal to $\arccos x$, the inverse cosine, in the entire complex plane.
All monomials, polynomials, rational functions and algebraic functions are elementary.
The absolute value function, for real $x$, is also elementary as it can be expressed as the composition of a power and root of $x$: $|x| = \sqrt{x^{2}}$.
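A quick numerical sanity check of the identity $|x| = \sqrt{x^{2}}$, over a few arbitrary sample points:

```python
import math

# |x| expressed as the composition of squaring and the square root.
for x in (-3.5, -1.0, 0.0, 2.25):
    assert math.isclose(math.sqrt(x * x), abs(x))
```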
=== Non-elementary functions ===
Many mathematicians exclude non-analytic functions such as the absolute value function or discontinuous functions such as the step function, but others allow them. Some have proposed extending the set to include, for example, the Lambert W function.
Some examples of functions that are not elementary:
tetration
the gamma function
non-elementary Liouvillian functions, including
the exponential integral (Ei), logarithmic integral (Li or li) and Fresnel integrals (S and C).
the error function, $\mathrm{erf}(x) = \dfrac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^{2}}\,dt$,
a fact that may not be immediately obvious, but can be proven using the Risch algorithm.
other nonelementary integrals, including the Dirichlet integral and elliptic integral.
== Closure ==
It follows directly from the definition that the set of elementary functions is closed under arithmetic operations, root extraction and composition. The elementary functions are closed under differentiation. They are not closed under limits and infinite sums. Importantly, the elementary functions are not closed under integration, as shown by Liouville's theorem, see nonelementary integral. The Liouvillian functions are defined as the elementary functions and, recursively, the integrals of the Liouvillian functions.
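These closure properties can be observed with a computer algebra system. The sketch below uses SymPy as an illustration (the specific input functions are arbitrary choices): differentiating an elementary function yields another elementary expression, while integrating $e^{-x^{2}}$ forces SymPy out of the elementary functions and into erf.

```python
from sympy import Symbol, exp, sin, diff, integrate, erf, sqrt, pi

x = Symbol('x')

# Closure under differentiation: the derivative of an elementary
# function is again elementary (a quotient of exp, sin, cos, powers).
f = exp(sin(x)) / (1 + x**2)
df = diff(f, x)

# Non-closure under integration: the antiderivative of exp(-x^2)
# is not elementary; SymPy expresses it via the error function.
F = integrate(exp(-x**2), x)
print(F)  # sqrt(pi)*erf(x)/2
```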
== Differential algebra ==
The mathematical definition of an elementary function, or a function in elementary form, is considered in the context of differential algebra. A differential algebra is an algebra with the extra operation of derivation (algebraic version of differentiation). Using the derivation operation new equations can be written and their solutions used in extensions of the algebra. By starting with the field of rational functions, two special types of transcendental extensions (the logarithm and the exponential) can be added to the field building a tower containing elementary functions.
A differential field F is a field F0 (rational functions over the rationals Q for example) together with a derivation map u → ∂u. (Here ∂u is a new function. Sometimes the notation u′ is used.) The derivation captures the properties of differentiation, so that for any two elements of the base field, the derivation is linear
$\partial(u+v) = \partial u + \partial v$
and satisfies the Leibniz product rule
$\partial(u \cdot v) = \partial u \cdot v + u \cdot \partial v\,.$
An element h is a constant if ∂h = 0. If the base field is over the rationals, care must be taken when extending the field to add the needed transcendental constants.
A function u of a differential extension F[u] of a differential field F is an elementary function over F if the function u
is algebraic over F, or
is an exponential, that is, ∂u = u ∂a for a ∈ F, or
is a logarithm, that is, ∂u = ∂a / a for a ∈ F.
(see also Liouville's theorem)
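Under ordinary differentiation, the exponential and logarithm conditions above are just instances of the chain rule. A short SymPy check, with a generic symbolic function a(x) standing in for a field element (an illustrative assumption, not part of the formal differential-algebra setup):

```python
from sympy import Function, Symbol, log, exp, diff, simplify

x = Symbol('x')
a = Function('a')(x)  # a generic element of the base field

# u is a logarithm of a:  du = da / a
u_log = log(a)
assert simplify(diff(u_log, x) - diff(a, x) / a) == 0

# u is an exponential of a:  du = u * da
u_exp = exp(a)
assert simplify(diff(u_exp, x) - u_exp * diff(a, x)) == 0
```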
== See also ==
Algebraic function – Mathematical function
Closed-form expression – Mathematical formula involving a given set of operations
Differential Galois theory – Study of Galois symmetry groups of differential fields
Elementary function arithmetic – System of arithmetic in proof theory
Liouville's theorem (differential algebra) – Says when antiderivatives of elementary functions can be expressed as elementary functions
Tarski's high school algebra problem – Mathematical problem
Transcendental function – Analytic function that does not satisfy a polynomial equation
Tupper's self-referential formula – Formula that visually represents itself when graphed
== References ==
Liouville, Joseph (1833a). "Premier mémoire sur la détermination des intégrales dont la valeur est algébrique". Journal de l'École Polytechnique. tome XIV: 124–148.
Liouville, Joseph (1833b). "Second mémoire sur la détermination des intégrales dont la valeur est algébrique". Journal de l'École Polytechnique. tome XIV: 149–193.
Liouville, Joseph (1833c). "Note sur la détermination des intégrales dont la valeur est algébrique". Journal für die reine und angewandte Mathematik. 10: 347–359.
Ritt, Joseph (1950). Differential Algebra. AMS.
Rosenlicht, Maxwell (1972). "Integration in finite terms". American Mathematical Monthly. 79 (9): 963–972. doi:10.2307/2318066. JSTOR 2318066.
== Further reading ==
Davenport, James H. (2007). "What Might "Understand a Function" Mean?". Towards Mechanized Mathematical Assistants. Lecture Notes in Computer Science. Vol. 4573. pp. 55–65. doi:10.1007/978-3-540-73086-6_5. ISBN 978-3-540-73083-5. S2CID 8049737.
== External links ==
Elementary functions at Encyclopaedia of Mathematics
Weisstein, Eric W. "Elementary function". MathWorld. | Wikipedia/Elementary_function_(differential_algebra) |
The rank–nullity theorem is a theorem in linear algebra, which asserts:
the number of columns of a matrix M is the sum of the rank of M and the nullity of M; and
the dimension of the domain of a linear transformation f is the sum of the rank of f (the dimension of the image of f) and the nullity of f (the dimension of the kernel of f).
It follows that for linear transformations of vector spaces of equal finite dimension, either injectivity or surjectivity implies bijectivity.
== Stating the theorem ==
=== Linear transformations ===
Let $T : V \to W$ be a linear transformation between two vector spaces where $T$'s domain $V$ is finite dimensional. Then
$$\operatorname{rank}(T) + \operatorname{nullity}(T) = \dim V,$$
where $\operatorname{rank}(T)$ is the rank of $T$ (the dimension of its image) and $\operatorname{nullity}(T)$ is the nullity of $T$ (the dimension of its kernel). In other words,
$$\dim(\operatorname{Im} T) + \dim(\operatorname{Ker} T) = \dim(\operatorname{Domain}(T)).$$
This theorem can be refined via the splitting lemma to be a statement about an isomorphism of spaces, not just dimensions. Explicitly, since $T$ induces an isomorphism from $V/\operatorname{Ker}(T)$ to $\operatorname{Im}(T)$, the existence of a basis for $V$ that extends any given basis of $\operatorname{Ker}(T)$ implies, via the splitting lemma, that
$$\operatorname{Im}(T) \oplus \operatorname{Ker}(T) \cong V.$$
Taking dimensions, the rank–nullity theorem follows.
=== Matrices ===
Linear maps can be represented with matrices. More precisely, an $m \times n$ matrix M represents a linear map $f : F^{n} \to F^{m}$, where $F$ is the underlying field. So, the dimension of the domain of $f$ is n, the number of columns of M, and the rank–nullity theorem for an $m \times n$ matrix M is
$$\operatorname{rank}(M) + \operatorname{nullity}(M) = n.$$
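The matrix form of the theorem is easy to check numerically. The sketch below uses NumPy on an arbitrary example matrix whose third row is the sum of the first two, so its rank is 2; the nullity is read off from the singular values:

```python
import numpy as np

# An example 3x5 matrix; row 3 = row 1 + row 2, so rank(A) = 2.
A = np.array([[1., 2., 0., 4., 1.],
              [0., 1., 1., 1., 0.],
              [1., 3., 1., 5., 1.]])
m, n = A.shape

rank = np.linalg.matrix_rank(A)

# Nullity = n minus the number of (numerically) nonzero singular values.
s = np.linalg.svd(A, compute_uv=False)
nullity = n - int(np.sum(s > 1e-10))

assert rank + nullity == n  # the rank-nullity theorem
```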
== Proofs ==
Here we provide two proofs. The first operates in the general case, using linear maps. The second proof looks at the homogeneous system $\mathbf{A}\mathbf{x} = \mathbf{0}$, where $\mathbf{A}$ is an $m \times n$ matrix with rank $r$, and shows explicitly that there exists a set of $n - r$ linearly independent solutions that span the null space of $\mathbf{A}$.
While the theorem requires that the domain of the linear map be finite-dimensional, there is no such assumption on the codomain. This means that there are linear maps not given by matrices for which the theorem applies. Despite this, the first proof is not actually more general than the second: since the image of the linear map is finite-dimensional, we can represent the map from its domain to its image by a matrix, prove the theorem for that matrix, then compose with the inclusion of the image into the full codomain.
=== First proof ===
Let $V, W$ be vector spaces over some field $F$, and $T$ defined as in the statement of the theorem with $\dim V = n$.
As $\operatorname{Ker} T \subset V$ is a subspace, there exists a basis for it. Suppose $\dim \operatorname{Ker} T = k$ and let $\mathcal{K} := \{v_{1}, \ldots, v_{k}\} \subset \operatorname{Ker}(T)$ be such a basis.
We may now, by the Steinitz exchange lemma, extend $\mathcal{K}$ with $n - k$ linearly independent vectors $w_{1}, \ldots, w_{n-k}$ to form a full basis of $V$.
Let
$$\mathcal{S} := \{w_{1}, \ldots, w_{n-k}\} \subset V \setminus \operatorname{Ker}(T)$$
such that
$$\mathcal{B} := \mathcal{K} \cup \mathcal{S} = \{v_{1}, \ldots, v_{k}, w_{1}, \ldots, w_{n-k}\} \subset V$$
is a basis for $V$.
From this, we know that
$$\operatorname{Im} T = \operatorname{Span} T(\mathcal{B}) = \operatorname{Span}\{T(v_{1}), \ldots, T(v_{k}), T(w_{1}), \ldots, T(w_{n-k})\} = \operatorname{Span}\{T(w_{1}), \ldots, T(w_{n-k})\} = \operatorname{Span} T(\mathcal{S}).$$
We now claim that $T(\mathcal{S})$ is a basis for $\operatorname{Im} T$.
The above equality already states that $T(\mathcal{S})$ is a generating set for $\operatorname{Im} T$; it remains to be shown that it is also linearly independent to conclude that it is a basis.
Suppose $T(\mathcal{S})$ is not linearly independent, and let
$$\sum_{j=1}^{n-k} \alpha_{j} T(w_{j}) = 0_{W}$$
for some $\alpha_{j} \in F$.
Thus, owing to the linearity of $T$, it follows that
$$T\left(\sum_{j=1}^{n-k} \alpha_{j} w_{j}\right) = 0_{W} \implies \left(\sum_{j=1}^{n-k} \alpha_{j} w_{j}\right) \in \operatorname{Ker} T = \operatorname{Span} \mathcal{K} \subset V.$$
This is a contradiction to $\mathcal{B}$ being a basis, unless all $\alpha_{j}$ are equal to zero. This shows that $T(\mathcal{S})$ is linearly independent, and more specifically that it is a basis for $\operatorname{Im} T$.
To summarize, we have $\mathcal{K}$, a basis for $\operatorname{Ker} T$, and $T(\mathcal{S})$, a basis for $\operatorname{Im} T$.
Finally we may state that
$$\operatorname{Rank}(T) + \operatorname{Nullity}(T) = \dim \operatorname{Im} T + \dim \operatorname{Ker} T = |T(\mathcal{S})| + |\mathcal{K}| = (n-k) + k = n = \dim V.$$
This concludes our proof.
=== Second proof ===
Let $\mathbf{A}$ be an $m \times n$ matrix with $r$ linearly independent columns (i.e. $\operatorname{Rank}(\mathbf{A}) = r$). We will show that
$$\operatorname{Rank}(\mathbf{A}) + \operatorname{Nullity}(\mathbf{A}) = n.$$
To do this, we will produce an $n \times (n-r)$ matrix $\mathbf{X}$ whose columns form a basis of the null space of $\mathbf{A}$.
Without loss of generality, assume that the first $r$ columns of $\mathbf{A}$ are linearly independent. So, we can write
$$\mathbf{A} = \begin{pmatrix}\mathbf{A}_{1} & \mathbf{A}_{2}\end{pmatrix},$$
where $\mathbf{A}_{1}$ is an $m \times r$ matrix with $r$ linearly independent column vectors, and $\mathbf{A}_{2}$ is an $m \times (n-r)$ matrix such that each of its $n-r$ columns is a linear combination of the columns of $\mathbf{A}_{1}$.
This means that $\mathbf{A}_{2} = \mathbf{A}_{1}\mathbf{B}$ for some $r \times (n-r)$ matrix $\mathbf{B}$ (see rank factorization) and, hence,
$$\mathbf{A} = \begin{pmatrix}\mathbf{A}_{1} & \mathbf{A}_{1}\mathbf{B}\end{pmatrix}.$$
Let
$$\mathbf{X} = \begin{pmatrix}-\mathbf{B} \\ \mathbf{I}_{n-r}\end{pmatrix},$$
where $\mathbf{I}_{n-r}$ is the $(n-r) \times (n-r)$ identity matrix. So, $\mathbf{X}$ is an $n \times (n-r)$ matrix such that
$$\mathbf{A}\mathbf{X} = \begin{pmatrix}\mathbf{A}_{1} & \mathbf{A}_{1}\mathbf{B}\end{pmatrix}\begin{pmatrix}-\mathbf{B} \\ \mathbf{I}_{n-r}\end{pmatrix} = -\mathbf{A}_{1}\mathbf{B} + \mathbf{A}_{1}\mathbf{B} = \mathbf{0}_{m \times (n-r)}.$$
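This construction is easy to verify numerically. The sketch below builds a random matrix of the factored form $\mathbf{A} = (\mathbf{A}_{1}\;\; \mathbf{A}_{1}\mathbf{B})$ and checks both claims; the sizes m, r, n are arbitrary choices, and a random $\mathbf{A}_{1}$ has full column rank with probability 1:

```python
import numpy as np

m, r, n = 4, 2, 5
rng = np.random.default_rng(0)
A1 = rng.standard_normal((m, r))      # m x r, full column rank (generic)
B = rng.standard_normal((r, n - r))   # r x (n-r)
A = np.hstack([A1, A1 @ B])           # m x n matrix of rank r

# X = [[-B], [I_{n-r}]] has its columns in the null space of A ...
X = np.vstack([-B, np.eye(n - r)])
assert np.allclose(A @ X, 0)

# ... and those n-r columns are linearly independent
# (the identity block at the bottom forces this).
assert np.linalg.matrix_rank(X) == n - r
```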
Therefore, each of the $n-r$ columns of $\mathbf{X}$ is a particular solution of $\mathbf{A}\mathbf{x} = \mathbf{0}_{F^{m}}$.
Furthermore, the $n-r$ columns of $\mathbf{X}$ are linearly independent because $\mathbf{X}\mathbf{u} = \mathbf{0}_{F^{n}}$ implies $\mathbf{u} = \mathbf{0}_{F^{n-r}}$ for $\mathbf{u} \in F^{n-r}$:
$$\mathbf{X}\mathbf{u} = \mathbf{0}_{F^{n}} \implies \begin{pmatrix}-\mathbf{B} \\ \mathbf{I}_{n-r}\end{pmatrix}\mathbf{u} = \mathbf{0}_{F^{n}} \implies \begin{pmatrix}-\mathbf{B}\mathbf{u} \\ \mathbf{u}\end{pmatrix} = \begin{pmatrix}\mathbf{0}_{F^{r}} \\ \mathbf{0}_{F^{n-r}}\end{pmatrix} \implies \mathbf{u} = \mathbf{0}_{F^{n-r}}.$$
Therefore, the column vectors of $\mathbf{X}$ constitute a set of $n-r$ linearly independent solutions of $\mathbf{A}\mathbf{x} = \mathbf{0}_{F^{m}}$.
We next prove that any solution of $\mathbf{A}\mathbf{x} = \mathbf{0}_{F^{m}}$ must be a linear combination of the columns of $\mathbf{X}$.
For this, let
$$\mathbf{u} = \begin{pmatrix}\mathbf{u}_{1} \\ \mathbf{u}_{2}\end{pmatrix} \in F^{n}$$
be any vector such that $\mathbf{A}\mathbf{u} = \mathbf{0}_{F^{m}}$. Since the columns of $\mathbf{A}_{1}$ are linearly independent, $\mathbf{A}_{1}\mathbf{x} = \mathbf{0}_{F^{m}}$ implies $\mathbf{x} = \mathbf{0}_{F^{r}}$.
Therefore,
$$\mathbf{A}\mathbf{u} = \mathbf{0}_{F^{m}} \implies \begin{pmatrix}\mathbf{A}_{1} & \mathbf{A}_{1}\mathbf{B}\end{pmatrix}\begin{pmatrix}\mathbf{u}_{1} \\ \mathbf{u}_{2}\end{pmatrix} = \mathbf{A}_{1}\mathbf{u}_{1} + \mathbf{A}_{1}\mathbf{B}\mathbf{u}_{2} = \mathbf{A}_{1}(\mathbf{u}_{1} + \mathbf{B}\mathbf{u}_{2}) = \mathbf{0}_{F^{m}} \implies \mathbf{u}_{1} + \mathbf{B}\mathbf{u}_{2} = \mathbf{0}_{F^{r}} \implies \mathbf{u}_{1} = -\mathbf{B}\mathbf{u}_{2}$$
$$\implies \mathbf{u} = \begin{pmatrix}\mathbf{u}_{1} \\ \mathbf{u}_{2}\end{pmatrix} = \begin{pmatrix}-\mathbf{B} \\ \mathbf{I}_{n-r}\end{pmatrix}\mathbf{u}_{2} = \mathbf{X}\mathbf{u}_{2}.$$
This proves that any vector $\mathbf{u}$ that is a solution of $\mathbf{A}\mathbf{x} = \mathbf{0}$ must be a linear combination of the $n-r$ special solutions given by the columns of $\mathbf{X}$. And we have already seen that the columns of $\mathbf{X}$ are linearly independent. Hence, the columns of $\mathbf{X}$ constitute a basis for the null space of $\mathbf{A}$. Therefore, the nullity of $\mathbf{A}$ is $n-r$. Since $r$ equals the rank of $\mathbf{A}$, it follows that $\operatorname{Rank}(\mathbf{A}) + \operatorname{Nullity}(\mathbf{A}) = n$. This concludes our proof.
== A third fundamental subspace ==
When $T : V \to W$ is a linear transformation between two finite-dimensional vector spaces, with $n = \dim(V)$ and $m = \dim(W)$ (so it can be represented by an $m \times n$ matrix $M$), the rank–nullity theorem asserts that if $T$ has rank $r$, then $n - r$ is the dimension of the null space of $M$, which represents the kernel of $T$. In some texts, a third fundamental subspace associated to $T$ is considered alongside its image and kernel: the cokernel of $T$ is the quotient space $W/\operatorname{Im}(T)$, and its dimension is $m - r$. This dimension formula (which might also be rendered $\dim \operatorname{Im}(T) + \dim \operatorname{Coker}(T) = \dim(W)$) together with the rank–nullity theorem is sometimes called the fundamental theorem of linear algebra.
== Reformulations and generalizations ==
This theorem is a statement of the first isomorphism theorem of algebra for the case of vector spaces; it generalizes to the splitting lemma.
In more modern language, the theorem can also be phrased as saying that each short exact sequence of vector spaces splits. Explicitly, given that
$$0 \rightarrow U \rightarrow V \mathbin{\overset{T}{\rightarrow}} R \rightarrow 0$$
is a short exact sequence of vector spaces, then $U \oplus R \cong V$, hence
$$\dim(U) + \dim(R) = \dim(V).$$
Here $R$ plays the role of $\operatorname{Im} T$ and $U$ is $\operatorname{Ker} T$, i.e.
$$0 \rightarrow \ker T \hookrightarrow V \mathbin{\overset{T}{\rightarrow}} \operatorname{im} T \rightarrow 0.$$
In the finite-dimensional case, this formulation is susceptible to a generalization: if
$$0 \rightarrow V_{1} \rightarrow V_{2} \rightarrow \cdots \rightarrow V_{r} \rightarrow 0$$
is an exact sequence of finite-dimensional vector spaces, then
$$\sum_{i=1}^{r} (-1)^{i} \dim(V_{i}) = 0.$$
The rank–nullity theorem for finite-dimensional vector spaces may also be formulated in terms of the index of a linear map. The index of a linear map $T \in \operatorname{Hom}(V, W)$, where $V$ and $W$ are finite-dimensional, is defined by
$$\operatorname{index} T = \dim \operatorname{Ker}(T) - \dim \operatorname{Coker} T.$$
Intuitively, $\dim \operatorname{Ker} T$ is the number of independent solutions $v$ of the equation $Tv = 0$, and $\dim \operatorname{Coker} T$ is the number of independent restrictions that have to be put on $w$ to make $Tv = w$ solvable. The rank–nullity theorem for finite-dimensional vector spaces is equivalent to the statement
$$\operatorname{index} T = \dim V - \dim W.$$
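The point that the index depends only on the spaces, not on the particular map, can be seen numerically: for any $m \times n$ matrix, $\dim\operatorname{Ker} - \dim\operatorname{Coker} = n - m$. A minimal sketch over arbitrary example maps from $\mathbb{R}^5$ to $\mathbb{R}^3$:

```python
import numpy as np

n, m = 5, 3  # dim V = 5, dim W = 3 (arbitrary choices)
rng = np.random.default_rng(1)

# Two very different maps T : R^5 -> R^3: a generic one and the zero map.
for M in (rng.standard_normal((m, n)), np.zeros((m, n))):
    rank = np.linalg.matrix_rank(M)
    dim_ker = n - rank      # by the rank-nullity theorem
    dim_coker = m - rank    # dim of W / Im(T)
    # The index is the same for both maps: it only sees the dimensions.
    assert dim_ker - dim_coker == n - m
```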
We see that we can easily read off the index of the linear map $T$ from the involved spaces, without any need to analyze $T$ in detail. This effect also occurs in a much deeper result: the Atiyah–Singer index theorem states that the index of certain differential operators can be read off the geometry of the involved spaces.
== References ==
Axler, Sheldon (2015). Linear Algebra Done Right. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 978-3-319-11079-0.
Banerjee, Sudipto; Roy, Anindya (2014), Linear Algebra and Matrix Analysis for Statistics, Texts in Statistical Science (1st ed.), Chapman and Hall/CRC, ISBN 978-1420095388
Friedberg, Stephen H.; Insel, Arnold J.; Spence, Lawrence E. (2014). Linear Algebra (4th ed.). Pearson Education. ISBN 978-0130084514.
Meyer, Carl D. (2000), Matrix Analysis and Applied Linear Algebra, SIAM, ISBN 978-0-89871-454-8.
Katznelson, Yitzhak; Katznelson, Yonatan R. (2008). A (Terse) Introduction to Linear Algebra. American Mathematical Society. ISBN 978-0-8218-4419-9.
Valenza, Robert J. (1993) [1951]. Linear Algebra: An Introduction to Abstract Mathematics. Undergraduate Texts in Mathematics (3rd ed.). Springer. ISBN 3-540-94099-5.
== External links ==
Gilbert Strang, MIT Linear Algebra Lecture on the Four Fundamental Subspaces, from MIT OpenCourseWare | Wikipedia/Fundamental_theorem_of_linear_algebra |
In mathematical analysis, and applications in geometry, applied mathematics, engineering, and natural sciences, a function of a real variable is a function whose domain is the real numbers $\mathbb{R}$, or a subset of $\mathbb{R}$ that contains an interval of positive length. Most real functions that are considered and studied are differentiable in some interval.
The most widely considered such functions are the real functions, which are the real-valued functions of a real variable, that is, the functions of a real variable whose codomain is the set of real numbers.
Nevertheless, the codomain of a function of a real variable may be any set. However, it is often assumed to have the structure of an $\mathbb{R}$-vector space over the reals. That is, the codomain may be a Euclidean space, a coordinate vector, the set of matrices of real numbers of a given size, or an $\mathbb{R}$-algebra, such as the complex numbers or the quaternions. The $\mathbb{R}$-vector space structure of the codomain induces a structure of $\mathbb{R}$-vector space on the functions. If the codomain has a structure of $\mathbb{R}$-algebra, the same is true for the functions.
The image of a function of a real variable is a curve in the codomain. In this context, a function that defines a curve is called a parametric equation of the curve.
When the codomain of a function of a real variable is a finite-dimensional vector space, the function may be viewed as a sequence of real functions. This is often used in applications.
== Real function ==
A real function is a function from a subset of $\mathbb{R}$ to $\mathbb{R}$, where $\mathbb{R}$ denotes as usual the set of real numbers. That is, the domain of a real function is a subset of $\mathbb{R}$, and its codomain is $\mathbb{R}$. It is generally assumed that the domain contains an interval of positive length.
=== Basic examples ===
For many commonly used real functions, the domain is the whole set of real numbers, and the function is continuous and differentiable at every point of the domain. One says that these functions are defined, continuous and differentiable everywhere. This is the case of:
All polynomial functions, including constant functions and linear functions
Sine and cosine functions
Exponential function
Some functions are defined everywhere, but not continuous at some points. For example
The Heaviside step function is defined everywhere, but not continuous at zero.
Some functions are defined and continuous everywhere, but not everywhere differentiable. For example
The absolute value is defined and continuous everywhere, and is differentiable everywhere, except for zero.
The cubic root is defined and continuous everywhere, and is differentiable everywhere, except for zero.
Many common functions are not defined everywhere, but are continuous and differentiable everywhere where they are defined. For example:
A rational function is a quotient of two polynomial functions, and is not defined at the zeros of the denominator.
The tangent function is not defined for $\frac{\pi}{2} + k\pi$, where k is any integer.
The logarithm function is defined only for positive values of the variable.
Some functions are continuous in their whole domain, and not differentiable at some points. This is the case of:
The square root is defined only for nonnegative values of the variable, and not differentiable at 0 (it is differentiable for all positive values of the variable).
== General definition ==
A real-valued function of a real variable is a function that takes as input a real number, commonly represented by the variable x, for producing another real number, the value of the function, commonly denoted f(x). For simplicity, in this article a real-valued function of a real variable will be simply called a function. To avoid any ambiguity, the other types of functions that may occur will be explicitly specified.
Some functions are defined for all real values of the variables (one says that they are everywhere defined), but some other functions are defined only if the value of the variable is taken in a subset X of $\mathbb{R}$, the domain of the function, which is always supposed to contain an interval of positive length. In other words, a real-valued function of a real variable is a function $f : X \to \mathbb{R}$ such that its domain X is a subset of $\mathbb{R}$ that contains an interval of positive length.
A simple example of a function in one variable could be:
$$f : X \to \mathbb{R}, \qquad X = \{x \in \mathbb{R} : x \geq 0\}, \qquad f(x) = \sqrt{x},$$
which is the square root of x.
=== Image ===
The image of a function {\displaystyle f(x)}
is the set of all values of f when the variable x runs in the whole domain of f. For a continuous (see below for a definition) real-valued function with a connected domain, the image is either an interval or a single value. In the latter case, the function is a constant function.
The preimage of a given real number y is the set of the solutions of the equation y = f(x).
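As a small illustration (the function f(x) = x² and the sample values are chosen purely for this example), the preimage of a given y can be computed explicitly:

```python
import math

def f(x):
    return x * x

def preimage(y):
    """Solutions of y = f(x) for f(x) = x**2: empty for y < 0,
    {0} for y = 0, and {-sqrt(y), +sqrt(y)} for y > 0."""
    if y < 0:
        return set()
    if y == 0:
        return {0.0}
    r = math.sqrt(y)
    return {-r, r}

print(preimage(4.0))   # {-2.0, 2.0}
print(preimage(-1.0))  # set()
```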
=== Domain ===
The domain of a function of a real variable is a subset of {\displaystyle \mathbb {R} }
that is sometimes explicitly defined. In fact, if one restricts the domain X of a function f to a subset Y ⊂ X, one gets formally a different function, the restriction of f to Y, which is denoted f|Y. In practice, it is often not harmful to identify f and f|Y, and to omit the subscript |Y.
Conversely, it is sometimes possible to enlarge naturally the domain of a given function, for example by continuity or by analytic continuation. For this reason, it is often unnecessary to explicitly define the domain of a function of a real variable.
=== Algebraic structure ===
The arithmetic operations may be applied to the functions in the following way:
For every real number r, the constant function {\displaystyle (x)\mapsto r} is everywhere defined.
For every real number r and every function f, the function {\displaystyle rf:(x)\mapsto rf(x)} has the same domain as f (or is everywhere defined if r = 0).
If f and g are two functions of respective domains X and Y such that X ∩ Y contains an open subset of {\displaystyle \mathbb {R} }, then {\displaystyle f+g:(x)\mapsto f(x)+g(x)} and {\displaystyle f\,g:(x)\mapsto f(x)\,g(x)} are functions that have a domain containing X ∩ Y.
It follows that the functions that are everywhere defined and the functions that are defined in some neighbourhood of a given point both form commutative algebras over the reals ({\displaystyle \mathbb {R} }-algebras).
One may similarly define {\displaystyle 1/f:(x)\mapsto 1/f(x),} which is a function only if the set of points x in the domain of f such that f(x) ≠ 0 contains an open subset of {\displaystyle \mathbb {R} }. This constraint implies that the above two algebras are not fields.
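These pointwise operations can be sketched in code. The class below is illustrative, not a standard library API: it pairs each function with a domain predicate, and sums and products are defined on the intersection of domains:

```python
import math

class PartialFn:
    """A real function together with its domain, given as a predicate."""
    def __init__(self, defined, rule):
        self.defined = defined   # predicate: is x in the domain?
        self.rule = rule         # the actual formula

    def __call__(self, x):
        if not self.defined(x):
            raise ValueError(f"{x} is outside the domain")
        return self.rule(x)

    def __add__(self, other):    # domain of f + g is the intersection
        return PartialFn(lambda x: self.defined(x) and other.defined(x),
                         lambda x: self.rule(x) + other.rule(x))

    def __mul__(self, other):    # domain of f * g is the intersection
        return PartialFn(lambda x: self.defined(x) and other.defined(x),
                         lambda x: self.rule(x) * other.rule(x))

sqrt = PartialFn(lambda x: x >= 0, math.sqrt)  # defined on [0, inf)
log = PartialFn(lambda x: x > 0, math.log)     # defined on (0, inf)
h = sqrt + log                                 # defined on (0, inf)
print(h(1.0))  # sqrt(1) + log(1) = 1.0
```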
=== Continuity and limit ===
Until the second half of the 19th century, only continuous functions were considered by mathematicians. At that time, the notion of continuity was elaborated for functions of one or several real variables a rather long time before the formal definition of a topological space and of a continuous map between topological spaces. As continuous functions of a real variable are ubiquitous in mathematics, it is worth defining this notion without reference to the general notion of continuous maps between topological spaces.
For defining the continuity, it is useful to consider the distance function of {\displaystyle \mathbb {R} }, which is an everywhere defined function of 2 real variables:
{\displaystyle d(x,y)=|x-y|}
A function f is continuous at a point {\displaystyle a} which is interior to its domain, if, for every positive real number ε, there is a positive real number φ such that {\displaystyle |f(x)-f(a)|<\varepsilon } for all {\displaystyle x} such that {\displaystyle d(x,a)<\varphi .} In other words, φ may be chosen small enough so that the image under f of the interval of radius φ centered at {\displaystyle a} is contained in the interval of length 2ε centered at {\displaystyle f(a).}
A function is continuous if it is continuous at every point of its domain.
The limit of a real-valued function of a real variable is defined as follows. Let a be a point in the topological closure of the domain X of the function f. The function f has a limit L when x tends toward a, denoted
{\displaystyle L=\lim _{x\to a}f(x),}
if the following condition is satisfied: for every positive real number ε > 0, there is a positive real number δ > 0 such that
{\displaystyle |f(x)-L|<\varepsilon }
for all x in the domain such that
{\displaystyle d(x,a)<\delta .}
If the limit exists, it is unique. If a is in the interior of the domain, the limit exists if and only if the function is continuous at a. In this case, we have
{\displaystyle f(a)=\lim _{x\to a}f(x).}
When a is in the boundary of the domain of f, and f has a limit at a, the latter formula makes it possible to "extend by continuity" the domain of f to a.
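The ε–δ definition can be checked numerically for a concrete case. The sketch below (the function, the point, and the choice δ = min(1, ε/5) are all specific to this example) verifies continuity of f(x) = x² at a = 2:

```python
# For f(x) = x**2 at a = 2, δ = min(1, ε/5) works: |x - 2| < δ ≤ 1
# implies |x + 2| < 5, hence |x² - 4| = |x - 2||x + 2| < 5δ ≤ ε.
def f(x):
    return x * x

def delta_for(eps):
    return min(1.0, eps / 5.0)

a, fa = 2.0, f(2.0)
for eps in (1.0, 0.1, 0.001):
    d = delta_for(eps)
    # sample many points in (a - δ, a + δ) and check the ε bound
    samples = [a - d + 2 * d * k / 1000 for k in range(1, 1000)]
    assert all(abs(f(x) - fa) < eps for x in samples)
print("epsilon-delta check passed")
```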
== Calculus ==
One can collect a number of functions each of a real variable, say
{\displaystyle y_{1}=f_{1}(x)\,,\quad y_{2}=f_{2}(x)\,,\ldots ,y_{n}=f_{n}(x)}
into a vector parametrized by x:
{\displaystyle \mathbf {y} =(y_{1},y_{2},\ldots ,y_{n})=[f_{1}(x),f_{2}(x),\ldots ,f_{n}(x)]}
The derivative of the vector y is the vector of the derivatives of fi(x) for i = 1, 2, ..., n:
{\displaystyle {\frac {d\mathbf {y} }{dx}}=\left({\frac {dy_{1}}{dx}},{\frac {dy_{2}}{dx}},\ldots ,{\frac {dy_{n}}{dx}}\right)}
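A minimal numerical sketch (the component functions are illustrative) approximates this componentwise derivative with central differences:

```python
import math

def y(x):
    # a vector of functions of the single variable x
    return [math.sin(x), math.cos(x), x * x]

def dydx(x, h=1e-6):
    # componentwise central difference approximates (dy1/dx, ..., dyn/dx)
    yp, ym = y(x + h), y(x - h)
    return [(p - m) / (2 * h) for p, m in zip(yp, ym)]

d = dydx(1.0)
# the exact derivative at x = 1 is (cos 1, -sin 1, 2)
```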
One can also perform line integrals along a space curve parametrized by x, with position vector r = r(x), by integrating with respect to the variable x:
{\displaystyle \int _{a}^{b}\mathbf {y} (x)\cdot d\mathbf {r} =\int _{a}^{b}\mathbf {y} (x)\cdot {\frac {d\mathbf {r} (x)}{dx}}dx}
where · is the dot product, and x = a and x = b are the start and endpoints of the curve.
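A sketch of such a line integral, using the midpoint rule on an illustrative choice y(x) = (x, 1) and r(x) = (x, x²) over [0, 1], where the exact value is 3/2:

```python
def y(x):
    return (x, 1.0)

def drdx(x):
    # r(x) = (x, x**2), so dr/dx = (1, 2x)
    return (1.0, 2.0 * x)

def line_integral(a, b, n=100_000):
    # midpoint rule for the integral of y(x) · dr/dx over [a, b];
    # here the integrand is x*1 + 1*2x = 3x, whose integral on [0,1] is 3/2
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        x = a + (k + 0.5) * h
        yx, dx = y(x), drdx(x)
        total += (yx[0] * dx[0] + yx[1] * dx[1]) * h
    return total

val = line_integral(0.0, 1.0)  # close to the exact value 1.5
```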
=== Theorems ===
With the definitions of integration and derivatives, key theorems can be formulated, including the fundamental theorem of calculus, integration by parts, and Taylor's theorem. Evaluating a mixture of integrals and derivatives can be done by using the theorem of differentiation under the integral sign.
== Implicit functions ==
A real-valued implicit function of a real variable is not written in the form "y = f(x)". Instead, the mapping is from the space {\displaystyle \mathbb {R} ^{2}} to the zero element in {\displaystyle \mathbb {R} } (just the ordinary zero 0):
{\displaystyle \phi :\mathbb {R} ^{2}\to \{0\}}
and
{\displaystyle \phi (x,y)=0}
is an equation in the variables. Implicit functions are a more general way to represent functions, since if:
{\displaystyle y=f(x)}
then we can always define:
{\displaystyle \phi (x,y)=y-f(x)=0}
but the converse is not always possible, i.e. not all implicit functions have the form of this equation.
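The unit circle is the standard illustration of this point: φ(x, y) = x² + y² − 1 = 0 defines a valid implicit relation, but no single function y = f(x) recovers it, since most x values admit two solutions:

```python
import math

def phi(x, y):
    # implicit equation of the unit circle
    return x * x + y * y - 1.0

x = 0.5
y_up = math.sqrt(1 - x * x)
y_dn = -math.sqrt(1 - x * x)
assert abs(phi(x, y_up)) < 1e-12 and abs(phi(x, y_dn)) < 1e-12
assert y_up != y_dn  # two solutions: no single y = f(x) covers the circle
```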
== One-dimensional space curves in {\displaystyle \mathbb {R} ^{n}} ==
=== Formulation ===
Given the functions r1 = r1(t), r2 = r2(t), ..., rn = rn(t) all of a common variable t, so that:
{\displaystyle {\begin{aligned}r_{1}:\mathbb {R} \rightarrow \mathbb {R} &\quad r_{2}:\mathbb {R} \rightarrow \mathbb {R} &\cdots &\quad r_{n}:\mathbb {R} \rightarrow \mathbb {R} \\r_{1}=r_{1}(t)&\quad r_{2}=r_{2}(t)&\cdots &\quad r_{n}=r_{n}(t)\\\end{aligned}}}
or taken together:
{\displaystyle \mathbf {r} :\mathbb {R} \rightarrow \mathbb {R} ^{n}\,,\quad \mathbf {r} =\mathbf {r} (t)}
then the parametrized n-tuple,
{\displaystyle \mathbf {r} (t)=[r_{1}(t),r_{2}(t),\ldots ,r_{n}(t)]}
describes a one-dimensional space curve.
=== Tangent line to curve ===
At a point r(t = c) = a = (a1, a2, ..., an) for some constant t = c, the equations of the one-dimensional tangent line to the curve at that point are given in terms of the ordinary derivatives of r1(t), r2(t), ..., rn(t), and r with respect to t:
{\displaystyle {\frac {r_{1}(t)-a_{1}}{dr_{1}(t)/dt}}={\frac {r_{2}(t)-a_{2}}{dr_{2}(t)/dt}}=\cdots ={\frac {r_{n}(t)-a_{n}}{dr_{n}(t)/dt}}}
=== Normal plane to curve ===
The equation of the (n − 1)-dimensional hyperplane normal to the tangent line at r = a is:
{\displaystyle (p_{1}-a_{1}){\frac {dr_{1}(t)}{dt}}+(p_{2}-a_{2}){\frac {dr_{2}(t)}{dt}}+\cdots +(p_{n}-a_{n}){\frac {dr_{n}(t)}{dt}}=0}
or in terms of the dot product:
{\displaystyle (\mathbf {p} -\mathbf {a} )\cdot {\frac {d\mathbf {r} (t)}{dt}}=0}
where p = (p1, p2, ..., pn) ranges over the points of the plane, not on the space curve.
=== Relation to kinematics ===
The physical and geometric interpretation of dr(t)/dt is the "velocity" of a point-like particle moving along the path r(t), treating r as the spatial position vector coordinates parametrized by time t, and is a vector tangent to the space curve for all t in the instantaneous direction of motion. At t = c, the space curve has a tangent vector dr(t)/dt|t = c, and the hyperplane normal to the space curve at t = c is also normal to the tangent at t = c. Any vector in this plane (p − a) must be normal to dr(t)/dt|t = c.
Similarly, d2r(t)/dt2 is the "acceleration" of the particle, and is a vector normal to the curve directed along the radius of curvature.
== Matrix valued functions ==
A matrix can also be a function of a single variable. For example, the rotation matrix in 2d:
{\displaystyle R(\theta )={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \\\end{bmatrix}}}
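A quick sketch (plain-Python matrices, no external libraries) that builds R(θ) and checks the composition rule R(a)R(b) = R(a + b):

```python
import math

def R(theta):
    # 2D rotation matrix about the origin by angle theta
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# rotations compose by adding angles: R(0.3) R(0.4) = R(0.7)
P = matmul(R(0.3), R(0.4))
Q = R(0.7)
assert all(abs(P[i][j] - Q[i][j]) < 1e-12 for i in range(2) for j in range(2))
```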
is a matrix-valued function of the rotation angle θ about the origin. Similarly, in special relativity, the Lorentz transformation matrix for a pure boost (without rotations):
{\displaystyle \Lambda (\beta )={\begin{bmatrix}{\frac {1}{\sqrt {1-\beta ^{2}}}}&-{\frac {\beta }{\sqrt {1-\beta ^{2}}}}&0&0\\-{\frac {\beta }{\sqrt {1-\beta ^{2}}}}&{\frac {1}{\sqrt {1-\beta ^{2}}}}&0&0\\0&0&1&0\\0&0&0&1\\\end{bmatrix}}}
is a function of the boost parameter β = v/c, in which v is the relative velocity between the frames of reference (a continuous variable), and c is the speed of light, a constant.
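As a sketch (β = 0.6 is an arbitrary illustrative value), one can check numerically that a boost by β followed by a boost by −β is the identity:

```python
import math

def boost(beta):
    # pure Lorentz boost matrix in the x-direction, as given above
    g = 1.0 / math.sqrt(1.0 - beta * beta)   # Lorentz factor gamma
    return [[g, -g * beta, 0.0, 0.0],
            [-g * beta, g, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# a boost by beta followed by a boost by -beta is the identity
M = matmul(boost(0.6), boost(-0.6))
I = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
assert all(abs(M[i][j] - I[i][j]) < 1e-12 for i in range(4) for j in range(4))
```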
== Banach and Hilbert spaces and quantum mechanics ==
Generalizing the previous section, the output of a function of a real variable can also lie in a Banach space or a Hilbert space. In these spaces, addition, scalar multiplication, and limits are all defined, so notions such as derivative and integral still apply. This occurs especially often in quantum mechanics, where one takes the derivative of a ket or an operator. This occurs, for instance, in the general time-dependent Schrödinger equation:
{\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi ={\hat {H}}\Psi }
where one takes the derivative of a wave function, which can be an element of several different Hilbert spaces.
== Complex-valued function of a real variable ==
A complex-valued function of a real variable may be defined by relaxing, in the definition of the real-valued functions, the restriction of the codomain to the real numbers, and allowing complex values.
If f(x) is such a complex valued function, it may be decomposed as
f(x) = g(x) + ih(x),
where g and h are real-valued functions. In other words, the study of the complex valued functions reduces easily to the study of the pairs of real valued functions.
== Cardinality of sets of functions of a real variable ==
The cardinality of the set of real-valued functions of a real variable, {\displaystyle \mathbb {R} ^{\mathbb {R} }=\{f:\mathbb {R} \to \mathbb {R} \}}, is {\displaystyle \beth _{2}=2^{\mathfrak {c}}}, which is strictly larger than the cardinality of the continuum (i.e., the cardinality of the set of all real numbers). This fact is easily verified by cardinal arithmetic:
{\displaystyle \mathrm {card} (\mathbb {R} ^{\mathbb {R} })=\mathrm {card} (\mathbb {R} )^{\mathrm {card} (\mathbb {R} )}={\mathfrak {c}}^{\mathfrak {c}}=(2^{\aleph _{0}})^{\mathfrak {c}}=2^{\aleph _{0}\cdot {\mathfrak {c}}}=2^{\mathfrak {c}}.}
Furthermore, if {\displaystyle X} is a set such that {\displaystyle 2\leq \mathrm {card} (X)\leq {\mathfrak {c}}}, then the cardinality of the set {\displaystyle X^{\mathbb {R} }=\{f:\mathbb {R} \to X\}} is also {\displaystyle 2^{\mathfrak {c}}}, since
{\displaystyle 2^{\mathfrak {c}}=\mathrm {card} (2^{\mathbb {R} })\leq \mathrm {card} (X^{\mathbb {R} })\leq \mathrm {card} (\mathbb {R} ^{\mathbb {R} })=2^{\mathfrak {c}}.}
However, the set of continuous functions {\displaystyle C^{0}(\mathbb {R} )=\{f:\mathbb {R} \to \mathbb {R} :f\ \mathrm {continuous} \}} has a strictly smaller cardinality, the cardinality of the continuum, {\displaystyle {\mathfrak {c}}}. This follows from the fact that a continuous function is completely determined by its values on a dense subset of its domain. Thus, the cardinality of the set of continuous real-valued functions on the reals is no greater than the cardinality of the set of real-valued functions of a rational variable. By cardinal arithmetic:
{\displaystyle \mathrm {card} (C^{0}(\mathbb {R} ))\leq \mathrm {card} (\mathbb {R} ^{\mathbb {Q} })=(2^{\aleph _{0}})^{\aleph _{0}}=2^{\aleph _{0}\cdot \aleph _{0}}=2^{\aleph _{0}}={\mathfrak {c}}.}
On the other hand, since there is a clear bijection between {\displaystyle \mathbb {R} } and the set of constant functions {\displaystyle \{f:\mathbb {R} \to \mathbb {R} :f(x)\equiv x_{0}\}}, which forms a subset of {\displaystyle C^{0}(\mathbb {R} )}, {\displaystyle \mathrm {card} (C^{0}(\mathbb {R} ))\geq {\mathfrak {c}}} must also hold. Hence, {\displaystyle \mathrm {card} (C^{0}(\mathbb {R} ))={\mathfrak {c}}}.
== See also ==
Real analysis
Function of several real variables
Complex analysis
Function of several complex variables
== References ==
F. Ayres, E. Mendelson (2009). Calculus. Schaum's outline series (5th ed.). McGraw Hill. ISBN 978-0-07-150861-2.
R. Wrede, M. R. Spiegel (2010). Advanced calculus. Schaum's outline series (3rd ed.). McGraw Hill. ISBN 978-0-07-162366-7.
N. Bourbaki (2004). Functions of a Real Variable: Elementary Theory. Springer. ISBN 354-065-340-6.
== External links ==
Multivariable Calculus
L. A. Talman (2007) Differentiability for Multivariable Functions
In Euclidean geometry, an affine transformation or affinity (from the Latin, affinis, "connected with") is a geometric transformation that preserves lines and parallelism, but not necessarily Euclidean distances and angles.
More generally, an affine transformation is an automorphism of an affine space (Euclidean spaces are specific affine spaces), that is, a function which maps an affine space onto itself while preserving both the dimension of any affine subspaces (meaning that it sends points to points, lines to lines, planes to planes, and so on) and the ratios of the lengths of parallel line segments. Consequently, sets of parallel affine subspaces remain parallel after an affine transformation. An affine transformation does not necessarily preserve angles between lines or distances between points, though it does preserve ratios of distances between points lying on a straight line.
If X is the point set of an affine space, then every affine transformation on X can be represented as the composition of a linear transformation on X and a translation of X. Unlike a purely linear transformation, an affine transformation need not preserve the origin of the affine space. Thus, every linear transformation is affine, but not every affine transformation is linear.
Examples of affine transformations include translation, scaling, homothety, similarity, reflection, rotation, hyperbolic rotation, shear mapping, and compositions of them in any combination and sequence.
Viewing an affine space as the complement of a hyperplane at infinity of a projective space, the affine transformations are the projective transformations of that projective space that leave the hyperplane at infinity invariant, restricted to the complement of that hyperplane.
A generalization of an affine transformation is an affine map (or affine homomorphism or affine mapping) between two (potentially different) affine spaces over the same field k. Let (X, V, k) and (Z, W, k) be two affine spaces with X and Z the point sets and V and W the respective associated vector spaces over the field k. A map f : X → Z is an affine map if there exists a linear map mf : V → W such that mf (x − y) = f (x) − f (y) for all x, y in X.
== Definition ==
Let X be an affine space over a field k, and V be its associated vector space. An affine transformation is a bijection f from X onto itself that is an affine map; this means that a linear map g from V to V is well defined by the equation
{\displaystyle g(y-x)=f(y)-f(x);}
here, as usual, the subtraction of two points denotes the free vector from the second point to the first one, and "well-defined" means that
{\displaystyle y-x=y'-x'}
implies that
{\displaystyle f(y)-f(x)=f(y')-f(x').}
If the dimension of X is at least two, a semiaffine transformation f of X is a bijection from X onto itself satisfying:
For every d-dimensional affine subspace S of X, f (S) is also a d-dimensional affine subspace of X.
If S and T are parallel affine subspaces of X, then f (S) and f (T) are parallel.
These two conditions are satisfied by affine transformations, and express precisely what is meant by saying that "f preserves parallelism".
These conditions are not independent as the second follows from the first. Furthermore, if the field k has at least three elements, the first condition can be simplified to: f is a collineation, that is, it maps lines to lines.
== Structure ==
By the definition of an affine space, V acts on X, so that, for every pair {\displaystyle (x,\mathbf {v} )} in X × V there is associated a point y in X. We can denote this action by {\displaystyle {\vec {v}}(x)=y}. Here we use the convention that {\displaystyle {\vec {v}}={\textbf {v}}} are two interchangeable notations for an element of V. By fixing a point c in X one can define a function mc : X → V by mc(x) = {\displaystyle {\overrightarrow {cx}}}. For any c, this function is one-to-one, and so has an inverse function mc−1 : V → X given by {\displaystyle m_{c}^{-1}({\textbf {v}})={\vec {v}}(c)}. These functions can be used to turn X into a vector space (with respect to the point c) by defining:
{\displaystyle x+y=m_{c}^{-1}\left(m_{c}(x)+m_{c}(y)\right),{\text{ for all }}x,y{\text{ in }}X,}
and
{\displaystyle rx=m_{c}^{-1}\left(rm_{c}(x)\right),{\text{ for all }}r{\text{ in }}k{\text{ and }}x{\text{ in }}X.}
This vector space has origin c and formally needs to be distinguished from the affine space X, but common practice is to denote it by the same symbol and mention that it is a vector space after an origin has been specified. This identification permits points to be viewed as vectors and vice versa.
For any linear transformation λ of V, we can define the function L(c, λ) : X → X by
{\displaystyle L(c,\lambda )(x)=m_{c}^{-1}\left(\lambda (m_{c}(x))\right)=c+\lambda ({\vec {cx}}).}
Then L(c, λ) is an affine transformation of X which leaves the point c fixed. It is a linear transformation of X, viewed as a vector space with origin c.
Let σ be any affine transformation of X. Pick a point c in X and consider the translation of X by the vector
{\displaystyle {\mathbf {w}}={\overrightarrow {c\sigma (c)}}}
, denoted by Tw. Translations are affine transformations and the composition of affine transformations is an affine transformation. For this choice of c, there exists a unique linear transformation λ of V such that
{\displaystyle \sigma (x)=T_{\mathbf {w}}\left(L(c,\lambda )(x)\right).}
That is, an arbitrary affine transformation of X is the composition of a linear transformation of X (viewed as a vector space) and a translation of X.
This representation of affine transformations is often taken as the definition of an affine transformation (with the choice of origin being implicit).
== Representation ==
As shown above, an affine map is the composition of two functions: a translation and a linear map. Ordinary vector algebra uses matrix multiplication to represent linear maps, and vector addition to represent translations. Formally, in the finite-dimensional case, if the linear map is represented as a multiplication by an invertible matrix
{\displaystyle A} and the translation as the addition of a vector {\displaystyle \mathbf {b} }, an affine map {\displaystyle f} acting on a vector {\displaystyle \mathbf {x} } can be represented as
{\displaystyle \mathbf {y} =f(\mathbf {x} )=A\mathbf {x} +\mathbf {b} .}
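A minimal sketch of this representation, with an illustrative scaling matrix A and translation b; note that the origin is not fixed:

```python
def affine(A, b, x):
    # y = A x + b, with A an n-by-n matrix and b, x n-vectors
    n = len(b)
    return [sum(A[i][j] * x[j] for j in range(n)) + b[i] for i in range(n)]

A = [[2.0, 0.0], [0.0, 3.0]]   # linear part: scaling
b = [1.0, -1.0]                # translation part
y = affine(A, b, [1.0, 1.0])   # [3.0, 2.0]
# the origin maps to b, not to itself, so the map is affine but not linear
origin_image = affine(A, b, [0.0, 0.0])
```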
=== Augmented matrix ===
Using an augmented matrix and an augmented vector, it is possible to represent both the translation and the linear map using a single matrix multiplication. The technique requires that all vectors be augmented with a "1" at the end, and all matrices be augmented with an extra row of zeros at the bottom, an extra column—the translation vector—to the right, and a "1" in the lower right corner. If
{\displaystyle A} is a matrix,
{\displaystyle {\begin{bmatrix}\mathbf {y} \\1\end{bmatrix}}=\left[{\begin{array}{ccc|c}&A&&\mathbf {b} \\0&\cdots &0&1\end{array}}\right]{\begin{bmatrix}\mathbf {x} \\1\end{bmatrix}}}
is equivalent to the following
{\displaystyle \mathbf {y} =A\mathbf {x} +\mathbf {b} .}
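The equivalence of the two forms can be checked directly; the matrices below are illustrative:

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[1.0, 2.0], [0.0, 1.0]]
b = [5.0, -3.0]
x = [2.0, 4.0]

# plain form: y = A x + b
y_plain = [matvec(A, x)[i] + b[i] for i in range(2)]

# augmented form: a single multiplication by the 3x3 affine transformation
# matrix, with b as the extra column and [0 0 1] as the extra row
M = [[1.0, 2.0, 5.0],
     [0.0, 1.0, -3.0],
     [0.0, 0.0, 1.0]]
y_aug = matvec(M, x + [1.0])
assert y_aug[:2] == y_plain and y_aug[2] == 1.0
```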
The above-mentioned augmented matrix is called an affine transformation matrix. In the general case, when the last row vector is not restricted to be
{\displaystyle \left[{\begin{array}{ccc|c}0&\cdots &0&1\end{array}}\right]}
, the matrix becomes a projective transformation matrix (as it can also be used to perform projective transformations).
This representation exhibits the set of all invertible affine transformations as the semidirect product of {\displaystyle K^{n}} and {\displaystyle \operatorname {GL} (n,K)}
. This is a group under the operation of composition of functions, called the affine group.
Ordinary matrix-vector multiplication always maps the origin to the origin, and could therefore never represent a translation, in which the origin must necessarily be mapped to some other point. By appending the additional coordinate "1" to every vector, one essentially considers the space to be mapped as a subset of a space with an additional dimension. In that space, the original space occupies the subset in which the additional coordinate is 1. Thus the origin of the original space can be found at
{\displaystyle (0,0,\dotsc ,0,1)}
. A translation within the original space by means of a linear transformation of the higher-dimensional space is then possible (specifically, a shear transformation). The coordinates in the higher-dimensional space are an example of homogeneous coordinates. If the original space is Euclidean, the higher dimensional space is a real projective space.
The advantage of using homogeneous coordinates is that one can combine any number of affine transformations into one by multiplying the respective matrices. This property is used extensively in computer graphics, computer vision and robotics.
==== Example augmented matrix ====
Suppose you have three points that define a non-degenerate triangle in a plane, or four points that define a non-degenerate tetrahedron in 3-dimensional space, or generally n + 1 points x1, ..., xn+1 that define a non-degenerate simplex in n-dimensional space. Suppose you have corresponding destination points y1, ..., yn+1, where these new points can lie in a space with any number of dimensions. (Furthermore, the new points need not be distinct from each other and need not form a non-degenerate simplex.) The unique augmented matrix M that achieves the affine transformation
{\displaystyle {\begin{bmatrix}\mathbf {y} _{i}\\1\end{bmatrix}}=M{\begin{bmatrix}\mathbf {x} _{i}\\1\end{bmatrix}}}
for every i is
{\displaystyle M={\begin{bmatrix}\mathbf {y} _{1}&\cdots &\mathbf {y} _{n+1}\\1&\cdots &1\end{bmatrix}}{\begin{bmatrix}\mathbf {x} _{1}&\cdots &\mathbf {x} _{n+1}\\1&\cdots &1\end{bmatrix}}^{-1}.}
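A sketch of this construction in plain Python (the triangle corners and their destinations are hypothetical example points); the inverse is computed by Gauss-Jordan elimination:

```python
def inverse(M):
    # Gauss-Jordan inversion with partial pivoting; fine for this demo
    n = len(M)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# triangle corners and their images (hypothetical example points)
xs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
ys = [(1.0, 1.0), (3.0, 1.0), (1.0, 4.0)]

# columns are the augmented points [x; 1] and [y; 1]
X = [[x[0] for x in xs], [x[1] for x in xs], [1.0, 1.0, 1.0]]
Y = [[y[0] for y in ys], [y[1] for y in ys], [1.0, 1.0, 1.0]]
M = matmul(Y, inverse(X))

# M maps each augmented source point to the corresponding destination
for x, y in zip(xs, ys):
    img = [sum(M[i][j] * v for j, v in enumerate([x[0], x[1], 1.0]))
           for i in range(3)]
    assert all(abs(img[i] - [y[0], y[1], 1.0][i]) < 1e-9 for i in range(3))
```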
== Properties ==
=== Properties preserved ===
An affine transformation preserves:
collinearity between points: three or more points which lie on the same line (called collinear points) continue to be collinear after the transformation.
parallelism: two or more lines which are parallel, continue to be parallel after the transformation.
convexity of sets: a convex set continues to be convex after the transformation. Moreover, the extreme points of the original set are mapped to the extreme points of the transformed set.
ratios of lengths of parallel line segments: for distinct parallel segments defined by points {\displaystyle p_{1}} and {\displaystyle p_{2}}, {\displaystyle p_{3}} and {\displaystyle p_{4}}, the ratio of {\displaystyle {\overrightarrow {p_{1}p_{2}}}} and {\displaystyle {\overrightarrow {p_{3}p_{4}}}} is the same as that of {\displaystyle {\overrightarrow {f(p_{1})f(p_{2})}}} and {\displaystyle {\overrightarrow {f(p_{3})f(p_{4})}}}.
barycenters of weighted collections of points.
=== Groups ===
As an affine transformation is invertible, the square matrix {\displaystyle A} appearing in its matrix representation is invertible. The matrix representation of the inverse transformation is thus
{\displaystyle \left[{\begin{array}{ccc|c}&A^{-1}&&-A^{-1}{\vec {b}}\ \\0&\ldots &0&1\end{array}}\right].}
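A small sketch with a concrete diagonal A and translation b (both illustrative), confirming that the stated inverse undoes the forward map:

```python
# For y = A x + b with A invertible, the inverse map is x = A^-1 y - A^-1 b,
# matching the augmented-matrix inverse [A^-1 | -A^-1 b].
def forward(x):
    # A = [[2, 0], [0, 4]], b = (1, 2)
    return (2.0 * x[0] + 1.0, 4.0 * x[1] + 2.0)

def backward(y):
    # A^-1 = [[0.5, 0], [0, 0.25]], -A^-1 b = (-0.5, -0.5)
    return (0.5 * y[0] - 0.5, 0.25 * y[1] - 0.5)

p = (3.0, -1.0)
assert backward(forward(p)) == p
```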
The invertible affine transformations (of an affine space onto itself) form the affine group, which has the general linear group of degree {\displaystyle n} as subgroup and is itself a subgroup of the general linear group of degree {\displaystyle n+1}.
The similarity transformations form the subgroup where {\displaystyle A} is a scalar times an orthogonal matrix. For example, if the affine transformation acts on the plane and if the determinant of {\displaystyle A} is 1 or −1 then the transformation is an equiareal mapping. Such transformations form a subgroup called the equi-affine group. A transformation that is both equi-affine and a similarity is an isometry of the plane taken with Euclidean distance.
Each of these groups has a subgroup of orientation-preserving or positive affine transformations: those where the determinant of {\displaystyle A}
is positive. In the last case this is in 3D the group of rigid transformations (proper rotations and pure translations).
If there is a fixed point, we can take that as the origin, and the affine transformation reduces to a linear transformation. This may make it easier to classify and understand the transformation. For example, describing a transformation as a rotation by a certain angle with respect to a certain axis may give a clearer idea of the overall behavior of the transformation than describing it as a combination of a translation and a rotation. However, this depends on application and context.
== Affine maps ==
An affine map {\displaystyle f\colon {\mathcal {A}}\to {\mathcal {B}}} between two affine spaces is a map on the points that acts linearly on the vectors (that is, the vectors between points of the space). In symbols, {\displaystyle f} determines a linear transformation {\displaystyle \varphi } such that, for any pair of points {\displaystyle P,Q\in {\mathcal {A}}}:
{\displaystyle {\overrightarrow {f(P)~f(Q)}}=\varphi ({\overrightarrow {PQ}})}
or
{\displaystyle f(Q)-f(P)=\varphi (Q-P)}.
We can interpret this definition in a few other ways, as follows.
If an origin {\displaystyle O\in {\mathcal {A}}} is chosen, and {\displaystyle B} denotes its image {\displaystyle f(O)\in {\mathcal {B}}}, then this means that for any vector {\displaystyle {\vec {x}}}:
{\displaystyle f\colon (O+{\vec {x}})\mapsto (B+\varphi ({\vec {x}}))}.
If an origin {\displaystyle O'\in {\mathcal {B}}} is also chosen, this can be decomposed as an affine transformation {\displaystyle g\colon {\mathcal {A}}\to {\mathcal {B}}} that sends {\displaystyle O\mapsto O'}, namely
{\displaystyle g\colon (O+{\vec {x}})\mapsto (O'+\varphi ({\vec {x}}))},
followed by the translation by a vector {\displaystyle {\vec {b}}={\overrightarrow {O'B}}}.
The conclusion is that, intuitively, {\displaystyle f} consists of a translation and a linear map.
=== Alternative definition ===
Given two affine spaces {\displaystyle {\mathcal {A}}} and {\displaystyle {\mathcal {B}}}, over the same field, a function {\displaystyle f\colon {\mathcal {A}}\to {\mathcal {B}}} is an affine map if and only if for every family {\displaystyle \{(a_{i},\lambda _{i})\}_{i\in I}} of weighted points in {\displaystyle {\mathcal {A}}} such that {\displaystyle \sum _{i\in I}\lambda _{i}=1}, we have
{\displaystyle f\left(\sum _{i\in I}\lambda _{i}a_{i}\right)=\sum _{i\in I}\lambda _{i}f(a_{i})}.
In other words, {\displaystyle f} preserves barycenters.
== History ==
The word "affine" as a mathematical term is defined in connection with tangents to curves in Euler's 1748 Introductio in analysin infinitorum. Felix Klein attributes the term "affine transformation" to Möbius and Gauss.
== Image transformation ==
In their applications to digital image processing, the affine transformations are analogous to printing on a sheet of rubber and stretching the sheet's edges parallel to the plane. This transform relocates pixels; since moved pixels rarely land exactly on the pixel grid, intensity interpolation is required to approximate their values, and bicubic interpolation is the standard for image transformations in image processing applications. Affine transformations scale, rotate, translate, mirror and shear images as shown in the following examples:
The affine transforms are applicable to the registration process where two or more images are aligned (registered). An example of image registration is the generation of panoramic images that are the product of multiple images stitched together.
=== Affine warping ===
The affine transform preserves parallel lines. However, the stretching and shearing transformations warp shapes, as the following example shows:
This is an example of image warping. However, the affine transformations do not facilitate projection onto a curved surface or radial distortions.
== In the plane ==
Every affine transformation in a Euclidean plane is the composition of a translation and an affine transformation that fixes a point; the latter may be
a homothety,
a rotation around the fixed point,
a scaling, with possibly negative scaling factors, in two directions (not necessarily perpendicular); this includes reflections,
a shear mapping, or
a squeeze mapping.
Given two non-degenerate triangles ABC and A′B′C′ in a Euclidean plane, there is a unique affine transformation T that maps A to A′, B to B′ and C to C′. Each of ABC and A′B′C′ defines an affine coordinate system and a barycentric coordinate system. Given a point P, the point T(P) is the point that has the same coordinates on the second system as the coordinates of P on the first system.
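The unique transformation T can be computed by solving a small linear system: writing T(s) = Ms + t, the three vertex correspondences give six linear equations in the six unknowns. A sketch (the triangles here are arbitrary examples):

```python
import numpy as np

def affine_from_triangles(src, dst):
    """Solve for M (2x2) and t (2,) with M @ s + t = d for three point pairs.

    src, dst: arrays of shape (3, 2) holding the triangle vertices.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    # Rows [x, y, 1]; invertible exactly when the source triangle
    # is non-degenerate.
    X = np.hstack([src, np.ones((3, 1))])
    # Solve X @ P = dst, where P stacks the rows of M^T above t.
    P = np.linalg.solve(X, dst)
    M, t = P[:2].T, P[2]
    return M, t

src = [(0, 0), (1, 0), (0, 1)]
dst = [(2, 1), (4, 1), (2, 4)]
M, t = affine_from_triangles(src, dst)
# The recovered map sends each source vertex to its target.
assert np.allclose(M @ np.array([1.0, 1.0]) + t, [4.0, 4.0])
```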
Affine transformations do not respect lengths or angles; they multiply areas by the constant factor
area of A′B′C′ / area of ABC.
A given T may either be direct (respect orientation), or indirect (reverse orientation), and this may be determined by comparing the orientations of the triangles.
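Both facts can be read off the determinant of the linear part of T: its absolute value is the area factor, and its sign distinguishes direct from indirect. A sketch with an arbitrary example map:

```python
import numpy as np

M = np.array([[0.0, 1.0],
              [2.0, 1.0]])   # linear part of an affine map

d = np.linalg.det(M)
# |det M| is the factor by which areas are multiplied.
assert np.isclose(abs(d), 2.0)
# sign(det M) < 0: this map is indirect, i.e. it reverses orientation.
assert d < 0
```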
== Examples ==
=== Over the real numbers ===
The functions {\displaystyle f\colon \mathbb {R} \to \mathbb {R} ,\;f(x)=mx+c} with {\displaystyle m} and {\displaystyle c} in {\displaystyle \mathbb {R} } and {\displaystyle m\neq 0} are precisely the affine transformations of the real line.
=== In plane geometry ===
In {\displaystyle \mathbb {R} ^{2}}, the transformation shown at left is accomplished using the map given by:
{\displaystyle {\begin{bmatrix}x\\y\end{bmatrix}}\mapsto {\begin{bmatrix}0&1\\2&1\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}+{\begin{bmatrix}-100\\-100\end{bmatrix}}}
Transforming the three corner points of the original triangle (in red) gives three new points which form the new triangle (in blue). This transformation skews and translates the original triangle.
In fact, all triangles are related to one another by affine transformations. This is also true for all parallelograms, but not for all quadrilaterals.
== See also ==
Anamorphosis – artistic applications of affine transformations
Affine geometry
3D projection
Homography
Flat (geometry)
Bent function
Multilinear polynomial
== Notes ==
== References ==
Berger, Marcel (1987), Geometry I, Berlin: Springer, ISBN 3-540-11658-3
Brannan, David A.; Esplen, Matthew F.; Gray, Jeremy J. (1999), Geometry, Cambridge University Press, ISBN 978-0-521-59787-6
Nomizu, Katsumi; Sasaki, S. (1994), Affine Differential Geometry (New ed.), Cambridge University Press, ISBN 978-0-521-44177-3
Klein, Felix (1948) [1939], Elementary Mathematics from an Advanced Standpoint: Geometry, Dover
Samuel, Pierre (1988), Projective Geometry, Springer-Verlag, ISBN 0-387-96752-4
Sharpe, R. W. (1997). Differential Geometry: Cartan's Generalization of Klein's Erlangen Program. New York: Springer. ISBN 0-387-94732-9.
Snapper, Ernst; Troyer, Robert J. (1989) [1971], Metric Affine Geometry, Dover, ISBN 978-0-486-66108-7
Wan, Zhe-xian (1993), Geometry of Classical Groups over Finite Fields, Chartwell-Bratt, ISBN 0-86238-326-9
== External links ==
Media related to Affine transformation at Wikimedia Commons
"Affine transformation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Geometric Operations: Affine Transform, R. Fisher, S. Perkins, A. Walker and E. Wolfart.
Weisstein, Eric W. "Affine Transformation". MathWorld.
Affine Transform by Bernard Vuilleumier, Wolfram Demonstrations Project.
Affine Transformation with MATLAB | Wikipedia/Affine_transformation |
In mathematics, an even function is a real function such that {\displaystyle f(-x)=f(x)} for every {\displaystyle x} in its domain. Similarly, an odd function is a function such that {\displaystyle f(-x)=-f(x)} for every {\displaystyle x} in its domain.
They are named for the parity of the powers of the power functions which satisfy each condition: the function {\displaystyle f(x)=x^{n}} is even if n is an even integer, and it is odd if n is an odd integer.
Even functions are those real functions whose graph is self-symmetric with respect to the y-axis, and odd functions are those whose graph is self-symmetric with respect to the origin.
If the domain of a real function is self-symmetric with respect to the origin, then the function can be uniquely decomposed as the sum of an even function and an odd function.
== Early history ==
The concept of even and odd functions appears to date back to the early 18th century, with Leonhard Euler playing a significant role in their formalization. Euler introduced the concepts of even and odd functions (using the Latin terms pares and impares) in his work Traiectoriarum Reciprocarum Solutio from 1727. Before Euler, however, Isaac Newton had already developed geometric means of deriving coefficients of power series when writing the Principia (1687), and included algebraic techniques in an early draft of his Quadrature of Curves, though he removed it before publication in 1706. Although Newton didn't explicitly name or focus on the even-odd decomposition, his work with power series would have involved understanding properties related to even and odd powers.
== Definition and examples ==
Evenness and oddness are generally considered for real functions, that is real-valued functions of a real variable. However, the concepts may be more generally defined for functions whose domain and codomain both have a notion of additive inverse. This includes abelian groups, all rings, all fields, and all vector spaces. Thus, for example, a real function could be odd or even (or neither), as could a complex-valued function of a vector variable, and so on.
The given examples are real functions, to illustrate the symmetry of their graphs.
=== Even functions ===
A real function f is even if, for every x in its domain, −x is also in its domain and: p. 11
{\displaystyle f(-x)=f(x)}
or equivalently
{\displaystyle f(x)-f(-x)=0.}
Geometrically, the graph of an even function is symmetric with respect to the y-axis, meaning that its graph remains unchanged after reflection about the y-axis.
Examples of even functions are:
The absolute value {\displaystyle x\mapsto |x|,}
{\displaystyle x\mapsto x^{2},}
{\displaystyle x\mapsto x^{n}} for any even integer {\displaystyle n,}
cosine {\displaystyle \cos ,}
hyperbolic cosine {\displaystyle \cosh ,}
Gaussian function {\displaystyle x\mapsto \exp(-x^{2}).}
=== Odd functions ===
A real function f is odd if, for every x in its domain, −x is also in its domain and: p. 72
{\displaystyle f(-x)=-f(x)}
or equivalently
{\displaystyle f(x)+f(-x)=0.}
Geometrically, the graph of an odd function has rotational symmetry with respect to the origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin.
If {\displaystyle x=0} is in the domain of an odd function {\displaystyle f(x)}, then {\displaystyle f(0)=0}.
Examples of odd functions are:
The sign function {\displaystyle x\mapsto \operatorname {sgn}(x),}
The identity function {\displaystyle x\mapsto x,}
{\displaystyle x\mapsto x^{n}} for any odd integer {\displaystyle n,}
{\displaystyle x\mapsto {\sqrt[{n}]{x}}} for any odd positive integer {\displaystyle n,}
sine {\displaystyle \sin ,}
hyperbolic sine {\displaystyle \sinh ,}
The error function {\displaystyle \operatorname {erf} .}
== Basic properties ==
=== Uniqueness ===
If a function is both even and odd, it is equal to 0 everywhere it is defined.
If a function is odd, the absolute value of that function is an even function.
=== Addition and subtraction ===
The sum of two even functions is even.
The sum of two odd functions is odd.
The difference between two odd functions is odd.
The difference between two even functions is even.
The sum of an even and an odd function is neither even nor odd, unless one of the functions is equal to zero over the given domain.
=== Multiplication and division ===
The product and quotient of two even functions is an even function.
This implies that the product of any number of even functions is also even.
This implies that the reciprocal of an even function is also even.
The product and quotient of two odd functions is an even function.
The product and both quotients of an even function and an odd function are odd functions.
This implies that the reciprocal of an odd function is odd.
=== Composition ===
The composition of two even functions is even.
The composition of two odd functions is odd.
The composition of an even function and an odd function is even.
The composition of any function with an even function is even (but not vice versa).
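These closure rules can be spot-checked numerically. The sketch below uses cosine and sine as sample even and odd functions; the test points are arbitrary choices:

```python
import math

def is_even(f, pts=(0.5, 1.0, 2.3)):
    return all(math.isclose(f(-x), f(x)) for x in pts)

def is_odd(f, pts=(0.5, 1.0, 2.3)):
    return all(math.isclose(f(-x), -f(x)) for x in pts)

even, odd = math.cos, math.sin

# Product rules
assert is_even(lambda x: even(x) * even(x))   # even * even -> even
assert is_even(lambda x: odd(x) * odd(x))     # odd  * odd  -> even
assert is_odd(lambda x: even(x) * odd(x))     # even * odd  -> odd

# Composition rules
assert is_even(lambda x: even(odd(x)))        # even of odd -> even
assert is_odd(lambda x: odd(odd(x)))          # odd  of odd -> odd
```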
=== Inverse function ===
If an odd function is invertible, then its inverse is also odd.
== Even–odd decomposition ==
If a real function has a domain that is self-symmetric with respect to the origin, it may be uniquely decomposed as the sum of an even and an odd function, which are called respectively the even part (or the even component) and the odd part (or the odd component) of the function, and are defined by
{\displaystyle f_{\text{even}}(x)={\frac {f(x)+f(-x)}{2}},}
and
{\displaystyle f_{\text{odd}}(x)={\frac {f(x)-f(-x)}{2}}.}
It is straightforward to verify that {\displaystyle f_{\text{even}}} is even, {\displaystyle f_{\text{odd}}} is odd, and {\displaystyle f=f_{\text{even}}+f_{\text{odd}}.}
This decomposition is unique since, if
{\displaystyle f(x)=g(x)+h(x),}
where g is even and h is odd, then {\displaystyle g=f_{\text{even}}} and {\displaystyle h=f_{\text{odd}},} since
{\displaystyle {\begin{aligned}2f_{\text{e}}(x)&=f(x)+f(-x)=g(x)+g(-x)+h(x)+h(-x)=2g(x),\\2f_{\text{o}}(x)&=f(x)-f(-x)=g(x)-g(-x)+h(x)-h(-x)=2h(x).\end{aligned}}}
For example, the hyperbolic cosine and the hyperbolic sine may be regarded as the even and odd parts of the exponential function, as the first one is an even function, the second one is odd, and
{\displaystyle e^{x}=\underbrace {\cosh(x)} _{f_{\text{even}}(x)}+\underbrace {\sinh(x)} _{f_{\text{odd}}(x)}}.
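The decomposition formulas translate directly into code. As a numerical sketch, the even and odd parts of the exponential function recover cosh and sinh:

```python
import math

def even_part(f):
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    return lambda x: (f(x) - f(-x)) / 2

for x in (-1.5, 0.0, 0.7, 2.0):
    assert math.isclose(even_part(math.exp)(x), math.cosh(x))
    assert math.isclose(odd_part(math.exp)(x), math.sinh(x))
    # The two parts always sum back to the original function.
    assert math.isclose(even_part(math.exp)(x) + odd_part(math.exp)(x),
                        math.exp(x))
```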
Fourier's sine and cosine transforms also perform even–odd decomposition by representing a function's odd part with sine waves (an odd function) and the function's even part with cosine waves (an even function).
== Further algebraic properties ==
Any linear combination of even functions is even, and the even functions form a vector space over the reals. Similarly, any linear combination of odd functions is odd, and the odd functions also form a vector space over the reals. In fact, the vector space of all real functions is the direct sum of the subspaces of even and odd functions. This is a more abstract way of expressing the property in the preceding section.
The space of functions can be considered a graded algebra over the real numbers by this property, as well as some of those above.
The even functions form a commutative algebra over the reals. However, the odd functions do not form an algebra over the reals, as they are not closed under multiplication.
== Analytic properties ==
A function's being odd or even does not imply differentiability, or even continuity. For example, the Dirichlet function is even, but is nowhere continuous.
In the following, properties involving derivatives, Fourier series, and Taylor series are considered; these concepts are thus assumed to be defined for the functions under consideration.
=== Basic analytic properties ===
The derivative of an even function is odd.
The derivative of an odd function is even.
If an odd function is integrable over a bounded symmetric interval {\displaystyle [-A,A]}, the integral over that interval is zero; that is
{\displaystyle \int _{-A}^{A}f(x)\,dx=0}.
This implies that the Cauchy principal value of an odd function over the entire real line is zero.
If an even function is integrable over a bounded symmetric interval {\displaystyle [-A,A]}, the integral over that interval is twice the integral from 0 to A; that is
{\displaystyle \int _{-A}^{A}f(x)\,dx=2\int _{0}^{A}f(x)\,dx}.
This property is also true for the improper integral when {\displaystyle A=\infty }, provided the integral from 0 to {\displaystyle \infty } converges.
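Both integral identities can be checked with a crude quadrature rule (a sketch; the midpoint rule and the choice A = 2 are arbitrary):

```python
import math

def integrate(f, a, b, n=100000):
    # Simple midpoint rule; its accuracy is plenty for this check.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

A = 2.0
# Odd function: the integral over [-A, A] vanishes.
assert abs(integrate(math.sin, -A, A)) < 1e-9
# Even function: the integral over [-A, A] is twice that over [0, A].
lhs = integrate(math.cos, -A, A)
rhs = 2 * integrate(math.cos, 0, A)
assert math.isclose(lhs, rhs, rel_tol=1e-6)
```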
=== Series ===
The Maclaurin series of an even function includes only even powers.
The Maclaurin series of an odd function includes only odd powers.
The Fourier series of a periodic even function includes only cosine terms.
The Fourier series of a periodic odd function includes only sine terms.
The Fourier transform of a purely real-valued even function is real and even. (see Fourier analysis § Symmetry properties)
The Fourier transform of a purely real-valued odd function is imaginary and odd. (see Fourier analysis § Symmetry properties)
== Harmonics ==
In signal processing, harmonic distortion occurs when a sine wave signal is sent through a memoryless nonlinear system, that is, a system whose output at time t only depends on the input at time t and does not depend on the input at any previous times. Such a system is described by a response function {\displaystyle V_{\text{out}}(t)=f(V_{\text{in}}(t))}. The type of harmonics produced depends on the response function f:
When the response function is even, the resulting signal will consist of only even harmonics of the input sine wave; {\displaystyle 0f,2f,4f,6f,\dots }
The fundamental is also an odd harmonic, so will not be present.
A simple example is a full-wave rectifier.
The {\displaystyle 0f} component represents the DC offset, due to the one-sided nature of even-symmetric transfer functions.
When it is odd, the resulting signal will consist of only odd harmonics of the input sine wave; {\displaystyle 1f,3f,5f,\dots }
The output signal will be half-wave symmetric.
A simple example is clipping in a symmetric push-pull amplifier.
When it is asymmetric, the resulting signal may contain either even or odd harmonics; {\displaystyle 1f,2f,3f,\dots }
Simple examples are a half-wave rectifier, and clipping in an asymmetrical class-A amplifier.
This does not hold true for more complex waveforms. A sawtooth wave contains both even and odd harmonics, for instance. After even-symmetric full-wave rectification, it becomes a triangle wave, which, other than the DC offset, contains only odd harmonics.
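The even-response case can be demonstrated with a discrete Fourier transform. In this sketch the squaring nonlinearity stands in for a hypothetical even response function, so only the DC term (0f) and the second harmonic survive:

```python
import numpy as np

N = 1024
t = np.arange(N) / N
vin = np.sin(2 * np.pi * 4 * t)   # input sine: 4 cycles per window

vout = vin ** 2                    # even response function f(v) = v^2
spectrum = np.abs(np.fft.rfft(vout)) / N

# sin^2 = (1 - cos(2theta)) / 2, so only bins 0 (DC) and 8 (= 2f) are present.
present = {k for k, mag in enumerate(spectrum) if mag > 1e-9}
assert present == {0, 8}
```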
== Generalizations ==
=== Multivariate functions ===
Even symmetry:
A function {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } is called even symmetric if:
{\displaystyle f(x_{1},x_{2},\ldots ,x_{n})=f(-x_{1},-x_{2},\ldots ,-x_{n})\quad {\text{for all }}x_{1},\ldots ,x_{n}\in \mathbb {R} }
Odd symmetry:
A function {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } is called odd symmetric if:
{\displaystyle f(x_{1},x_{2},\ldots ,x_{n})=-f(-x_{1},-x_{2},\ldots ,-x_{n})\quad {\text{for all }}x_{1},\ldots ,x_{n}\in \mathbb {R} }
=== Complex-valued functions ===
The definitions for even and odd symmetry for complex-valued functions of a real argument are similar to the real case. In signal processing, a similar symmetry is sometimes considered, which involves complex conjugation.
Conjugate symmetry:
A complex-valued function of a real argument {\displaystyle f:\mathbb {R} \to \mathbb {C} } is called conjugate symmetric if
{\displaystyle f(x)={\overline {f(-x)}}\quad {\text{for all }}x\in \mathbb {R} }
A complex-valued function is conjugate symmetric if and only if its real part is an even function and its imaginary part is an odd function.
A typical example of a conjugate symmetric function is the cis function
{\displaystyle x\to e^{ix}=\cos x+i\sin x}
Conjugate antisymmetry:
A complex-valued function of a real argument {\displaystyle f:\mathbb {R} \to \mathbb {C} } is called conjugate antisymmetric if:
{\displaystyle f(x)=-{\overline {f(-x)}}\quad {\text{for all }}x\in \mathbb {R} }
A complex-valued function is conjugate antisymmetric if and only if its real part is an odd function and its imaginary part is an even function.
=== Finite length sequences ===
The definitions of odd and even symmetry are extended to N-point sequences (i.e. functions of the form {\displaystyle f:\left\{0,1,\ldots ,N-1\right\}\to \mathbb {R} }) as follows: p. 411
Even symmetry:
An N-point sequence is called even symmetric if
{\displaystyle f(n)=f(N-n)\quad {\text{for all }}n\in \left\{1,\ldots ,N-1\right\}.}
Such a sequence is often called a palindromic sequence; see also Palindromic polynomial.
Odd symmetry:
An N-point sequence is called odd symmetric if
{\displaystyle f(n)=-f(N-n)\quad {\text{for all }}n\in \left\{1,\ldots ,N-1\right\}.}
Such a sequence is sometimes called an anti-palindromic sequence; see also Antipalindromic polynomial.
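Both symmetry conditions are one-line checks over the indices 1 through N−1; note that index 0 is unconstrained by either definition. A sketch with made-up example sequences:

```python
def is_even_sequence(f):
    N = len(f)
    return all(f[n] == f[N - n] for n in range(1, N))

def is_odd_sequence(f):
    N = len(f)
    return all(f[n] == -f[N - n] for n in range(1, N))

# N = 6 examples; the entry at index 0 is arbitrary.
assert is_even_sequence([7, 1, 2, 3, 2, 1])       # palindromic tail
assert is_odd_sequence([7, 1, 2, 0, -2, -1])      # anti-palindromic tail
```

For even N, odd symmetry forces the middle entry f(N/2) to be zero, as in the second example.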
== See also ==
Hermitian function for a generalization in complex numbers
Taylor series
Fourier series
Holstein–Herring method
Parity (physics)
== Notes ==
== References ==
Gelfand, I. M.; Glagoleva, E. G.; Shnol, E. E. (2002) [1969], Functions and Graphs, Mineola, N.Y: Dover Publications | Wikipedia/Odd_function |
In mathematics, a differentiable function of one real variable is a function whose derivative exists at each point in its domain. In other words, the graph of a differentiable function has a non-vertical tangent line at each interior point in its domain. A differentiable function is smooth (the function is locally well approximated as a linear function at each interior point) and does not contain any break, angle, or cusp.
If x0 is an interior point in the domain of a function f, then f is said to be differentiable at x0 if the derivative {\displaystyle f'(x_{0})} exists. In other words, the graph of f has a non-vertical tangent line at the point (x0, f(x0)). f is said to be differentiable on U if it is differentiable at every point of U. f is said to be continuously differentiable if its derivative is also a continuous function over the domain of the function {\textstyle f}. Generally speaking, f is said to be of class {\displaystyle C^{k}} if its first {\displaystyle k} derivatives {\textstyle f^{\prime }(x),f^{\prime \prime }(x),\ldots ,f^{(k)}(x)} exist and are continuous over the domain of the function {\textstyle f}.
For a multivariable function, as shown below, differentiability is a stronger condition than the existence of its partial derivatives.
== Differentiability of real functions of one variable ==
A function {\displaystyle f:U\to \mathbb {R} }, defined on an open set {\textstyle U\subset \mathbb {R} }, is said to be differentiable at {\displaystyle a\in U} if the derivative
{\displaystyle f'(a)=\lim _{h\to 0}{\frac {f(a+h)-f(a)}{h}}=\lim _{x\to a}{\frac {f(x)-f(a)}{x-a}}}
exists. This implies that the function is continuous at a.
This function f is said to be differentiable on U if it is differentiable at every point of U. In this case, the derivative of f is thus a function from U into {\displaystyle \mathbb {R} .}
A continuous function is not necessarily differentiable, but a differentiable function is necessarily continuous (at every point where it is differentiable) as is shown below (in the section Differentiability and continuity). A function is said to be continuously differentiable if its derivative is also a continuous function; there exist functions that are differentiable but not continuously differentiable (an example is given in the section Differentiability classes).
=== Semi-differentiability ===
The above definition can be extended to define the derivative at boundary points. The derivative of a function {\textstyle f:A\to \mathbb {R} } defined on a closed subset {\textstyle A\subsetneq \mathbb {R} } of the real numbers, evaluated at a boundary point {\textstyle c}, can be defined as the following one-sided limit, where the argument {\textstyle x} approaches {\textstyle c} such that it is always within {\textstyle A}:
{\displaystyle f'(c)=\lim _{\scriptstyle x\to c \atop \scriptstyle x\in A}{\frac {f(x)-f(c)}{x-c}}.}
For {\textstyle x} to remain within {\textstyle A}, which is a subset of the reals, it follows that this limit will be defined as either
{\displaystyle f'(c)=\lim _{x\to c^{+}}{\frac {f(x)-f(c)}{x-c}}\quad {\text{or}}\quad f'(c)=\lim _{x\to c^{-}}{\frac {f(x)-f(c)}{x-c}}.}
== Differentiability and continuity ==
If f is differentiable at a point x0, then f must also be continuous at x0. In particular, any differentiable function must be continuous at every point in its domain. The converse does not hold: a continuous function need not be differentiable. For example, a function with a bend, cusp, or vertical tangent may be continuous, but fails to be differentiable at the location of the anomaly.
Most functions that occur in practice have derivatives at all points or at almost every point. However, a result of Stefan Banach states that the set of functions that have a derivative at some point is a meagre set in the space of all continuous functions. Informally, this means that differentiable functions are very atypical among continuous functions. The first known example of a function that is continuous everywhere but differentiable nowhere is the Weierstrass function.
== Differentiability classes ==
A function {\textstyle f} is said to be continuously differentiable if the derivative {\textstyle f^{\prime }(x)} exists and is itself a continuous function. Although the derivative of a differentiable function never has a jump discontinuity, it is possible for the derivative to have an essential discontinuity. For example, the function
{\displaystyle f(x)\;=\;{\begin{cases}x^{2}\sin(1/x)&{\text{ if }}x\neq 0\\0&{\text{ if }}x=0\end{cases}}}
is differentiable at 0, since
{\displaystyle f'(0)=\lim _{\varepsilon \to 0}\left({\frac {\varepsilon ^{2}\sin(1/\varepsilon )-0}{\varepsilon }}\right)=0}
exists. However, for {\displaystyle x\neq 0,} differentiation rules imply
{\displaystyle f'(x)=2x\sin(1/x)-\cos(1/x)\;,}
which has no limit as {\displaystyle x\to 0.}
Thus, this example shows the existence of a function that is differentiable but not continuously differentiable (i.e., the derivative is not a continuous function). Nevertheless, Darboux's theorem implies that the derivative of any function satisfies the conclusion of the intermediate value theorem.
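Both halves of the example can be observed numerically: the difference quotient at 0 shrinks like h, while f′ keeps oscillating between values near −1 and +1 arbitrarily close to 0 (a sketch; the sample points are chosen so that cos(1/x) is ±1):

```python
import math

def f(x):
    return x * x * math.sin(1 / x) if x != 0 else 0.0

# Difference quotients at 0 shrink: f is differentiable at 0 with f'(0) = 0,
# since |(f(h) - f(0)) / h| = |h sin(1/h)| <= h.
for h in (1e-2, 1e-4, 1e-6):
    assert abs((f(h) - f(0)) / h) <= h

# But f'(x) = 2x sin(1/x) - cos(1/x) keeps oscillating near 0:
# pick points where cos(1/x) equals +1 or -1.
fprime = lambda x: 2 * x * math.sin(1 / x) - math.cos(1 / x)
x_plus = 1 / (2 * math.pi * 1000)          # cos(1/x) = +1 -> f'(x) near -1
x_minus = 1 / (math.pi * (2 * 1000 + 1))   # cos(1/x) = -1 -> f'(x) near +1
assert fprime(x_plus) < -0.9 and fprime(x_minus) > 0.9
```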
Similarly to how continuous functions are said to be of class {\displaystyle C^{0},} continuously differentiable functions are sometimes said to be of class {\displaystyle C^{1}}. A function is of class {\displaystyle C^{2}} if the first and second derivative of the function both exist and are continuous. More generally, a function is said to be of class {\displaystyle C^{k}} if the first {\displaystyle k} derivatives {\textstyle f^{\prime }(x),f^{\prime \prime }(x),\ldots ,f^{(k)}(x)} all exist and are continuous. If derivatives {\displaystyle f^{(n)}} exist for all positive integers {\textstyle n,} the function is smooth or, equivalently, of class {\displaystyle C^{\infty }.}
== Differentiability in higher dimensions ==
A function of several real variables f: Rm → Rn is said to be differentiable at a point x0 if there exists a linear map J: Rm → Rn such that
{\displaystyle \lim _{\mathbf {h} \to \mathbf {0} }{\frac {\|\mathbf {f} (\mathbf {x_{0}} +\mathbf {h} )-\mathbf {f} (\mathbf {x_{0}} )-\mathbf {J} \mathbf {(h)} \|_{\mathbf {R} ^{n}}}{\|\mathbf {h} \|_{\mathbf {R} ^{m}}}}=0.}
If a function is differentiable at x0, then all of the partial derivatives exist at x0, and the linear map J is given by the Jacobian matrix, an n × m matrix in this case. A similar formulation of the higher-dimensional derivative is provided by the fundamental increment lemma found in single-variable calculus.
If all the partial derivatives of a function exist in a neighborhood of a point x0 and are continuous at the point x0, then the function is differentiable at that point x0.
However, the existence of the partial derivatives (or even of all the directional derivatives) does not guarantee that a function is differentiable at a point. For example, the function f: R2 → R defined by
{\displaystyle f(x,y)={\begin{cases}x&{\text{if }}y\neq x^{2}\\0&{\text{if }}y=x^{2}\end{cases}}}
is not differentiable at (0, 0), but all of the partial derivatives and directional derivatives exist at this point. For a continuous example, the function
f
(
x
,
y
)
=
{
y
3
/
(
x
2
+
y
2
)
if
(
x
,
y
)
≠
(
0
,
0
)
0
if
(
x
,
y
)
=
(
0
,
0
)
{\displaystyle f(x,y)={\begin{cases}y^{3}/(x^{2}+y^{2})&{\text{if }}(x,y)\neq (0,0)\\0&{\text{if }}(x,y)=(0,0)\end{cases}}}
is not differentiable at (0, 0), but again all of the partial derivatives and directional derivatives exist.
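For the continuous example, both claims can be checked numerically. Along a ray (tu, tv) the function equals t·v³/(u²+v²), so every directional derivative at the origin exists; but the only candidate for the derivative (from the partials, the linear map (x, y) ↦ y) leaves an error that does not vanish relative to ‖h‖ along the diagonal. A sketch:

```python
import math

def f(x, y):
    return y**3 / (x*x + y*y) if (x, y) != (0, 0) else 0.0

# Directional derivative along direction (u, v):
# f(t u, t v) / t = v^3 / (u^2 + v^2), independent of t.
u, v = math.cos(0.7), math.sin(0.7)
t = 1e-8
assert math.isclose(f(t*u, t*v) / t, v**3 / (u*u + v*v), rel_tol=1e-6)

# If f were differentiable at (0,0), the derivative would be (x, y) -> y
# (from the partials f_x(0,0) = 0, f_y(0,0) = 1); yet along h = (h, h)
# the relative error |f(h,h) - h| / ||(h,h)|| stays at 1/(2*sqrt(2)).
h = 1e-8
err = abs(f(h, h) - h) / math.hypot(h, h)
assert err > 0.3    # bounded away from 0 as h -> 0
```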
== Differentiability in complex analysis ==
In complex analysis, complex-differentiability is defined using the same definition as single-variable real functions. This is allowed by the possibility of dividing complex numbers. So, a function {\textstyle f:\mathbb {C} \to \mathbb {C} } is said to be differentiable at {\textstyle x=a} when
{\displaystyle f'(a)=\lim _{\underset {h\in \mathbb {C} }{h\to 0}}{\frac {f(a+h)-f(a)}{h}}.}
Although this definition looks similar to the differentiability of single-variable real functions, it is however a more restrictive condition. A function {\textstyle f:\mathbb {C} \to \mathbb {C} } that is complex-differentiable at a point {\textstyle x=a} is automatically differentiable at that point, when viewed as a function {\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}.
This is because the complex-differentiability implies that
{\displaystyle \lim _{\underset {h\in \mathbb {C} }{h\to 0}}{\frac {|f(a+h)-f(a)-f'(a)h|}{|h|}}=0.}
However, a function {\textstyle f:\mathbb {C} \to \mathbb {C} } can be differentiable as a multi-variable function, while not being complex-differentiable. For example,
{\displaystyle f(z)={\frac {z+{\overline {z}}}{2}}}
is differentiable at every point, viewed as the 2-variable real function {\displaystyle f(x,y)=x}, but it is not complex-differentiable at any point because the limit {\textstyle \lim _{h\to 0}{\frac {h+{\bar {h}}}{2h}}}
gives different values for different approaches to 0.
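The direction dependence of that limit is easy to observe numerically: the difference quotient of f(z) = Re(z) approaches 1 along the real axis but 0 along the imaginary axis (a sketch; the base point a is arbitrary):

```python
# f(z) = (z + conj(z)) / 2 = Re(z): real-differentiable everywhere but
# nowhere complex-differentiable, because the difference quotient
# depends on the direction from which h approaches 0.
def f(z):
    return (z + z.conjugate()) / 2

a = 1.0 + 2.0j
h = 1e-8

q_real = (f(a + h) - f(a)) / h               # approach along the real axis
q_imag = (f(a + 1j * h) - f(a)) / (1j * h)   # approach along the imaginary axis

assert abs(q_real - 1) < 1e-6    # quotient tends to 1
assert abs(q_imag - 0) < 1e-6    # quotient tends to 0
```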
Any function that is complex-differentiable in a neighborhood of a point is called holomorphic at that point. Such a function is necessarily infinitely differentiable, and in fact analytic.
== Differentiable functions on manifolds ==
If M is a differentiable manifold, a real or complex-valued function f on M is said to be differentiable at a point p if it is differentiable with respect to some (or any) coordinate chart defined around p. If M and N are differentiable manifolds, a function f: M → N is said to be differentiable at a point p if it is differentiable with respect to some (or any) coordinate charts defined around p and f(p).
== See also ==
Generalizations of the derivative
Semi-differentiability
Differentiable programming
== References == | Wikipedia/Continuously_differentiable_function |
In mathematics, a concave function is one for which the function value at any convex combination of elements in the domain is greater than or equal to that convex combination of those domain elements. Equivalently, a concave function is any function for which the hypograph is convex. The class of concave functions is in a sense the opposite of the class of convex functions. A concave function is also synonymously called concave downwards, concave down, convex upwards, convex cap, or upper convex.
== Definition ==
A real-valued function {\displaystyle f} on an interval (or, more generally, a convex set in vector space) is said to be concave if, for any {\displaystyle x} and {\displaystyle y} in the interval and for any {\displaystyle \alpha \in [0,1]},
{\displaystyle f((1-\alpha )x+\alpha y)\geq (1-\alpha )f(x)+\alpha f(y)}
A function is called strictly concave if
{\displaystyle f((1-\alpha )x+\alpha y)>(1-\alpha )f(x)+\alpha f(y)}
for any {\displaystyle \alpha \in (0,1)} and {\displaystyle x\neq y}.
For a function {\displaystyle f:\mathbb {R} \to \mathbb {R} }, this second definition merely states that for every {\displaystyle z} strictly between {\displaystyle x} and {\displaystyle y}, the point {\displaystyle (z,f(z))} on the graph of {\displaystyle f} is above the straight line joining the points {\displaystyle (x,f(x))} and {\displaystyle (y,f(y))}.
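The defining inequality can be tested on random convex combinations (a sketch; the sampled interval, trial count, and tolerance are arbitrary choices, and random sampling can only refute concavity, not prove it):

```python
import math, random

random.seed(0)

def is_concave_on_samples(f, lo, hi, trials=1000):
    """Check f((1-a)x + a y) >= (1-a)f(x) + a f(y) on random samples."""
    for _ in range(trials):
        x, y = random.uniform(lo, hi), random.uniform(lo, hi)
        a = random.random()
        z = (1 - a) * x + a * y
        if f(z) < (1 - a) * f(x) + a * f(y) - 1e-12:  # tolerance for roundoff
            return False
    return True

assert is_concave_on_samples(math.log, 0.1, 10.0)            # concave there
assert not is_concave_on_samples(lambda x: x**3, 0.1, 10.0)  # convex there
```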
A function {\displaystyle f} is quasiconcave if the upper contour sets of the function {\displaystyle S(a)=\{x:f(x)\geq a\}} are convex sets.
== Properties ==
=== Functions of a single variable ===
A differentiable function f is (strictly) concave on an interval if and only if its derivative function f ′ is (strictly) monotonically decreasing on that interval, that is, a concave function has a non-increasing (decreasing) slope.
Points where concavity changes (between concave and convex) are inflection points.
If f is twice-differentiable, then f is concave if and only if f ′′ is non-positive (or, informally, if the "acceleration" is non-positive). If f ′′ is negative then f is strictly concave, but the converse is not true, as shown by f(x) = −x4.
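The counterexample f(x) = −x⁴ can be made concrete: it satisfies the strict midpoint inequality at sample pairs even though its second derivative vanishes at 0, showing that f′′ < 0 is sufficient but not necessary for strict concavity (a sketch with arbitrary sample pairs):

```python
# f(x) = -x**4 is strictly concave although f''(0) = 0.
f = lambda x: -x**4

# Strict concavity at sample midpoints: f((x+y)/2) > (f(x) + f(y)) / 2.
pairs = [(-1.0, 0.5), (-0.3, 0.2), (0.1, 2.0)]
for x, y in pairs:
    assert f((x + y) / 2) > (f(x) + f(y)) / 2

# Yet the second derivative f''(x) = -12 x**2 is zero at x = 0.
fpp = lambda x: -12 * x**2
assert fpp(0.0) == 0.0
```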
If f is concave and differentiable, then it is bounded above by its first-order Taylor approximation:
{\displaystyle f(y)\leq f(x)+f'(x)[y-x]}
A Lebesgue measurable function on an interval C is concave if and only if it is midpoint concave, that is, for any x and y in C
{\displaystyle f\left({\frac {x+y}{2}}\right)\geq {\frac {f(x)+f(y)}{2}}}
If a function f is concave, and f(0) ≥ 0, then f is subadditive on
{\displaystyle [0,\infty )}
. Proof:
Since f is concave and 1 ≥ t ≥ 0, letting y = 0 we have
{\displaystyle f(tx)=f(tx+(1-t)\cdot 0)\geq tf(x)+(1-t)f(0)\geq tf(x).}
For
{\displaystyle a,b\in [0,\infty )}
:
{\displaystyle f(a)+f(b)=f\left((a+b){\frac {a}{a+b}}\right)+f\left((a+b){\frac {b}{a+b}}\right)\geq {\frac {a}{a+b}}f(a+b)+{\frac {b}{a+b}}f(a+b)=f(a+b)}
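The subadditivity conclusion can be illustrated numerically (Python with NumPy; f = √, which is concave with f(0) = 0, and the sample range are illustrative choices, not from the article):

```python
import numpy as np

# Subadditivity of a concave f with f(0) >= 0, illustrated with f = sqrt:
# sqrt(a) + sqrt(b) >= sqrt(a + b) for all a, b >= 0.
rng = np.random.default_rng(2)
a = rng.uniform(0.0, 100.0, 1000)
b = rng.uniform(0.0, 100.0, 1000)
ok = np.all(np.sqrt(a) + np.sqrt(b) >= np.sqrt(a + b) - 1e-12)
print(ok)  # True
```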
=== Functions of n variables ===
A function f is concave over a convex set if and only if the function −f is a convex function over the set.
The sum of two concave functions is itself concave, and so is the pointwise minimum of two concave functions; i.e., the set of concave functions on a given domain forms a semifield.
Near a strict local maximum in the interior of the domain of a function, the function must be concave; as a partial converse, if the derivative of a strictly concave function is zero at some point, then that point is a local maximum.
Any local maximum of a concave function is also a global maximum. A strictly concave function will have at most one global maximum.
== Examples ==
The functions
{\displaystyle f(x)=-x^{2}}
and
{\displaystyle g(x)={\sqrt {x}}}
are concave on their domains, as their second derivatives
{\displaystyle f''(x)=-2}
and
{\textstyle g''(x)=-{\frac {1}{4x^{3/2}}}}
are always negative.
The logarithm function
{\displaystyle f(x)=\log {x}}
is concave on its domain
{\displaystyle (0,\infty )}
, as its derivative
{\displaystyle {\frac {1}{x}}}
is a strictly decreasing function.
Any affine function
{\displaystyle f(x)=ax+b}
is both concave and convex, but neither strictly concave nor strictly convex.
The sine function is concave on the interval
{\displaystyle [0,\pi ]}
.
The function
{\displaystyle f(B)=\log |B|}
, where
{\displaystyle |B|}
is the determinant of a nonnegative-definite matrix B, is concave.
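A numerical spot-check of this matrix example via the midpoint inequality (Python with NumPy; the helper `random_spd`, the matrix size, and the tolerance are illustrative choices, not from the article; positive-definite matrices are used so that the log-determinant is finite):

```python
import numpy as np

# Midpoint concavity of B -> log det(B) on positive-definite matrices:
# log det((A+B)/2) >= (log det A + log det B)/2.
rng = np.random.default_rng(3)

def random_spd(n):
    """Hypothetical helper: a random symmetric positive-definite matrix."""
    m = rng.standard_normal((n, n))
    return m @ m.T + n * np.eye(n)

ok = True
for _ in range(100):
    a, b = random_spd(4), random_spd(4)
    mid = np.linalg.slogdet((a + b) / 2)[1]          # log|det| of midpoint
    avg = (np.linalg.slogdet(a)[1] + np.linalg.slogdet(b)[1]) / 2
    ok &= bool(mid >= avg - 1e-9)
print(ok)  # True
```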
== Applications ==
Rays bending in the computation of radiowave attenuation in the atmosphere involve concave functions.
In expected utility theory for choice under uncertainty, cardinal utility functions of risk averse decision makers are concave.
In microeconomic theory, production functions are usually assumed to be concave over some or all of their domains, resulting in diminishing returns to input factors.
In thermodynamics and information theory, entropy is a concave function. In the case of thermodynamic entropy, without phase transition, entropy as a function of extensive variables is strictly concave. If the system can undergo phase transition, and if it is allowed to split into two subsystems of different phase (phase separation, e.g. boiling), the entropy-maximal parameters of the subsystems will result in a combined entropy precisely on the straight line between the two phases. This means that the "effective entropy" of a system with phase transition is the convex envelope of entropy without phase separation; therefore, the entropy of a system including phase separation will be non-strictly concave.
== See also ==
Concave polygon
Jensen's inequality
Logarithmically concave function
Quasiconcave function
Concavification
== References ==
== Further reading ==
Crouzeix, J.-P. (2008). "Quasi-concavity". In Durlauf, Steven N.; Blume, Lawrence E (eds.). The New Palgrave Dictionary of Economics (Second ed.). Palgrave Macmillan. pp. 815–816. doi:10.1057/9780230226203.1375. ISBN 978-0-333-78676-5.
Rao, Singiresu S. (2009). Engineering Optimization: Theory and Practice. John Wiley and Sons. p. 779. ISBN 978-0-470-18352-6.
Graphic design is a profession, academic discipline and applied art that involves creating visual communications intended to transmit specific messages to social groups, with specific objectives. Graphic design is an interdisciplinary branch of design and of the fine arts. Its practice involves creativity, innovation and lateral thinking using manual or digital tools, where it is usual to use text and graphics to communicate visually.
The role of the graphic designer in the communication process is that of the encoder or interpreter of the message. They work on the interpretation, ordering, and presentation of visual messages. In its nature, design pieces can be philosophical, aesthetic, emotional and political. Usually, graphic design uses the aesthetics of typography and the compositional arrangement of the text, ornamentation, and imagery to convey ideas, feelings, and attitudes beyond what language alone expresses. The design work can be based on a customer's demand, a demand that ends up being established linguistically, either orally or in writing, that is, that graphic design transforms a linguistic message into a graphic manifestation.
Graphic design has, as a field of application, different areas of knowledge focused on any visual communication system. For example, it can be applied in advertising strategies, or in the aviation world or space exploration. In this sense, in some countries graphic design is mistakenly associated only with the production of sketches and drawings; this is incorrect, since that kind of work is only a small part of the huge range of areas in which visual communication can be applied.
With origins in Antiquity and the Middle Ages, graphic design as applied art was initially linked to the rise of printing in Europe in the 15th century and the growth of consumer culture in the Industrial Revolution. From there it emerged as a distinct profession in the West, closely associated with advertising in the 19th century, and its evolution allowed its consolidation in the 20th century. Given the rapid and massive growth in information exchange today, the demand for experienced designers is greater than ever, particularly because of the development of new technologies and the need to pay attention to human factors beyond the competence of the engineers who develop them.
== Terminology ==
The term "graphic design" makes an early appearance in a 4 July 1908 issue (volume 9, number 27) of Organized Labor, a publication of the Labor Unions of San Francisco, in an article about technical education for printers:
An Enterprising Trades Union
… The admittedly high standard of intelligence which prevails among printers is an assurance that with the elemental principles of design at their finger ends many of them will grow in knowledge and develop into specialists in graphic design and decorating. …
A decade later, the 1917–1918 course catalog of the California School of Arts & Crafts advertised a course titled Graphic Design and Lettering, which replaced one called Advanced Design and Lettering. Both classes were taught by Frederick Meyer.
== History ==
In both its lengthy history and in the relatively recent explosion of visual communication in the 20th and 21st centuries, the distinction between advertising, art, graphic design and fine art has disappeared. They share many elements, theories, principles, practices, languages and sometimes the same benefactor or client. In advertising, the ultimate objective is the sale of goods and services. In graphic design, "the essence is to give order to information, form to ideas, expression, and feeling to artifacts that document the human experience."
The definition of the graphic designer profession is relatively recent concerning its preparation, activity, and objectives. Although there is no consensus on an exact date when graphic design emerged, some date it back to the Interwar period. Others understand that it began to be identified as such by the late 19th century.
It can be argued that graphic communications with specific purposes have their origins in Paleolithic cave paintings and the birth of written language in the third millennium BCE. However, the differences in working methods, auxiliary sciences, and required training are such that it is not possible to clearly identify the current graphic designer with prehistoric man, the 15th-century xylographer, or the lithographer of 1890.
The diversity of opinions stems from some considering any graphic manifestation as a product of graphic design, while others only recognize those that arise as a result of the application of an industrial production model—visual manifestations that have been "projected" to address various needs: productive, symbolic, ergonomic, contextual, among others.
By the late 19th century, graphic design emerged as a distinct profession in the West, partly due to the process of labor specialization that occurred there and partly due to the new technologies and business possibilities brought about by the Industrial Revolution. New production methods led to the separation of the design of a communication medium (such as a poster) from its actual production. Increasingly, throughout the 19th and early 20th centuries, advertising agencies, book publishers, and magazines hired art directors who organized all visual elements of communication and integrated them into a harmonious whole, creating an expression appropriate to the content.
Throughout the 20th century, the technology available to designers continued to advance rapidly, as did the artistic and commercial possibilities of design. The profession expanded greatly, and graphic designers created, among other things, magazine pages, book covers, posters, CD covers, postage stamps, packaging, brands, signs, advertisements, kinetic titles for TV programs and movies, and websites. By the early 21st century, graphic design had become a global profession as advanced technology and industry spread worldwide.
=== Historical background ===
In China, during the Tang dynasty (618–907) wood blocks were cut to print on textiles and later to reproduce Buddhist texts. A Buddhist scripture printed in 868 is the earliest known printed book. Beginning in the 11th century in China, longer scrolls and books were produced using movable type printing, making books widely available during the Song dynasty (960–1279).
In Mesopotamia, writing (as an extension of graphic design) began with commerce. The earliest writing system, cuneiform, started out with basic pictograms, which were representations of houses, lambs, or grain.
In the mid-15th century in Mainz, Germany, Johannes Gutenberg developed a way to reproduce printed pages at a faster pace using movable type made with a new metal alloy that created a revolution in the dissemination of information.
=== Nineteenth century ===
In 1849, Henry Cole became one of the major forces in design education in Great Britain, informing the government of the importance of design in his Journal of Design and Manufactures. He organized the Great Exhibition as a celebration of modern industrial technology and Victorian design.
From 1891 to 1896, William Morris' Kelmscott Press was a leader in graphic design associated with the Arts and Crafts movement, creating hand-made books in medieval and Renaissance era style, in addition to wallpaper and textile designs. Morris' work, along with the rest of the Private Press movement, directly influenced Art Nouveau.
Will H. Bradley became one of the notable graphic designers of the late nineteenth century, creating art pieces in various Art Nouveau styles. Bradley created a number of designs as promotions for a literary magazine titled The Chap-Book.
=== Twentieth century ===
In 1917, Frederick H. Meyer, director and instructor at the California School of Arts and Crafts, taught a class entitled "Graphic Design and Lettering". Raffe's Graphic Design, published in 1927, was the first book to use "Graphic Design" in its title. In 1936, author and graphic designer Leon Friend published his book titled "Graphic Design" and it is known to be the first piece of literature to cover the topic extensively.
The signage in the London Underground is a classic design example of the modern era. Although he lacked artistic training, Frank Pick led the Underground Group design and publicity movement. The first Underground station signs were introduced in 1908 with a design of a solid red disk with a blue bar in the center and the name of the station. The station name was in white sans-serif letters. It was in 1916 when Pick used the expertise of Edward Johnston to design a new typeface for the Underground. Johnston redesigned the Underground sign and logo to include his typeface on the blue bar in the center of a red circle.
In the 1920s, Soviet constructivism applied 'intellectual production' in different spheres of production. The movement saw individualistic art as useless in revolutionary Russia and thus moved towards creating objects for utilitarian purposes. They designed buildings, film and theater sets, posters, fabrics, clothing, furniture, logos, menus, etc.
Jan Tschichold codified the principles of modern typography in his 1928 book, New Typography. He later repudiated the philosophy he espoused in this book as fascistic, but it remained influential. Tschichold, Bauhaus typographers such as Herbert Bayer and László Moholy-Nagy and El Lissitzky greatly influenced graphic design. They pioneered production techniques and stylistic devices used throughout the twentieth century. The following years saw graphic design in the modern style gain widespread acceptance and application.
The professional graphic design industry grew in parallel with consumerism. This raised concerns and criticisms, notably from within the graphic design community with the First Things First manifesto. First launched by Ken Garland in 1964, it was re-published as the First Things First 2000 manifesto in 1999 in the magazine Emigre 51 stating "We propose a reversal of priorities in favor of more useful, lasting and democratic forms of communication – a mindshift away from product marketing and toward the exploration and production of a new kind of meaning. The scope of debate is shrinking; it must expand. Consumerism is running uncontested; it must be challenged by other perspectives expressed, in part, through the visual languages and resources of design."
== Applications ==
Graphic design can have many applications, from road signs to technical schematics and reference manuals. It is often used in branding products and elements of company identity such as logos, colors, packaging, labelling and text.
From scientific journals to news reporting, the presentation of opinions and facts is often improved with graphics and thoughtful compositions of visual information – known as information design. With the advent of the web, information designers with experience in interactive tools are increasingly used to illustrate the background to news stories. Information design can include Data and information visualization, which involves using programs to interpret and form data into a visually compelling presentation, and can be tied in with information graphics.
== Skills ==
A graphic design project may involve the creative presentation of existing text, ornament, and images.
The "process school" is concerned with communication; it highlights the channels and media through which messages are transmitted and by which senders and receivers encode and decode these messages. The semiotic school treats a message as a construction of signs which through interaction with receivers, produces meaning; communication as an agent.
=== Typography ===
Typography includes type design, modifying type glyphs and arranging type. Type glyphs (characters) are created and modified using illustration techniques. Type arrangement is the selection of typefaces, point size, tracking (the space between all characters used), kerning (the space between two specific characters) and leading (line spacing).
Typography is performed by typesetters, compositors, typographers, graphic artists, art directors, and clerical workers. Until the digital age, typography was a specialized occupation. Certain fonts communicate or resemble stereotypical notions. For example, the 1942 Report is a font which types text akin to a typewriter or a vintage report.
=== Page layout ===
Page layout deals with the arrangement of elements (content) on a page, such as image placement, text layout and style. Page design has always been a consideration in printed material and more recently extended to displays such as web pages. Elements typically consist of type (text), images (pictures), and (with print media) occasionally place-holder graphics such as a dieline for elements that are not printed with ink such as die/laser cutting, foil stamping or blind embossing.
=== Grids ===
A grid serves as a method of arranging both space and information, allowing the reader to easily comprehend the overall project. Furthermore, a grid functions as a container for information and a means of establishing and maintaining order. Despite grids being utilized for centuries, many graphic designers associate them with Swiss design. The desire for order in the 1940s resulted in a highly systematic approach to visualizing information. However, grids were later regarded as tedious and uninteresting, earning the label of "designersaur." Today, grids are once again considered crucial tools for professionals, whether they are novices or veterans.
== Tools ==
In the mid-1980s desktop publishing and graphic art software applications introduced computer image manipulation and creation capabilities that had previously been manually executed. Computers enabled designers to instantly see the effects of layout or typographic changes, and to simulate the effects of traditional media. Traditional tools such as pencils can be useful even when computers are used for finalization; a designer or art director may sketch numerous concepts as part of the creative process. Styluses can be used with tablet computers to capture hand drawings digitally.
=== Computers and software ===
Designers disagree whether computers enhance the creative process. Some designers argue that computers allow them to explore multiple ideas quickly and in more detail than can be achieved by hand-rendering or paste-up, while other designers find that the limitless choices of digital design can lead to paralysis or to endless iterations with no clear outcome.
Most designers use a hybrid process that combines traditional and computer-based technologies. First, hand-rendered layouts are used to get approval to execute an idea, then the polished visual product is produced on a computer.
Graphic designers are expected to be proficient in software programs for image-making, typography and layout. Nearly all of the popular and "industry standard" software programs used by graphic designers since the early 1990s are products of Adobe Inc. Adobe Photoshop (a raster-based program for photo editing) and Adobe Illustrator (a vector-based program for drawing) are often used in the final stage. CorelDraw, a vector graphics editing software developed and marketed by Corel Corporation, is also used worldwide. Designers often use pre-designed raster images and vector graphics in their work from online design databases. Raster images may be edited in Adobe Photoshop, vector logos and illustrations in Adobe Illustrator and CorelDraw, and the final product assembled in one of the major page layout programs, such as Adobe InDesign, Serif PagePlus and QuarkXPress.
Many free and open-source programs are also used by both professionals and casual graphic designers. Inkscape uses Scalable Vector Graphics (SVG) as its primary file format and allows importing and exporting other formats. Other open-source programs used include GIMP for photo-editing and image manipulation, Krita for digital painting, and Scribus for page layout.
== Related design fields ==
=== Print design ===
A specialized branch of graphic design and historically its earliest form, print design involves creating visual content intended for reproduction on physical substrates such as silk, paper, and later, plastic, for mass communication and persuasion (e.g., marketing, governmental publishing, propaganda). Print design techniques have evolved over centuries, beginning with the invention of movable type by the Chinese alchemist Pi Sheng, later refined by the German inventor Johannes Gutenberg. Over time, methods such as lithography, screen printing, and offset printing have been developed, culminating in the contemporary use of digital presses that integrate traditional print techniques with modern digital technology.
=== Interface design ===
Since the advent of personal computers, many graphic designers have become involved in interface design, in an environment commonly referred to as a Graphical user interface (GUI). This has included web design and software design when end user-interactivity is a design consideration of the layout or interface. Combining visual communication skills with an understanding of user interaction and online branding, graphic designers often work with software developers and web developers to create the look and feel of a web site or software application. An important aspect of interface design is icon design.
=== User experience design ===
User experience design (UX) is the study, analysis, and development of products that provide meaningful and relevant experiences to users. This involves the creation of the entire process of acquiring and integrating the product, including aspects of branding, design, usability, and function. UX design involves creating the interface and interactions for a website or application, and is considered both an act and an art. This profession requires a combination of skills, including visual design, social psychology, development, project management, and most importantly, empathy towards the end-users.
=== Experiential (environmental) graphic design ===
Experiential graphic design is the application of communication skills to the built environment. It’s also known as environmental graphic design (EGD) or environmental graphics. This area of graphic design requires practitioners to understand physical installations that have to be manufactured and withstand the same environmental conditions as buildings. As such, it is a cross-disciplinary collaborative process involving designers, fabricators, city planners, architects, manufacturers and construction teams.
Experiential graphic designers try to solve problems that people encounter while interacting with buildings and space (also called environmental graphic design). Examples of practice areas for environmental graphic designers are wayfinding, placemaking, branded environments, exhibitions and museum displays, public installations and digital environments.
== Occupations ==
Graphic design career paths cover all parts of the creative spectrum and often overlap. Workers perform specialized tasks, such as design services, publishing, advertising and public relations. As of 2023, median pay was $58,910 per year. The main job titles within the industry are often country specific. They can include graphic designer, art director, creative director, animator and entry level production artist. Depending on the industry served, the responsibilities may have different titles such as "DTP associate" or "Graphic Artist". The responsibilities may involve specialized skills such as illustration, photography, animation, visual effects or interactive design.
Employment in design of online projects was expected to increase by 35% by 2026, while employment in traditional media, such as newspaper and book design, was expected to decline by 22%. Graphic designers will be expected to constantly learn new techniques, programs, and methods.
Graphic designers can work within companies devoted specifically to the industry, such as design consultancies or branding agencies, others may work within publishing, marketing or other communications companies. Especially since the introduction of personal computers, many graphic designers work as in-house designers in non-design oriented organizations. Graphic designers may also work freelance, working on their own terms, prices, ideas, etc.
A graphic designer typically reports to the art director, creative director or senior media creative. As a designer becomes more senior, they spend less time designing and more time leading and directing other designers on broader creative activities, such as brand development and corporate identity development. They are often expected to interact more directly with clients, for example taking and interpreting briefs.
=== Crowdsourcing in graphic design ===
Jeff Howe of Wired Magazine first used the term "crowdsourcing" in his 2006 article, "The Rise of Crowdsourcing." It spans such creative domains as graphic design, architecture, apparel design, writing, illustration, and others. Tasks may be assigned to individuals or a group and may be categorized as convergent or divergent: an example of a divergent task is generating alternative designs for a poster, while an example of a convergent task is selecting one poster design. Companies, startups, small businesses and entrepreneurs have all benefited from design crowdsourcing, since it helps them source graphic designs at a fraction of their previous budgets; getting a logo design through crowdsourcing is one of the most common uses. Major companies that operate in the design crowdsourcing space are generally referred to as design contest sites.
== Role of graphic design ==
Graphic design is essential for advertising, branding, and marketing, influencing how people act. Good graphic design builds strong, recognizable brands, communicates messages clearly, and shapes how consumers see and react to things.
One way that graphic design influences consumer behavior is through the use of visual elements, such as color, typography, and imagery. Studies have shown that certain colors can evoke specific emotions and behaviors in consumers, and that typography can influence how information is perceived and remembered. For example, serif fonts are often associated with tradition and elegance, while sans-serif fonts are seen as modern and minimalistic. These factors can all impact the way consumers perceive a brand and its messaging.
Another way that graphic design impacts consumer behavior is through its ability to communicate complex information in a clear and accessible way. For example, infographics and data visualizations can help to distill complex information into a format that is easy to understand and engaging for consumers. This can help to build trust and credibility with consumers, and encourage them to take action.
== Ethical consideration in graphic design ==
Ethics are an important consideration in graphic design, particularly when it comes to accurately representing information and avoiding harmful stereotypes. Graphic designers have a responsibility to ensure that their work is truthful, accurate, and free from any misleading or deceptive elements. This requires a commitment to honesty, integrity, and transparency in all aspects of the design process.
One of the key ethical considerations in graphic design is the responsibility to accurately represent information. This means ensuring that any claims or statements made in advertising or marketing materials are true and supported by evidence. For example, a company should not use misleading statistics to promote their product or service, or make false claims about its benefits. Graphic designers must take care to accurately represent information in all visual elements, such as graphs, charts, and images, and avoid distorting or misrepresenting data.
Another important ethical consideration in graphic design is the need to avoid harmful stereotypes. This means avoiding any images or messaging that perpetuate negative or harmful stereotypes based on race, gender, religion, or other characteristics. Graphic designers should strive to create designs that are inclusive and respectful of all individuals and communities, and avoid reinforcing negative attitudes or biases.
== Future of graphic design ==
=== AI, Automation and graphic design ===
The future of graphic design is likely to be heavily influenced by emerging technologies and social trends. Advancements in areas such as artificial intelligence, virtual and augmented reality, and automation are likely to transform the way that graphic designers work and create designs. Social trends, such as a greater focus on sustainability and inclusivity, are also likely to impact the future of graphic design.
One area where emerging technologies are likely to have a significant impact on graphic design is in the automation of certain tasks. Easily accessible computer software using AI algorithms will complete many practical tasks performed by graphic designers, allowing clients to bypass human designers altogether. Machine learning algorithms, for example, can analyze large datasets and create designs based on patterns and trends, freeing up designers to focus on more complex and creative tasks. Virtual and augmented reality technologies may also allow designers to create immersive and interactive experiences for users, blurring the lines between the digital and physical worlds. Artificial intelligence has also led to many challenges within the world of graphic design. Some of those challenges include maintaining brand authenticity, ensuring quality, issues of bias, and preserving creative control.
Visual communication design education is ill-prepared for automation, artificial intelligence and machine learning.
Social trends are also likely to shape the future of graphic design. As consumers become more conscious of environmental issues, for example, there may be a greater demand for designs that prioritize sustainability and minimize waste. Similarly, there is likely to be a growing focus on inclusivity and diversity in design, with designers seeking to create designs that are accessible and representative of a wide range of individuals and communities.
== See also ==
=== Related areas ===
=== Related topics ===
== References ==
== Bibliography ==
Fiell, Charlotte and Fiell, Peter (editors). Contemporary Graphic Design. Taschen Publishers, 2008. ISBN 978-3-8228-5269-9
Wiedemann, Julius and Taborda, Felipe (editors). Latin-American Graphic Design. Taschen Publishers, 2008. ISBN 978-3-8228-4035-1
== External links ==
Media related to Graphic design at Wikimedia Commons
The Universal Arts of Graphic Design Archived 13 November 2015 at the Wayback Machine – Documentary produced by Off Book
Graphic Designers, entry in the Occupational Outlook Handbook of the Bureau of Labor Statistics of the United States Department of Labor
In mathematics, the differential geometry of surfaces deals with the differential geometry of smooth surfaces with various additional structures, most often, a Riemannian metric.
Surfaces have been extensively studied from various perspectives: extrinsically, relating to their embedding in Euclidean space and intrinsically, reflecting their properties determined solely by the distance within the surface as measured along curves on the surface. One of the fundamental concepts investigated is the Gaussian curvature, first studied in depth by Carl Friedrich Gauss, who showed that curvature was an intrinsic property of a surface, independent of its isometric embedding in Euclidean space.
Surfaces naturally arise as graphs of functions of a pair of variables, and sometimes appear in parametric form or as loci associated to space curves. An important role in their study has been played by Lie groups (in the spirit of the Erlangen program), namely the symmetry groups of the Euclidean plane, the sphere and the hyperbolic plane. These Lie groups can be used to describe surfaces of constant Gaussian curvature; they also provide an essential ingredient in the modern approach to intrinsic differential geometry through connections. On the other hand, extrinsic properties relying on an embedding of a surface in Euclidean space have also been extensively studied. This is well illustrated by the non-linear Euler–Lagrange equations in the calculus of variations: although Euler developed the one variable equations to understand geodesics, defined independently of an embedding, one of Lagrange's main applications of the two variable equations was to minimal surfaces, a concept that can only be defined in terms of an embedding.
== History ==
The volumes of certain quadric surfaces of revolution were calculated by Archimedes. The development of calculus in the seventeenth century provided a more systematic way of computing them. Curvature of general surfaces was first studied by Euler. In 1760 he proved a formula for the curvature of a plane section of a surface and in 1771 he considered surfaces represented in a parametric form. Monge laid down the foundations of their theory in his classical memoir L'application de l'analyse à la géometrie which appeared in 1795. The defining contribution to the theory of surfaces was made by Gauss in two remarkable papers written in 1825 and 1827. This marked a new departure from tradition because for the first time Gauss considered the intrinsic geometry of a surface, the properties which are determined only by the geodesic distances between points on the surface independently of the particular way in which the surface is located in the ambient Euclidean space. The crowning result, the Theorema Egregium of Gauss, established that the Gaussian curvature is an intrinsic invariant, i.e. invariant under local isometries. This point of view was extended to higher-dimensional spaces by Riemann and led to what is known today as Riemannian geometry. The nineteenth century was the golden age for the theory of surfaces, from both the topological and the differential-geometric point of view, with most leading geometers devoting themselves to their study. Darboux collected many results in his four-volume treatise Théorie des surfaces (1887–1896).
== Overview ==
It is intuitively quite familiar to say that the leaf of a plant, the surface of a glass, or the shape of a face, are curved in certain ways, and that all of these shapes, even after ignoring any distinguishing markings, have certain geometric features which distinguish one from another. The differential geometry of surfaces is concerned with a mathematical understanding of such phenomena. The study of this field, which was initiated in its modern form in the 1700s, has led to the development of higher-dimensional and abstract geometry, such as Riemannian geometry and general relativity.
The essential mathematical object is that of a regular surface. Although conventions vary in their precise definition, these form a general class of subsets of three-dimensional Euclidean space (ℝ3) which capture part of the familiar notion of "surface." By analyzing the class of curves which lie on such a surface, and the degree to which the surfaces force them to curve in ℝ3, one can associate to each point of the surface two numbers, called the principal curvatures. Their average is called the mean curvature of the surface, and their product is called the Gaussian curvature.
There are many classic examples of regular surfaces, including:
familiar examples such as planes, cylinders, and spheres
minimal surfaces, which are defined by the property that their mean curvature is zero at every point. The best-known examples are catenoids and helicoids, although many more have been discovered. Minimal surfaces can also be defined by properties to do with surface area, with the consequence that they provide a mathematical model for the shape of soap films when stretched across a wire frame
ruled surfaces, which are surfaces that have at least one straight line running through every point; examples include the cylinder and the hyperboloid of one sheet.
A surprising result of Carl Friedrich Gauss, known as the Theorema Egregium, showed that the Gaussian curvature of a surface, which by its definition has to do with how curves on the surface change directions in three dimensional space, can actually be measured by the lengths of curves lying on the surfaces together with the angles made when two curves on the surface intersect. Terminologically, this says that the Gaussian curvature can be calculated from the first fundamental form (also called metric tensor) of the surface. The second fundamental form, by contrast, is an object which encodes how lengths and angles of curves on the surface are distorted when the curves are pushed off of the surface.
Despite measuring different aspects of length and angle, the first and second fundamental forms are not independent from one another, and they satisfy certain constraints called the Gauss–Codazzi equations. A major theorem, often called the fundamental theorem of the differential geometry of surfaces, asserts that whenever two such forms satisfy the Gauss–Codazzi constraints, they will arise as the first and second fundamental forms of a regular surface.
Using the first fundamental form, it is possible to define new objects on a regular surface. Geodesics are curves on the surface which satisfy a certain second-order ordinary differential equation which is specified by the first fundamental form. They are very directly connected to the study of lengths of curves; a geodesic of sufficiently short length will always be the curve of shortest length on the surface which connects its two endpoints. Thus, geodesics are fundamental to the optimization problem of determining the shortest path between two given points on a regular surface.
One can also define parallel transport along any given curve, which gives a prescription for how to deform a tangent vector to the surface at one point of the curve to tangent vectors at all other points of the curve. The prescription is determined by a first-order ordinary differential equation which is specified by the first fundamental form.
The above concepts are essentially all to do with multivariable calculus. The Gauss–Bonnet theorem is a more global result, which relates the Gaussian curvature of a surface together with its topological type. It asserts that the average value of the Gaussian curvature is completely determined by the Euler characteristic of the surface together with its surface area.
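As a concrete illustration of the Gauss–Bonnet theorem, the total Gaussian curvature of a torus (Euler characteristic 0) must vanish, even though the curvature is positive on the outer half and negative on the inner half. The following sketch checks this numerically for a torus of revolution, using the standard closed forms for its curvature and area element; the radii are hypothetical sample values.

```python
import math

# Numerical check of Gauss-Bonnet for a torus of revolution (Euler
# characteristic 0): the total Gaussian curvature should vanish.
# For f(s,t) = ((R cos s + r) cos t, (R cos s + r) sin t, R sin s), the
# standard formulas give K = cos s / (R (r + R cos s)) and
# dA = R (r + R cos s) ds dt.
r, R = 3.0, 1.0  # central radius r, tube radius R (hypothetical values)
n = 400
ds = 2 * math.pi / n
total = 0.0
for i in range(n):
    s = (i + 0.5) * ds
    K = math.cos(s) / (R * (r + R * math.cos(s)))
    dA = R * (r + R * math.cos(s)) * ds * (2 * math.pi)  # t-integral is trivial
    total += K * dA
# total curvature is ~0, in agreement with 2*pi*chi = 0 for the torus
```

Note how the positive and negative curvature regions cancel exactly: the integrand reduces to cos s, whose integral over a full period is zero.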
Any regular surface is an example both of a Riemannian manifold and Riemann surface. Essentially all of the theory of regular surfaces as discussed here has a generalization in the theory of Riemannian manifolds and their submanifolds.
== Regular surfaces in Euclidean space ==
=== Definition ===
It is intuitively clear that a sphere is smooth, while a cone or a pyramid, due to its vertex or edges, is not. The notion of a "regular surface" is a formalization of the notion of a smooth surface. The definition utilizes the local representation of a surface via maps between Euclidean spaces. There is a standard notion of smoothness for such maps; a map between two open subsets of Euclidean space is smooth if its partial derivatives of every order exist at every point of the domain.
A regular surface in Euclidean space ℝ3 is a subset S of ℝ3 such that every point of S has a neighborhood which can be described in any of three equivalent ways: by a local parametrization, as a Monge patch, or as the zero set of a local defining function.
The Monge patch formulation is perhaps the most visually intuitive, as it essentially says that a regular surface is a subset of ℝ3 which is locally the graph of a smooth function (whether over a region in the yz plane, the xz plane, or the xy plane).
The homeomorphisms appearing in the first definition are known as local parametrizations or local coordinate systems or local charts on S. The equivalence of the first two definitions asserts that, around any point on a regular surface, there always exist local parametrizations of the form (u, v) ↦ (h(u, v), u, v), (u, v) ↦ (u, h(u, v), v), or (u, v) ↦ (u, v, h(u, v)), known as Monge patches. Functions F as in the third definition are called local defining functions. The equivalence of all three definitions follows from the implicit function theorem.
Given any two local parametrizations f : V → U and f ′ : V ′→ U ′ of a regular surface, the composition f −1 ∘ f ′ is necessarily smooth as a map between open subsets of ℝ2. This shows that any regular surface naturally has the structure of a smooth manifold, with a smooth atlas being given by the inverses of local parametrizations.
In the classical theory of differential geometry, surfaces are usually studied only in the regular case. It is, however, also common to study non-regular surfaces, in which the two partial derivatives ∂u f and ∂v f of a local parametrization may fail to be linearly independent. In this case, S may have singularities such as cuspidal edges. Such surfaces are typically studied in singularity theory. Other weakened forms of regular surfaces occur in computer-aided design, where a surface is broken apart into disjoint pieces, with the derivatives of local parametrizations failing to even be continuous along the boundaries.
Simple examples. A simple example of a regular surface is given by the 2-sphere {(x, y, z) | x2 + y2 + z2 = 1}; this surface can be covered by six Monge patches (two of each of the three types given above), taking h(u, v) = ± (1 − u2 − v2)1/2. It can also be covered by two local parametrizations, using stereographic projection. The set {(x, y, z) : ((x2 + y2)1/2 − r)2 + z2 = R2} is a torus of revolution with radii r and R. It is a regular surface; local parametrizations can be given of the form
{\displaystyle f(s,t)={\big (}(R\cos s+r)\cos t,(R\cos s+r)\sin t,R\sin s{\big )}.}
The hyperboloid of two sheets {(x, y, z) : z2 = 1 + x2 + y2} is a regular surface; it can be covered by two Monge patches, with h(u, v) = ±(1 + u2 + v2)1/2. The helicoid appears in the theory of minimal surfaces. It is covered by a single local parametrization, f(u, v) = (u sin v, u cos v, v).
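The torus example above can be verified directly: every point of the parametrization should satisfy the implicit equation defining the torus. The following sketch checks this at a few sample parameter values, with hypothetical radii r and R.

```python
import math

# Check that the parametrization of the torus of revolution lands on the
# locus ((x^2 + y^2)^(1/2) - r)^2 + z^2 = R^2.  Radii below are hypothetical.
r, R = 3.0, 1.0

def f(s, t):
    return ((R * math.cos(s) + r) * math.cos(t),
            (R * math.cos(s) + r) * math.sin(t),
            R * math.sin(s))

for s in (0.0, 0.7, 2.1):
    for t in (0.0, 1.3, 4.0):
        x, y, z = f(s, t)
        lhs = (math.hypot(x, y) - r) ** 2 + z ** 2
        assert abs(lhs - R * R) < 1e-12  # point lies on the torus
```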
=== Tangent vectors and normal vectors ===
Let S be a regular surface in ℝ3, and let p be an element of S. Using any of the above definitions, one can single out certain vectors in ℝ3 as being tangent to S at p, and certain vectors in ℝ3 as being orthogonal to S at p.
One sees that the tangent space or tangent plane to S at p, which is defined to consist of all tangent vectors to S at p, is a two-dimensional linear subspace of ℝ3; it is often denoted by TpS. The normal space to S at p, which is defined to consist of all normal vectors to S at p, is a one-dimensional linear subspace of ℝ3 which is orthogonal to the tangent space TpS. As such, at each point p of S, there are two normal vectors of unit length (unit normal vectors). The unit normal vectors at p can be given in terms of local parametrizations, Monge patches, or local defining functions, via the formulas
{\displaystyle \pm \left.{\frac {{\frac {\partial f}{\partial u}}\times {\frac {\partial f}{\partial v}}}{{\big \|}{\frac {\partial f}{\partial u}}\times {\frac {\partial f}{\partial v}}{\big \|}}}\right|_{f^{-1}(p)},\qquad \pm \left.{\frac {{\big (}{\frac {\partial h}{\partial u}},{\frac {\partial h}{\partial v}},-1{\big )}}{\sqrt {1+{\big (}{\frac {\partial h}{\partial u}}{\big )}^{2}+{\big (}{\frac {\partial h}{\partial v}}{\big )}^{2}}}}\right|_{(p_{1},p_{2})},\qquad {\text{or}}\qquad \pm {\frac {\nabla F(p)}{{\big \|}\nabla F(p){\big \|}}},}
following the same notations as in the previous definitions.
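The first and third formulas above can be compared directly on the unit sphere, where both the cross product of the parametrization's partial derivatives and the gradient of the defining function F(x, y, z) = x² + y² + z² − 1 should yield the same unit normal (up to sign). A minimal numerical sketch, using the spherical parametrization f(u, v) = (sin u cos v, sin u sin v, cos u):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def norm(a):
    return math.sqrt(sum(c * c for c in a))

u, v = 0.8, 2.3  # arbitrary sample parameter values
p = (math.sin(u)*math.cos(v), math.sin(u)*math.sin(v), math.cos(u))
# partial derivatives of f(u, v) = (sin u cos v, sin u sin v, cos u)
fu = (math.cos(u)*math.cos(v), math.cos(u)*math.sin(v), -math.sin(u))
fv = (-math.sin(u)*math.sin(v), math.sin(u)*math.cos(v), 0.0)
c = cross(fu, fv)
n1 = tuple(x / norm(c) for x in c)        # unit normal via the parametrization
grad = tuple(2 * x for x in p)            # gradient of F = x^2 + y^2 + z^2 - 1
n2 = tuple(x / norm(grad) for x in grad)  # unit normal via the defining function
assert max(abs(a - b) for a, b in zip(n1, n2)) < 1e-12
```

For the sphere both formulas happen to give the outward normal directly; in general they agree only up to the overall sign ±.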
It is also useful to note an "intrinsic" definition of tangent vectors, which is typical of the generalization of regular surface theory to the setting of smooth manifolds. It defines the tangent space as an abstract two-dimensional real vector space, rather than as a linear subspace of ℝ3. In this definition, one says that a tangent vector to S at p is an assignment, to each local parametrization f : V → S with p ∈ f(V), of two numbers X1 and X2, such that for any other local parametrization f ′ : V ′ → S with p ∈ f ′(V ′) (and with corresponding numbers (X ′)1 and (X ′)2), one has
{\displaystyle {\begin{pmatrix}X^{1}\\X^{2}\end{pmatrix}}=A_{f'(p)}{\begin{pmatrix}(X')^{1}\\(X')^{2}\end{pmatrix}},}
where Af ′(p) is the Jacobian matrix of the mapping f −1 ∘ f ′, evaluated at the point f ′(p). The collection of tangent vectors to S at p naturally has the structure of a two-dimensional vector space. A tangent vector in this sense corresponds to a tangent vector in the previous sense by considering the vector
{\displaystyle X^{1}{\frac {\partial f}{\partial u}}+X^{2}{\frac {\partial f}{\partial v}}}
in ℝ3. The Jacobian condition on X1 and X2 ensures, by the chain rule, that this vector does not depend on f.
For smooth functions on a surface, vector fields (i.e. tangent vector fields) have an important interpretation as first order operators or derivations. Let S be a regular surface, U an open subset of the plane and f : U → S a coordinate chart. If V = f(U), the space C∞(U) can be identified with C∞(V). Similarly f identifies vector fields on U with vector fields on V. Taking standard variables u and v, a vector field has the form X = a∂u + b∂v, with a and b smooth functions. If X is a vector field and g is a smooth function, then Xg is also a smooth function. The first order differential operator X is a derivation, i.e. it satisfies the Leibniz rule
{\displaystyle X(gh)=(Xg)h+g(Xh).}
For vector fields X and Y it is simple to check that the operator [X, Y] = XY − YX is a derivation corresponding to a vector field, called the Lie bracket. It is skew-symmetric, [X, Y] = −[Y, X], and satisfies the Jacobi identity:
{\displaystyle [[X,Y],Z]+[[Y,Z],X]+[[Z,X],Y]=0.}
In summary, vector fields on U or V form a Lie algebra under the Lie bracket.
=== First and second fundamental forms, the shape operator, and the curvature ===
Let S be a regular surface in ℝ3. Given a local parametrization f : V → S and a unit normal vector field n to f(V), one defines the following objects as real-valued or matrix-valued functions on V. The first fundamental form depends only on f, and not on n. The fourth column records the way in which these functions depend on f, by relating the functions E ′, F ′, G ′, L ′, etc., arising for a different choice of local parametrization, f ′ : V ′ → S, to those arising for f. Here A denotes the Jacobian matrix of f –1 ∘ f ′. The key relation in establishing the formulas of the fourth column is then
{\displaystyle {\begin{pmatrix}{\frac {\partial f'}{\partial u}}\\{\frac {\partial f'}{\partial v}}\end{pmatrix}}=A{\begin{pmatrix}{\frac {\partial f}{\partial u}}\\{\frac {\partial f}{\partial v}}\end{pmatrix}},}
as follows by the chain rule.
By a direct calculation with the matrix defining the shape operator, it can be checked that the Gaussian curvature is the determinant of the shape operator, the mean curvature is half of the trace of the shape operator, and the principal curvatures are the eigenvalues of the shape operator; moreover the Gaussian curvature is the product of the principal curvatures and the mean curvature is their sum. These observations can also be formulated as definitions of these objects. These observations also make clear that the last three rows of the fourth column follow immediately from the previous row, as similar matrices have identical determinant, trace, and eigenvalues. It is fundamental to note that E, G, and EG − F2 are all necessarily positive. This ensures that the matrix inverse in the definition of the shape operator is well-defined, and that the principal curvatures are real numbers.
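These relations can be checked numerically. The sketch below computes E, F, G and L, M, N for the torus parametrization given earlier by finite differences, then evaluates the Gaussian curvature as the determinant of the shape operator via the standard formulas K = (LN − M²)/(EG − F²) and H = (LG − 2MF + NE)/(2(EG − F²)); the radii are hypothetical sample values, and the result is compared to the known closed form K = cos s/(R(r + R cos s)).

```python
import math

r, R = 3.0, 1.0  # hypothetical radii for the torus of revolution
h = 1e-4

def f(s, t):
    rho = R * math.cos(s) + r
    return (rho * math.cos(t), rho * math.sin(t), R * math.sin(s))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def partials(s, t):
    # first and second partial derivatives of f by central differences
    fs = tuple((f(s+h, t)[i] - f(s-h, t)[i]) / (2*h) for i in range(3))
    ft = tuple((f(s, t+h)[i] - f(s, t-h)[i]) / (2*h) for i in range(3))
    fss = tuple((f(s+h, t)[i] - 2*f(s, t)[i] + f(s-h, t)[i]) / h**2 for i in range(3))
    ftt = tuple((f(s, t+h)[i] - 2*f(s, t)[i] + f(s, t-h)[i]) / h**2 for i in range(3))
    fst = tuple((f(s+h, t+h)[i] - f(s+h, t-h)[i]
                 - f(s-h, t+h)[i] + f(s-h, t-h)[i]) / (4*h*h) for i in range(3))
    return fs, ft, fss, fst, ftt

s0, t0 = 0.6, 1.1
fs, ft, fss, fst, ftt = partials(s0, t0)
nv = cross(fs, ft)
n = tuple(x / math.sqrt(dot(nv, nv)) for x in nv)  # unit normal
E, F, G = dot(fs, fs), dot(fs, ft), dot(ft, ft)    # first fundamental form
L, M, N = dot(fss, n), dot(fst, n), dot(ftt, n)    # second fundamental form
K = (L * N - M * M) / (E * G - F * F)              # det of the shape operator
H = (L * G - 2 * M * F + N * E) / (2 * (E * G - F * F))  # half its trace
rho = R * math.cos(s0) + r
assert abs(K - math.cos(s0) / (R * rho)) < 1e-4
```

Note that K agrees with the closed form regardless of which unit normal is chosen, whereas H would flip sign with the normal.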
Note also that a negation of the choice of unit normal vector field will negate the second fundamental form, the shape operator, the mean curvature, and the principal curvatures, but will leave the Gaussian curvature unchanged. In summary, this has shown that, given a regular surface S, the Gaussian curvature of S can be regarded as a real-valued function on S; relative to a choice of unit normal vector field on all of S, the two principal curvatures and the mean curvature are also real-valued functions on S.
Geometrically, the first and second fundamental forms can be viewed as giving information on how f(u, v) moves around in ℝ3 as (u, v) moves around in V. In particular, the first fundamental form encodes how quickly f moves, while the second fundamental form encodes the extent to which its motion is in the direction of the normal vector n. In other words, the second fundamental form at a point p encodes the length of the orthogonal projection from S to the tangent plane to S at p; in particular it gives the quadratic function which best approximates this length. This thinking can be made precise by the formulas
{\displaystyle {\begin{aligned}\lim _{(h,k)\to (0,0)}{\frac {{\big |}f(u+h,v+k)-f(u,v){\big |}^{2}-{\big (}Eh^{2}+2Fhk+Gk^{2}{\big )}}{h^{2}+k^{2}}}&=0\\\lim _{(h,k)\to (0,0)}{\frac {{\big (}f(u+h,v+k)-f(u,v){\big )}\cdot n-{\frac {1}{2}}{\big (}Lh^{2}+2Mhk+Nk^{2}{\big )}}{h^{2}+k^{2}}}&=0,\end{aligned}}}
as follows directly from the definitions of the fundamental forms and Taylor's theorem in two dimensions. The principal curvatures can be viewed in the following way. At a given point p of S, consider the collection of all planes which contain the orthogonal line to S. Each such plane has a curve of intersection with S, which can be regarded as a plane curve inside of the plane itself. The two principal curvatures at p are the maximum and minimum possible values of the curvature of this plane curve at p, as the plane under consideration rotates around the normal line.
The following summarizes the calculation of the above quantities relative to a Monge patch f(u, v) = (u, v, h(u, v)). Here hu and hv denote the two partial derivatives of h, with analogous notation for the second partial derivatives. The second fundamental form and all subsequent quantities are calculated relative to the given choice of unit normal vector field.
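For a Monge patch, the curvatures admit closed forms in the derivatives of h alone: with W² = 1 + hu² + hv², one has K = (huu hvv − huv²)/W⁴ and H = ((1 + hv²)huu − 2hu hv huv + (1 + hu²)hvv)/(2W³). The sketch below evaluates these formulas by finite differences for a cap of a sphere of radius ρ (a hypothetical test case), where K = 1/ρ² and |H| = 1/ρ are known.

```python
import math

rho = 2.0  # hypothetical sphere radius
d = 1e-4

def h(u, v):
    # upper cap of the sphere of radius rho as a Monge patch
    return math.sqrt(rho * rho - u * u - v * v)

def curvatures(u, v):
    hu = (h(u + d, v) - h(u - d, v)) / (2 * d)
    hv = (h(u, v + d) - h(u, v - d)) / (2 * d)
    huu = (h(u + d, v) - 2 * h(u, v) + h(u - d, v)) / d ** 2
    hvv = (h(u, v + d) - 2 * h(u, v) + h(u, v - d)) / d ** 2
    huv = (h(u + d, v + d) - h(u + d, v - d)
           - h(u - d, v + d) + h(u - d, v - d)) / (4 * d * d)
    W2 = 1 + hu * hu + hv * hv
    K = (huu * hvv - huv * huv) / W2 ** 2
    H = ((1 + hv * hv) * huu - 2 * hu * hv * huv
         + (1 + hu * hu) * hvv) / (2 * W2 ** 1.5)
    return K, H

K, H = curvatures(0.3, -0.5)
assert abs(K - 1 / rho ** 2) < 1e-5   # Gaussian curvature 1/rho^2
assert abs(abs(H) - 1 / rho) < 1e-5   # mean curvature 1/rho up to sign
```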
=== Christoffel symbols, Gauss–Codazzi equations, and the Theorema Egregium ===
Let S be a regular surface in ℝ3. The Christoffel symbols assign, to each local parametrization f : V → S, eight functions on V, defined by
{\displaystyle {\begin{pmatrix}\Gamma _{11}^{1}&\Gamma _{12}^{1}&\Gamma _{21}^{1}&\Gamma _{22}^{1}\\\Gamma _{11}^{2}&\Gamma _{12}^{2}&\Gamma _{21}^{2}&\Gamma _{22}^{2}\end{pmatrix}}={\begin{pmatrix}E&F\\F&G\end{pmatrix}}^{-1}{\begin{pmatrix}{\frac {1}{2}}{\frac {\partial E}{\partial u}}&{\frac {1}{2}}{\frac {\partial E}{\partial v}}&{\frac {1}{2}}{\frac {\partial E}{\partial v}}&{\frac {\partial F}{\partial v}}-{\frac {1}{2}}{\frac {\partial G}{\partial u}}\\{\frac {\partial F}{\partial u}}-{\frac {1}{2}}{\frac {\partial E}{\partial v}}&{\frac {1}{2}}{\frac {\partial G}{\partial u}}&{\frac {1}{2}}{\frac {\partial G}{\partial u}}&{\frac {1}{2}}{\frac {\partial G}{\partial v}}\end{pmatrix}}.}
They can also be defined by the following formulas, in which n is a unit normal vector field along f(V) and L, M, N are the corresponding components of the second fundamental form:
{\displaystyle {\begin{aligned}{\frac {\partial ^{2}f}{\partial u^{2}}}&=\Gamma _{11}^{1}{\frac {\partial f}{\partial u}}+\Gamma _{11}^{2}{\frac {\partial f}{\partial v}}+Ln\\{\frac {\partial ^{2}f}{\partial u\partial v}}&=\Gamma _{12}^{1}{\frac {\partial f}{\partial u}}+\Gamma _{12}^{2}{\frac {\partial f}{\partial v}}+Mn\\{\frac {\partial ^{2}f}{\partial v^{2}}}&=\Gamma _{22}^{1}{\frac {\partial f}{\partial u}}+\Gamma _{22}^{2}{\frac {\partial f}{\partial v}}+Nn.\end{aligned}}}
The key to this definition is that ∂f/∂u, ∂f/∂v, and n form a basis of ℝ3 at each point, relative to which each of the three equations uniquely specifies the Christoffel symbols as coordinates of the second partial derivatives of f. The choice of unit normal has no effect on the Christoffel symbols, since if n is exchanged for its negation, then the components of the second fundamental form are also negated, and so the signs of Ln, Mn, Nn are left unchanged.
The second definition shows, in the context of local parametrizations, that the Christoffel symbols are geometrically natural. Although the formulas in the first definition appear less natural, they have the importance of showing that the Christoffel symbols can be calculated from the first fundamental form, which is not immediately apparent from the second definition. The equivalence of the definitions can be checked by directly substituting the first definition into the second, and using the definitions of E, F, G.
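The matrix formula in the first definition can be implemented directly. The sketch below computes the Christoffel symbols for the intrinsic metric E = 1, F = 0, G = sin²u of the unit sphere (u the polar angle), for which the nonzero symbols are known to be Γ²₁₂ = cot u and Γ¹₂₂ = −sin u cos u; derivatives of the metric coefficients are taken by central differences.

```python
import math

d = 1e-6

# first fundamental form of the unit sphere in polar coordinates
def E(u, v): return 1.0
def F(u, v): return 0.0
def G(u, v): return math.sin(u) ** 2

def christoffel(u, v):
    du = lambda g: (g(u + d, v) - g(u - d, v)) / (2 * d)
    dv = lambda g: (g(u, v + d) - g(u, v - d)) / (2 * d)
    Eu, Ev = du(E), dv(E)
    Fu, Fv = du(F), dv(F)
    Gu, Gv = du(G), dv(G)
    e, f, g = E(u, v), F(u, v), G(u, v)
    det = e * g - f * f
    inv = ((g / det, -f / det), (-f / det, e / det))  # (E F; F G)^{-1}
    cols = ((0.5 * Eu, Fu - 0.5 * Ev),   # column for Gamma_11
            (0.5 * Ev, 0.5 * Gu),        # column for Gamma_12 = Gamma_21
            (Fv - 0.5 * Gu, 0.5 * Gv))   # column for Gamma_22
    return [(inv[0][0] * a + inv[0][1] * b,
             inv[1][0] * a + inv[1][1] * b) for a, b in cols]

u0 = 1.0
(g11_1, g11_2), (g12_1, g12_2), (g22_1, g22_2) = christoffel(u0, 0.0)
assert abs(g12_2 - math.cos(u0) / math.sin(u0)) < 1e-6   # cot u
assert abs(g22_1 + math.sin(u0) * math.cos(u0)) < 1e-6   # -sin u cos u
```

Since only E, F, G enter, this computation is fully intrinsic, in line with the remark above.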
The Codazzi equations assert that
{\displaystyle {\begin{aligned}{\frac {\partial L}{\partial v}}-{\frac {\partial M}{\partial u}}&=L\Gamma _{12}^{1}+M(\Gamma _{12}^{2}-\Gamma _{11}^{1})-N\Gamma _{11}^{2}\\{\frac {\partial M}{\partial v}}-{\frac {\partial N}{\partial u}}&=L\Gamma _{22}^{1}+M(\Gamma _{22}^{2}-\Gamma _{12}^{1})-N\Gamma _{12}^{2}.\end{aligned}}}
These equations can be directly derived from the second definition of Christoffel symbols given above; for instance, the first Codazzi equation is obtained by differentiating the first equation with respect to v, the second equation with respect to u, subtracting the two, and taking the dot product with n. The Gauss equation asserts that
{\displaystyle {\begin{aligned}KE&={\frac {\partial \Gamma _{11}^{2}}{\partial v}}-{\frac {\partial \Gamma _{21}^{2}}{\partial u}}+\Gamma _{21}^{2}\Gamma _{11}^{1}+\Gamma _{22}^{2}\Gamma _{11}^{2}-\Gamma _{11}^{2}\Gamma _{21}^{1}-\Gamma _{12}^{2}\Gamma _{21}^{2}\\KF&={\frac {\partial \Gamma _{12}^{2}}{\partial v}}-{\frac {\partial \Gamma _{22}^{2}}{\partial u}}+\Gamma _{21}^{2}\Gamma _{12}^{1}-\Gamma _{11}^{2}\Gamma _{22}^{1}\\KG&={\frac {\partial \Gamma _{22}^{1}}{\partial u}}-{\frac {\partial \Gamma _{12}^{1}}{\partial v}}+\Gamma _{11}^{1}\Gamma _{22}^{1}+\Gamma _{12}^{1}\Gamma _{22}^{2}-\Gamma _{21}^{1}\Gamma _{12}^{1}-\Gamma _{22}^{1}\Gamma _{12}^{2}\end{aligned}}}
These can be derived similarly to the Codazzi equations, with one using the Weingarten equations instead of taking the dot product with n. Although these are written as three separate equations, they are identical when the definitions of the Christoffel symbols, in terms of the first fundamental form, are substituted in. There are many ways to write the resulting expression, one of them derived in 1852 by Brioschi using a skillful manipulation of determinants:
{\displaystyle K={\frac {1}{(EG-F^{2})^{2}}}\det {\begin{pmatrix}-{1 \over 2}{\frac {\partial ^{2}E}{\partial v^{2}}}+{\frac {\partial ^{2}F}{\partial u\partial v}}-{1 \over 2}{\frac {\partial ^{2}G}{\partial u^{2}}}&{1 \over 2}{\frac {\partial E}{\partial u}}&{\frac {\partial F}{\partial u}}-{1 \over 2}{\frac {\partial E}{\partial v}}\\{\frac {\partial F}{\partial v}}-{1 \over 2}{\frac {\partial G}{\partial u}}&E&F\\{1 \over 2}{\frac {\partial G}{\partial v}}&F&G\end{pmatrix}}-{\frac {1}{(EG-F^{2})^{2}}}\det {\begin{pmatrix}0&{1 \over 2}{\frac {\partial E}{\partial v}}&{1 \over 2}{\frac {\partial G}{\partial u}}\\{1 \over 2}{\frac {\partial E}{\partial v}}&E&F\\{1 \over 2}{\frac {\partial G}{\partial u}}&F&G\end{pmatrix}}.}
When the Christoffel symbols are considered as being defined by the first fundamental form, the Gauss and Codazzi equations represent certain constraints between the first and second fundamental forms. The Gauss equation is particularly noteworthy, as it shows that the Gaussian curvature can be computed directly from the first fundamental form, without the need for any other information; equivalently, this says that LN − M2 can actually be written as a function of E, F, G, even though the individual components L, M, N cannot. This is known as the theorema egregium, and was a major discovery of Carl Friedrich Gauss. It is particularly striking when one recalls the geometric definition of the Gaussian curvature of S as being defined by the maximum and minimum radii of osculating circles; they seem to be fundamentally defined by the geometry of how S bends within ℝ3. Nevertheless, the theorem shows that their product can be determined from the "intrinsic" geometry of S, having only to do with the lengths of curves along S and the angles formed at their intersections. As said by Marcel Berger:
This theorem is baffling. [...] It is the kind of theorem which could have waited dozens of years more before being discovered by another mathematician since, unlike so much of intellectual history, it was absolutely not in the air. [...] To our knowledge there is no simple geometric proof of the theorema egregium today.
The Gauss-Codazzi equations can also be succinctly expressed and derived in the language of connection forms due to Élie Cartan. In the language of tensor calculus, making use of natural metrics and connections on tensor bundles, the Gauss equation can be written as H2 − |h|2 = R and the two Codazzi equations can be written as ∇1 h12 = ∇2 h11 and ∇1 h22 = ∇2 h12; the complicated expressions to do with Christoffel symbols and the first fundamental form are completely absorbed into the definitions of the covariant tensor derivative ∇h and the scalar curvature R. Pierre Bonnet proved that two quadratic forms satisfying the Gauss-Codazzi equations always uniquely determine an embedded surface locally. For this reason the Gauss-Codazzi equations are often called the fundamental equations for embedded surfaces, precisely identifying where the intrinsic and extrinsic curvatures come from. They admit generalizations to surfaces embedded in more general Riemannian manifolds.
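The intrinsic nature of the Gaussian curvature can be demonstrated by evaluating the Brioschi formula above numerically: feeding in only the metric coefficients E = 1, F = 0, G = sin²u of the unit sphere, with no reference to any embedding, should return K = 1. A sketch, with all derivatives taken by finite differences:

```python
import math

d = 1e-4

# intrinsic metric of the unit sphere (u is the polar angle)
def E(u, v): return 1.0
def F(u, v): return 0.0
def G(u, v): return math.sin(u) ** 2

def det3(m):
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
            - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
            + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def brioschi_K(u, v):
    du = lambda g: (g(u+d, v) - g(u-d, v)) / (2*d)
    dv = lambda g: (g(u, v+d) - g(u, v-d)) / (2*d)
    duu = lambda g: (g(u+d, v) - 2*g(u, v) + g(u-d, v)) / d**2
    dvv = lambda g: (g(u, v+d) - 2*g(u, v) + g(u, v-d)) / d**2
    duv = lambda g: (g(u+d, v+d) - g(u+d, v-d)
                     - g(u-d, v+d) + g(u-d, v-d)) / (4*d*d)
    e, f, g_ = E(u, v), F(u, v), G(u, v)
    m1 = ((-0.5*dvv(E) + duv(F) - 0.5*duu(G), 0.5*du(E), du(F) - 0.5*dv(E)),
          (dv(F) - 0.5*du(G), e, f),
          (0.5*dv(G), f, g_))
    m2 = ((0.0, 0.5*dv(E), 0.5*du(G)),
          (0.5*dv(E), e, f),
          (0.5*du(G), f, g_))
    return (det3(m1) - det3(m2)) / (e*g_ - f*f) ** 2

assert abs(brioschi_K(0.9, 0.3) - 1.0) < 1e-5  # Theorema Egregium: K = 1
```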
=== Isometries ===
A diffeomorphism φ between open sets U and V in a regular surface S is said to be an isometry if it preserves the metric, i.e. the first fundamental form. Thus for every point p in U and tangent vectors w1, w2 at p, there are equalities
{\displaystyle E(p)w_{1}\cdot w_{1}+2F(p)w_{1}\cdot w_{2}+G(p)w_{2}\cdot w_{2}=E(\varphi (p))\varphi ^{\prime }(w_{1})\cdot \varphi ^{\prime }(w_{1})+2F(\varphi (p))\varphi ^{\prime }(w_{1})\cdot \varphi ^{\prime }(w_{2})+G(\varphi (p))\varphi ^{\prime }(w_{2})\cdot \varphi ^{\prime }(w_{2}).}
In terms of the inner product coming from the first fundamental form, this can be rewritten as
{\displaystyle (w_{1},w_{2})_{p}=(\varphi ^{\prime }(w_{1}),\varphi ^{\prime }(w_{2}))_{\varphi (p)}.}
On the other hand, the length of a parametrized curve γ(t) = (x(t), y(t)) can be calculated as
{\displaystyle L(\gamma )=\int _{a}^{b}{\sqrt {E{\dot {x}}\cdot {\dot {x}}+2F{\dot {x}}\cdot {\dot {y}}+G{\dot {y}}\cdot {\dot {y}}}}\,dt}
and, if the curve lies in U, the rules for change of variables show that
{\displaystyle L(\varphi \circ \gamma )=L(\gamma ).}
Conversely, if φ preserves the lengths of all parametrized curves then φ is an isometry. Indeed, for suitable choices of γ, the tangent vectors ẋ and ẏ give arbitrary tangent vectors w1 and w2. The equalities must hold for all choices of tangent vectors w1 and w2, as well as φ′(w1) and φ′(w2), so that
{\displaystyle (\varphi ^{\prime }(w_{1}),\varphi ^{\prime }(w_{2}))_{\varphi (p)}=(w_{1},w_{2})_{p}.}
A simple example of an isometry is provided by two parametrizations f1 and f2 of an open set U into regular surfaces S1 and S2. If E1 = E2, F1 = F2 and G1 = G2, then φ = f2 ∘ f1−1 is an isometry of f1(U) onto f2(U).
The cylinder and the plane give examples of surfaces that are locally isometric but which cannot be extended to an isometry for topological reasons. As another example, the catenoid and helicoid are locally isometric.
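The plane-cylinder local isometry can be checked via the length-preservation characterization above: the wrapping map (u, v) ↦ (cos u, sin u, v) takes the plane onto the unit cylinder, and it pulls the cylinder's first fundamental form back to E = 1, F = 0, G = 1, so every curve keeps its length. A sketch comparing polygonal approximations of an arbitrary plane curve and its image:

```python
import math

def wrap(u, v):
    # local isometry from the plane onto the unit cylinder
    return (math.cos(u), math.sin(u), v)

def length(points):
    # polygonal approximation of arc length
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

n = 20000
ts = [i / n for i in range(n + 1)]
plane = [(math.sin(3 * t), t * t) for t in ts]  # arbitrary curve in the plane
flat = [(u, v, 0.0) for u, v in plane]          # the plane embedded in R^3
cyl = [wrap(u, v) for u, v in plane]            # its image on the cylinder
assert abs(length(flat) - length(cyl)) < 1e-4   # lengths agree
```

No such map exists between the plane and the sphere, since by the Theorema Egregium a local isometry would have to preserve the Gaussian curvature.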
=== Covariant derivatives ===
A tangential vector field X on S assigns, to each p in S, a tangent vector Xp to S at p. According to the "intrinsic" definition of tangent vectors given above, a tangential vector field X then assigns, to each local parametrization f : V → S, two real-valued functions X1 and X2 on V, so that
{\displaystyle X_{p}=X^{1}{\big (}f^{-1}(p){\big )}{\frac {\partial f}{\partial u}}{\Big |}_{f^{-1}(p)}+X^{2}{\big (}f^{-1}(p){\big )}{\frac {\partial f}{\partial v}}{\Big |}_{f^{-1}(p)}}
for each p in S. One says that X is smooth if the functions X1 and X2 are smooth, for any choice of f. According to the other definitions of tangent vectors given above, one may also regard a tangential vector field X on S as a map X : S → ℝ3 such that X(p) is contained in the tangent space TpS ⊂ ℝ3 for each p in S. As is common in the more general situation of smooth manifolds, tangential vector fields can also be defined as certain differential operators on the space of smooth functions on S.
The covariant derivatives (also called "tangential derivatives") of Tullio Levi-Civita and Gregorio Ricci-Curbastro provide a means of differentiating smooth tangential vector fields. Given a tangential vector field X and a tangent vector Y to S at p, the covariant derivative ∇YX is a certain tangent vector to S at p. Consequently, if X and Y are both tangential vector fields, then ∇YX can also be regarded as a tangential vector field; iteratively, if X, Y, and Z are tangential vector fields, then one may compute ∇Z∇YX, which will be another tangential vector field. There are a few ways to define the covariant derivative; the first below uses the Christoffel symbols and the "intrinsic" definition of tangent vectors, and the second is more manifestly geometric.
Given a tangential vector field X and a tangent vector Y to S at p, one defines ∇YX to be the tangent vector to S at p which assigns to a local parametrization f : V → S the two numbers
{\displaystyle (\nabla _{Y}X)^{k}=D_{(Y^{1},Y^{2})}X^{k}{\Big |}_{f^{-1}(p)}+\sum _{i=1}^{2}\sum _{j=1}^{2}{\big (}\Gamma _{ij}^{k}X^{j}{\big )}{\Big |}_{f^{-1}(p)}Y^{i},\qquad (k=1,2)}
where D(Y1, Y2) is the directional derivative. This is often abbreviated in the less cumbersome form (∇YX)k = ∂Y(X k) + Y iΓkijX j, making use of Einstein notation and with the locations of function evaluation being implicitly understood. This follows a standard prescription in Riemannian geometry for obtaining a connection from a Riemannian metric. It is a fundamental fact that the vector
{\displaystyle (\nabla _{Y}X)^{1}{\frac {\partial f}{\partial u}}+(\nabla _{Y}X)^{2}{\frac {\partial f}{\partial v}}}
in ℝ3 is independent of the choice of local parametrization f, although this is rather tedious to check.
One can also define the covariant derivative by the following geometric approach, which does not make use of Christoffel symbols or local parametrizations. Let X be a vector field on S, viewed as a function S → ℝ3. Given any curve c : (a, b) → S, one may consider the composition X ∘ c : (a, b) → ℝ3. As a map between Euclidean spaces, it can be differentiated at any input value to get an element (X ∘ c)′(t) of ℝ3. The orthogonal projection of this vector onto Tc(t)S defines the covariant derivative ∇c ′(t)X. Although this is a very geometrically clean definition, it is necessary to show that the result depends only on c′(t) and X, and not on the particular choice of curve c; local parametrizations can be used for this small technical argument.
It is not immediately apparent from the second definition that covariant differentiation depends only on the first fundamental form of S; however, this is immediate from the first definition, since the Christoffel symbols can be defined directly from the first fundamental form. It is straightforward to check that the two definitions are equivalent. The key is that when one regards X1∂f/∂u + X2∂f/∂v as a ℝ3-valued function, its differentiation along a curve results in second partial derivatives ∂2f; the Christoffel symbols enter with orthogonal projection to the tangent space, due to the formulation of the Christoffel symbols as the tangential components of the second derivatives of f relative to the basis ∂f/∂u, ∂f/∂v, n. This is discussed in the above section.
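As an illustration (not part of the original treatment), the Christoffel symbols entering the covariant derivative can be computed symbolically from a first fundamental form. The following SymPy sketch uses the unit sphere in colatitude/longitude coordinates, an assumed example; the formula for Γ^k_ij in terms of the metric is the standard one appearing later in this article.

```python
import sympy as sp

u, v = sp.symbols('u v')
# First fundamental form of the unit sphere in colatitude/longitude
# coordinates f(u, v) = (sin u cos v, sin u sin v, cos u):
g = sp.Matrix([[1, 0], [0, sp.sin(u)**2]])   # E = 1, F = 0, G = sin^2 u
ginv = g.inv()
x = [u, v]

def christoffel(k, i, j):
    # Gamma^k_ij = (1/2) g^{km} (d_j g_im + d_i g_jm - d_m g_ij)
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[k, m] *
        (sp.diff(g[i, m], x[j]) + sp.diff(g[j, m], x[i]) - sp.diff(g[i, j], x[m]))
        for m in range(2)))

print(christoffel(0, 1, 1))   # Gamma^1_22 = -sin(u) cos(u)
print(christoffel(1, 0, 1))   # Gamma^2_12 = cos(u)/sin(u)
```

Only two independent symbols are nonzero for this metric, reflecting its rotational symmetry.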
The right-hand side of the three Gauss equations can be expressed using covariant differentiation. For instance, the right-hand side
{\displaystyle {\frac {\partial \Gamma _{11}^{2}}{\partial v}}-{\frac {\partial \Gamma _{21}^{2}}{\partial u}}+\Gamma _{21}^{2}\Gamma _{11}^{1}+\Gamma _{22}^{2}\Gamma _{11}^{2}-\Gamma _{11}^{2}\Gamma _{21}^{1}-\Gamma _{12}^{2}\Gamma _{21}^{2}}
can be recognized as the second coordinate of
{\displaystyle \nabla _{\frac {\partial f}{\partial v}}\nabla _{\frac {\partial f}{\partial u}}{\frac {\partial f}{\partial u}}-\nabla _{\frac {\partial f}{\partial u}}\nabla _{\frac {\partial f}{\partial v}}{\frac {\partial f}{\partial u}}}
relative to the basis ∂f/∂u, ∂f/∂v, as can be directly verified using the definition of covariant differentiation by Christoffel symbols. In the language of Riemannian geometry, this observation can also be phrased as saying that the right-hand sides of the Gauss equations are various components of the Ricci curvature of the Levi-Civita connection of the first fundamental form, when interpreted as a Riemannian metric.
== Examples ==
=== Surfaces of revolution ===
A surface of revolution is obtained by rotating a curve in the xz-plane about the z-axis. Such surfaces include spheres, cylinders, cones, tori, and the catenoid. General ellipsoids, hyperboloids, and paraboloids are not surfaces of revolution. Suppose that the curve is parametrized by
{\displaystyle x=c_{1}(s),\,\,z=c_{2}(s)}
with s drawn from an interval (a, b). If c1 is never zero, if c1′ and c2′ are never both equal to zero, and if c1 and c2 are both smooth, then the corresponding surface of revolution
{\displaystyle S={\Big \{}{\big (}c_{1}(s)\cos t,c_{1}(s)\sin t,c_{2}(s){\big )}\colon s\in (a,b){\text{ and }}t\in \mathbb {R} {\Big \}}}
will be a regular surface in ℝ3. A local parametrization f : (a, b) × (0, 2π) → S is given by
{\displaystyle f(s,t)={\big (}c_{1}(s)\cos t,c_{1}(s)\sin t,c_{2}(s){\big )}.}
Relative to this parametrization, the first fundamental form has coefficients E = (c1′(s))2 + (c2′(s))2, F = 0, G = c1(s)2.
In the special case that the original curve is parametrized by arclength, i.e. (c1′(s))2 + (c2′(s))2 = 1, one can differentiate to find c1′(s)c1′′(s) + c2′(s)c2′′(s) = 0. On substitution into the Gaussian curvature, one has the simplified formulas
{\displaystyle K=-{\frac {c_{1}''(s)}{c_{1}(s)}}\qquad {\text{and}}\qquad H=c_{1}'(s)c_{2}''(s)-c_{2}'(s)c_{1}''(s)+{\frac {c_{2}'(s)}{c_{1}(s)}}.}
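These simplified formulas are easy to check symbolically. The following SymPy sketch (an illustration, not from the article) uses the arclength-parametrized profile (c1, c2) = (cos s, sin s) of the unit sphere; note that with this formula's normalization H comes out as the sum of the two principal curvatures, here 1 + 1.

```python
import sympy as sp

s = sp.symbols('s')
# Arclength-parametrized profile of the unit sphere: (c1, c2) = (cos s, sin s)
c1, c2 = sp.cos(s), sp.sin(s)
assert sp.simplify(sp.diff(c1, s)**2 + sp.diff(c2, s)**2) == 1  # unit speed

K = sp.simplify(-sp.diff(c1, s, 2) / c1)
H = sp.simplify(sp.diff(c1, s)*sp.diff(c2, s, 2)
                - sp.diff(c2, s)*sp.diff(c1, s, 2)
                + sp.diff(c2, s)/c1)
print(K, H)   # 1 2 -- constant Gaussian curvature; H is the sum k1 + k2 here
```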
The simplicity of this formula makes it particularly easy to study the class of rotationally symmetric surfaces with constant Gaussian curvature. By reduction to the alternative case that c2(s) = s, one can study the rotationally symmetric minimal surfaces, with the result that any such surface is part of a plane or a scaled catenoid.
Each constant-t curve on S can be parametrized as a geodesic; a constant-s curve on S can be parametrized as a geodesic if and only if c1′(s) is equal to zero. Generally, geodesics on S are governed by Clairaut's relation.
=== Quadric surfaces ===
Consider the quadric surface defined by
{\displaystyle {x^{2} \over a}+{y^{2} \over b}+{z^{2} \over c}=1.}
This surface admits a parametrization
{\displaystyle x={\sqrt {a(a-u)(a-v) \over (a-b)(a-c)}},\,\,y={\sqrt {b(b-u)(b-v) \over (b-a)(b-c)}},\,\,z={\sqrt {c(c-u)(c-v) \over (c-b)(c-a)}}.}
The Gaussian curvature and mean curvature are given by
{\displaystyle K={abc \over u^{2}v^{2}},\,\,K_{m}=-(u+v){\sqrt {abc \over u^{3}v^{3}}}.}
=== Ruled surfaces ===
A ruled surface is one which can be generated by the motion of a straight line in E3. Choosing a directrix on the surface, i.e. a smooth unit speed curve c(t) orthogonal to the straight lines, and then choosing u(t) to be unit vectors along the curve in the direction of the lines, the velocity vector v = ct and u satisfy
{\displaystyle u\cdot v=0,\,\,\|u\|=1,\,\,\|v\|=1.}
The surface consists of points
{\displaystyle c(t)+s\cdot u(t)}
as s and t vary.
Then, if
{\displaystyle a=\|u_{t}\|,\,\,b=u_{t}\cdot v,\,\,\alpha =-{\frac {b}{a^{2}}},\,\,\beta ={\frac {\sqrt {a^{2}-b^{2}}}{a^{2}}},}
the Gaussian and mean curvature are given by
{\displaystyle K=-{\beta ^{2} \over ((s-\alpha )^{2}+\beta ^{2})^{2}},\,\,K_{m}=-{r{\big [}(s-\alpha )^{2}+\beta ^{2}{\big ]}+\beta _{t}(s-\alpha )+\beta \alpha _{t} \over [(s-\alpha )^{2}+\beta ^{2}]^{\frac {3}{2}}}.}
The Gaussian curvature of the ruled surface vanishes if and only if ut and v are proportional. This condition is equivalent to the surface being the envelope of the planes along the curve containing the tangent vector v and the orthogonal vector u, i.e. to the surface being developable along the curve. More generally a surface in E3 has vanishing Gaussian curvature near a point if and only if it is developable near that point. (An equivalent condition is given below in terms of the metric.)
=== Minimal surfaces ===
In 1760 Lagrange extended Euler's results on the calculus of variations involving integrals in one variable to two variables. He had in mind the following problem:
Given a closed curve in E3, find a surface having the curve as boundary with minimal area.
Such a surface is called a minimal surface.
In 1776 Jean Baptiste Meusnier showed that the differential equation derived by Lagrange was equivalent to the vanishing of the mean curvature of the surface:
A surface is minimal if and only if its mean curvature vanishes.
Minimal surfaces have a simple interpretation in real life: they are the shape a soap film will assume if a wire frame shaped like the curve is dipped into a soap solution and then carefully lifted out. The question as to whether a minimal surface with given boundary exists is called Plateau's problem after the Belgian physicist Joseph Plateau who carried out experiments on soap films in the mid-nineteenth century. In 1930 Jesse Douglas and Tibor Radó gave an affirmative answer to Plateau's problem (Douglas was awarded one of the first Fields medals for this work in 1936).
Many explicit examples of minimal surfaces are known, such as the catenoid, the helicoid, the Scherk surface and the Enneper surface. There has been extensive research in this area, summarised in Osserman (2002). In particular a result of Osserman shows that if a minimal surface is non-planar, then its image under the Gauss map is dense in S2.
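The minimality of the catenoid can be checked with the mean-curvature formula for arclength-parametrized surfaces of revolution given earlier. The following SymPy sketch is illustrative; the specific arclength parametrization of the catenoid profile is an assumed choice.

```python
import sympy as sp

s = sp.symbols('s', real=True)
# Arclength-parametrized catenoid profile (an illustrative choice):
c1 = sp.sqrt(1 + s**2)
c2 = sp.asinh(s)
assert sp.simplify(sp.diff(c1, s)**2 + sp.diff(c2, s)**2) == 1  # unit speed

# H = c1' c2'' - c2' c1'' + c2'/c1 for a surface of revolution:
H = sp.simplify(sp.diff(c1, s)*sp.diff(c2, s, 2)
                - sp.diff(c2, s)*sp.diff(c1, s, 2)
                + sp.diff(c2, s)/c1)
assert H == 0   # vanishing mean curvature: the catenoid is minimal
```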
=== Surfaces of constant Gaussian curvature ===
If a surface has constant Gaussian curvature, it is called a surface of constant curvature.
The unit sphere in E3 has constant Gaussian curvature +1.
The Euclidean plane and the cylinder both have constant Gaussian curvature 0.
A unit pseudosphere has constant Gaussian curvature −1 (apart from its singular equator). The pseudosphere can be obtained by rotating a tractrix around its asymptote. In 1868 Eugenio Beltrami showed that the geometry of the pseudosphere was directly related to that of the more abstract hyperbolic plane, discovered independently by Lobachevsky (1830) and Bolyai (1832). Already in 1840, F. Minding, a student of Gauss, had obtained trigonometric formulas for the pseudosphere identical to those for the hyperbolic plane. The intrinsic geometry of this surface is now better understood in terms of the Poincaré metric on the upper half plane or the unit disc, and has been described by other models such as the Klein model or the hyperboloid model, obtained by considering the two-sheeted hyperboloid q(x, y, z) = −1 in three-dimensional Minkowski space, where q(x, y, z) = x2 + y2 – z2.
The sphere, the plane and the hyperbolic plane each have a transitive Lie group of symmetries. This group theoretic fact has far-reaching consequences, all the more remarkable because of the central role these special surfaces play in the geometry of surfaces, due to Poincaré's uniformization theorem (see below).
Other examples of surfaces with Gaussian curvature 0 include cones, tangent developables, and more generally any developable surface.
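Two of these constant-curvature examples arise as surfaces of revolution, so their curvature can be read off from the formula K = −c1′′/c1 for arclength-parametrized profiles given earlier. A small SymPy sketch (illustrative; the tractrix profile c1 = e^{−s}, valid for s > 0, is an assumed parametrization):

```python
import sympy as sp

s = sp.symbols('s', positive=True)
# K = -c1''/c1 for an arclength-parametrized profile curve (c1, c2):
profiles = {'plane': s,                     # c1 = s, c2 constant
            'pseudosphere': sp.exp(-s)}     # tractrix profile, s > 0
curvatures = {name: sp.simplify(-sp.diff(c1, s, 2)/c1)
              for name, c1 in profiles.items()}
print(curvatures)   # {'plane': 0, 'pseudosphere': -1}
```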
== Local metric structure ==
For any surface embedded in Euclidean space of dimension 3 or higher, it is possible to measure the length of a curve on the surface, the angle between two curves and the area of a region on the surface. This structure is encoded infinitesimally in a Riemannian metric on the surface through line elements and area elements. Classically in the nineteenth and early twentieth centuries only surfaces embedded in R3 were considered and the metric was given as a 2×2 positive definite matrix varying smoothly from point to point in a local parametrization of the surface. The idea of local parametrization and change of coordinate was later formalized through the current abstract notion of a manifold, a topological space where the smooth structure is given by local charts on the manifold, exactly as the planet Earth is mapped by atlases today. Changes of coordinates between different charts of the same region are required to be smooth. Just as contour lines on real-life maps encode changes in elevation, taking into account local distortions of the Earth's surface to calculate true distances, so the Riemannian metric describes distances and areas "in the small" in each local chart. In each local chart a Riemannian metric is given by smoothly assigning a 2×2 positive definite matrix to each point; when a different chart is taken, the matrix is transformed according to the Jacobian matrix of the coordinate change. The manifold then has the structure of a 2-dimensional Riemannian manifold.
=== Shape operator ===
The differential dn of the Gauss map n can be used to define a type of extrinsic curvature, known as the shape operator or Weingarten map. This operator first appeared implicitly in the work of Wilhelm Blaschke and later explicitly in a treatise by Burali-Forti and Burgati. Since at each point x of the surface, the tangent space is an inner product space, the shape operator Sx can be defined as a linear operator on this space by the formula
{\displaystyle (S_{x}v,w)=(dn(v),w)}
for tangent vectors v, w (the inner product makes sense because dn(v) and w both lie in E3). The right hand side is symmetric in v and w, so the shape operator is self-adjoint on the tangent space. The eigenvalues of Sx are just the principal curvatures k1 and k2 at x. In particular the determinant of the shape operator at a point is the Gaussian curvature, but it also contains other information, since the mean curvature is half the trace of the shape operator. The mean curvature is an extrinsic invariant. In intrinsic geometry, a cylinder is developable, meaning that every piece of it is intrinsically indistinguishable from a piece of a plane since its Gauss curvature vanishes identically. Its mean curvature is not zero, though; hence extrinsically it is different from a plane.
Equivalently, the shape operator can be defined as a linear operator on tangent spaces, {\displaystyle S_{p}:T_{p}M\rightarrow T_{p}M}. If n is a unit normal field to M and v is a tangent vector then
{\displaystyle S(v)=\pm \nabla _{v}n}
(there is no standard agreement whether to use + or − in the definition).
In general, the eigenvectors and eigenvalues of the shape operator at each point determine the directions in which the surface bends at each point. The eigenvalues correspond to the principal curvatures of the surface and the eigenvectors are the corresponding principal directions. The principal directions specify the directions that a curve embedded in the surface must travel to have maximum and minimum curvature, these being given by the principal curvatures.
== Geodesic curves on a surface ==
Curves on a surface which minimize length between the endpoints are called geodesics; they are the shape that an elastic band stretched between the two points would take. Mathematically they are described using ordinary differential equations and the calculus of variations. The differential geometry of surfaces revolves around the study of geodesics. It is still an open question whether every Riemannian metric on a 2-dimensional local chart arises from an embedding in 3-dimensional Euclidean space: the theory of geodesics has been used to show this is true in the important case when the components of the metric are analytic.
=== Geodesics ===
Given a piecewise smooth path {\displaystyle c(t)=(x(t),y(t))} in the chart for {\displaystyle t\in [a,b]}, its length is defined by
{\displaystyle L(c)=\int _{a}^{b}(E{\dot {x}}^{2}+2F{\dot {x}}{\dot {y}}+G{\dot {y}}^{2})^{\frac {1}{2}}\,dt}
and energy by
{\displaystyle E(c)=\int _{a}^{b}(E{\dot {x}}^{2}+2F{\dot {x}}{\dot {y}}+G{\dot {y}}^{2})\,dt.}
The length is independent of the parametrization of a path. By the Euler–Lagrange equations, if c(t) is a path minimising length, parametrized by arclength, it must satisfy the Euler equations
{\displaystyle {\ddot {x}}+\Gamma _{11}^{1}{\dot {x}}^{2}+2\Gamma _{12}^{1}{\dot {x}}{\dot {y}}+\Gamma _{22}^{1}{\dot {y}}^{2}=0}
{\displaystyle {\ddot {y}}+\Gamma _{11}^{2}{\dot {x}}^{2}+2\Gamma _{12}^{2}{\dot {x}}{\dot {y}}+\Gamma _{22}^{2}{\dot {y}}^{2}=0}
where the Christoffel symbols Γkij are given by
{\displaystyle \Gamma _{ij}^{k}={\tfrac {1}{2}}g^{km}(\partial _{j}g_{im}+\partial _{i}g_{jm}-\partial _{m}g_{ij})}
where g11 = E, g12 = F, g22 = G and gij is the inverse matrix to gij. A path satisfying the Euler equations is called a geodesic. By the Cauchy–Schwarz inequality a path minimising energy is just a geodesic parametrised by arc length; and, for any geodesic, the parameter t is proportional to arclength.
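The Euler equations can be integrated numerically. The following pure-Python sketch (illustrative; the sphere metric E = 1, F = 0, G = sin²x in colatitude/longitude coordinates is an assumed example) integrates the two geodesic equations with a classical RK4 step and confirms that a geodesic started along the equator stays on the equator, as the great-circle description predicts.

```python
import math

# Geodesic equations on the unit sphere with E = 1, F = 0, G = sin^2 x
# (x = colatitude, y = longitude); the nonzero Christoffel symbols are
# Gamma^1_22 = -sin x cos x and Gamma^2_12 = Gamma^2_21 = cot x, so
# x'' = sin x cos x (y')^2 and y'' = -2 cot x x' y'.
def accel(x, y, xd, yd):
    return (math.sin(x) * math.cos(x) * yd**2,
            -2.0 * (math.cos(x) / math.sin(x)) * xd * yd)

def step(state, h):
    # One classical RK4 step for the first-order system (x, y, xd, yd).
    def f(s):
        x, y, xd, yd = s
        ax, ay = accel(x, y, xd, yd)
        return (xd, yd, ax, ay)
    k1 = f(state)
    k2 = f([s + 0.5*h*k for s, k in zip(state, k1)])
    k3 = f([s + 0.5*h*k for s, k in zip(state, k2)])
    k4 = f([s + h*k for s, k in zip(state, k3)])
    return [s + h*(a + 2*b + 2*c + d)/6
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

# Start on the equator moving east: the geodesic should stay on the equator.
state = [math.pi/2, 0.0, 0.0, 1.0]
for _ in range(1000):
    state = step(state, 0.01)
print(abs(state[0] - math.pi/2) < 1e-9)  # True: x stays at pi/2
```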
=== Geodesic curvature ===
The geodesic curvature kg at a point of a curve c(t), parametrised by arc length, on an oriented surface is defined to be
{\displaystyle k_{g}={\ddot {c}}(t)\cdot \mathbf {n} (t).}
where n(t) is the "principal" unit normal to the curve in the surface, constructed by rotating the unit tangent vector ċ(t) through an angle of +90°.
The geodesic curvature at a point is an intrinsic invariant depending only on the metric near the point.
A unit speed curve on a surface is a geodesic if and only if its geodesic curvature vanishes at all points on the curve.
A unit speed curve c(t) in an embedded surface is a geodesic if and only if its acceleration vector c̈(t) is normal to the surface.
The geodesic curvature measures in a precise way how far a curve on the surface is from being a geodesic.
=== Orthogonal coordinates ===
When F = 0 throughout a coordinate chart, such as with the geodesic polar coordinates discussed below, the images of lines parallel to the x- and y-axes are orthogonal and provide orthogonal coordinates. If H = (EG)1⁄2, then the Gaussian curvature is given by
{\displaystyle K=-{1 \over 2H}\left[\partial _{x}\left({\frac {G_{x}}{H}}\right)+\partial _{y}\left({\frac {E_{y}}{H}}\right)\right].}
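This formula for K in orthogonal coordinates is easy to verify symbolically. A SymPy sketch (illustrative) applies it to the geodesic polar metric of the unit sphere, E = 1, F = 0, G = sin²x, and recovers the constant curvature 1:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
# Orthogonal coordinates on the unit sphere: E = 1, F = 0, G = sin^2 x.
E, G = sp.Integer(1), sp.sin(x)**2
H = sp.sin(x)   # H = (E G)^(1/2)
K = sp.simplify(-(sp.diff(sp.diff(G, x)/H, x)
                  + sp.diff(sp.diff(E, y)/H, y)) / (2*H))
assert K == 1
```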
If in addition E = 1, so that H = G1⁄2, then the angle φ at the intersection between geodesic (x(t),y(t)) and the line y = constant is given by the equation
{\displaystyle \tan \varphi =H\cdot {\frac {\dot {y}}{\dot {x}}}.}
The derivative of φ is given by a classical derivative formula of Gauss:
{\displaystyle {\dot {\varphi }}=-H_{x}\cdot {\dot {y}}.}
== Geodesic polar coordinates ==
Once a metric is given on a surface and a base point is fixed, there is a unique geodesic connecting the base point to each sufficiently nearby point. The direction of the geodesic at the base point and the distance uniquely determine the other endpoint. These two bits of data, a direction and a magnitude, thus determine a tangent vector at the base point. The map from tangent vectors to endpoints smoothly sweeps out a neighbourhood of the base point and defines what is called the exponential map, defining a local coordinate chart at that base point. The neighbourhood swept out has similar properties to balls in Euclidean space, namely any two points in it are joined by a unique geodesic. This property is called "geodesic convexity" and the coordinates are called normal coordinates. The explicit calculation of normal coordinates can be accomplished by considering the differential equation satisfied by geodesics. The convexity properties are consequences of Gauss's lemma and its generalisations. Roughly speaking this lemma states that geodesics starting at the base point must cut the spheres of fixed radius centred on the base point at right angles. Geodesic polar coordinates are obtained by combining the exponential map with polar coordinates on tangent vectors at the base point. The Gaussian curvature of the surface is then given by the second order deviation of the metric at the point from the Euclidean metric. In particular the Gaussian curvature is an invariant of the metric, Gauss's celebrated Theorema Egregium. A convenient way to understand the curvature comes from an ordinary differential equation, first considered by Gauss and later generalized by Jacobi, arising from the change of normal coordinates about two different points. The Gauss–Jacobi equation provides another way of computing the Gaussian curvature. 
Geometrically it explains what happens to geodesics from a fixed base point as the endpoint varies along a small curve segment through data recorded in the Jacobi field, a vector field along the geodesic. One and a quarter centuries after Gauss and Jacobi, Marston Morse gave a more conceptual interpretation of the Jacobi field in terms of second derivatives of the energy function on the infinite-dimensional Hilbert manifold of paths.
=== Exponential map ===
The theory of ordinary differential equations shows that if f(t, v) is smooth then the differential equation dv/dt = f(t, v) with initial condition v(0) = v0 has a unique solution for |t| sufficiently small and the solution depends smoothly on t and v0. This implies that for sufficiently small tangent vectors v at a given point p = (x0, y0), there is a geodesic cv(t) defined on (−2, 2) with cv(0) = (x0, y0) and ċv(0) = v. Moreover, if |s| ≤ 1, then csv(t) = cv(st). The exponential map is defined by
expp(v) = cv (1)
and gives a diffeomorphism between a disc ‖v‖ < δ and a neighbourhood of p; more generally the map sending (p, v) to expp(v) gives a local diffeomorphism onto a neighbourhood of (p, p). The exponential map gives geodesic normal coordinates near p.
=== Computation of normal coordinates ===
There is a standard technique (see for example Berger (2004)) for computing the change of variables to normal coordinates u, v at a point as a formal Taylor series expansion. If the coordinates x, y at (0,0) are locally orthogonal, write
x(u,v) = αu + L(u,v) + λ(u,v) + …
y(u,v) = βv + M(u,v) + μ(u,v) + …
where L, M are quadratic and λ, μ cubic homogeneous polynomials in u and v. If u and v are fixed, x(t) = x(tu,tv) and y(t) = y(tu, tv) can be considered as formal power series solutions of the Euler equations: this uniquely determines α, β, L, M, λ and μ.
=== Gauss's lemma ===
In these coordinates the matrix g(x) satisfies g(0) = I and the lines t ↦ tv are geodesics through 0. Euler's equations imply the matrix equation
g(v)v = v,
a key result, usually called the Gauss lemma. Geometrically it states that the geodesics through 0 cut the circles centred at 0 orthogonally.
Taking polar coordinates (r,θ), it follows that the metric has the form
ds2 = dr2 + G(r,θ) dθ2.
In geodesic coordinates, it is easy to check that the geodesics through zero minimize length. The topology on the Riemannian manifold is then given by a distance function d(p,q), namely the infimum of the lengths of piecewise smooth paths between p and q. This distance is realised locally by geodesics, so that in normal coordinates d(0,v) = ‖v‖. If the radius δ is taken small enough, a slight sharpening of the Gauss lemma shows that the image U of the disc ‖v‖ < δ under the exponential map is geodesically convex, i.e. any two points in U are joined by a unique geodesic lying entirely inside U.
=== Theorema Egregium ===
Gauss's Theorema Egregium, the "Remarkable Theorem", shows that the Gaussian curvature of a surface can be computed solely in terms of the metric and is thus an intrinsic invariant of the surface, independent of any isometric embedding in E3 and unchanged under coordinate transformations. In particular, isometries and local isometries of surfaces preserve Gaussian curvature.
This theorem can be expressed in terms of the power series expansion of the metric: in normal coordinates (u, v), ds is given by
ds2 = du2 + dv2 − K(u dv – v du)2/12 + ….
=== Gauss–Jacobi equation ===
Taking a coordinate change from normal coordinates at p to normal coordinates at a nearby point q, yields the Sturm–Liouville equation satisfied by H(r,θ) = G(r,θ)1⁄2, discovered by Gauss and later generalised by Jacobi,
Hrr = –KH.
The Jacobian of this coordinate change at q is equal to Hr. This gives another way of establishing the intrinsic nature of Gaussian curvature. Because H(r,θ) can be interpreted as the length of the line element in the θ direction, the Gauss–Jacobi equation shows that the Gaussian curvature measures the spreading of geodesics on a geometric surface as they move away from a point.
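The Gauss–Jacobi equation is easy to test against the three model geometries. In geodesic polar coordinates the metric is ds² = dr² + H(r,θ)² dθ², and for the sphere, plane and hyperbolic plane H is sin r, r and sinh r respectively (standard facts, used here as an illustration):

```python
import sympy as sp

r = sp.symbols('r')
# Geodesic polar metrics ds^2 = dr^2 + H(r)^2 dθ^2 for the model
# geometries; the Gauss–Jacobi equation H_rr = -K H recovers K.
models = {'sphere': (sp.sin(r), 1),
          'plane': (r, 0),
          'hyperbolic': (sp.sinh(r), -1)}
checks = {name: sp.simplify(sp.diff(H, r, 2) + K*H)
          for name, (H, K) in models.items()}
print(checks)   # every residual H_rr + K H simplifies to zero
```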
=== Laplace–Beltrami operator ===
On a surface with local metric
{\displaystyle ds^{2}=E\,dx^{2}+2F\,dx\,dy+G\,dy^{2}}
and Laplace–Beltrami operator
{\displaystyle \Delta f={1 \over H}\left(\partial _{x}{G \over H}\partial _{x}f-\partial _{x}{F \over H}\partial _{y}f-\partial _{y}{F \over H}\partial _{x}f+\partial _{y}{E \over H}\partial _{y}f\right),}
where H2 = EG − F2, the Gaussian curvature at a point is given by the formula
{\displaystyle K=-3\lim _{r\rightarrow 0}\Delta (\log r),}
where r denotes the geodesic distance from the point.
In isothermal coordinates, first considered by Gauss, the metric is required to be of the special form
{\displaystyle ds^{2}=e^{\varphi }(dx^{2}+dy^{2}).\,}
In this case the Laplace–Beltrami operator is given by
{\displaystyle \Delta =e^{-\varphi }\left({\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}\right)}
and φ satisfies Liouville's equation
{\displaystyle \Delta \varphi =-2K.\,}
Isothermal coordinates are known to exist in a neighbourhood of any point on the surface, although all proofs to date rely on non-trivial results on partial differential equations. There is an elementary proof for minimal surfaces.
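Liouville's equation can be checked on a concrete isothermal metric. A SymPy sketch (illustrative) uses the hyperbolic metric of the upper half-plane, ds² = (dx² + dy²)/y², for which φ = −2 log y, and recovers K = −1:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
# Upper half-plane metric ds^2 = (dx^2 + dy^2)/y^2 in isothermal form:
phi = -2 * sp.log(y)                      # e^phi = 1/y^2
lap_phi = sp.exp(-phi) * (sp.diff(phi, x, 2) + sp.diff(phi, y, 2))
K = sp.simplify(-lap_phi / 2)             # Liouville: Δφ = -2K
assert K == -1
```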
== Gauss–Bonnet theorem ==
On a sphere or a hyperboloid, the area of a geodesic triangle, i.e. a triangle all the sides of which are geodesics, is proportional to the difference of the sum of the interior angles and π. The constant of proportionality is just the Gaussian curvature, a constant for these surfaces. For the torus, the difference is zero, reflecting the fact that its Gaussian curvature is zero. These are standard results in spherical, hyperbolic and high school trigonometry (see below). Gauss generalised these results to an arbitrary surface by showing that the integral of the Gaussian curvature over the interior of a geodesic triangle is also equal to this angle difference or excess. His formula showed that the Gaussian curvature could be calculated near a point as the limit of area over angle excess for geodesic triangles shrinking to the point. Since any closed surface can be decomposed into geodesic triangles, the formula could also be used to compute the integral of the curvature over the whole surface. As a special case of what is now called the Gauss–Bonnet theorem, Gauss proved that this integral was remarkably always 2π times an integer, a topological invariant of the surface called the Euler characteristic. This invariant is easy to compute combinatorially in terms of the number of vertices, edges, and faces of the triangles in the decomposition, also called a triangulation. This interaction between analysis and topology was the forerunner of many later results in geometry, culminating in the Atiyah-Singer index theorem. In particular properties of the curvature impose restrictions on the topology of the surface.
=== Geodesic triangles ===
Gauss proved that, if Δ is a geodesic triangle on a surface with angles α, β and γ at vertices A, B and C, then
{\displaystyle \int _{\Delta }K\,dA=\alpha +\beta +\gamma -\pi .}
In fact taking geodesic polar coordinates with origin A and AB, AC the radii at polar angles 0 and α:
{\displaystyle {\begin{aligned}\int _{\Delta }K\,dA&=\int _{\Delta }KH\,dr\,d\theta =-\int _{0}^{\alpha }\int _{0}^{r_{\theta }}\!H_{rr}\,dr\,d\theta \\&=\int _{0}^{\alpha }1-H_{r}(r_{\theta },\theta )\,d\theta =\int _{0}^{\alpha }d\theta +\int _{\pi -\beta }^{\gamma }\!\!d\varphi \\&=\alpha +\beta +\gamma -\pi ,\end{aligned}}}
where the second equality follows from the Gauss–Jacobi equation and the fourth from Gauss's derivative formula in the orthogonal coordinates (r,θ).
Gauss's formula shows that the curvature at a point can be calculated as the limit of angle excess α + β + γ − π over area for successively smaller geodesic triangles near the point. Qualitatively a surface is positively or negatively curved according to the sign of the angle excess for arbitrarily small geodesic triangles.
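A concrete instance: the octant of the unit sphere bounded by three quarter great circles is a geodesic triangle with three right angles, so its angle excess is π/2, which equals its area since K = 1. A short check (illustrative):

```python
import math

# Octant of the unit sphere: geodesic triangle with vertices at
# (1,0,0), (0,1,0), (0,0,1); all three angles are right angles.
alpha = beta = gamma = math.pi / 2
excess = alpha + beta + gamma - math.pi          # = pi/2
area = (4 * math.pi) / 8                         # one eighth of the sphere
assert math.isclose(excess, area)                # ∫ K dA = excess, since K = 1
```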
=== Gauss–Bonnet theorem ===
Since every compact oriented 2-manifold M can be triangulated by small geodesic triangles, it follows that
{\displaystyle \int _{M}KdA=2\pi \,\chi (M)}
where χ(M) denotes the Euler characteristic of the surface.
In fact if there are F faces, E edges and V vertices, then 3F = 2E and the left hand side equals 2πV – πF = 2π(V – E + F) = 2πχ(M).
This is the celebrated Gauss–Bonnet theorem: it shows that the integral of the Gaussian curvature is a topological invariant of the manifold, namely the Euler characteristic. This theorem can be interpreted in many ways; perhaps one of the most far-reaching has been as the index theorem for an elliptic differential operator on M, one of the simplest cases of the Atiyah-Singer index theorem. Another related result, which can be proved using the Gauss–Bonnet theorem, is the Poincaré-Hopf index theorem for vector fields on M which vanish at only a finite number of points: the sum of the indices at these points equals the Euler characteristic, where the index of a point is defined as follows: on a small circle round each isolated zero, the vector field defines a map into the unit circle; the index is just the winding number of this map.
=== Curvature and embeddings ===
If the Gaussian curvature of a surface M is everywhere positive, then the Euler characteristic is positive so M is homeomorphic (and therefore diffeomorphic) to S2. If in addition the surface is isometrically embedded in E3, the Gauss map provides an explicit diffeomorphism. As Hadamard observed, in this case the surface is convex; this criterion for convexity can be viewed as a 2-dimensional generalisation of the well-known second derivative criterion for convexity of plane curves. Hilbert proved that every isometrically embedded closed surface must have a point of positive curvature. Thus a closed Riemannian 2-manifold of non-positive curvature can never be embedded isometrically in E3; however, as Adriano Garsia showed using the Beltrami equation for quasiconformal mappings, this is always possible for some conformally equivalent metric.
== Surfaces of constant curvature ==
The simply connected surfaces of constant curvature 0, +1 and –1 are the Euclidean plane, the unit sphere in E3, and the hyperbolic plane. Each of these has a transitive three-dimensional Lie group of orientation preserving isometries G, which can be used to study their geometry. Each of the two non-compact surfaces can be identified with the quotient G / K where K is a maximal compact subgroup of G. Here K is isomorphic to SO(2). Any other closed Riemannian 2-manifold M of constant Gaussian curvature, after scaling the metric by a constant factor if necessary, will have one of these three surfaces as its universal covering space. In the orientable case, the fundamental group Γ of M can be identified with a torsion-free uniform subgroup of G and M can then be identified with the double coset space Γ \ G / K. In the case of the sphere and the Euclidean plane, the only possible examples are the sphere itself and tori obtained as quotients of R2 by discrete rank 2 subgroups. For closed surfaces of genus g ≥ 2, the moduli space of Riemann surfaces obtained as Γ varies over all such subgroups, has real dimension 6g − 6. By Poincaré's uniformization theorem, any orientable closed 2-manifold is conformally equivalent to a surface of constant curvature 0, +1 or –1. In other words, by multiplying the metric by a positive scaling factor, the Gaussian curvature can be made to take exactly one of these values (the sign of the Euler characteristic of M).
=== Euclidean geometry ===
In the case of the Euclidean plane, the symmetry group is the Euclidean motion group, the semidirect product of
the two dimensional group of translations by the group of rotations. Geodesics are straight lines and the geometry is encoded in the elementary formulas of trigonometry, such as the cosine rule for a triangle with sides a, b, c and angles α, β, γ:
{\displaystyle c^{2}=a^{2}+b^{2}-2ab\,\cos \gamma .}
Flat tori can be obtained by taking the quotient of R2 by a lattice, i.e. a free Abelian subgroup of rank 2. These closed surfaces have no isometric embeddings in E3. They do nevertheless admit isometric embeddings in E4; in the easiest case this follows from the fact that the torus is a product of two circles and each circle can be isometrically embedded in E2.
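The isometric embedding into E4 via the product of two unit circles can be checked numerically. The following sketch (the map and the finite-difference check are illustrative, not from the source) verifies that the induced first fundamental form is the flat metric ds² + dt²:

```python
import math

def f(s, t):
    # Product of two unit circles in E^4: an isometric embedding of the flat torus.
    return (math.cos(s), math.sin(s), math.cos(t), math.sin(t))

def first_fundamental_form(s, t, h=1e-6):
    # Coefficients E, F, G of the induced metric, via central differences.
    fs = [(a - b) / (2 * h) for a, b in zip(f(s + h, t), f(s - h, t))]
    ft = [(a - b) / (2 * h) for a, b in zip(f(s, t + h), f(s, t - h))]
    E = sum(x * x for x in fs)
    F = sum(x * y for x, y in zip(fs, ft))
    G = sum(y * y for y in ft)
    return E, F, G

E, F, G = first_fundamental_form(0.7, 2.1)
print(E, F, G)  # approximately 1, 0, 1: the induced metric is ds^2 + dt^2, hence flat
```

Since E = G = 1 and F = 0 at every point, the embedding is isometric for the flat metric, even though no such embedding exists in E3.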
=== Spherical geometry ===
The isometry group of the unit sphere S2 in E3 is the orthogonal group O(3), with the rotation group SO(3) as the subgroup of isometries preserving orientation. It is the direct product of SO(3) with the antipodal map, sending x to –x. The group SO(3) acts transitively on S2. The stabilizer subgroup of the unit vector (0,0,1) can be identified with SO(2), so that S2 = SO(3)/SO(2).
The geodesics between two points on the sphere are the great circle arcs with these given endpoints. If the points are not antipodal, there is a unique shortest geodesic between the points. The geodesics can also be described group theoretically: each geodesic through the North pole (0,0,1) is the orbit of the subgroup of rotations about an axis through antipodal points on the equator.
A spherical triangle is a geodesic triangle on the sphere. It is defined by points A, B, C on the sphere with sides BC, CA, AB formed from great circle arcs of length less than π. If the lengths of the sides are a, b, c and the angles between the sides α, β, γ, then the spherical cosine law states that
{\displaystyle \cos c=\cos a\,\cos b+\sin a\,\sin b\,\cos \gamma .}
The area of the triangle is given by
Area = α + β + γ − π.
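As an illustrative check (the code and the chosen triangle are not from the source), the spherical cosine law and the area formula can be verified together: the octant triangle with vertices on the three coordinate axes has all sides and angles equal to π/2, so its spherical excess gives area π/2, one eighth of the total sphere area 4π.

```python
import math

def angle_between(u, v):
    # Geodesic distance on the unit sphere = angle between unit position vectors.
    return math.acos(max(-1.0, min(1.0, sum(x * y for x, y in zip(u, v)))))

def spherical_triangle(A, B, C):
    # Side lengths, each opposite the correspondingly named vertex.
    a, b, c = angle_between(B, C), angle_between(C, A), angle_between(A, B)
    # Angles recovered from the spherical cosine law:
    # cos(opposite) = cos s1 cos s2 + sin s1 sin s2 cos(angle).
    def ang(opp, s1, s2):
        return math.acos((math.cos(opp) - math.cos(s1) * math.cos(s2))
                         / (math.sin(s1) * math.sin(s2)))
    alpha, beta, gamma = ang(a, b, c), ang(b, c, a), ang(c, a, b)
    area = alpha + beta + gamma - math.pi   # spherical excess
    return (a, b, c), (alpha, beta, gamma), area

# Octant triangle: all sides and angles pi/2.
sides, angles, area = spherical_triangle((1, 0, 0), (0, 1, 0), (0, 0, 1))
print(area)  # pi/2
```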
Using stereographic projection from the North pole, the sphere can be identified with the extended complex plane C ∪ {∞}. The explicit map is given by
{\displaystyle \pi (x,y,z)={x+iy \over 1-z}\equiv u+iv.}
Under this correspondence every rotation of S2 corresponds to a Möbius transformation in SU(2), unique up to sign. With respect to the coordinates (u, v) in the complex plane, the spherical metric becomes
{\displaystyle ds^{2}={4(du^{2}+dv^{2}) \over (1+u^{2}+v^{2})^{2}}.}
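A numerical sanity check of this conformal factor (an illustrative sketch, not part of the source): differentiate the inverse stereographic projection by central differences and compare the squared norm of the u-derivative with 4/(1 + u² + v²)².

```python
import math

def inv_stereo(u, v):
    # Inverse of stereographic projection from the North pole (0, 0, 1).
    d = 1 + u * u + v * v
    return (2 * u / d, 2 * v / d, (u * u + v * v - 1) / d)

def conformal_factor_sq(u, v, h=1e-6):
    # |d(inv_stereo)/du|^2, by central differences; since the metric is
    # conformal this equals the factor multiplying du^2 + dv^2.
    p_plus, p_minus = inv_stereo(u + h, v), inv_stereo(u - h, v)
    du = [(a - b) / (2 * h) for a, b in zip(p_plus, p_minus)]
    return sum(x * x for x in du)

u, v = 0.3, -1.2
numeric = conformal_factor_sq(u, v)
exact = 4 / (1 + u * u + v * v) ** 2   # from ds^2 = 4(du^2+dv^2)/(1+u^2+v^2)^2
print(numeric, exact)  # agree to finite-difference accuracy
```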
The unit sphere is the unique closed orientable surface with constant curvature +1. The quotient SO(3)/O(2) can be identified with the real projective plane. It is non-orientable and can be described as the quotient of S2 by the antipodal map (multiplication by −1). The sphere is simply connected, while the real projective plane has fundamental group Z2. The finite subgroups of SO(3), corresponding to the finite subgroups of O(2) and the symmetry groups of the platonic solids, do not act freely on S2, so the corresponding quotients are not 2-manifolds, just orbifolds.
=== Hyperbolic geometry ===
Non-Euclidean geometry was first discussed in letters of Gauss, who made extensive computations at the turn of the nineteenth century which, although privately circulated, he decided not to put into print. In 1830 Lobachevsky and independently in 1832 Bolyai, the son of one of Gauss's correspondents, published synthetic versions of this new geometry, for which they were severely criticized. However it was not until 1868 that Beltrami, followed by Klein in 1871 and Poincaré in 1882, gave concrete analytic models for what Klein dubbed hyperbolic geometry. The four models of 2-dimensional hyperbolic geometry that emerged were:
the Beltrami-Klein model;
the Poincaré disk;
the Poincaré upper half-plane;
the hyperboloid model of Wilhelm Killing in 3-dimensional Minkowski space.
The first model, based on a disk, has the advantage that geodesics are actually line segments (that is, intersections of Euclidean lines with the open unit disk). The last model has the advantage that it gives a construction which is completely parallel to that of the unit sphere in 3-dimensional Euclidean space. Because of their application in complex analysis and geometry, however, the models of Poincaré are the most widely used: they are interchangeable thanks to the Möbius transformations between the disk and the upper half-plane.
Let
{\displaystyle D=\{z\,\colon |z|<1\}}
be the Poincaré disk in the complex plane with Poincaré metric
{\displaystyle ds^{2}={4(dx^{2}+dy^{2}) \over (1-x^{2}-y^{2})^{2}}.}
In polar coordinates (r, θ) the metric is given by
{\displaystyle ds^{2}={4(dr^{2}+r^{2}\,d\theta ^{2}) \over (1-r^{2})^{2}}.}
The length of a curve γ:[a,b] → D is given by the formula
{\displaystyle \ell (\gamma )=\int _{a}^{b}{2|\gamma ^{\prime }(t)|\,dt \over 1-|\gamma (t)|^{2}}.}
The group G = SU(1,1) given by
{\displaystyle G=\left\{{\begin{pmatrix}\alpha &\beta \\{\overline {\beta }}&{\overline {\alpha }}\end{pmatrix}}:\alpha ,\beta \in \mathbf {C} ,\,|\alpha |^{2}-|\beta |^{2}=1\right\}}
acts transitively by Möbius transformations on D and the stabilizer subgroup of 0 is the rotation group
{\displaystyle K=\left\{{\begin{pmatrix}\zeta &0\\0&{\overline {\zeta }}\end{pmatrix}}:\zeta \in \mathbf {C} ,\,|\zeta |=1\right\}.}
The quotient group SU(1,1)/±I is the group of orientation-preserving isometries of D. Any two points z, w in D are joined by a unique geodesic, given by the portion of the circle or straight line passing through z and w and orthogonal to the boundary circle. The distance between z and w is given by
{\displaystyle d(z,w)=2\tanh ^{-1}{\frac {|z-w|}{|1-{\overline {w}}z|}}.}
In particular d(0,r) = 2 tanh−1 r and c(t) = tanh(t/2) is the geodesic through 0 along the real axis, parametrized by arclength.
The topology defined by this metric is equivalent to the usual Euclidean topology, although as a metric space (D,d) is complete.
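The distance formula, and its invariance under the SU(1,1) action, can be tested numerically. In the sketch below (illustrative, not from the source; the chosen matrix entries and points are arbitrary), any α, β with |α|² − |β|² = 1 give an isometry of (D, d):

```python
import cmath, math

def d(z, w):
    # Poincaré-disk distance: d(z, w) = 2 artanh(|z - w| / |1 - conj(w) z|).
    return 2 * math.atanh(abs(z - w) / abs(1 - w.conjugate() * z))

def mobius(alpha, beta, z):
    # Action of the SU(1,1) matrix [[alpha, beta], [conj(beta), conj(alpha)]] on D.
    return (alpha * z + beta) / (beta.conjugate() * z + alpha.conjugate())

# An SU(1,1) element: |alpha|^2 - |beta|^2 = 1 (real alpha suffices here).
beta = 0.5 + 0.2j
alpha = cmath.sqrt(1 + abs(beta) ** 2)

z, w = 0.3 + 0.1j, -0.4 + 0.5j
print(d(0, 0.5), 2 * math.atanh(0.5))   # d(0, r) = 2 artanh r
print(d(z, w), d(mobius(alpha, beta, z), mobius(alpha, beta, w)))  # equal: an isometry
```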
A hyperbolic triangle is a geodesic triangle for this metric: any three points in D are vertices of a hyperbolic triangle. If the sides have length a, b, c with corresponding angles α, β, γ, then the hyperbolic cosine rule states that
{\displaystyle \cosh c=\cosh a\,\cosh b-\sinh a\,\sinh b\,\cos \gamma .}
The area of the hyperbolic triangle is given by
Area = π – α – β – γ.
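The hyperbolic cosine rule and the angle-sum deficit can be checked for a concrete triangle in D (an illustrative sketch; the vertices below are an arbitrary choice). Angles are computed by moving each vertex to 0 with a disk automorphism, which preserves angles because the metric is conformal, and geodesics through 0 are diameters:

```python
import cmath, math

def dist(z, w):
    # Poincaré-disk distance 2 artanh(|z - w| / |1 - conj(w) z|).
    return 2 * math.atanh(abs(z - w) / abs(1 - w.conjugate() * z))

def angle_at(p, q, r):
    # Angle of the geodesic triangle at vertex p, between the sides toward q and r.
    g = lambda z: (z - p) / (1 - p.conjugate() * z)  # automorphism sending p to 0
    t = abs(cmath.phase(g(q)) - cmath.phase(g(r))) % (2 * math.pi)
    return min(t, 2 * math.pi - t)

A, B, C = 0.1 + 0.2j, 0.5 - 0.1j, -0.3 + 0.4j
a, b, c = dist(B, C), dist(C, A), dist(A, B)       # each side opposite its vertex
alpha, beta, gamma = angle_at(A, B, C), angle_at(B, C, A), angle_at(C, A, B)

lhs = math.cosh(c)
rhs = math.cosh(a) * math.cosh(b) - math.sinh(a) * math.sinh(b) * math.cos(gamma)
print(lhs, rhs)                        # equal: the hyperbolic cosine rule
print(math.pi - alpha - beta - gamma)  # the area; positive since the angle sum < pi
```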
The unit disk and the upper half-plane
{\displaystyle H=\{w=x+iy\,\colon \,y>0\}}
are conformally equivalent by the Möbius transformations
{\displaystyle w=i{1+z \over 1-z},\,\,z={w-i \over w+i}.}
Under this correspondence the action of SL(2,R) by Möbius transformations on H corresponds to that of SU(1,1) on D. The metric on H becomes
{\displaystyle ds^{2}={dx^{2}+dy^{2} \over y^{2}}.}
Since lines or circles are preserved under Möbius transformations, geodesics are again described by lines or circles orthogonal to the real axis.
The unit disk with the Poincaré metric is the unique simply connected oriented 2-dimensional Riemannian manifold with constant curvature −1. Any oriented closed surface M with this property has D as its universal covering space. Its fundamental group can be identified with a torsion-free cocompact subgroup Γ of SU(1,1), in such a way that
{\displaystyle M=\Gamma \backslash G/K.}
In this case Γ is a finitely presented group. The generators and relations are encoded in a geodesically convex fundamental polygon in D (or H), corresponding geometrically to closed geodesics on M.
Examples.
the Bolza surface of genus 2;
the Klein quartic of genus 3;
the Macbeath surface of genus 7;
the First Hurwitz triplet of genus 14.
=== Uniformization ===
Given an oriented closed surface M with Gaussian curvature K, the metric on M can be changed conformally by scaling it by a factor e2u. The new Gaussian curvature K′ is then given by
{\displaystyle K^{\prime }(x)=e^{-2u}(K(x)-\Delta u),}
where Δ is the Laplacian for the original metric. Thus to show that a given surface is conformally equivalent to a metric with constant curvature K′ it suffices to solve the following variant of Liouville's equation:
{\displaystyle \Delta u=K^{\prime }e^{2u}+K(x).}
When M has Euler characteristic 0, and so is diffeomorphic to a torus, K′ = 0, and this amounts to solving
{\displaystyle \Delta u=K(x).}
By standard elliptic theory, this is possible because the integral of K over M is zero, by the Gauss–Bonnet theorem.
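On a flat torus this linear equation can be solved numerically. The sketch below is illustrative (the grid size, iteration count and the choice K = cos x are assumptions, with Δ the flat Laplacian ∂²ₓ + ∂²ᵧ): a periodic five-point Jacobi iteration, for which the zero-mean condition on K is exactly the solvability requirement just mentioned. For K = cos x the exact solution is u = −cos x up to a constant.

```python
import math

# Solve  Laplacian(u) = K  on the torus [0, 2*pi)^2 by periodic Jacobi iteration.
n = 16
h = 2 * math.pi / n
K = [[math.cos(i * h) for j in range(n)] for i in range(n)]  # zero-mean data
u = [[0.0] * n for _ in range(n)]

for _ in range(600):
    new = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            nb = (u[(i + 1) % n][j] + u[(i - 1) % n][j]
                  + u[i][(j + 1) % n] + u[i][(j - 1) % n])
            new[i][j] = (nb - h * h * K[i][j]) / 4.0
    mean = sum(map(sum, new)) / n ** 2
    u = [[x - mean for x in row] for row in new]   # fix the free additive constant

err = max(abs(u[i][j] + math.cos(i * h)) for i in range(n) for j in range(n))
print(err)  # small: only the O(h^2) discretisation error remains
```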
When M has negative Euler characteristic, K′ = −1, so the equation to be solved is:
{\displaystyle \Delta u=-e^{2u}+K(x).}
Using the continuity of the exponential map on Sobolev space due to Neil Trudinger, this non-linear equation can always be solved.
Finally in the case of the 2-sphere, K′ = 1 and the equation becomes:
{\displaystyle \Delta u=e^{2u}+K(x).}
So far this non-linear equation has not been analysed directly, although classical results such as the Riemann–Roch theorem imply that it always has a solution. The method of Ricci flow, developed by Richard S. Hamilton, gives another proof of existence based on non-linear partial differential equations. In fact the Ricci flow on conformal metrics on S2 is defined on functions u(x, t) by
{\displaystyle u_{t}=4\pi -K'(x,t)=4\pi -e^{-2u}(K(x)-\Delta u).}
Chow showed that K′ becomes positive after finite time; previous results of Hamilton could then be used to show that K′ converges to +1. Prior to these results on Ricci flow, Osgood, Phillips & Sarnak (1988) had given an alternative and technically simpler approach to uniformization based on the flow on Riemannian metrics g defined by log det Δg.
A proof using elliptic operators, discovered in 1988, can be found in Ding (2001). Let G be the Green's function on S2 satisfying ΔG = 1 − 4πδP, where δP is the point measure at a fixed point P of S2. The equation Δv = 2K − 2 has a smooth solution v, because the right hand side has integral 0 by the Gauss–Bonnet theorem. Thus φ = 2G + v satisfies Δφ = 2K away from P. It follows that g1 = eφg is a complete metric of constant curvature 0 on the complement of P, which is therefore isometric to the plane. Composing with stereographic projection, it follows that there is a smooth function u such that e2ug has Gaussian curvature +1 on the complement of P. The function u automatically extends to a smooth function on the whole of S2.
== Riemannian connection and parallel transport ==
The classical approach of Gauss to the differential geometry of surfaces was the standard elementary approach which predated the emergence of the concepts of Riemannian manifold initiated by Bernhard Riemann in the mid-nineteenth century and of connection developed by Tullio Levi-Civita, Élie Cartan and Hermann Weyl in the early twentieth century. The notion of connection, covariant derivative and parallel transport gave a more conceptual and uniform way of understanding curvature, which not only allowed generalisations to higher dimensional manifolds but also provided an important tool for defining new geometric invariants, called characteristic classes. The approach using covariant derivatives and connections is nowadays the one adopted in more advanced textbooks.
=== Covariant derivative ===
Connections on a surface can be defined from various equivalent but equally important points of view. The Riemannian connection or Levi-Civita connection is perhaps most easily understood in terms of lifting vector fields, considered as first order differential operators acting on functions on the manifold, to differential operators on the tangent bundle or frame bundle. In the case of an embedded surface, the lift to an operator on vector fields, called the covariant derivative, is very simply described in terms of orthogonal projection. Indeed, a vector field on a surface embedded in R3 can be regarded as a function from the surface into R3. Another vector field acts as a differential operator component-wise. The resulting vector field will not be tangent to the surface, but this can be corrected by taking its orthogonal projection onto the tangent space at each point of the surface. As Ricci and Levi-Civita realised at the turn of the twentieth century, this process depends only on the metric and can be locally expressed in terms of the Christoffel symbols.
=== Parallel transport ===
Parallel transport of tangent vectors along a curve in the surface was the next major advance in the subject, due to Levi-Civita. It is related to the earlier notion of covariant derivative, because it is the monodromy of the ordinary differential equation on the curve defined by the covariant derivative with respect to the velocity vector of the curve. Parallel transport along geodesics, the "straight lines" of the surface, can also easily be described directly. A vector in the tangent plane is transported along a geodesic as the unique vector field with constant length and making a constant angle with the velocity vector of the geodesic. For a general curve, this process has to be modified using the geodesic curvature, which measures how far the curve departs from being a geodesic.
A vector field v(t) along a unit speed curve c(t), with geodesic curvature kg(t), is said to be parallel along the curve if
it has constant length
the angle θ(t) that it makes with the velocity vector ċ(t) satisfies
{\displaystyle {\dot {\theta }}(t)=-k_{g}(t)}
This recaptures the rule for parallel transport along a geodesic or piecewise geodesic curve, because in that case kg = 0, so that the angle θ(t) should remain constant on any geodesic segment. The existence of parallel transport follows because θ(t) can be computed as the integral of the geodesic curvature. Since it therefore depends continuously on the L2 norm of kg, it follows that parallel transport for an arbitrary curve can be obtained as the limit of the parallel transport on approximating piecewise geodesic curves.
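The rule θ̇ = −kg can be compared with direct parallel transport for a latitude circle on the unit sphere, where the geodesic curvature is cot α at colatitude α and the loop has length 2π sin α. The sketch below is illustrative (the colatitude, step count and projection scheme are assumptions): transporting a vector by repeatedly projecting it onto the tangent plane converges to parallel transport, and the measured holonomy matches the predicted net rotation 2π(1 − cos α).

```python
import math

alpha = 1.0                              # colatitude of the circle on the unit sphere
R = math.sin(alpha)                      # Euclidean radius of the circle
kg = math.cos(alpha) / math.sin(alpha)   # geodesic curvature: cot(alpha)
L = 2 * math.pi * R                      # length of the loop

# Prediction from theta_dot = -k_g: net rotation relative to the start, mod 2*pi.
predicted = (-kg * L) % (2 * math.pi)    # equals 2*pi*(1 - cos(alpha))

def point(t):
    # Unit-speed parametrisation of the latitude circle.
    return (R * math.cos(t / R), R * math.sin(t / R), math.cos(alpha))

def project_tangent(v, p):
    # Remove the component of v normal to the sphere at the unit vector p.
    s = sum(a * b for a, b in zip(v, p))
    return tuple(a - s * b for a, b in zip(v, p))

N = 20000
v = (0.0, 1.0, 0.0)                      # initial tangent vector at point(0)
for k in range(1, N + 1):
    p = point(L * k / N)
    v = project_tangent(v, p)
    norm = math.sqrt(sum(a * a for a in v))
    v = tuple(a / norm for a in v)       # keep unit length

cos_h = sum(a * b for a, b in zip(v, (0.0, 1.0, 0.0)))
measured = math.acos(max(-1.0, min(1.0, cos_h)))
print(predicted, measured)  # agree to the integration accuracy
```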
The connection can thus be described in terms of lifting paths in the manifold to paths in the tangent or orthonormal frame bundle, thus formalising the classical theory of the "moving frame", favoured by French authors. Lifts of loops about a point give rise to the holonomy group at that point. The Gaussian curvature at a point can be recovered from parallel transport around increasingly small loops at the point. Equivalently curvature can be calculated directly at an infinitesimal level in terms of Lie brackets of lifted vector fields.
=== Connection 1-form ===
The approach of Cartan and Weyl, using connection 1-forms on the frame bundle of M, gives a third way to understand the Riemannian connection. They noticed that parallel transport dictates that a path in the surface be lifted to a path in the frame bundle so that its tangent vectors lie in a special subspace of codimension one in the three-dimensional tangent space of the frame bundle. The projection onto this subspace is defined by a differential 1-form on the orthonormal frame bundle, the connection form. This enabled the curvature properties of the surface to be encoded in differential forms on the frame bundle and formulas involving their exterior derivatives.
This approach is particularly simple for an embedded surface. Thanks to a result of Kobayashi (1956), the connection 1-form on a surface embedded in Euclidean space E3 is just the pullback under the Gauss map of the connection 1-form on S2. Using the identification of S2 with the homogeneous space SO(3)/SO(2), the connection 1-form is just a component of the Maurer–Cartan 1-form on SO(3).
== Global differential geometry of surfaces ==
Although the characterisation of curvature involves only the local geometry of a surface, there are important global aspects such as the Gauss–Bonnet theorem, the uniformization theorem, the von Mangoldt-Hadamard theorem, and the embeddability theorem. There are other important aspects of the global geometry of surfaces. These include:
Injectivity radius, defined as the largest r such that two points at a distance less than r are joined by a unique geodesic. Wilhelm Klingenberg proved in 1959 that the injectivity radius of a closed surface is bounded below by the minimum of δ = π/√(sup K) and the length of its smallest closed geodesic. This improved a theorem of Bonnet who showed in 1855 that the diameter of a closed surface of positive Gaussian curvature is always bounded above by δ; in other words a geodesic realising the metric distance between two points cannot have length greater than δ.
Rigidity. In 1927 Cohn-Vossen proved that two ovaloids – closed surfaces with positive Gaussian curvature – that are isometric are necessarily congruent by an isometry of E3. Moreover, a closed embedded surface with positive Gaussian curvature and constant mean curvature is necessarily a sphere; likewise a closed embedded surface of constant Gaussian curvature must be a sphere (Liebmann 1899). Heinz Hopf showed in 1950 that a closed embedded surface with constant mean curvature and genus 0, i.e. homeomorphic to a sphere, is necessarily a sphere; five years later Alexandrov removed the topological assumption. In the 1980s, Wente constructed immersed tori of constant mean curvature in Euclidean 3-space.
Carathéodory conjecture: This conjecture states that a closed convex three times differentiable surface admits at least two umbilic points. The first work on this conjecture was in 1924 by Hans Hamburger, who noted that it follows from the following stronger claim: the half-integer valued index of the principal curvature foliation of an isolated umbilic is at most one. It was proven for smooth surfaces by Brendan Guilfoyle and Wilhelm Klingenberg in three parts concluding in 2024, the centenary of the conjecture.
Zero Gaussian curvature: a complete surface in E3 with zero Gaussian curvature must be a cylinder or a plane.
Hilbert's theorem (1901): no complete surface with constant negative curvature can be immersed isometrically in E3.
The Willmore conjecture. This conjecture states that the integral of the square of the mean curvature of a torus immersed in E3 should be bounded below by 2π2. It is known that the integral is Moebius invariant. It was solved in 2012 by Fernando Codá Marques and André Neves.
Isoperimetric inequalities. In 1939 Schmidt proved that the classical isoperimetric inequality for curves in the Euclidean plane is also valid on the sphere or in the hyperbolic plane: namely he showed that among all closed curves bounding a domain of fixed area, the perimeter is minimized when the curve is a metric circle. In one dimension higher, it is known that among all closed surfaces in E3 arising as the boundary of a bounded domain of unit volume, the surface area is minimized by a Euclidean ball.
Systolic inequalities for curves on surfaces. Given a closed surface, its systole is defined to be the smallest length of any non-contractible closed curve on the surface. In 1949 Loewner proved a torus inequality for metrics on the torus, namely that the area of the torus over the square of its systole is bounded below by √3/2, with equality in the flat (constant curvature) case. A similar result is given by Pu's inequality for the real projective plane from 1952, with a lower bound of 2/π also attained in the constant curvature case. For the Klein bottle, Blatter and Bavard later obtained a lower bound of √8/π. For a closed surface of genus g, Hebda and Burago showed that the ratio is bounded below by 1/2. Three years later Mikhail Gromov found a lower bound given by a constant times √g, although this is not optimal. Asymptotically sharp upper and lower bounds given by a constant times g/(log g)2 are due to Gromov and Buser–Sarnak, and can be found in Katz (2007). There is also a version for metrics on the sphere, taking for the systole the length of the smallest closed geodesic. Gromov conjectured a lower bound of 1/(2√3) in 1980: the best result so far is the lower bound of 1/8 obtained by Regina Rotman in 2006.
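Loewner's torus inequality is easy to probe numerically for flat tori R²/(Z + Zτ) (the sketch below is illustrative, not from the source): the systole is the shortest nonzero lattice vector, and the hexagonal lattice attains the extremal ratio √3/2 while the square torus gives ratio 1.

```python
import cmath, math

def torus_ratio(tau):
    # Flat torus R^2 / (Z + Z*tau), Im(tau) > 0: area over systole squared.
    area = tau.imag
    # The systole of a flat torus is the shortest nonzero lattice vector;
    # a small search window suffices for tau in the standard fundamental domain.
    systole = min(abs(m + n * tau)
                  for m in range(-5, 6) for n in range(-5, 6) if (m, n) != (0, 0))
    return area / systole ** 2

square = torus_ratio(1j)                              # ratio 1
hexagonal = torus_ratio(cmath.exp(1j * math.pi / 3))  # sqrt(3)/2: Loewner's extremal case
print(square, hexagonal, math.sqrt(3) / 2)
```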
== Reading guide ==
One of the most comprehensive introductory surveys of the subject, charting the historical development from before Gauss to modern times, is by Berger (2004). Accounts of the classical theory are given in Eisenhart (2004), Kreyszig (1991) and Struik (1988); the more modern copiously illustrated undergraduate textbooks by Gray, Abbena & Salamon (2006), Pressley (2001) and Wilson (2008) might be found more accessible. An accessible account of the classical theory can be found in Hilbert & Cohn-Vossen (1952). More sophisticated graduate-level treatments using the Riemannian connection on a surface can be found in Singer & Thorpe (1967), do Carmo (2016) and O'Neill (2006).
== See also ==
Tangent vector
Zoll surface
== Notes ==
== References ==
Andrews, Ben; Bryan, Paul (2010), "Curvature bounds by isoperimetric comparison for normalized Ricci flow on the two-sphere", Calc. Var. Partial Differential Equations, 39 (3–4): 419–428, arXiv:0908.3606, doi:10.1007/s00526-010-0315-5, S2CID 1095459
Arnold, V.I. (1989), Mathematical methods of classical mechanics, Graduate Texts in Mathematics, vol. 60 (2nd ed.), New York: Springer-Verlag, ISBN 978-0-387-90314-9; translated from the Russian by K. Vogtmann and A. Weinstein.
Berger, Marcel (2004), A Panoramic View of Riemannian Geometry, Springer-Verlag, ISBN 978-3-540-65317-2
Berger, Melvyn S. (1977), Nonlinearity and Functional Analysis, Academic Press, ISBN 978-0-12-090350-4
Bonola, Roberto; Carslaw, H. S.; Enriques, F. (1955), Non-Euclidean Geometry: A Critical and Historical Study of Its Development, Dover, ISBN 978-0-486-60027-7 {{citation}}: ISBN / Date incompatibility (help)
Boothby, William M. (1986), An introduction to differentiable manifolds and Riemannian geometry, Pure and Applied Mathematics, vol. 120 (2nd ed.), Academic Press, ISBN 0121160521
Cartan, Élie (1983), Geometry of Riemannian Spaces, Math Sci Press, ISBN 978-0-915692-34-7; translated from 2nd edition of Leçons sur la géométrie des espaces de Riemann (1951) by James Glazebrook.
Cartan, Élie (2001), Riemannian Geometry in an Orthogonal Frame (from lectures delivered by É Cartan at the Sorbonne in 1926-27), World Scientific, doi:10.1142/4808, ISBN 978-981-02-4746-1; translated from Russian by V. V. Goldberg with a foreword by S. S. Chern.
Cartan, Henri (1971), Calcul Differentiel (in French), Hermann, ISBN 9780395120330
Chen, Xiuxiong; Lu, Peng; Tian, Gang (2006), "A note on uniformization of Riemann surfaces by Ricci flow", Proc. AMS, 134 (11): 3391–3393, doi:10.1090/S0002-9939-06-08360-2
Chern, S. S. (1967), Curves and Surfaces in Euclidean Spaces, MAA Studies in Mathematics, Mathematical Association of America
Chow, B. (1991), "The Ricci flow on a 2-sphere", J. Diff. Geom., 33 (2): 325–334, doi:10.4310/jdg/1214446319
Courant, Richard (1950), Dirichlet's Principle, Conformal Mapping and Minimal Surfaces, John Wiley & Sons, ISBN 978-0-486-44552-6 {{citation}}: ISBN / Date incompatibility (help)
Darboux, Gaston, Leçons sur la théorie générale des surfaces, Gauthier-Villars Volume I (1887), Volume II (1915) [1889], Volume III (1894), Volume IV (1896).
Ding, W. (2001), "A proof of the uniformization theorem on S2", J. Partial Differential Equations, 14: 247–250
do Carmo, Manfredo P. (2016), Differential Geometry of Curves and Surfaces (revised & updated 2nd ed.), Mineola, NY: Dover Publications, Inc., ISBN 978-0-486-80699-0
do Carmo, Manfredo (1992), Riemannian geometry, Mathematics: Theory & Applications, Birkhäuser, ISBN 0-8176-3490-8
Eisenhart, Luther Pfahler (2004), A Treatise on the Differential Geometry of Curves and Surfaces (reprint of the 1909 ed.), Dover Publications, Inc., ISBN 0-486-43820-1
Euler, Leonhard (1760), "Recherches sur la courbure des surfaces", Mémoires de l'Académie des Sciences de Berlin, 16 (published 1767): 119–143.
Euler, Leonhard (1771), "De solidis quorum superficiem in planum explicare licet", Novi Commentarii Academiae Scientiarum Petropolitanae, 16 (published 1772): 3–34.
Gauss, Carl Friedrich (1902), General Investigations of Curved Surfaces of 1825 and 1827, Princeton University Library translated by A.M. Hiltebeitel and J.C. Morehead; "Disquisitiones generales circa superficies curvas", Commentationes Societatis Regiae Scientiarum Gottingesis Recentiores Vol. VI (1827), pp. 99–146.
Gauss, Carl Friedrich (1965), General Investigations of Curved Surfaces, translated by A.M. Hiltebeitel; J.C. Morehead, Hewlett, NY: Raven Press, OCLC 228665499.
Gauss, Carl Friedrich (2005), General Investigations of Curved Surfaces, edited with a new introduction and notes by Peter Pesic, Mineola, NY: Dover Publications, ISBN 978-0-486-44645-5.
Gray, Alfred; Abbena, Elsa; Salamon, Simon (2006), Modern Differential Geometry of Curves And Surfaces With Mathematica®, Studies in Advanced Mathematics (3rd ed.), Boca Raton, FL: Chapman & Hall/CRC, ISBN 978-1-58488-448-4
Han, Qing; Hong, Jia-Xing (2006), Isometric Embedding of Riemannian Manifolds in Euclidean Spaces, American Mathematical Society, ISBN 978-0-8218-4071-9
Helgason, Sigurdur (1978), Differential Geometry, Lie Groups, and Symmetric Spaces, Academic Press, New York, ISBN 978-0-12-338460-7
Hilbert, David; Cohn-Vossen, Stephan (1952), Geometry and the Imagination (2nd ed.), New York: Chelsea, ISBN 978-0-8284-1087-8 {{citation}}: ISBN / Date incompatibility (help)
Hitchin, Nigel (2013), Geometry of Surfaces (PDF)
Hopf, Heinz (1989), Lectures on Differential Geometry in the Large, Lecture Notes in Mathematics, vol. 1000, Springer-Verlag, ISBN 978-3-540-51497-8
Imayoshi, Y.; Taniguchi, M. (1992), An Introduction to Teichmüller spaces, Springer-Verlag, ISBN 978-0-387-70088-5
Ivey, Thomas A.; Landsberg, J.M. (2003), Cartan for Beginners: Differential Geometry via Moving Frames and Exterior Systems, Graduate Studies in Mathematics, vol. 61, American Mathematical Society, ISBN 978-0-8218-3375-9
Katz, Mikhail G. (2007), Systolic geometry and topology, Mathematical Surveys and Monographs, vol. 137, American Mathematical Society, ISBN 978-0-8218-4177-8
Milnor, J. (1963). Morse theory. Annals of Mathematics Studies. Vol. 51. Princeton, N.J.: Princeton University Press. MR 0163331. Zbl 0108.10401.
Kobayashi, Shoshichi (1956), "Induced connections and imbedded Riemannian space", Nagoya Math. J., 10: 15–25, doi:10.1017/S0027763000000052
Kobayashi, Shoshichi (1957), "Theory of connections", Annali di Matematica Pura ed Applicata, Series 4, 43: 119–194, doi:10.1007/BF02411907, S2CID 120972987
Kobayashi, Shoshichi; Nomizu, Katsumi (1963), Foundations of Differential Geometry, Vol. I, Wiley Interscience, ISBN 978-0-470-49648-0 {{citation}}: ISBN / Date incompatibility (help)
Kobayashi, Shoshichi; Nomizu, Katsumi (1969), Foundations of Differential Geometry, Vol. II, Wiley Interscience, ISBN 978-0-470-49648-0
Kreyszig, Erwin (1991), Differential Geometry, Dover, ISBN 978-0-486-66721-8
Kühnel, Wolfgang (2006), Differential Geometry: Curves - Surfaces - Manifolds, American Mathematical Society, ISBN 978-0-8218-3988-1
Levi-Civita, Tullio (1917), "Nozione di parallelismo in una varietà qualunque", Rend. Circ. Mat. Palermo, 42: 173–205, doi:10.1007/BF03014898, S2CID 122088291
O'Neill, Barrett (2006), Elementary Differential Geometry (revised 2nd ed.), Amsterdam: Elsevier/Academic Press, ISBN 0-12-088735-5
Osgood, B.; Phillips, R.; Sarnak, P. (1988), "Extremals of determinants of Laplacians", J. Funct. Anal., 80: 148–211, doi:10.1016/0022-1236(88)90070-5
Osserman, Robert (2002), A Survey of Minimal Surfaces, Dover, ISBN 978-0-486-49514-9
Ian R. Porteous (2001) Geometric Differentiation: for the intelligence of curves and surfaces, Cambridge University Press ISBN 0-521-00264-8.
Pressley, Andrew (2001), Elementary Differential Geometry, Springer Undergraduate Mathematics Series, Springer-Verlag, ISBN 978-1-85233-152-8
Sacks, J.; Uhlenbeck, Karen (1981), "The existence of minimal immersions of 2-spheres", Ann. of Math., 112 (1): 1–24, doi:10.2307/1971131, JSTOR 1971131
Singer, Isadore M.; Thorpe, John A. (1967), Lecture Notes on Elementary Topology and Geometry, Springer-Verlag, ISBN 978-0-387-90202-9
Spivak, Michael (1965), Calculus on manifolds. A modern approach to classical theorems of advanced calculus, W. A. Benjamin
Stillwell, John (1996), Sources of Hyperbolic Geometry, American Mathematical Society, ISBN 978-0-8218-0558-9
Struik, Dirk (1987), A Concise History of Mathematics (4th ed.), Dover Publications, ISBN 0486602559
Struik, Dirk J. (1988) [1961], Lectures on Classical Differential Geometry (reprint of 2nd ed.), New York: Dover Publications, Inc., ISBN 0-486-65609-8
Taylor, Michael E. (1996a), Partial Differential Equations II: Qualitative Studies of Linear Equations, Springer-Verlag, ISBN 978-1-4419-7051-0
Taylor, Michael E. (1996b), Partial Differential Equations III: Nonlinear equations, Springer-Verlag, ISBN 978-1-4419-7048-0
Thorpe, John A. (1994), Elementary topics in differential geometry, Undergraduate Texts in Mathematics, Springer-Verlag, ISBN 0387903577
Toponogov, Victor A. (2005), Differential Geometry of Curves and Surfaces: A Concise Guide, Springer-Verlag, ISBN 978-0-8176-4384-3
Valiron, Georges (1986), The Classical Differential Geometry of Curves and Surfaces, Math Sci Press, ISBN 978-0-915692-39-2 Full text of book
Warner, Frank W. (1983), Foundations of differentiable manifolds and Lie groups, Graduate Texts in Mathematics, vol. 94, Springer, ISBN 0-387-90894-3
Wells, R. O. (2017), Differential and complex geometry: origins, abstractions and embeddings, Springer, ISBN 9783319581842
Wilson, Pelham (2008), Curved Space: From Classical Geometries to Elementary Differential Geometry, Cambridge University Press, ISBN 978-0-521-71390-0
== External links ==
Media related to Differential geometry of surfaces at Wikimedia Commons
Statistical graphics, also known as statistical graphical techniques, are graphics used in the field of statistics for data visualization.
== Overview ==
Whereas statistics and data analysis procedures generally yield their output in numeric or tabular form, graphical techniques allow such results to be displayed in some sort of pictorial form. They include plots such as scatter plots, histograms, probability plots, spaghetti plots, residual plots, box plots, block plots and biplots.
Exploratory data analysis (EDA) relies heavily on such techniques. They can also provide insight into a data set to help with testing assumptions, model selection and regression model validation, estimator selection, relationship identification, factor effect determination, and outlier detection. In addition, the choice of appropriate statistical graphics can provide a convincing means of communicating the underlying message that is present in the data to others.
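Even without a graphics library, the idea behind such exploratory plots can be illustrated (a minimal sketch, not from the source; the binning and bar width are arbitrary choices): a plain-text histogram already reveals the shape of a distribution.

```python
import random

def ascii_histogram(data, bins=8, width=40):
    # A minimal text histogram: one line per bin, bar length proportional to count.
    lo, hi = min(data), max(data)
    step = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in data:
        counts[min(int((x - lo) / step), bins - 1)] += 1
    top = max(counts)
    lines = []
    for k, c in enumerate(counts):
        bar = "#" * round(width * c / top)
        lines.append(f"{lo + k * step:8.2f} | {bar} {c}")
    return counts, "\n".join(lines)

random.seed(0)
sample = [random.gauss(0, 1) for _ in range(1000)]
counts, picture = ascii_histogram(sample)
print(picture)  # roughly bell-shaped bars for a normal sample
```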
Graphical statistical methods have four objectives:
Exploring the content of a data set
Finding structure in the data
Checking assumptions in statistical models
Communicating the results of an analysis
If one is not using statistical graphics, then one is forfeiting insight into one or more aspects of the underlying structure of the data.
== History ==
Statistical graphics have been central to the development of science and date to the earliest attempts to analyse data. Many familiar forms, including bivariate plots, statistical maps, bar charts, and coordinate paper were used in the 18th century. Statistical graphics developed through attention to four problems:
Spatial organization in the 17th and 18th century
Discrete comparison in the 18th and early 19th century
Continuous distribution in the 19th century and
Multivariate distribution and correlation in the late 19th and 20th century.
Since the 1970s statistical graphics have been re-emerging as an important analytic tool with the revitalisation of computer graphics and related technologies.
== Examples ==
Famous graphics were designed by:
William Playfair, who produced what could be called the first line, bar, pie, and area charts. For example, in 1786 he published the well-known diagram that depicts the evolution of England's imports and exports,
James Watt and his employee John Southern, who around 1790 invented the steam indicator, a device for plotting pressure variations within a steam engine cylinder through its stroke,
Florence Nightingale, who used statistical graphics to persuade the British Government to improve army hygiene,
John Snow who plotted deaths from cholera in London in 1854 to detect the source of the disease, and
Charles Joseph Minard who designed a large portfolio of maps of which the one depicting Napoleon's campaign in Russia is the best known.
See the plots page for many more examples of statistical graphics.
== See also ==
Data Presentation Architecture
List of graphical methods
Visual inspection
Chart
List of charting software
== References ==
Citations
Attribution
This article incorporates public domain material from the National Institute of Standards and Technology
== Further reading ==
Cleveland, W. S. (1993). Visualizing Data. Summit, NJ, USA: Hobart Press. ISBN 0-9634884-0-6.
Cleveland, W. S. (1994). The Elements of Graphing Data. Summit, NJ, USA: Hobart Press. ISBN 0-9634884-1-4.
Lewi, Paul J. (2006). Speaking of Graphics.
Tufte, Edward R. (2001) [1983]. The Visual Display of Quantitative Information (2nd ed.). Cheshire, CT, USA: Graphics Press. ISBN 0-9613921-4-2.
Tufte, Edward R. (1992) [1990]. Envisioning Information. Cheshire, CT, USA: Graphics Press. ISBN 0-9613921-1-8.
== External links ==
Trend Compass
Alphabetic gallery of graphical techniques
DataScope, a website devoted to data visualization and statistical graphics
A Boolean-valued function (sometimes called a predicate or a proposition) is a function of the type f : X → B, where X is an arbitrary set and where B is a Boolean domain, i.e. a generic two-element set (for example B = {0, 1}), whose elements are interpreted as logical values, for example, 0 = false and 1 = true, i.e., a single bit of information.
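In programming terms, a Boolean-valued function is simply a predicate. A minimal sketch in Python; the domain (integers) and the particular predicate `is_even` are illustrative choices, not from the article:

```python
def is_even(n: int) -> bool:
    """A Boolean-valued function f : Z -> {False, True}."""
    return n % 2 == 0

def indicator(n: int) -> int:
    """The same predicate viewed as an indicator function into {0, 1}."""
    return int(is_even(n))

print([is_even(n) for n in range(4)])    # truth values for 0..3
print([indicator(n) for n in range(4)])  # the corresponding bits
```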
In the formal sciences, mathematics, mathematical logic, statistics, and their applied disciplines, a Boolean-valued function may also be referred to as a characteristic function, indicator function, predicate, or proposition. In all of these uses, it is understood that the various terms refer to a mathematical object and not the corresponding semiotic sign or syntactic expression.
In formal semantic theories of truth, a truth predicate is a predicate on the sentences of a formal language, interpreted for logic, that formalizes the intuitive concept that is normally expressed by saying that a sentence is true. A truth predicate may have additional domains beyond the formal language domain, if that is what is required to determine a final truth value.
== See also ==
== References ==
Brown, Frank Markham (2003), Boolean Reasoning: The Logic of Boolean Equations, 1st edition, Kluwer Academic Publishers, Norwell, MA. 2nd edition, Dover Publications, Mineola, NY, 2003.
Kohavi, Zvi (1978), Switching and Finite Automata Theory, 1st edition, McGraw–Hill, 1970. 2nd edition, McGraw–Hill, 1978. 3rd edition, McGraw–Hill, 2010.
Korfhage, Robert R. (1974), Discrete Computational Structures, Academic Press, New York, NY.
Mathematical Society of Japan, Encyclopedic Dictionary of Mathematics, 2nd edition, 2 vols., Kiyosi Itô (ed.), MIT Press, Cambridge, MA, 1993. Cited as EDM.
Minsky, Marvin L., and Papert, Seymour A. (1988), Perceptrons, An Introduction to Computational Geometry, MIT Press, Cambridge, MA, 1969. Revised, 1972. Expanded edition, 1988.
In mathematics, the term linear function refers to two distinct but related notions:
In calculus and related areas, a linear function is a function whose graph is a straight line, that is, a polynomial function of degree zero or one. For distinguishing such a linear function from the other concept, the term affine function is often used.
In linear algebra, mathematical analysis, and functional analysis, a linear function is a linear map.
== As a polynomial function ==
In calculus, analytic geometry and related areas, a linear function is a polynomial of degree one or less, including the zero polynomial (the latter not being considered to have degree zero).
When the function is of only one variable, it is of the form
{\displaystyle f(x)=ax+b,}
where a and b are constants, often real numbers. The graph of such a function of one variable is a nonvertical line. a is frequently referred to as the slope of the line, and b as the intercept.
If a > 0 then the gradient is positive and the graph slopes upwards.
If a < 0 then the gradient is negative and the graph slopes downwards.
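The roles of the slope a and the intercept b can be sketched in a few lines of Python (the helper name `linear` is illustrative):

```python
def linear(a: float, b: float):
    """Return the linear (affine) function x -> a*x + b."""
    return lambda x: a * x + b

f = linear(2.0, 1.0)        # slope a = 2, intercept b = 1
assert f(0) == 1.0          # the intercept is the value at x = 0
assert f(3) - f(2) == 2.0   # constant slope: a unit step in x changes f by a
```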
For a function
{\displaystyle f(x_{1},\ldots ,x_{k})}
of any finite number of variables, the general formula is
{\displaystyle f(x_{1},\ldots ,x_{k})=b+a_{1}x_{1}+\cdots +a_{k}x_{k},}
and the graph is a hyperplane of dimension k.
A constant function is also considered linear in this context, as it is a polynomial of degree zero or is the zero polynomial. Its graph, when there is only one variable, is a horizontal line.
In this context, a function that is also a linear map (the other meaning) may be referred to as a homogeneous linear function or a linear form. In the context of linear algebra, the polynomial functions of degree 0 or 1 are the scalar-valued affine maps.
== As a linear map ==
In linear algebra, a linear function is a map f between two vector spaces such that
{\displaystyle f(\mathbf {x} +\mathbf {y} )=f(\mathbf {x} )+f(\mathbf {y} )}
{\displaystyle f(a\mathbf {x} )=af(\mathbf {x} ).}
Here a denotes a constant belonging to some field K of scalars (for example, the real numbers) and x and y are elements of a vector space, which might be K itself.
In other terms the linear function preserves vector addition and scalar multiplication.
Some authors use "linear function" only for linear maps that take values in the scalar field; these are more commonly called linear forms.
The "linear functions" of calculus qualify as "linear maps" when (and only when) f(0, ..., 0) = 0, or, equivalently, when the constant b equals zero in the one-degree polynomial above. Geometrically, the graph of the function must pass through the origin.
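The distinction can be checked numerically; a small sketch in Python (the function names are illustrative, and additivity is tested only at sample points):

```python
def is_additive_at(f, x, y):
    """Check f(x + y) == f(x) + f(y) at one pair of points."""
    return f(x + y) == f(x) + f(y)

g = lambda x: 2 * x       # b = 0: a linear map
h = lambda x: 2 * x + 1   # b = 1: affine, but not a linear map

assert is_additive_at(g, 3, 4)
assert not is_additive_at(h, 3, 4)
assert h(0) != 0          # its graph does not pass through the origin
```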
== See also ==
Homogeneous function
Nonlinear system
Piecewise linear function
Linear approximation
Linear interpolation
Discontinuous linear map
Linear least squares
== Notes ==
== References ==
Izrail Moiseevich Gelfand (1961), Lectures on Linear Algebra, Interscience Publishers, Inc., New York. Reprinted by Dover, 1989. ISBN 0-486-66082-6
Shores, Thomas S. (2007). Applied Linear Algebra and Matrix Analysis. Undergraduate Texts in Mathematics. Springer. ISBN 978-0-387-33195-9.
Stewart, James (2012). Calculus: Early Transcendentals (7E ed.). Brooks/Cole. ISBN 978-0-538-49790-9.
Leonid N. Vaserstein (2006), "Linear Programming", in Leslie Hogben, ed., Handbook of Linear Algebra, Discrete Mathematics and Its Applications, Chapman and Hall/CRC, chap. 50. ISBN 1-584-88510-6
In mathematics, a multivalued function, multiple-valued function, many-valued function, or multifunction, is a function that has two or more values in its range for at least one point in its domain. It is a set-valued function with additional properties depending on context; some authors do not distinguish between set-valued functions and multifunctions, but English Wikipedia currently does, having a separate article for each.
A multivalued function of sets f : X → Y is a subset
{\displaystyle \Gamma _{f}\ \subseteq \ X\times Y.}
Write f(x) for the set of those y ∈ Y with (x,y) ∈ Γf. If f is an ordinary function, it is a multivalued function by taking its graph
{\displaystyle \Gamma _{f}\ =\ \{(x,f(x))\ :\ x\in X\}.}
They are called single-valued functions to distinguish them.
== Motivation ==
The term multivalued function originated in complex analysis, from analytic continuation. It often occurs that one knows the value of a complex analytic function {\displaystyle f(z)} in some neighbourhood of a point {\displaystyle z=a}. This is the case for functions defined by the implicit function theorem or by a Taylor series around {\displaystyle z=a}. In such a situation, one may extend the domain of the single-valued function {\displaystyle f(z)} along curves in the complex plane starting at {\displaystyle a}. In doing so, one finds that the value of the extended function at a point {\displaystyle z=b} depends on the chosen curve from {\displaystyle a} to {\displaystyle b}; since none of the new values is more natural than the others, all of them are incorporated into a multivalued function.
For example, let {\displaystyle f(z)={\sqrt {z}}} be the usual square root function on positive real numbers. One may extend its domain to a neighbourhood of {\displaystyle z=1} in the complex plane, and then further along curves starting at {\displaystyle z=1}, so that the values along a given curve vary continuously from {\displaystyle {\sqrt {1}}=1}. Extending to negative real numbers, one gets two opposite values for the square root—for example ±i for −1—depending on whether the domain has been extended through the upper or the lower half of the complex plane. This phenomenon is very frequent, occurring for nth roots, logarithms, and inverse trigonometric functions.
To define a single-valued function from a complex multivalued function, one may distinguish one of the multiple values as the principal value, producing a single-valued function on the whole plane which is discontinuous along certain boundary curves. Alternatively, dealing with the multivalued function allows having something that is everywhere continuous, at the cost of possible value changes when one follows a closed path (monodromy). These problems are resolved in the theory of Riemann surfaces: to consider a multivalued function {\displaystyle f(z)} as an ordinary function without discarding any values, one multiplies the domain into a many-layered covering space, a manifold which is the Riemann surface associated to {\displaystyle f(z)}.
== Inverses of functions ==
If f : X → Y is an ordinary function, then its inverse is the multivalued function
{\displaystyle \Gamma _{f^{-1}}\ \subseteq \ Y\times X}
defined as Γf, viewed as a subset of X × Y. When f is a differentiable function between manifolds, the inverse function theorem gives conditions for this to be single-valued locally in X.
For example, the complex logarithm log(z) is the multivalued inverse of the exponential function ez : C → C×, with graph
{\displaystyle \Gamma _{\log(z)}\ =\ \{(z,w)\ :\ w=\log(z)\}\ \subseteq \ \mathbf {C} \times \mathbf {C} ^{\times }.}
It is not single-valued: given a single w with w = log(z), we have
{\displaystyle \log(z)\ =\ w\ +\ 2\pi i\mathbf {Z} .}
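This can be observed with Python's cmath module, which returns the principal value of the logarithm; adding any integer multiple of 2πi yields another valid logarithm of the same number (a sketch, not part of the article):

```python
import cmath
import math

z = 1 + 1j
w = cmath.log(z)  # cmath returns the principal value

# Every w + 2*pi*i*n is also a logarithm of z: exp maps them all back to z.
for n in (-2, -1, 0, 1, 2):
    wn = w + 2j * math.pi * n
    assert abs(cmath.exp(wn) - z) < 1e-12
```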
Given any holomorphic function on an open subset of the complex plane C, its analytic continuation is always a multivalued function.
== Concrete examples ==
Every real number greater than zero has two real square roots, so that square root may be considered a multivalued function. For example, we may write {\displaystyle {\sqrt {4}}=\pm 2=\{2,-2\}}; although zero has only one square root, {\displaystyle {\sqrt {0}}=\{0\}}. Note that {\displaystyle {\sqrt {x}}} usually denotes only the principal square root of {\displaystyle x}.
Each nonzero complex number has two square roots, three cube roots, and in general n nth roots. The only nth root of 0 is 0.
The complex logarithm function is multiple-valued. The values assumed by {\displaystyle \log(a+bi)} for real numbers {\displaystyle a} and {\displaystyle b} are {\displaystyle \log {\sqrt {a^{2}+b^{2}}}+i\arg(a+bi)+2\pi ni} for all integers {\displaystyle n}.
Inverse trigonometric functions are multiple-valued because trigonometric functions are periodic. We have
{\displaystyle \tan \left({\tfrac {\pi }{4}}\right)=\tan \left({\tfrac {5\pi }{4}}\right)=\tan \left({\tfrac {-3\pi }{4}}\right)=\tan \left({\tfrac {(2n+1)\pi }{4}}\right)=\cdots =1.}
As a consequence, arctan(1) is intuitively related to several values: π/4, 5π/4, −3π/4, and so on. We can treat arctan as a single-valued function by restricting the domain of tan x to −π/2 < x < π/2 – a domain over which tan x is monotonically increasing. Thus, the range of arctan(x) becomes −π/2 < y < π/2. These values from a restricted domain are called principal values.
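A quick numerical illustration of the principal value, using Python's math module (a sketch):

```python
import math

# Many different angles share the same tangent...
angles = [math.pi / 4, 5 * math.pi / 4, -3 * math.pi / 4]
assert all(abs(math.tan(t) - 1.0) < 1e-12 for t in angles)

# ...but atan returns only the principal value, in (-pi/2, pi/2).
assert abs(math.atan(1.0) - math.pi / 4) < 1e-12
```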
The antiderivative can be considered as a multivalued function. The antiderivative of a function is the set of functions whose derivative is that function. The constant of integration follows from the fact that the derivative of a constant function is 0.
Inverse hyperbolic functions over the complex domain are multiple-valued because hyperbolic functions are periodic along the imaginary axis. Over the reals, they are single-valued, except for arcosh and arsech.
These are all examples of multivalued functions that come about from non-injective functions. Since the original functions do not preserve all the information of their inputs, they are not reversible. Often, the restriction of a multivalued function is a partial inverse of the original function.
== Branch points ==
Multivalued functions of a complex variable have branch points. For example, for the nth root and logarithm functions, 0 is a branch point; for the arctangent function, the imaginary units i and −i are branch points. Using the branch points, these functions may be redefined to be single-valued functions, by restricting the range. A suitable interval may be found through use of a branch cut, a kind of curve that connects pairs of branch points, thus reducing the multilayered Riemann surface of the function to a single layer. As in the case with real functions, the restricted range may be called the principal branch of the function.
== Applications ==
In physics, multivalued functions play an increasingly important role. They form the mathematical basis for Dirac's magnetic monopoles, for the theory of defects in crystals and the resulting plasticity of materials, for vortices in superfluids and superconductors, and for phase transitions in these systems, for instance melting and quark confinement. They are the origin of gauge field structures in many branches of physics.
== See also ==
Relation (mathematics)
Function (mathematics)
Binary relation
Set-valued function
== Further reading ==
H. Kleinert, Multivalued Fields in Condensed Matter, Electrodynamics, and Gravitation, World Scientific (Singapore, 2008) (also available online)
H. Kleinert, Gauge Fields in Condensed Matter, Vol. I: Superflow and Vortex Lines, 1–742, Vol. II: Stresses and Defects, 743–1456, World Scientific, Singapore, 1989 (also available online: Vol. I and Vol. II)
== References ==
In mathematics and computer science, a higher-order function (HOF) is a function that does at least one of the following:
takes one or more functions as arguments (i.e. a procedural parameter, which is a parameter of a procedure that is itself a procedure),
returns a function as its result.
All other functions are first-order functions. In mathematics higher-order functions are also termed operators or functionals. The differential operator in calculus is a common example, since it maps a function to its derivative, also a function. Higher-order functions should not be confused with other uses of the word "functor" throughout mathematics, see Functor (disambiguation).
In the untyped lambda calculus, all functions are higher-order; in a typed lambda calculus, from which most functional programming languages are derived, higher-order functions that take one function as argument are values with types of the form
{\displaystyle (\tau _{1}\to \tau _{2})\to \tau _{3}}.
== General examples ==
map function, found in many functional programming languages, is one example of a higher-order function. It takes arguments as a function f and a collection of elements, and as the result, returns a new collection with f applied to each element from the collection.
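For instance, in Python (where map returns a lazy iterator, materialized here with list):

```python
def double(x):
    """The function f to be applied to each element."""
    return 2 * x

# map is a higher-order function: it takes double as an argument.
print(list(map(double, [1, 2, 3])))  # [2, 4, 6]
```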
Sorting functions, which take a comparison function as a parameter, allowing the programmer to separate the sorting algorithm from the comparisons of the items being sorted. The C standard function qsort is an example of this.
filter
fold
scan
apply
Function composition
Integration
Callback
Tree traversal
Montague grammar, a semantic theory of natural language, uses higher-order functions
== Support in programming languages ==
=== Direct support ===
The examples are not intended to compare and contrast programming languages, but to serve as examples of higher-order function syntax.
In the following examples, the higher-order function twice takes a function, and applies the function to some value twice. If twice has to be applied several times for the same f it preferably should return a function rather than a value. This is in line with the "don't repeat yourself" principle.
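A Python version of twice, mirroring the shape used in the per-language examples below:

```python
def twice(f):
    """Higher-order: takes a function f and returns a function applying f two times."""
    return lambda x: f(f(x))

plus_three = lambda x: x + 3
g = twice(plus_three)  # returning a function avoids repeating twice(f) per call
print(g(7))            # 13
```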
==== APL ====
Or in a tacit manner:
==== C++ ====
Using std::function in C++11:
Or, with generic lambdas provided by C++14:
==== C# ====
Using just delegates:
Or equivalently, with static methods:
==== Clojure ====
==== ColdFusion Markup Language (CFML) ====
==== Common Lisp ====
==== D ====
==== Dart ====
==== Elixir ====
In Elixir, you can mix module definitions and anonymous functions
Alternatively, we can also compose using pure anonymous functions.
==== Erlang ====
In this Erlang example, the higher-order function or_else/2 takes a list of functions (Fs) and an argument (X). It evaluates each function F with X as argument. If F returns false, then the next function in Fs is evaluated. If F returns {false, Y}, then the next function in Fs is evaluated with argument Y. If F returns any other result R, the higher-order function or_else/2 returns R. Note that X, Y, and R can be functions. The example returns false.
==== F# ====
==== Go ====
Notice a function literal can be defined either with an identifier (twice) or anonymously (assigned to variable plusThree).
==== Groovy ====
==== Haskell ====
==== J ====
Explicitly,
or tacitly,
==== Java (1.8+) ====
Using just functional interfaces:
Or equivalently, with static methods:
==== JavaScript ====
With arrow functions:
Or with classical syntax:
==== Julia ====
==== Kotlin ====
==== Lua ====
==== MATLAB ====
==== OCaml ====
==== PHP ====
or with all functions in variables:
Note that arrow functions implicitly capture any variables that come from the parent scope, whereas anonymous functions require the use keyword to do the same.
==== Perl ====
or with all functions in variables:
==== Python ====
Python decorator syntax is often used to replace a function with the result of passing that function through a higher-order function. E.g., the function g could be implemented equivalently:
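The original listings are not reproduced in this text; a self-contained sketch of the idea (using the same twice pattern as above) is:

```python
def twice(f):
    """A higher-order function suitable for use as a decorator."""
    return lambda x: f(f(x))

@twice            # equivalent to writing: g = twice(g) after the def
def g(x):
    return x + 3

assert g(7) == 13  # (7 + 3) + 3
```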
==== R ====
==== Raku ====
In Raku, all code objects are closures and therefore can reference inner "lexical" variables from an outer scope because the lexical variable is "closed" inside of the function. Raku also supports "pointy block" syntax for lambda expressions which can be assigned to a variable or invoked anonymously.
==== Ruby ====
==== Rust ====
==== Scala ====
==== Scheme ====
==== Swift ====
==== Tcl ====
Tcl uses apply command to apply an anonymous function (since 8.6).
==== XACML ====
The XACML standard defines higher-order functions in the standard to apply a function to multiple values of attribute bags.
The list of higher-order functions in XACML can be found in the standard.
==== XQuery ====
=== Alternatives ===
==== Function pointers ====
Function pointers in languages such as C, C++, Fortran, and Pascal allow programmers to pass around references to functions. The following C code computes an approximation of the integral of an arbitrary function:
The qsort function from the C standard library uses a function pointer to emulate the behavior of a higher-order function.
==== Macros ====
Macros can also be used to achieve some of the effects of higher-order functions. However, macros cannot easily avoid the problem of variable capture; they may also result in large amounts of duplicated code, which can be more difficult for a compiler to optimize. Macros are generally not strongly typed, although they may produce strongly typed code.
==== Dynamic code evaluation ====
In other imperative programming languages, it is possible to achieve some of the same algorithmic results as are obtained via higher-order functions by dynamically executing code (sometimes called Eval or Execute operations) in the scope of evaluation. There can be significant drawbacks to this approach:
The argument code to be executed is usually not statically typed; these languages generally rely on dynamic typing to determine the well-formedness and safety of the code to be executed.
The argument is usually provided as a string, the value of which may not be known until run-time. This string must either be compiled during program execution (using just-in-time compilation) or evaluated by interpretation, causing some added overhead at run-time, and usually generating less efficient code.
==== Objects ====
In object-oriented programming languages that do not support higher-order functions, objects can be an effective substitute. An object's methods act in essence like functions, and a method may accept objects as parameters and produce objects as return values. Objects often carry added run-time overhead compared to pure functions, however, and added boilerplate code for defining and instantiating an object and its method(s). Languages that permit stack-based (versus heap-based) objects or structs can provide more flexibility with this method.
An example of using a simple stack based record in Free Pascal with a function that returns a function:
The function a() takes a Txy record as input and returns the integer value of the sum of the record's x and y fields (3 + 7).
==== Defunctionalization ====
Defunctionalization can be used to implement higher-order functions in languages that lack first-class functions:
In this case, different types are used to trigger different functions via function overloading. The overloaded function in this example has the signature auto apply.
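The article's example relies on C++-style function overloading; a comparable sketch in Python replaces each first-class function with a data tag and a single first-order interpreter (all names here are illustrative):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Add:        # stands for the function x -> x + n
    n: int

@dataclass
class Compose:    # stands for the composition f . g
    f: "Fn"
    g: "Fn"

Fn = Union[Add, Compose]

def apply(fn: Fn, x: int) -> int:
    """First-order interpreter: dispatch on the tag instead of calling a closure."""
    if isinstance(fn, Add):
        return x + fn.n
    if isinstance(fn, Compose):
        return apply(fn.f, apply(fn.g, x))
    raise TypeError(fn)

# 'twice' is now built from data rather than from a returned closure.
twice = lambda fn: Compose(fn, fn)
assert apply(twice(Add(3)), 7) == 13
```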
== See also ==
First-class function
Combinatory logic
Function-level programming
Functional programming
Kappa calculus - a formalism for functions which excludes higher-order functions
Strategy pattern
Higher order messages
== References ==
In mathematical analysis, integral equations are equations in which an unknown function appears under an integral sign. In mathematical notation, integral equations may thus be expressed as being of the form:
{\displaystyle f(x_{1},x_{2},x_{3},\ldots ,x_{n};u(x_{1},x_{2},x_{3},\ldots ,x_{n});I^{1}(u),I^{2}(u),I^{3}(u),\ldots ,I^{m}(u))=0}
where {\displaystyle I^{i}(u)} is an integral operator acting on u. Hence, integral equations may be viewed as the analog to differential equations where instead of the equation involving derivatives, the equation contains integrals. A direct comparison can be seen with the mathematical form of the general integral equation above with the general form of a differential equation which may be expressed as follows:
{\displaystyle f(x_{1},x_{2},x_{3},\ldots ,x_{n};u(x_{1},x_{2},x_{3},\ldots ,x_{n});D^{1}(u),D^{2}(u),D^{3}(u),\ldots ,D^{m}(u))=0}
where {\displaystyle D^{i}(u)} may be viewed as a differential operator of order i. Due to this close connection between differential and integral equations, one can often convert between the two. For example, one method of solving a boundary value problem is by converting the differential equation with its boundary conditions into an integral equation and solving the integral equation. In addition, because one can convert between the two, differential equations in physics such as Maxwell's equations often have an analog integral and differential form. See also, for example, Green's function and Fredholm theory.
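A standard worked instance of this conversion (not stated in this article) starts from the initial value problem {\displaystyle u''(x)=f(x,u(x))} with {\displaystyle u(0)=u_{0}} and {\displaystyle u'(0)=u_{1}}. Integrating twice from 0 to x and swapping the order of integration in the resulting double integral gives a Volterra integral equation of the second kind:

```latex
u(x) \;=\; u_0 \;+\; u_1 x \;+\; \int_0^x (x - t)\, f\bigl(t, u(t)\bigr)\, dt .
```

The kernel {\displaystyle K(x,t)=x-t} absorbs one of the two integrations, which is why a second-order differential equation becomes a single integral equation.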
== Classification and overview ==
Various classification methods for integral equations exist. A few standard classifications include distinctions between linear and nonlinear; homogeneous and inhomogeneous; Fredholm and Volterra; first order, second order, and third order; and singular and regular integral equations. These distinctions usually rest on some fundamental property such as the consideration of the linearity of the equation or the homogeneity of the equation. These comments are made concrete through the following definitions and examples:
=== Linearity ===
Linear: An integral equation is linear if the unknown function u(x) and its integrals appear linearly in the equation. Hence, an example of a linear equation would be:
{\displaystyle u(x)=f(x)+\lambda \int _{\alpha (x)}^{\beta (x)}K(x,t)\cdot u(t)\,dt}
As a note on naming convention: i) u(x) is called the unknown function, ii) f(x) is called a known function, iii) K(x,t) is a function of two variables and often called the Kernel function, and iv) λ is an unknown factor or parameter, which plays the same role as the eigenvalue in linear algebra.
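To make the roles of f, K, and λ concrete, here is a minimal Nyström-style sketch in Python: discretize [a, b], replace the integral by a trapezoidal sum, and solve the resulting linear system. The particular kernel and data below are illustrative, with the exact solution chosen in advance.

```python
import numpy as np

def solve_fredholm2(f, K, lam, a, b, n=200):
    """Solve u(x) = f(x) + lam * int_a^b K(x,t) u(t) dt on a trapezoidal grid."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))   # trapezoidal quadrature weights
    w[0] *= 0.5
    w[-1] *= 0.5
    # Discretization: (I - lam * K * W) u = f at the grid points.
    A = np.eye(n) - lam * K(x[:, None], x[None, :]) * w
    return x, np.linalg.solve(A, f(x))

# Illustrative data: K(x,t) = x*t, lam = 1, and f(x) = (2/3)x,
# chosen so that the exact solution is u(x) = x.
x, u = solve_fredholm2(lambda x: (2.0 / 3.0) * x,
                       lambda x, t: x * t, 1.0, 0.0, 1.0)
assert np.max(np.abs(u - x)) < 1e-3
```

The trapezoidal rule is second-order accurate here; finer grids or Gaussian quadrature reduce the error further.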
Nonlinear: An integral equation is nonlinear if the unknown function u(x) or any of its integrals appear nonlinearly in the equation. Hence, examples of nonlinear equations would be the equation above if we replaced u(t) with
{\displaystyle u^{2}(x),\,\,\cos(u(x)),\,{\text{or }}\,e^{u(x)}}, such as:
{\displaystyle u(x)=f(x)+\int _{\alpha (x)}^{\beta (x)}K(x,t)\cdot u^{2}(t)\,dt}
Certain kinds of nonlinear integral equations have specific names. A selection of such equations are:
Nonlinear Volterra integral equations of the second kind which have the general form:
{\displaystyle u(x)=f(x)+\lambda \int _{a}^{x}K(x,t)\,F(x,t,u(t))\,dt,}
where F is a known function.
Nonlinear Fredholm integral equations of the second kind which have the general form:
{\displaystyle f(x)=F\left(x,\int _{a}^{b}K(x,y,f(x),f(y))\,dy\right)}.
A special type of nonlinear Fredholm integral equations of the second kind is given by the form:
{\displaystyle f(x)=g(x)+\int _{a}^{b}K(x,y,f(x),f(y))\,dy}, which has the two special subclasses:
Urysohn equation:
{\displaystyle f(x)=g(x)+\int _{a}^{b}k(x,y,f(y))\,dy}.
Hammerstein equation:
{\displaystyle f(x)=g(x)+\int _{a}^{b}k(x,y)\,G(y,f(y))\,dy}.
More information on the Hammerstein equation and different versions of the Hammerstein equation can be found in the Hammerstein section below.
=== Location of the unknown function ===
First kind: An integral equation is called an integral equation of the first kind if the unknown function appears only under the integral sign. An example would be:
{\displaystyle f(x)=\int _{a}^{b}K(x,t)\,u(t)\,dt}.
Second kind: An integral equation is called an integral equation of the second kind if the unknown function also appears outside the integral.
Third kind: An integral equation is called an integral equation of the third kind if it is a linear Integral equation of the following form:
{\displaystyle g(t)u(t)+\lambda \int _{a}^{b}K(t,x)u(x)\,dx=f(t)}
where g(t) vanishes at least once in the interval [a,b] or where g(t) vanishes at a finite number of points in (a,b).
=== Limits of integration ===
Fredholm: An integral equation is called a Fredholm integral equation if both of the limits of integration in all integrals are fixed and constant. An example would be that the integral is taken over a fixed subset of
{\displaystyle \mathbb {R} ^{n}}. Hence, the following two examples are Fredholm equations:
Fredholm equation of the first type:
{\displaystyle f(x)=\int _{a}^{b}K(x,t)\,u(t)\,dt}.
Fredholm equation of the second type:
{\displaystyle u(x)=f(x)+\lambda \int _{a}^{b}K(x,t)\,u(t)\,dt.}
Note that we can express integral equations such as those above also using integral operator notation. For example, we can define the Fredholm integral operator as:
{\displaystyle ({\mathcal {F}}y)(t):=\int _{t_{0}}^{T}K(t,s)\,y(s)\,ds.}
Hence, the above Fredholm equation of the second kind may be written compactly as:
{\displaystyle y(t)=g(t)+\lambda ({\mathcal {F}}y)(t).}
Volterra: An integral equation is called a Volterra integral equation if at least one of the limits of integration is a variable. Hence, the integral is taken over a domain varying with the variable of integration. Examples of Volterra equations would be:
Volterra integral equation of the first kind:
{\displaystyle f(x)=\int _{a}^{x}K(x,t)\,u(t)\,dt}
Volterra integral equation of the second kind:
{\displaystyle u(x)=f(x)+\lambda \int _{a}^{x}K(x,t)\,u(t)\,dt.}
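Because the upper limit of integration is the variable itself, a Volterra equation of the second kind can be solved by forward stepping; a trapezoidal sketch in Python follows (kernel and data are illustrative, with the exact solution chosen in advance):

```python
import numpy as np

def solve_volterra2(f, K, lam, a, b, n=400):
    """Solve u(x) = f(x) + lam * int_a^x K(x,t) u(t) dt by forward trapezoidal stepping."""
    x = np.linspace(a, b, n)
    h = (b - a) / (n - 1)
    u = np.empty(n)
    u[0] = f(x[0])  # at x = a the integral vanishes
    for i in range(1, n):
        # Trapezoidal rule on [a, x_i]; the unknown u[i] enters only through
        # the last node, so it can be isolated and solved for directly.
        s = 0.5 * K(x[i], x[0]) * u[0] + sum(K(x[i], x[j]) * u[j] for j in range(1, i))
        u[i] = (f(x[i]) + lam * h * s) / (1.0 - 0.5 * h * lam * K(x[i], x[i]))
    return x, u

# Illustrative data: K = 1, lam = 1, f = 1 gives u(x) = 1 + int_0^x u(t) dt,
# whose exact solution is u(x) = e^x.
x, u = solve_volterra2(lambda t: 1.0, lambda xi, t: 1.0, 1.0, 0.0, 1.0)
assert np.max(np.abs(u - np.exp(x))) < 1e-3
```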
As with Fredholm equations, we can again adopt operator notation. Thus, we can define the linear Volterra integral operator
{\displaystyle {\mathcal {V}}:C(I)\to C(I)}, as follows:
{\displaystyle ({\mathcal {V}}\varphi )(t):=\int _{t_{0}}^{t}K(t,s)\,\varphi (s)\,ds}
where {\displaystyle t\in I=[t_{0},T]} and K(t,s) is called the kernel and must be continuous on the interval {\displaystyle D:=\{(t,s):0\leq s\leq t\leq T\leq \infty \}}. Hence, the Volterra integral equation of the first kind may be written as:
{\displaystyle ({\mathcal {V}}y)(t)=g(t)}
with {\displaystyle g(0)=0}. In addition, a linear Volterra integral equation of the second kind for an unknown function {\displaystyle y(t)} and a given continuous function {\displaystyle g(t)} on the interval {\displaystyle I}, where {\displaystyle t\in I}, is:
{\displaystyle y(t)=g(t)+({\mathcal {V}}y)(t).}
Volterra–Fredholm: In higher dimensions, integral equations such as Fredholm–Volterra integral equations (VFIE) exist. A VFIE has the form:
{\displaystyle u(t,x)=g(t,x)+({\mathcal {T}}u)(t,x)}
with {\displaystyle x\in \Omega } and {\displaystyle \Omega } being a closed bounded region in {\displaystyle \mathbb {R} ^{d}} with piecewise smooth boundary. The Fredholm–Volterra integral operator {\displaystyle {\mathcal {T}}:C(I\times \Omega )\to C(I\times \Omega )} is defined as:
{\displaystyle ({\mathcal {T}}u)(t,x):=\int _{0}^{t}\int _{\Omega }K(t,s,x,\xi )\,G(u(s,\xi ))\,d\xi \,ds.}
Note that while throughout this article the bounds of the integral are usually written as intervals, this need not be the case. In general, integral equations need not be defined over an interval {\displaystyle [a,b]=I}, but could also be defined over a curve or surface.
=== Homogeneity ===
Homogeneous: An integral equation is called homogeneous if the known function {\displaystyle f} is identically zero.
Inhomogeneous: An integral equation is called inhomogeneous if the known function {\displaystyle f} is nonzero.
=== Regularity ===
Regular: An integral equation is called regular if the integrals used are all proper integrals.
Singular or weakly singular: An integral equation is called singular or weakly singular if the integral is improper, either because at least one limit of integration is infinite or because the kernel becomes unbounded at some point of the interval or domain of integration.
Examples include:
{\displaystyle F(\lambda )=\int _{-\infty }^{\infty }e^{-i\lambda x}u(x)\,dx}
{\displaystyle L[u(x)]=\int _{0}^{\infty }e^{-\lambda x}u(x)\,dx}
These two integral equations are the Fourier transform and the Laplace transform of u(x), respectively, both being Fredholm equations of the first kind with kernels {\displaystyle K(x,t)=e^{-i\lambda x}} and {\displaystyle K(x,t)=e^{-\lambda x}}, respectively. Another example of a singular integral equation, in which the kernel becomes unbounded, is:
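As a numerical illustration of the Laplace-transform kernel above, the following sketch approximates L[u](λ) for the assumed test function u(x) = e^{-x} at λ = 2 by truncating the improper integral at a large X; the exact value is 1/(λ+1) = 1/3.

```python
import math

# Approximate L[u](lambda) = ∫_0^∞ e^{-lambda*x} u(x) dx for the assumed
# test function u(x) = e^{-x} at lambda = 2, truncating the improper
# integral at X = 50 and using the midpoint rule.
lam, X, n = 2.0, 50.0, 100000
h = X / n
total = h * sum(math.exp(-(lam + 1.0) * (i + 0.5) * h) for i in range(n))
print(total, 1.0 / (lam + 1.0))   # exact value is 1/3
```

The truncation point and node count are arbitrary choices; the truncation error is of order e^{-(λ+1)X} and thus negligible here.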
{\displaystyle x^{2}=\int _{0}^{x}{\frac {1}{\sqrt {x-t}}}\,u(t)\,dt.}
This equation is a special form of the more general weakly singular Volterra integral equation of the first kind, called Abel's integral equation:
{\displaystyle g(x)=\int _{a}^{x}{\frac {f(y)}{\sqrt {x-y}}}\,dy}
Strongly singular: An integral equation is called strongly singular if the integral is defined by a special regularisation, for example, by the Cauchy principal value.
=== Integro-differential equations ===
An integro-differential equation, as the name suggests, combines differential and integral operators in one equation. There are many versions, including the Volterra integro-differential equation and delay-type equations, as defined below. For example, using the Volterra operator as defined above, the Volterra integro-differential equation may be written as:
{\displaystyle y'(t)=f(t,y(t))+(V_{\alpha }y)(t)}
For delay problems, we can define the delay integral operator {\displaystyle ({\mathcal {W}}_{\theta ,\alpha }y)} as:
{\displaystyle ({\mathcal {W}}_{\theta ,\alpha }y)(t):=\int _{\theta (t)}^{t}(t-s)^{-\alpha }\cdot k_{2}(t,s,y(s),y'(s))\,ds}
so that the delay integro-differential equation may be expressed as:
{\displaystyle y'(t)=f(t,y(t),y(\theta (t)))+({\mathcal {W}}_{\theta ,\alpha }y)(t).}
== Volterra integral equations ==
=== Uniqueness and existence theorems in 1D ===
The solution to a linear Volterra integral equation of the first kind, given by the equation:
{\displaystyle ({\mathcal {V}}y)(t)=g(t)}
can be described by a uniqueness and existence theorem. Recall that the Volterra integral operator {\displaystyle {\mathcal {V}}:C(I)\to C(I)} can be defined as follows:
{\displaystyle ({\mathcal {V}}\varphi )(t):=\int _{t_{0}}^{t}K(t,s)\,\varphi (s)\,ds}
where {\displaystyle t\in I=[t_{0},T]} and K(t,s) is called the kernel and must be continuous on the domain {\displaystyle D:=\{(t,s):0\leq s\leq t\leq T\leq \infty \}}.
The solution to a linear Volterra integral equation of the second kind, given by the equation:
{\displaystyle y(t)=g(t)+({\mathcal {V}}y)(t)}
can likewise be described by a uniqueness and existence theorem.
=== Volterra integral equations in R2 ===
A Volterra integral equation of the second kind can be expressed as follows:
{\displaystyle u(x,y)=g(x,y)+\int _{0}^{x}\int _{0}^{y}K(x,\xi ,y,\eta )\,u(\xi ,\eta )\,d\eta \,d\xi }
where {\displaystyle (x,y)\in \Omega :=[0,X]\times [0,Y]}, {\displaystyle g\in C(\Omega )}, {\displaystyle K\in C(D_{2})} and {\displaystyle D_{2}:=\{(x,\xi ,y,\eta ):0\leq \xi \leq x\leq X,0\leq \eta \leq y\leq Y\}}. This integral equation has a unique solution {\displaystyle u\in C(\Omega )} given by:
{\displaystyle u(x,y)=g(x,y)+\int _{0}^{x}\int _{0}^{y}R(x,\xi ,y,\eta )\,g(\xi ,\eta )\,d\eta \,d\xi }
where {\displaystyle R} is the resolvent kernel of K.
=== Uniqueness and existence theorems of Fredholm–Volterra equations ===
As defined above, a VFIE has the form:
{\displaystyle u(t,x)=g(t,x)+({\mathcal {T}}u)(t,x)}
with {\displaystyle x\in \Omega } and {\displaystyle \Omega } being a closed bounded region in {\displaystyle \mathbb {R} ^{d}} with piecewise smooth boundary. The Fredholm–Volterra integral operator {\displaystyle {\mathcal {T}}:C(I\times \Omega )\to C(I\times \Omega )} is defined as:
{\displaystyle ({\mathcal {T}}u)(t,x):=\int _{0}^{t}\int _{\Omega }K(t,s,x,\xi )\,G(u(s,\xi ))\,d\xi \,ds.}
In the case where the kernel K may be written as {\displaystyle K(t,s,x,\xi )=k(t-s)H(x,\xi )}, K is called a positive memory kernel.
=== Special Volterra equations ===
A special type of Volterra equation which is used in various applications is defined as follows:
{\displaystyle y(t)=g(t)+(V_{\alpha }y)(t)}
where {\displaystyle t\in I=[t_{0},T]}, the function g(t) is continuous on the interval {\displaystyle I}, and the Volterra integral operator {\displaystyle (V_{\alpha }y)} is given by:
{\displaystyle (V_{\alpha }y)(t):=\int _{t_{0}}^{t}(t-s)^{-\alpha }\cdot k(t,s,y(s))\,ds}
with {\displaystyle 0\leq \alpha <1}.
== Converting IVP to integral equations ==
In the following section, we give an example of how to convert an initial value problem (IVP) into an integral equation. There are multiple motivations for doing so, among them being that integral equations can often be more readily solvable and are more suitable for proving existence and uniqueness theorems.
The following example was provided by Wazwaz on pages 1 and 2 of his book. We examine the IVP given by the equation:
{\displaystyle u'(t)=2tu(t),\,\,\,\,\,\,\,x\geq 0}
and the initial condition:
{\displaystyle u(0)=1}
If we integrate both sides of the equation, we get:
{\displaystyle \int _{0}^{x}u'(t)\,dt=\int _{0}^{x}2tu(t)\,dt}
and by the fundamental theorem of calculus, we obtain:
{\displaystyle u(x)-u(0)=\int _{0}^{x}2tu(t)\,dt}
Rearranging the equation above, we get the integral equation:
{\displaystyle u(x)=1+\int _{0}^{x}2tu(t)\,dt}
which is a Volterra integral equation of the form:
{\displaystyle u(x)=f(x)+\int _{\alpha (x)}^{\beta (x)}K(x,t)\cdot u(t)\,dt}
where K(x,t) is the kernel, here equal to 2t, with f(x) = 1, α(x) = 0, and β(x) = x.
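To check that the converted equation really carries the same solution as the IVP, one can solve u(x) = 1 + ∫₀ˣ 2t u(t) dt numerically and compare with the exact solution u(x) = e^{x²} (which solves u′ = 2xu, u(0) = 1). This is a minimal sketch using the trapezoidal rule; the step size and interval are arbitrary choices.

```python
import math

# Solve u(x) = 1 + ∫_0^x 2t u(t) dt on [0, 1] by marching forward with
# the trapezoidal rule (step h = 0.01 is an arbitrary choice).
n, h = 100, 0.01
u = [1.0]                      # u(0) = 1
for i in range(1, n + 1):
    x = i * h
    # interior trapezoid nodes; the t = 0 endpoint term 2*0*u(0) vanishes
    s = sum(2 * (j * h) * u[j] for j in range(1, i))
    # trapezoid puts weight h/2 on the unknown endpoint value u(x):
    # u_i = 1 + h*s + h*x*u_i  =>  solve the linear equation for u_i
    u.append((1.0 + h * s) / (1.0 - h * x))

print(u[-1], math.exp(1.0))    # exact solution is u(x) = e^{x^2}, so u(1) = e
```

The computed value agrees with e to a few decimal places; halving h would roughly quarter the error, as expected for the trapezoidal rule.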
== Numerical solution ==
Integral equations often have no analytical solution and must be solved numerically. An example is evaluating the electric-field integral equation (EFIE) or magnetic-field integral equation (MFIE) over an arbitrarily shaped object in an electromagnetic scattering problem.
One numerical method is to discretize the variables and replace the integral by a quadrature rule:
{\displaystyle \sum _{j=1}^{n}w_{j}K(s_{i},t_{j})u(t_{j})=f(s_{i}),\qquad i=0,1,\dots ,n.}
Then we have a linear system of equations; solving it yields the values {\displaystyle u(t_{0}),u(t_{1}),\dots ,u(t_{n}).}
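The quadrature approach just described is often called the Nyström method. The sketch below applies it to a Fredholm equation of the second kind with the manufactured kernel K(s,t) = s·t and f(s) = 2s/3, chosen (as an assumption for illustration) so that the exact solution is u(s) = s.

```python
import numpy as np

# Nystrom method for u(s) = f(s) + ∫_0^1 s*t*u(t) dt with f(s) = 2s/3,
# a manufactured problem whose exact solution is u(s) = s.
n = 20
nodes, w = np.polynomial.legendre.leggauss(n)   # Gauss-Legendre on [-1, 1]
t = 0.5 * (nodes + 1.0)                          # map nodes to [0, 1]
w = 0.5 * w                                      # rescale weights

K = np.outer(t, t)            # K(s_i, t_j) = s_i * t_j at the nodes
f = 2.0 * t / 3.0
A = np.eye(n) - K * w         # (I - K diag(w)) u = f; w broadcasts over columns
u = np.linalg.solve(A, f)
print(float(np.max(np.abs(u - t))))   # error against the exact solution u(s) = s
```

Because the kernel is a low-degree polynomial, Gauss–Legendre quadrature integrates it exactly and the error is at the level of machine precision.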
== Integral equations as a generalization of eigenvalue equations ==
Certain homogeneous linear integral equations can be viewed as the continuum limit of eigenvalue equations. Using index notation, an eigenvalue equation can be written as
{\displaystyle \sum _{j}M_{i,j}v_{j}=\lambda v_{i}}
where M = [Mi,j] is a matrix, v is one of its eigenvectors, and λ is the associated eigenvalue.
Taking the continuum limit, i.e., replacing the discrete indices i and j with continuous variables x and y, yields
{\displaystyle \int K(x,y)\,\varphi (y)\,dy=\lambda \,\varphi (x),}
where the sum over j has been replaced by an integral over y and the matrix M and the vector v have been replaced by the kernel K(x, y) and the eigenfunction φ(y). (The limits on the integral are fixed, analogously to the limits on the sum over j.) This gives a linear homogeneous Fredholm equation of the second type.
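This continuum analogy can be checked numerically: discretizing the kernel with quadrature weights turns the integral eigenvalue problem back into a matrix eigenvalue problem. The kernel K(x, y) = sin x sin y on [0, π] is an illustrative choice (not from the text); its only nonzero eigenvalue is π/2, with eigenfunction sin.

```python
import numpy as np

# Discretize ∫_0^π K(x,y) φ(y) dy = λ φ(x) for the separable kernel
# K(x,y) = sin(x) sin(y); its sole nonzero eigenvalue is
# λ = ∫_0^π sin²(y) dy = π/2, with eigenfunction φ = sin.
n = 200
y = np.linspace(0.0, np.pi, n)
w = np.full(n, np.pi / (n - 1))
w[0] *= 0.5
w[-1] *= 0.5                            # trapezoidal quadrature weights
M = np.outer(np.sin(y), np.sin(y)) * w  # M_ij = K(y_i, y_j) * w_j
lam = float(np.max(np.linalg.eigvals(M).real))
print(lam, np.pi / 2)
```

The largest eigenvalue of the weighted kernel matrix approximates π/2 ≈ 1.5708; refining the grid sharpens the agreement.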
In general, K(x, y) can be a distribution, rather than a function in the strict sense. If the distribution K has support only at the point x = y, then the integral equation reduces to a differential eigenfunction equation.
In general, Volterra and Fredholm integral equations can arise from a single differential equation, depending on which sort of conditions are applied at the boundary of the domain of its solution.
== Wiener–Hopf integral equations ==
{\displaystyle y(t)=\lambda x(t)+\int _{0}^{\infty }k(t-s)\,x(s)\,ds,\qquad 0\leq t<\infty .}
Originally, such equations were studied in connection with problems in radiative transfer, and more recently, they have been related to the solution of boundary integral equations for planar problems in which the boundary is only piecewise smooth.
== Hammerstein equations ==
A Hammerstein equation is a nonlinear first-kind Volterra integral equation of the form:
{\displaystyle g(t)=\int _{0}^{t}K(t,s)\,G(s,y(s))\,ds.}
Under certain regularity conditions, the equation is equivalent to the implicit Volterra integral equation of the second kind:
{\displaystyle G(t,y(t))=g_{1}(t)-\int _{0}^{t}K_{1}(t,s)\,G(s,y(s))\,ds}
where:
{\displaystyle g_{1}(t):={\frac {g'(t)}{K(t,t)}}\,\,\,\,\,\,\,{\text{and}}\,\,\,\,\,\,\,K_{1}(t,s):=-{\frac {1}{K(t,t)}}{\frac {\partial K(t,s)}{\partial t}}.}
The equation may, however, also be expressed in operator form, which motivates the definition of the nonlinear Volterra–Hammerstein operator:
{\displaystyle ({\mathcal {H}}y)(t):=\int _{0}^{t}K(t,s)\,G(s,y(s))\,ds}
Here {\displaystyle G:I\times \mathbb {R} \to \mathbb {R} } is a smooth function, while the kernel K may be continuous, i.e. bounded, or weakly singular. The corresponding second-kind Volterra integral equation, called the Volterra–Hammerstein integral equation of the second kind, or simply the Hammerstein equation, can be expressed as:
{\displaystyle y(t)=g(t)+({\mathcal {H}}y)(t)}
In certain applications, the nonlinearity of the function G may be treated as only semi-linear, of the form:
{\displaystyle G(s,y)=y+H(s,y)}
In this case, we obtain the following semi-linear Volterra integral equation:
{\displaystyle y(t)=g(t)+({\mathcal {H}}y)(t)=g(t)+\int _{0}^{t}K(t,s)[y(s)+H(s,y(s))]\,ds}
In this form, we can state an existence and uniqueness theorem for the semi-linear Hammerstein integral equation.
We can also write the Hammerstein equation using a different operator, called the Niemytzki operator, or substitution operator, {\displaystyle {\mathcal {N}}}, defined as follows:
{\displaystyle ({\mathcal {N}}\varphi )(t):=G(t,\varphi (t))}
More about this can be found on page 75 of this book.
== Applications ==
Integral equations are important in many applications. Problems in which integral equations are encountered include radiative transfer, and the oscillation of a string, membrane, or axle. Oscillation problems may also be solved as differential equations.
Actuarial science (ruin theory)
Computational electromagnetics
Boundary element method
Inverse problems
Marchenko equation (inverse scattering transform)
Options pricing under jump-diffusion
Radiative transfer
Renewal theory
Viscoelasticity
Fluid mechanics
== See also ==
Differential equation
Integro-differential equation
Ruin theory
Volterra integral equation
== Bibliography ==
Agarwal, Ravi P., and Donal O'Regan. Integral and Integrodifferential Equations: Theory, Method and Applications. Gordon and Breach Science Publishers, 2000.
Brunner, Hermann. Collocation Methods for Volterra Integral and Related Functional Differential Equations. Cambridge University Press, 2004.
Burton, T. A. Volterra Integral and Differential Equations. Elsevier, 2005.
Chapter 7 It Mod 02-14-05 – Ira A. Fulton College of Engineering. https://www.et.byu.edu/~vps/ET502WWW/NOTES/CH7m.pdf.
Corduneanu, C. Integral Equations and Applications. Cambridge University Press, 2008.
Hackbusch, Wolfgang. Integral Equations Theory and Numerical Treatment. Birkhäuser, 1995.
Hochstadt, Harry. Integral Equations. Wiley-Interscience/John Wiley & Sons, 1989.
"Integral Equation." From Wolfram MathWorld, https://mathworld.wolfram.com/IntegralEquation.html.
"Integral Equation." Integral Equation – Encyclopedia of Mathematics, https://encyclopediaofmath.org/wiki/Integral_equation.
Jerri, Abdul J. Introduction to Integral Equations with Applications. Sampling Publishing, 2007.
Pipkin, A. C. A Course on Integral Equations. Springer-Verlag, 1991.
Polyanin, A. D., and Alexander V. Manzhirov. Handbook of Integral Equations. Chapman & Hall/CRC, 2008.
Wazwaz, Abdul-Majid. A First Course in Integral Equations. World Scientific, 2015.
== References ==
== Further reading ==
Kendall E. Atkinson. The Numerical Solution of Integral Equations of the Second Kind. Cambridge Monographs on Applied and Computational Mathematics, 1997.
George Arfken and Hans Weber. Mathematical Methods for Physicists. Harcourt/Academic Press, 2000.
Harry Bateman (1910) History and Present State of the Theory of Integral Equations, Report of the British Association.
Andrei D. Polyanin and Alexander V. Manzhirov Handbook of Integral Equations. CRC Press, Boca Raton, 1998. ISBN 0-8493-2876-4.
E. T. Whittaker and G. N. Watson. A Course of Modern Analysis Cambridge Mathematical Library.
M. Krasnov, A. Kiselev, G. Makarenko, Problems and Exercises in Integral Equations, Mir Publishers, Moscow, 1971
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Chapter 19. Integral Equations and Inverse Theory". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Archived from the original on 2011-08-11. Retrieved 2011-08-17.
== External links ==
Integral Equations: Exact Solutions at EqWorld: The World of Mathematical Equations.
Integral Equations: Index at EqWorld: The World of Mathematical Equations.
"Integral equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Integral Equations (MIT OpenCourseWare) | Wikipedia/Integral_equation |
In mathematics, the trigonometric functions (also called circular functions, angle functions or goniometric functions) are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in all sciences that are related to geometry, such as navigation, solid mechanics, celestial mechanics, geodesy, and many others. They are among the simplest periodic functions, and as such are also widely used for studying periodic phenomena through Fourier analysis.
The trigonometric functions most widely used in modern mathematics are the sine, the cosine, and the tangent functions. Their reciprocals are respectively the cosecant, the secant, and the cotangent functions, which are less used. Each of these six trigonometric functions has a corresponding inverse function, and an analog among the hyperbolic functions.
The oldest definitions of trigonometric functions, related to right-angle triangles, define them only for acute angles. To extend the sine and cosine functions to functions whose domain is the whole real line, geometrical definitions using the standard unit circle (i.e., a circle with radius 1 unit) are often used; then the domain of the other functions is the real line with some isolated points removed. Modern definitions express trigonometric functions as infinite series or as solutions of differential equations. This allows extending the domain of sine and cosine functions to the whole complex plane, and the domain of the other trigonometric functions to the complex plane with some isolated points removed.
== Notation ==
Conventionally, an abbreviation of each trigonometric function's name is used as its symbol in formulas. Today, the most common versions of these abbreviations are "sin" for sine, "cos" for cosine, "tan" or "tg" for tangent, "sec" for secant, "csc" or "cosec" for cosecant, and "cot" or "ctg" for cotangent. Historically, these abbreviations were first used in prose sentences to indicate particular line segments or their lengths related to an arc of an arbitrary circle, and later to indicate ratios of lengths, but as the function concept developed in the 17th–18th century, they began to be considered as functions of real-number-valued angle measures, and written with functional notation, for example sin(x). Parentheses are still often omitted to reduce clutter, but are sometimes necessary; for example the expression
{\displaystyle \sin x+y} would typically be interpreted to mean {\displaystyle (\sin x)+y,} so parentheses are required to express {\displaystyle \sin(x+y).}
A positive integer appearing as a superscript after the symbol of the function denotes exponentiation, not function composition. For example
{\displaystyle \sin ^{2}x} and {\displaystyle \sin ^{2}(x)} denote {\displaystyle (\sin x)^{2},} not {\displaystyle \sin(\sin x).}
This differs from the (historically later) general functional notation in which
{\displaystyle f^{2}(x)=(f\circ f)(x)=f(f(x)).}
In contrast, the superscript {\displaystyle -1} is commonly used to denote the inverse function, not the reciprocal. For example {\displaystyle \sin ^{-1}x} and {\displaystyle \sin ^{-1}(x)} denote the inverse trigonometric function alternatively written {\displaystyle \arcsin x\,.}
The equation {\displaystyle \theta =\sin ^{-1}x} implies {\displaystyle \sin \theta =x,} not {\displaystyle \theta \cdot \sin x=1.} In this case, the superscript could be considered as denoting a composed or iterated function, but negative superscripts other than {\displaystyle {-1}} are not in common use.
== Right-angled triangle definitions ==
If the acute angle θ is given, then any right triangles that have an angle of θ are similar to each other. This means that the ratio of any two side lengths depends only on θ. Thus these six ratios define six functions of θ, which are the trigonometric functions. In the following definitions, the hypotenuse is the length of the side opposite the right angle, opposite represents the side opposite the given angle θ, and adjacent represents the side between the angle θ and the right angle.
Various mnemonics can be used to remember these definitions.
In a right-angled triangle, the sum of the two acute angles is a right angle, that is, 90° or π/2 radians. Therefore
{\displaystyle \sin(\theta )} and {\displaystyle \cos(90^{\circ }-\theta )} represent the same ratio, and thus are equal. This identity and analogous relationships between the other trigonometric functions are summarized in the following table.
== Radians versus degrees ==
In geometric applications, the argument of a trigonometric function is generally the measure of an angle. For this purpose, any angular unit is convenient. One common unit is degrees, in which a right angle is 90° and a complete turn is 360° (particularly in elementary mathematics).
However, in calculus and mathematical analysis, the trigonometric functions are generally regarded more abstractly as functions of real or complex numbers, rather than angles. In fact, the functions sin and cos can be defined for all complex numbers in terms of the exponential function, via power series, or as solutions to differential equations given particular initial values (see below), without reference to any geometric notions. The other four trigonometric functions (tan, cot, sec, csc) can be defined as quotients and reciprocals of sin and cos, except where zero occurs in the denominator. It can be proved, for real arguments, that these definitions coincide with elementary geometric definitions if the argument is regarded as an angle in radians. Moreover, these definitions result in simple expressions for the derivatives and indefinite integrals for the trigonometric functions. Thus, in settings beyond elementary geometry, radians are regarded as the mathematically natural unit for describing angle measures.
When radians (rad) are employed, the angle is given as the length of the arc of the unit circle subtended by it: the angle that subtends an arc of length 1 on the unit circle is 1 rad (≈ 57.3°), and a complete turn (360°) is an angle of 2π (≈ 6.28) rad. For real number x, the notation sin x, cos x, etc. refers to the value of the trigonometric functions evaluated at an angle of x rad. If units of degrees are intended, the degree sign must be explicitly shown (sin x°, cos x°, etc.). Using this standard notation, the argument x for the trigonometric functions satisfies the relationship x = (180x/π)°, so that, for example, sin π = sin 180° when we take x = π. In this way, the degree symbol can be regarded as a mathematical constant such that 1° = π/180 ≈ 0.0175.
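The degree/radian conventions above translate directly to code; Python's `math.radians` implements the constant 1° = π/180:

```python
import math

# math.radians implements deg * pi / 180, matching the "sin x°" convention:
deg = 30.0
rad = math.radians(deg)
print(rad, deg * math.pi / 180.0)   # identical conversions
print(math.sin(rad))                # sin 30° = 0.5
```

Note that `math.sin` always takes radians, so the conversion step is what realizes the "explicit degree sign" convention described above.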
== Unit-circle definitions ==
The six trigonometric functions can be defined as coordinate values of points on the Euclidean plane that are related to the unit circle, which is the circle of radius one centered at the origin O of this coordinate system. While right-angled triangle definitions allow for the definition of the trigonometric functions for angles between 0 and
{\textstyle {\frac {\pi }{2}}} radians (90°), the unit circle definitions allow the domain of trigonometric functions to be extended to all positive and negative real numbers.
Let {\displaystyle {\mathcal {L}}} be the ray obtained by rotating by an angle θ the positive half of the x-axis (counterclockwise rotation for {\displaystyle \theta >0,} and clockwise rotation for {\displaystyle \theta <0}). This ray intersects the unit circle at the point {\displaystyle \mathrm {A} =(x_{\mathrm {A} },y_{\mathrm {A} }).}
The ray {\displaystyle {\mathcal {L}},} extended to a line if necessary, intersects the line of equation {\displaystyle x=1} at point {\displaystyle \mathrm {B} =(1,y_{\mathrm {B} }),} and the line of equation {\displaystyle y=1} at point {\displaystyle \mathrm {C} =(x_{\mathrm {C} },1).}
The tangent line to the unit circle at the point A is perpendicular to {\displaystyle {\mathcal {L}},} and intersects the y- and x-axes at points {\displaystyle \mathrm {D} =(0,y_{\mathrm {D} })} and {\displaystyle \mathrm {E} =(x_{\mathrm {E} },0).}
The coordinates of these points give the values of all trigonometric functions for any arbitrary real value of θ in the following manner.
The trigonometric functions cos and sin are defined, respectively, as the x- and y-coordinate values of point A. That is,
{\displaystyle \cos \theta =x_{\mathrm {A} }\quad } and {\displaystyle \quad \sin \theta =y_{\mathrm {A} }.}
In the range {\displaystyle 0\leq \theta \leq \pi /2}, this definition coincides with the right-angled triangle definition, by taking the right-angled triangle to have the unit radius OA as hypotenuse. And since the equation {\displaystyle x^{2}+y^{2}=1} holds for all points {\displaystyle \mathrm {P} =(x,y)} on the unit circle, this definition of cosine and sine also satisfies the Pythagorean identity:
{\displaystyle \cos ^{2}\theta +\sin ^{2}\theta =1.}
The other trigonometric functions can be found along the unit circle as
{\displaystyle \tan \theta =y_{\mathrm {B} }\quad } and {\displaystyle \quad \cot \theta =x_{\mathrm {C} },}
{\displaystyle \csc \theta \ =y_{\mathrm {D} }\quad } and {\displaystyle \quad \sec \theta =x_{\mathrm {E} }.}
By applying the Pythagorean identity and geometric proof methods, these definitions can readily be shown to coincide with the definitions of tangent, cotangent, secant and cosecant in terms of sine and cosine, that is
{\displaystyle \tan \theta ={\frac {\sin \theta }{\cos \theta }},\quad \cot \theta ={\frac {\cos \theta }{\sin \theta }},\quad \sec \theta ={\frac {1}{\cos \theta }},\quad \csc \theta ={\frac {1}{\sin \theta }}.}
Since a rotation of an angle of {\displaystyle \pm 2\pi } does not change the position or size of a shape, the points A, B, C, D, and E are the same for two angles whose difference is an integer multiple of {\displaystyle 2\pi }. Thus trigonometric functions are periodic functions with period {\displaystyle 2\pi }. That is, the equalities
{\displaystyle \sin \theta =\sin \left(\theta +2k\pi \right)\quad } and {\displaystyle \quad \cos \theta =\cos \left(\theta +2k\pi \right)}
hold for any angle θ and any integer k. The same is true for the four other trigonometric functions. By observing the sign and the monotonicity of the functions sine, cosine, cosecant, and secant in the four quadrants, one can show that {\displaystyle 2\pi } is the smallest value for which they are periodic (i.e., {\displaystyle 2\pi } is the fundamental period of these functions). However, after a rotation by an angle {\displaystyle \pi }, the points B and C already return to their original position, so that the tangent function and the cotangent function have a fundamental period of {\displaystyle \pi }. That is, the equalities
{\displaystyle \tan \theta =\tan(\theta +k\pi )\quad } and {\displaystyle \quad \cot \theta =\cot(\theta +k\pi )}
hold for any angle θ and any integer k.
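These periodicity relations are easy to spot-check numerically (θ = 0.7 is an arbitrary sample angle):

```python
import math

# sin and cos have period 2*pi; tan and cot have period pi.
theta = 0.7                      # arbitrary sample angle
assert abs(math.sin(theta) - math.sin(theta + 2 * math.pi)) < 1e-12
assert abs(math.cos(theta) - math.cos(theta + 2 * math.pi)) < 1e-12
assert abs(math.tan(theta) - math.tan(theta + math.pi)) < 1e-12
print("periodicity verified")
```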
== Algebraic values ==
The algebraic expressions for the most important angles are as follows:
{\displaystyle \sin 0=\sin 0^{\circ }\quad ={\frac {\sqrt {0}}{2}}=0} (zero angle)
{\displaystyle \sin {\frac {\pi }{6}}=\sin 30^{\circ }={\frac {\sqrt {1}}{2}}={\frac {1}{2}}}
{\displaystyle \sin {\frac {\pi }{4}}=\sin 45^{\circ }={\frac {\sqrt {2}}{2}}={\frac {1}{\sqrt {2}}}}
{\displaystyle \sin {\frac {\pi }{3}}=\sin 60^{\circ }={\frac {\sqrt {3}}{2}}}
{\displaystyle \sin {\frac {\pi }{2}}=\sin 90^{\circ }={\frac {\sqrt {4}}{2}}=1} (right angle)
Writing the numerators as square roots of consecutive non-negative integers, with a denominator of 2, provides an easy way to remember the values.
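The √n/2 mnemonic can be verified directly:

```python
import math

# Check the pattern sin(theta) = sqrt(n)/2 for theta = 0°, 30°, 45°, 60°, 90°.
angles = (0, 30, 45, 60, 90)
for n, deg in enumerate(angles):
    value = math.sin(math.radians(deg))
    assert abs(value - math.sqrt(n) / 2) < 1e-12
print("sin values match sqrt(n)/2 for n = 0..4")
```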
Such simple expressions generally do not exist for other angles which are rational multiples of a right angle.
For an angle which, measured in degrees, is a multiple of three, the exact trigonometric values of the sine and the cosine may be expressed in terms of square roots. These values of the sine and the cosine may thus be constructed by ruler and compass.
For an angle of an integer number of degrees, the sine and the cosine may be expressed in terms of square roots and the cube root of a non-real complex number. Galois theory allows a proof that, if the angle is not a multiple of 3°, non-real cube roots are unavoidable.
For an angle which, expressed in degrees, is a rational number, the sine and the cosine are algebraic numbers, which may be expressed in terms of nth roots. This results from the fact that the Galois groups of the cyclotomic polynomials are cyclic.
For an angle which, expressed in degrees, is not a rational number, then either the angle or both the sine and the cosine are transcendental numbers. This is a corollary of Baker's theorem, proved in 1966.
If the sine of an angle is a rational number then the cosine is not necessarily a rational number, and vice-versa. However if the tangent of an angle is rational then both the sine and cosine of the double angle will be rational.
=== Simple algebraic values ===
The following table lists the sines, cosines, and tangents of multiples of 15 degrees from 0 to 90 degrees.
== Definitions in analysis ==
G. H. Hardy noted in his 1908 work A Course of Pure Mathematics that the definition of the trigonometric functions in terms of the unit circle is not satisfactory, because it depends implicitly on a notion of angle that can be measured by a real number. Thus in modern analysis, trigonometric functions are usually constructed without reference to geometry.
Various ways exist in the literature for defining the trigonometric functions in a manner suitable for analysis; they include:
Using the "geometry" of the unit circle, which requires formulating the arc length of a circle (or area of a sector) analytically.
By a power series, which is particularly well-suited to complex variables.
By using an infinite product expansion.
By inverting the inverse trigonometric functions, which can be defined as integrals of algebraic or rational functions.
As solutions of a differential equation.
=== Definition by differential equations ===
Sine and cosine can be defined as the unique solution to the initial value problem:
{\displaystyle {\frac {d}{dx}}\sin x=\cos x,\ {\frac {d}{dx}}\cos x=-\sin x,\ \sin(0)=0,\ \cos(0)=1.}
Differentiating again,
{\textstyle {\frac {d^{2}}{dx^{2}}}\sin x={\frac {d}{dx}}\cos x=-\sin x}
and
{\textstyle {\frac {d^{2}}{dx^{2}}}\cos x=-{\frac {d}{dx}}\sin x=-\cos x}
, so both sine and cosine are solutions of the same ordinary differential equation
{\displaystyle y''+y=0\,.}
Sine is the unique solution with y(0) = 0 and y′(0) = 1; cosine is the unique solution with y(0) = 1 and y′(0) = 0.
One can then prove, as a theorem, that solutions
{\displaystyle \cos ,\sin }
are periodic, having the same period. Writing this period as
{\displaystyle 2\pi }
is then a definition of the real number
{\displaystyle \pi }
which is independent of geometry.
Applying the quotient rule to the tangent
{\displaystyle \tan x=\sin x/\cos x}
,
{\displaystyle {\frac {d}{dx}}\tan x={\frac {\cos ^{2}x+\sin ^{2}x}{\cos ^{2}x}}=1+\tan ^{2}x\,,}
so the tangent function satisfies the ordinary differential equation
{\displaystyle y'=1+y^{2}\,.}
It is the unique solution with y(0) = 0.
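As a check that is not part of the article, the ODE characterization of the tangent can be verified numerically: integrating y′ = 1 + y² with y(0) = 0 by a generic fourth-order Runge–Kutta scheme (an assumed method, chosen here for illustration) reproduces tan x.

```python
import math

# Integrate y' = 1 + y^2 with y(0) = 0 using classical RK4; on
# (-pi/2, pi/2) the solution should track tan(x).
def solve_tan_ode(x_end, steps=1000):
    f = lambda y: 1.0 + y * y
    h = x_end / steps
    y = 0.0
    for _ in range(steps):
        k1 = f(y)
        k2 = f(y + h * k1 / 2)
        k3 = f(y + h * k2 / 2)
        k4 = f(y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y
```

With 1000 steps the numerical solution agrees with tan(1) to well below 1e-8.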
=== Power series expansion ===
The basic trigonometric functions can be defined by the following power series expansions. These series are also known as the Taylor series or Maclaurin series of these trigonometric functions:
{\displaystyle {\begin{aligned}\sin x&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \\[6mu]&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}\\[8pt]\cos x&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots \\[6mu]&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}.\end{aligned}}}
The radius of convergence of these series is infinite. Therefore, the sine and the cosine can be extended to entire functions (also called "sine" and "cosine"), which are (by definition) complex-valued functions that are defined and holomorphic on the whole complex plane.
Term-by-term differentiation shows that the sine and cosine defined by the series obey the differential equation discussed previously, and conversely one can obtain these series from elementary recursion relations derived from the differential equation.
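For illustration (a sketch, not from the article), partial sums of these Maclaurin series converge rapidly for moderate arguments:

```python
import math

# Partial sums of the Maclaurin series for sine and cosine given above.
def sin_series(x, terms=20):
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

def cos_series(x, terms=20):
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))
```

Twenty terms already agree with the library functions to machine precision for |x| of a few units.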
Being defined as fractions of entire functions, the other trigonometric functions may be extended to meromorphic functions, that is functions that are holomorphic in the whole complex plane, except some isolated points called poles. Here, the poles are the numbers of the form
{\textstyle (2k+1){\frac {\pi }{2}}}
for the tangent and the secant, or
{\displaystyle k\pi }
for the cotangent and the cosecant, where k is an arbitrary integer.
Recurrence relations may also be computed for the coefficients of the Taylor series of the other trigonometric functions. These series have a finite radius of convergence. Their coefficients have a combinatorial interpretation: they enumerate alternating permutations of finite sets.
More precisely, defining
Un, the nth up/down number,
Bn, the nth Bernoulli number, and
En, the nth Euler number,
one has the following series expansions:
{\displaystyle {\begin{aligned}\tan x&{}=\sum _{n=0}^{\infty }{\frac {U_{2n+1}}{(2n+1)!}}x^{2n+1}\\[8mu]&{}=\sum _{n=1}^{\infty }{\frac {(-1)^{n-1}2^{2n}\left(2^{2n}-1\right)B_{2n}}{(2n)!}}x^{2n-1}\\[5mu]&{}=x+{\frac {1}{3}}x^{3}+{\frac {2}{15}}x^{5}+{\frac {17}{315}}x^{7}+\cdots ,\qquad {\text{for }}|x|<{\frac {\pi }{2}}.\end{aligned}}}
{\displaystyle {\begin{aligned}\csc x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n+1}2\left(2^{2n-1}-1\right)B_{2n}}{(2n)!}}x^{2n-1}\\[5mu]&=x^{-1}+{\frac {1}{6}}x+{\frac {7}{360}}x^{3}+{\frac {31}{15120}}x^{5}+\cdots ,\qquad {\text{for }}0<|x|<\pi .\end{aligned}}}
{\displaystyle {\begin{aligned}\sec x&=\sum _{n=0}^{\infty }{\frac {U_{2n}}{(2n)!}}x^{2n}=\sum _{n=0}^{\infty }{\frac {(-1)^{n}E_{2n}}{(2n)!}}x^{2n}\\[5mu]&=1+{\frac {1}{2}}x^{2}+{\frac {5}{24}}x^{4}+{\frac {61}{720}}x^{6}+\cdots ,\qquad {\text{for }}|x|<{\frac {\pi }{2}}.\end{aligned}}}
{\displaystyle {\begin{aligned}\cot x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}2^{2n}B_{2n}}{(2n)!}}x^{2n-1}\\[5mu]&=x^{-1}-{\frac {1}{3}}x-{\frac {1}{45}}x^{3}-{\frac {2}{945}}x^{5}-\cdots ,\qquad {\text{for }}0<|x|<\pi .\end{aligned}}}
=== Continued fraction expansion ===
The following continued fractions are valid in the whole complex plane:
{\displaystyle \sin x={\cfrac {x}{1+{\cfrac {x^{2}}{2\cdot 3-x^{2}+{\cfrac {2\cdot 3x^{2}}{4\cdot 5-x^{2}+{\cfrac {4\cdot 5x^{2}}{6\cdot 7-x^{2}+\ddots }}}}}}}}}
{\displaystyle \cos x={\cfrac {1}{1+{\cfrac {x^{2}}{1\cdot 2-x^{2}+{\cfrac {1\cdot 2x^{2}}{3\cdot 4-x^{2}+{\cfrac {3\cdot 4x^{2}}{5\cdot 6-x^{2}+\ddots }}}}}}}}}
{\displaystyle \tan x={\cfrac {x}{1-{\cfrac {x^{2}}{3-{\cfrac {x^{2}}{5-{\cfrac {x^{2}}{7-\ddots }}}}}}}}={\cfrac {1}{{\cfrac {1}{x}}-{\cfrac {1}{{\cfrac {3}{x}}-{\cfrac {1}{{\cfrac {5}{x}}-{\cfrac {1}{{\cfrac {7}{x}}-\ddots }}}}}}}}}
The last one was used in the historically first proof that π is irrational.
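A finite truncation of the tangent continued fraction can be evaluated from the bottom up; the sketch below (an assumed generic evaluation strategy, not one prescribed by the article) converges very quickly for |x| < π/2.

```python
import math

# Evaluate a finite truncation of the continued fraction
# tan x = x / (1 - x^2 / (3 - x^2 / (5 - ...))), built from the bottom up.
def tan_cf(x, depth=20):
    val = 2 * depth + 1          # innermost partial denominator
    for k in range(depth - 1, 0, -1):
        val = (2 * k + 1) - x * x / val
    return x / (1 - x * x / val)
```

At depth 20 the truncation matches the library tangent to machine precision for arguments near 1.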
=== Partial fraction expansion ===
There is a series representation as partial fraction expansion where just translated reciprocal functions are summed up, such that the poles of the cotangent function and the reciprocal functions match:
{\displaystyle \pi \cot \pi x=\lim _{N\to \infty }\sum _{n=-N}^{N}{\frac {1}{x+n}}.}
This identity can be proved with the Herglotz trick.
Combining the (−n)th with the nth term leads to an absolutely convergent series:
{\displaystyle \pi \cot \pi x={\frac {1}{x}}+2x\sum _{n=1}^{\infty }{\frac {1}{x^{2}-n^{2}}}.}
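The absolutely convergent form lends itself to a direct numerical check (a sketch, not from the article); the terms decay like 1/n², so many terms are needed for modest accuracy.

```python
import math

# Partial sums of pi*cot(pi*x) = 1/x + 2x * sum_{n>=1} 1/(x^2 - n^2).
def pi_cot_pi(x, terms=100000):
    return 1.0 / x + 2.0 * x * sum(
        1.0 / (x * x - n * n) for n in range(1, terms + 1))
```

With 100000 terms the partial sum agrees with π cot(πx) to about 1e-5.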
Similarly, one can find a partial fraction expansion for the secant, cosecant and tangent functions:
{\displaystyle \pi \csc \pi x=\sum _{n=-\infty }^{\infty }{\frac {(-1)^{n}}{x+n}}={\frac {1}{x}}+2x\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{x^{2}-n^{2}}},}
{\displaystyle \pi ^{2}\csc ^{2}\pi x=\sum _{n=-\infty }^{\infty }{\frac {1}{(x+n)^{2}}},}
{\displaystyle \pi \sec \pi x=\sum _{n=0}^{\infty }(-1)^{n}{\frac {(2n+1)}{(n+{\tfrac {1}{2}})^{2}-x^{2}}},}
{\displaystyle \pi \tan \pi x=2x\sum _{n=0}^{\infty }{\frac {1}{(n+{\tfrac {1}{2}})^{2}-x^{2}}}.}
=== Infinite product expansion ===
The following infinite product for the sine is due to Leonhard Euler, and is of great importance in complex analysis:
{\displaystyle \sin z=z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}\pi ^{2}}}\right),\quad z\in \mathbb {C} .}
This may be obtained from the partial fraction decomposition of
{\displaystyle \cot z}
given above, which is the logarithmic derivative of
{\displaystyle \sin z}
. From this, it can also be deduced that
{\displaystyle \cos z=\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{(n-1/2)^{2}\pi ^{2}}}\right),\quad z\in \mathbb {C} .}
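As a numerical illustration (not from the article), a finite truncation of Euler's product for the sine converges to sin x with relative error roughly x²/(π²N) after N factors:

```python
import math

# A finite truncation of Euler's product for the sine (real argument).
def sin_product(x, terms=10000):
    p = x
    for n in range(1, terms + 1):
        p *= 1.0 - x * x / (n * n * math.pi * math.pi)
    return p
```

Ten thousand factors give sin(1) to about five decimal places, reflecting the slow 1/N convergence of the tail.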
=== Euler's formula and the exponential function ===
Euler's formula relates sine and cosine to the exponential function:
{\displaystyle e^{ix}=\cos x+i\sin x.}
This formula is commonly considered for real values of x, but it remains true for all complex values.
Proof: Let
{\displaystyle f_{1}(x)=\cos x+i\sin x,}
and
{\displaystyle f_{2}(x)=e^{ix}.}
One has
{\displaystyle df_{j}(x)/dx=if_{j}(x)}
for j = 1, 2. The quotient rule thus implies that
{\displaystyle d/dx\,(f_{1}(x)/f_{2}(x))=0}
. Therefore,
{\displaystyle f_{1}(x)/f_{2}(x)}
is a constant function, which equals 1, as
{\displaystyle f_{1}(0)=f_{2}(0)=1.}
This proves the formula.
One has
{\displaystyle {\begin{aligned}e^{ix}&=\cos x+i\sin x\\[5pt]e^{-ix}&=\cos x-i\sin x.\end{aligned}}}
Solving this linear system in sine and cosine, one can express them in terms of the exponential function:
{\displaystyle {\begin{aligned}\sin x&={\frac {e^{ix}-e^{-ix}}{2i}}\\[5pt]\cos x&={\frac {e^{ix}+e^{-ix}}{2}}.\end{aligned}}}
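These exponential formulas translate directly into code (a sketch for illustration); note they remain valid for complex arguments:

```python
import cmath
import math

# sin and cos recovered from the complex exponential, per the formulas above.
def sin_exp(z):
    return (cmath.exp(1j * z) - cmath.exp(-1j * z)) / 2j

def cos_exp(z):
    return (cmath.exp(1j * z) + cmath.exp(-1j * z)) / 2
```

For real x the results have negligible imaginary part and match math.sin and math.cos; for complex z they match cmath.sin and cmath.cos.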
When x is real, this may be rewritten as
{\displaystyle \cos x=\operatorname {Re} \left(e^{ix}\right),\qquad \sin x=\operatorname {Im} \left(e^{ix}\right).}
Most trigonometric identities can be proved by expressing trigonometric functions in terms of the complex exponential function by using above formulas, and then using the identity
{\displaystyle e^{a+b}=e^{a}e^{b}}
for simplifying the result.
Euler's formula can also be used to define the basic trigonometric function directly, as follows, using the language of topological groups. The set
{\displaystyle U}
of complex numbers of unit modulus is a compact and connected topological group, which has a neighborhood of the identity that is homeomorphic to the real line. Therefore, it is isomorphic as a topological group to the one-dimensional torus group
{\displaystyle \mathbb {R} /\mathbb {Z} }
, via an isomorphism
{\displaystyle e:\mathbb {R} /\mathbb {Z} \to U.}
In pedestrian terms
{\displaystyle e(t)=\exp(2\pi it)}
, and this isomorphism is unique up to taking complex conjugates.
For a nonzero real number
{\displaystyle a}
(the base), the function
{\displaystyle t\mapsto e(t/a)}
defines an isomorphism of the group
{\displaystyle \mathbb {R} /a\mathbb {Z} \to U}
. The real and imaginary parts of
{\displaystyle e(t/a)}
are the cosine and sine, where
{\displaystyle a}
is used as the base for measuring angles. For example, when
{\displaystyle a=2\pi }
, we get the measure in radians, and the usual trigonometric functions. When
{\displaystyle a=360}
, we get the sine and cosine of angles measured in degrees.
Note that
{\displaystyle a=2\pi }
is the unique value at which the derivative
{\displaystyle {\frac {d}{dt}}e(t/a)}
becomes a unit vector with positive imaginary part at
{\displaystyle t=0}
. This fact can, in turn, be used to define the constant
{\displaystyle 2\pi }
.
=== Definition via integration ===
Another way to define the trigonometric functions in analysis is using integration. For a real number
{\displaystyle t}
, put
{\displaystyle \theta (t)=\int _{0}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\arctan t}
where this integral defines the inverse tangent function. Also,
{\displaystyle \pi }
is defined by
{\displaystyle {\frac {1}{2}}\pi =\int _{0}^{\infty }{\frac {d\tau }{1+\tau ^{2}}}}
a definition that goes back to Karl Weierstrass.
On the interval
{\displaystyle -\pi /2<\theta <\pi /2}
, the trigonometric functions are defined by inverting the relation
{\displaystyle \theta =\arctan t}
. Thus we define the trigonometric functions by
{\displaystyle \tan \theta =t,\quad \cos \theta =(1+t^{2})^{-1/2},\quad \sin \theta =t(1+t^{2})^{-1/2}}
where the point
{\displaystyle (t,\theta )}
is on the graph of
{\displaystyle \theta =\arctan t}
and the positive square root is taken.
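The integral definition can be exercised numerically (a sketch; Simpson's rule is an assumed generic quadrature, not part of the article): compute θ(t) by quadrature, then recover cos θ and sin θ from the algebraic formulas above.

```python
import math

# theta(t) = integral_0^t dtau / (1 + tau^2), by Simpson's rule
# (n must be even); then cos(theta), sin(theta) from the algebraic formulas.
def theta(t, n=1000):
    h = t / n
    f = lambda tau: 1.0 / (1.0 + tau * tau)
    s = f(0.0) + f(t)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3

t = 1.0
th = theta(t)                     # should equal arctan(1) = pi/4
cos_th = (1 + t * t) ** -0.5
sin_th = t * (1 + t * t) ** -0.5
```

The quadrature result agrees with arctan 1 = π/4, and the algebraic values match the usual cosine and sine at that angle.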
This defines the trigonometric functions on
{\displaystyle (-\pi /2,\pi /2)}
. The definition can be extended to all real numbers by first observing that, as
{\displaystyle \theta \to \pi /2}
,
{\displaystyle t\to \infty }
, and so
{\displaystyle \cos \theta =(1+t^{2})^{-1/2}\to 0}
and
{\displaystyle \sin \theta =t(1+t^{2})^{-1/2}\to 1}
. Thus
{\displaystyle \cos \theta }
and
{\displaystyle \sin \theta }
are extended continuously so that
{\displaystyle \cos(\pi /2)=0,\sin(\pi /2)=1}
. Now the conditions
{\displaystyle \cos(\theta +\pi )=-\cos(\theta )}
and
{\displaystyle \sin(\theta +\pi )=-\sin(\theta )}
define the sine and cosine as periodic functions with period
{\displaystyle 2\pi }
, for all real numbers.
To prove the basic properties of sine and cosine, including the fact that sine and cosine are analytic, one may first establish the addition formulae. First,
{\displaystyle \arctan s+\arctan t=\arctan {\frac {s+t}{1-st}}}
holds, provided
{\displaystyle \arctan s+\arctan t\in (-\pi /2,\pi /2)}
, since
{\displaystyle \arctan s+\arctan t=\int _{-s}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\int _{0}^{\frac {s+t}{1-st}}{\frac {d\tau }{1+\tau ^{2}}}}
after the substitution
{\displaystyle \tau \to {\frac {s+\tau }{1-s\tau }}}
. In particular, the limiting case as
{\displaystyle s\to \infty }
gives
{\displaystyle \arctan t+{\frac {\pi }{2}}=\arctan(-1/t),\quad t\in (-\infty ,0).}
Thus we have
{\displaystyle \sin \left(\theta +{\frac {\pi }{2}}\right)={\frac {-1}{t{\sqrt {1+(-1/t)^{2}}}}}={\frac {-1}{\sqrt {1+t^{2}}}}=-\cos(\theta )}
and
{\displaystyle \cos \left(\theta +{\frac {\pi }{2}}\right)={\frac {1}{\sqrt {1+(-1/t)^{2}}}}={\frac {t}{\sqrt {1+t^{2}}}}=\sin(\theta ).}
So the sine and cosine functions are related by translation over a quarter period
{\displaystyle \pi /2}
.
=== Definitions using functional equations ===
One can also define the trigonometric functions using various functional equations.
For example, the sine and the cosine form the unique pair of continuous functions that satisfy the difference formula
{\displaystyle \cos(x-y)=\cos x\cos y+\sin x\sin y\,}
and the added condition
{\displaystyle 0<x\cos x<\sin x<x\quad {\text{ for }}\quad 0<x<1.}
=== In the complex plane ===
The sine and cosine of a complex number
{\displaystyle z=x+iy}
can be expressed in terms of real sines, cosines, and hyperbolic functions as follows:
{\displaystyle {\begin{aligned}\sin z&=\sin x\cosh y+i\cos x\sinh y\\[5pt]\cos z&=\cos x\cosh y-i\sin x\sinh y\end{aligned}}}
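These real/hyperbolic decompositions are easy to spot-check against a library implementation (a sketch with an arbitrary sample point):

```python
import cmath
import math

# Check sin z = sin x cosh y + i cos x sinh y for a sample z = x + iy.
z = 0.8 + 0.5j
x, y = z.real, z.imag
rhs = complex(math.sin(x) * math.cosh(y), math.cos(x) * math.sinh(y))
```

The same decomposition for the cosine can be checked analogously with cmath.cos.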
By taking advantage of domain coloring, it is possible to graph the trigonometric functions as complex-valued functions. Various features unique to the complex functions can be seen from the graph; for example, the sine and cosine functions can be seen to be unbounded as the imaginary part of
{\displaystyle z}
becomes larger (since the color white represents infinity), and the fact that the functions contain simple zeros or poles is apparent from the fact that the hue cycles around each zero or pole exactly once. Comparing these graphs with those of the corresponding hyperbolic functions highlights the relationships between the two.
== Periodicity and asymptotes ==
The sine and cosine functions are periodic, with period
{\displaystyle 2\pi }
, which is the smallest positive period:
{\displaystyle \sin(z+2\pi )=\sin(z),\quad \cos(z+2\pi )=\cos(z).}
Consequently, the cosecant and secant also have
{\displaystyle 2\pi }
as their period.
The functions sine and cosine also have semiperiods
{\displaystyle \pi }
, and
{\displaystyle \sin(z+\pi )=-\sin(z),\quad \cos(z+\pi )=-\cos(z)}
and consequently
{\displaystyle \tan(z+\pi )=\tan(z),\quad \cot(z+\pi )=\cot(z).}
Also,
{\displaystyle \sin(x+\pi /2)=\cos(x),\quad \cos(x+\pi /2)=-\sin(x)}
(see Complementary angles).
The function
{\displaystyle \sin(z)}
has a unique zero (at
{\displaystyle z=0}
) in the strip
{\displaystyle -\pi <\Re (z)<\pi }
. The function
{\displaystyle \cos(z)}
has the pair of zeros
{\displaystyle z=\pm \pi /2}
in the same strip. Because of the periodicity, the zeros of sine are
{\displaystyle \pi \mathbb {Z} =\left\{\dots ,-2\pi ,-\pi ,0,\pi ,2\pi ,\dots \right\}\subset \mathbb {C} .}
The zeros of cosine are
{\displaystyle {\frac {\pi }{2}}+\pi \mathbb {Z} =\left\{\dots ,-{\frac {3\pi }{2}},-{\frac {\pi }{2}},{\frac {\pi }{2}},{\frac {3\pi }{2}},\dots \right\}\subset \mathbb {C} .}
All of the zeros are simple zeros, and both functions have derivative
{\displaystyle \pm 1}
at each of the zeros.
The tangent function
{\displaystyle \tan(z)=\sin(z)/\cos(z)}
has a simple zero at
{\displaystyle z=0}
and vertical asymptotes at
{\displaystyle z=\pm \pi /2}
, where it has a simple pole of residue
{\displaystyle -1}
. Again, owing to the periodicity, the zeros are all the integer multiples of
{\displaystyle \pi }
and the poles are odd multiples of
{\displaystyle \pi /2}
, all having the same residue. The poles correspond to vertical asymptotes
{\displaystyle \lim _{x\to {\frac {\pi }{2}}^{-}}\tan(x)=+\infty ,\quad \lim _{x\to {\frac {\pi }{2}}^{+}}\tan(x)=-\infty .}
The cotangent function
{\displaystyle \cot(z)=\cos(z)/\sin(z)}
has a simple pole of residue 1 at the integer multiples of
{\displaystyle \pi }
and simple zeros at odd multiples of
{\displaystyle \pi /2}
. The poles correspond to vertical asymptotes
{\displaystyle \lim _{x\to 0^{-}}\cot(x)=-\infty ,\quad \lim _{x\to 0^{+}}\cot(x)=+\infty .}
== Basic identities ==
Many identities interrelate the trigonometric functions. This section contains the most basic ones; for more identities, see List of trigonometric identities. These identities may be proved geometrically from the unit-circle definitions or the right-angled-triangle definitions (although, for the latter definitions, care must be taken for angles that are not in the interval [0, π/2], see Proofs of trigonometric identities). For non-geometrical proofs using only tools of calculus, one may use directly the differential equations, in a way that is similar to that of the above proof of Euler's identity. One can also use Euler's identity for expressing all trigonometric functions in terms of complex exponentials and using properties of the exponential function.
=== Parity ===
The cosine and the secant are even functions; the other trigonometric functions are odd functions. That is:
{\displaystyle {\begin{aligned}\sin(-x)&=-\sin x\\\cos(-x)&=\cos x\\\tan(-x)&=-\tan x\\\cot(-x)&=-\cot x\\\csc(-x)&=-\csc x\\\sec(-x)&=\sec x.\end{aligned}}}
=== Periods ===
All trigonometric functions are periodic functions of period 2π. This is the smallest period, except for the tangent and the cotangent, which have π as smallest period. This means that, for every integer k, one has
{\displaystyle {\begin{array}{lrl}\sin(x+&2k\pi )&=\sin x\\\cos(x+&2k\pi )&=\cos x\\\tan(x+&k\pi )&=\tan x\\\cot(x+&k\pi )&=\cot x\\\csc(x+&2k\pi )&=\csc x\\\sec(x+&2k\pi )&=\sec x.\end{array}}}
See Periodicity and asymptotes.
=== Pythagorean identity ===
The Pythagorean identity is the expression of the Pythagorean theorem in terms of trigonometric functions. It is
{\displaystyle \sin ^{2}x+\cos ^{2}x=1}.
Dividing through by either
{\displaystyle \cos ^{2}x}
or
{\displaystyle \sin ^{2}x}
gives
{\displaystyle \tan ^{2}x+1=\sec ^{2}x}
{\displaystyle 1+\cot ^{2}x=\csc ^{2}x}
and
{\displaystyle \sec ^{2}x+\csc ^{2}x=\sec ^{2}x\csc ^{2}x}.
=== Sum and difference formulas ===
The sum and difference formulas allow expanding the sine, the cosine, and the tangent of a sum or a difference of two angles in terms of sines and cosines and tangents of the angles themselves. These can be derived geometrically, using arguments that date to Ptolemy (see Angle sum and difference identities). One can also produce them algebraically using Euler's formula.
Sum
{\displaystyle {\begin{aligned}\sin \left(x+y\right)&=\sin x\cos y+\cos x\sin y,\\[5mu]\cos \left(x+y\right)&=\cos x\cos y-\sin x\sin y,\\[5mu]\tan(x+y)&={\frac {\tan x+\tan y}{1-\tan x\tan y}}.\end{aligned}}}
Difference
{\displaystyle {\begin{aligned}\sin \left(x-y\right)&=\sin x\cos y-\cos x\sin y,\\[5mu]\cos \left(x-y\right)&=\cos x\cos y+\sin x\sin y,\\[5mu]\tan(x-y)&={\frac {\tan x-\tan y}{1+\tan x\tan y}}.\end{aligned}}}
When the two angles are equal, the sum formulas reduce to simpler equations known as the double-angle formulae.
{\displaystyle {\begin{aligned}\sin 2x&=2\sin x\cos x={\frac {2\tan x}{1+\tan ^{2}x}},\\[5mu]\cos 2x&=\cos ^{2}x-\sin ^{2}x=2\cos ^{2}x-1=1-2\sin ^{2}x={\frac {1-\tan ^{2}x}{1+\tan ^{2}x}},\\[5mu]\tan 2x&={\frac {2\tan x}{1-\tan ^{2}x}}.\end{aligned}}}
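A quick numerical spot-check of the double-angle formulas at an arbitrary sample angle (an illustration, not from the article):

```python
import math

# Evaluate the right-hand sides of the double-angle formulas at x = 0.6.
x = 0.6
sin2x = 2 * math.sin(x) * math.cos(x)
cos2x = 1 - 2 * math.sin(x) ** 2
tan2x = 2 * math.tan(x) / (1 - math.tan(x) ** 2)
```

Each value agrees with the library function evaluated at 2x.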
These identities can be used to derive the product-to-sum identities.
By setting
{\displaystyle t=\tan {\tfrac {1}{2}}\theta ,}
all trigonometric functions of
{\displaystyle \theta }
can be expressed as rational fractions of
{\displaystyle t}
:
{\displaystyle {\begin{aligned}\sin \theta &={\frac {2t}{1+t^{2}}},\\[5mu]\cos \theta &={\frac {1-t^{2}}{1+t^{2}}},\\[5mu]\tan \theta &={\frac {2t}{1-t^{2}}}.\end{aligned}}}
Together with
{\displaystyle d\theta ={\frac {2}{1+t^{2}}}\,dt,}
this is the tangent half-angle substitution, which reduces the computation of integrals and antiderivatives of trigonometric functions to that of rational fractions.
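The rational parametrization underlying the substitution can be verified directly (a sketch with an arbitrary sample angle):

```python
import math

# With t = tan(theta/2), the sine, cosine, and tangent of theta become
# rational functions of t, per the half-angle formulas above.
theta = 1.1
t = math.tan(theta / 2)
sin_t = 2 * t / (1 + t * t)
cos_t = (1 - t * t) / (1 + t * t)
tan_t = 2 * t / (1 - t * t)
```

This is exactly why the substitution turns any rational expression in sin θ and cos θ into a rational function of t.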
=== Derivatives and antiderivatives ===
The derivatives of trigonometric functions result from those of sine and cosine by applying the quotient rule. The values given for the antiderivatives in the following table can be verified by differentiating them. The number C is a constant of integration.
Note: For
{\displaystyle 0<x<\pi }
the integral of
{\displaystyle \csc x}
can also be written as
{\displaystyle -\operatorname {arsinh} (\cot x),}
and for the integral of
{\displaystyle \sec x}
for
{\displaystyle -\pi /2<x<\pi /2}
as
{\displaystyle \operatorname {arsinh} (\tan x),}
where
{\displaystyle \operatorname {arsinh} }
is the inverse hyperbolic sine.
Alternatively, the derivatives of the 'co-functions' can be obtained using trigonometric identities and the chain rule:
{\displaystyle {\begin{aligned}{\frac {d\cos x}{dx}}&={\frac {d}{dx}}\sin(\pi /2-x)=-\cos(\pi /2-x)=-\sin x\,,\\{\frac {d\csc x}{dx}}&={\frac {d}{dx}}\sec(\pi /2-x)=-\sec(\pi /2-x)\tan(\pi /2-x)=-\csc x\cot x\,,\\{\frac {d\cot x}{dx}}&={\frac {d}{dx}}\tan(\pi /2-x)=-\sec ^{2}(\pi /2-x)=-\csc ^{2}x\,.\end{aligned}}}
== Inverse functions ==
The trigonometric functions are periodic, and hence not injective, so strictly speaking, they do not have an inverse function. However, on each interval on which a trigonometric function is monotonic, one can define an inverse function, and this defines inverse trigonometric functions as multivalued functions. To define a true inverse function, one must restrict the domain to an interval where the function is monotonic, and is thus bijective from this interval to its image by the function. The common choice for this interval, called the set of principal values, is given in the following table. As usual, the inverse trigonometric functions are denoted with the prefix "arc" before the name or its abbreviation of the function.
The notations sin−1, cos−1, etc. are often used for arcsin and arccos, etc. When this notation is used, inverse functions could be confused with multiplicative inverses. The notation with the "arc" prefix avoids such a confusion, though "arcsec" for arcsecant can be confused with "arcsecond".
Just like the sine and cosine, the inverse trigonometric functions can also be expressed in terms of infinite series. They can also be expressed in terms of complex logarithms.
== Applications ==
=== Angles and sides of a triangle ===
In this section A, B, C denote the three (interior) angles of a triangle, and a, b, c denote the lengths of the respective opposite edges. They are related by various formulas, which are named by the trigonometric functions they involve.
==== Law of sines ====
The law of sines states that for an arbitrary triangle with sides a, b, and c and angles opposite those sides A, B and C:
{\displaystyle {\frac {\sin A}{a}}={\frac {\sin B}{b}}={\frac {\sin C}{c}}={\frac {2\Delta }{abc}},}
where Δ is the area of the triangle,
or, equivalently,
{\displaystyle {\frac {a}{\sin A}}={\frac {b}{\sin B}}={\frac {c}{\sin C}}=2R,}
where R is the triangle's circumradius.
It can be proved by dividing the triangle into two right ones and using the above definition of sine. The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. This is a common situation occurring in triangulation, a technique to determine unknown distances by measuring two angles and an accessible enclosed distance.
==== Law of cosines ====
The law of cosines (also known as the cosine formula or cosine rule) is an extension of the Pythagorean theorem:
{\displaystyle c^{2}=a^{2}+b^{2}-2ab\cos C,}
or equivalently,
{\displaystyle \cos C={\frac {a^{2}+b^{2}-c^{2}}{2ab}}.}
In this formula the angle at C is opposite to the side c. This theorem can be proved by dividing the triangle into two right ones and using the Pythagorean theorem.
The law of cosines can be used to determine a side of a triangle if two sides and the angle between them are known. It can also be used to find the cosines of an angle (and consequently the angles themselves) if the lengths of all the sides are known.
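A small worked example (with hypothetical side lengths chosen for illustration): given two sides and the included angle, the law of cosines yields the third side, and its rearranged form recovers the remaining angles.

```python
import math

# SAS triangle: sides a, b and included angle C (sample values).
a, b = 5.0, 7.0
C = math.radians(60)

# Third side from the law of cosines, then the other angles from the
# rearranged form cos A = (b^2 + c^2 - a^2) / (2bc).
c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(C))
A = math.acos((b * b + c * c - a * a) / (2 * b * c))
B = math.acos((a * a + c * c - b * b) / (2 * a * c))
```

Here c² = 25 + 49 − 2·5·7·cos 60° = 39, and the three angles sum to π as they must.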
==== Law of tangents ====
The law of tangents says that:
{\displaystyle {\frac {\tan {\frac {A-B}{2}}}{\tan {\frac {A+B}{2}}}}={\frac {a-b}{a+b}}}.
==== Law of cotangents ====
If s is the triangle's semiperimeter, (a + b + c)/2, and r is the radius of the triangle's incircle, then rs is the triangle's area. Therefore Heron's formula implies that:
{\displaystyle r={\sqrt {{\frac {1}{s}}(s-a)(s-b)(s-c)}}}.
The law of cotangents says that:
{\displaystyle \cot {\frac {A}{2}}={\frac {s-a}{r}}}
It follows that
{\displaystyle {\frac {\cot {\dfrac {A}{2}}}{s-a}}={\frac {\cot {\dfrac {B}{2}}}{s-b}}={\frac {\cot {\dfrac {C}{2}}}{s-c}}={\frac {1}{r}}.}
=== Periodic functions ===
The trigonometric functions are also important in physics. The sine and the cosine functions, for example, are used to describe simple harmonic motion, which models many natural phenomena, such as the movement of a mass attached to a spring and, for small angles, the pendular motion of a mass hanging by a string. The sine and cosine functions are one-dimensional projections of uniform circular motion.
Trigonometric functions also prove to be useful in the study of general periodic functions. The characteristic wave patterns of periodic functions are useful for modeling recurring phenomena such as sound or light waves.
Under rather general conditions, a periodic function f (x) can be expressed as a sum of sine waves or cosine waves in a Fourier series. Denoting the sine or cosine basis functions by φk, the expansion of the periodic function f (t) takes the form:
{\displaystyle f(t)=\sum _{k=1}^{\infty }c_{k}\varphi _{k}(t).}
For example, the square wave can be written as the Fourier series
{\displaystyle f_{\text{square}}(t)={\frac {4}{\pi }}\sum _{k=1}^{\infty }{\sin {\big (}(2k-1)t{\big )} \over 2k-1}.}
In the animation of a square wave at top right it can be seen that just a few terms already produce a fairly good approximation. The superposition of several terms in the expansion of a sawtooth wave are shown underneath.
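The partial sums of the square-wave series are easy to compute directly (a sketch, not from the article); convergence at a fixed point away from the jumps is like 1/N because the series is alternating there.

```python
import math

# Partial sums of the Fourier series of the square wave given above.
def square_partial(t, terms=200):
    return (4 / math.pi) * sum(
        math.sin((2 * k - 1) * t) / (2 * k - 1) for k in range(1, terms + 1))
```

At t = π/2 the square wave equals 1, and 200 terms approximate it to within about 0.003.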
== History ==
While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was defined by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). The functions of sine and versine (1 – cosine) are closely related to the jyā and koti-jyā functions used in Gupta period Indian astronomy (Aryabhatiya, Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin. (See Aryabhata's sine table.)
All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines, used in solving triangles. Al-Khwārizmī (c. 780–850) produced tables of sines and cosines. Circa 860, Habash al-Hasib al-Marwazi defined the tangent and the cotangent, and produced their tables. Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) defined the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. The trigonometric functions were later studied by mathematicians including Omar Khayyám, Bhāskara II, Nasir al-Din al-Tusi, Jamshīd al-Kāshī (14th century), Ulugh Beg (14th century), Regiomontanus (1464), Rheticus, and Rheticus' student Valentinus Otho.
Madhava of Sangamagrama (c. 1400) made early strides in the analysis of trigonometric functions in terms of infinite series. (See Madhava series and Madhava's sine table.)
The tangent function was brought to Europe by Giovanni Bianchini in 1467 in trigonometry tables he created to support the calculation of stellar coordinates.
The terms tangent and secant were first introduced by the Danish mathematician Thomas Fincke in his book Geometria rotundi (1583).
The 17th century French mathematician Albert Girard made the first published use of the abbreviations sin, cos, and tan in his book Trigonométrie.
In a paper published in 1682, Gottfried Leibniz proved that sin x is not an algebraic function of x. Though they are defined as ratios of sides of a right triangle, and thus appear to be rational functions, Leibniz's result established that they are actually transcendental functions of their argument. The task of assimilating circular functions into algebraic expressions was accomplished by Euler in his Introduction to the Analysis of the Infinite (1748). His method was to show that the sine and cosine functions are alternating series formed from the even and odd terms respectively of the exponential series. He presented "Euler's formula", as well as near-modern abbreviations (sin., cos., tang., cot., sec., and cosec.).
A few functions were common historically, but are now seldom used, such as the chord, versine (which appeared in the earliest tables), haversine, coversine, half-tangent (tangent of half an angle), and exsecant. List of trigonometric identities shows more relations between these functions.
{\displaystyle {\begin{aligned}\operatorname {crd} \theta &=2\sin {\tfrac {1}{2}}\theta ,\\[5mu]\operatorname {vers} \theta &=1-\cos \theta =2\sin ^{2}{\tfrac {1}{2}}\theta ,\\[5mu]\operatorname {hav} \theta &={\tfrac {1}{2}}\operatorname {vers} \theta =\sin ^{2}{\tfrac {1}{2}}\theta ,\\[5mu]\operatorname {covers} \theta &=1-\sin \theta =\operatorname {vers} {\bigl (}{\tfrac {1}{2}}\pi -\theta {\bigr )},\\[5mu]\operatorname {exsec} \theta &=\sec \theta -1.\end{aligned}}}
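These relations between the historical functions and the modern sine and cosine are easy to spot-check numerically; a short Python sketch (the angle 0.7 rad is an arbitrary test value):

```python
import math

theta = 0.7  # an arbitrary test angle in radians

crd = 2 * math.sin(theta / 2)       # chord
vers = 1 - math.cos(theta)          # versine
hav = vers / 2                      # haversine
covers = 1 - math.sin(theta)        # coversine
exsec = 1 / math.cos(theta) - 1     # exsecant
```

Each value agrees with the alternative expressions in the table above, e.g. vers θ = 2 sin²(θ/2) and covers θ = vers(π/2 − θ).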
Historically, trigonometric functions were often combined with logarithms in compound functions like the logarithmic sine, logarithmic cosine, logarithmic secant, logarithmic cosecant, logarithmic tangent and logarithmic cotangent.
== Etymology ==
The word sine derives from Latin sinus, meaning "bend; bay", and more specifically "the hanging fold of the upper part of a toga", "the bosom of a garment", which was chosen as the translation of what was interpreted as the Arabic word jaib, meaning "pocket" or "fold" in the twelfth-century translations of works by Al-Battani and al-Khwārizmī into Medieval Latin.
The choice was based on a misreading of the Arabic written form j-y-b (جيب), which itself originated as a transliteration from Sanskrit jīvā, which along with its synonym jyā (the standard Sanskrit term for the sine) translates to "bowstring", being in turn adopted from Ancient Greek χορδή "string".
The word tangent comes from Latin tangens meaning "touching", since the line touches the circle of unit radius, whereas secant stems from Latin secans—"cutting"—since the line cuts the circle.
The prefix "co-" (in "cosine", "cotangent", "cosecant") is found in Edmund Gunter's Canon triangulorum (1620), which defines the cosinus as an abbreviation of the sinus complementi (sine of the complementary angle) and proceeds to define the cotangens similarly.
== See also ==
== Notes ==
== References ==
== External links ==
"Trigonometric functions", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Visionlearning Module on Wave Mathematics
GonioLab Visualization of the unit circle, trigonometric and hyperbolic functions
q-Sine Article about the q-analog of sin at MathWorld
q-Cosine Article about the q-analog of cos at MathWorld
A plot is a graphical technique for representing a data set, usually as a graph showing the relationship between two or more variables. The plot can be drawn by hand or by a computer. In the past, sometimes mechanical or electronic plotters were used. Graphs are a visual representation of the relationship between variables, which are very useful for humans who can then quickly derive an understanding which may not have come from lists of values. Given a scale or ruler, graphs can also be used to read off the value of an unknown variable plotted as a function of a known one, but this can also be done with data presented in tabular form. Graphs of functions are used in mathematics, sciences, engineering, technology, finance, and other areas.
== Overview ==
Plots play an important role in statistics and data analysis. The procedures here can broadly be split into two parts: quantitative and graphical. Quantitative techniques are a set of statistical procedures that yield numeric or tabular output. Examples of quantitative techniques include:
hypothesis testing
analysis of variance
point estimates and confidence intervals
least squares regression
These and similar techniques are all valuable and are mainstream in terms of classical analysis. There are also many statistical tools generally referred to as graphical techniques. These include:
scatter plots
spectrum plots
histograms
probability plots
residual plots
box plots, and
block plots
Graphical procedures such as plots are a short path to gaining insight into a data set in terms of testing assumptions, model selection, model validation, estimator selection, relationship identification, factor effect determination, outlier detection. Statistical graphics give insight into aspects of the underlying structure of the data.
Graphs can also be used to solve some mathematical equations, typically by finding where two plots intersect.
== Types of plots ==
Biplot : These are a type of graph used in statistics. A biplot allows information on both samples and variables of a data matrix to be displayed graphically. Samples are displayed as points while variables are displayed either as vectors, linear axes or nonlinear trajectories. In the case of categorical variables, category level points may be used to represent the levels of a categorical variable. A generalised biplot displays information on both continuous and categorical variables.
Bland–Altman plot : In analytical chemistry and biostatistics this plot is a method of data plotting used in analysing the agreement between two different assays. It is identical to a Tukey mean-difference plot, which is what it is still known as in other fields, but was popularised in medical statistics by Bland and Altman.
Bode plots are used in control theory.
Box plot : In descriptive statistics, a boxplot, also known as a box-and-whisker diagram or plot, is a convenient way of graphically depicting groups of numerical data through their five-number summaries (the smallest observation, lower quartile (Q1), median (Q2), upper quartile (Q3), and largest observation). A boxplot may also indicate which observations, if any, might be considered outliers.
Carpet plot : A two-dimensional plot that illustrates the interaction between two and three independent variables and one to three dependent variables.
Comet plot : A two- or three-dimensional animated plot in which the data points are traced on the screen.
Contour plot : A two-dimensional plot which shows the one-dimensional curves, called contour lines on which the plotted quantity q is a constant. Optionally, the plotted values can be color-coded.
Dalitz plot : This is a scatterplot often used in particle physics to represent the relative frequency of various (kinematically distinct) manners in which the products of certain (otherwise similar) three-body decays may move apart.
Drain plot : A two-dimensional plot where the data are presented in a hierarchy with multiple levels. The levels are nested in the sense that the pieces in each pie chart add up to 100%. A waterfall or waterdrop metaphor is used to link each layer to the one below, visually conveying the hierarchical structure.
Funnel plot : This is a useful graph designed to check the existence of publication bias in meta-analyses. Funnel plots, introduced by Light and Pillemer in 1994 and discussed in detail by Egger and colleagues, are useful adjuncts to meta-analyses. A funnel plot is a scatterplot of treatment effect against a measure of study size. It is used primarily as a visual aid to detecting bias or systematic heterogeneity.
Dot plot (statistics) : A dot chart or dot plot is a statistical chart consisting of group of data points plotted on a simple scale. Dot plots are used for continuous, quantitative, univariate data. Data points may be labelled if there are few of them. Dot plots are one of the simplest plots available, and are suitable for small to moderate sized data sets. They are useful for highlighting clusters and gaps, as well as outliers.
Forest plot : A graphical display that shows the strength of the evidence in quantitative scientific studies. It was developed for use in medical research as a means of graphically representing a meta-analysis of the results of randomized controlled trials. In the last twenty years, similar meta-analytical techniques have been applied in observational studies (e.g. environmental epidemiology) and forest plots are often used in presenting the results of such studies also.
Galbraith plot : In statistics, a Galbraith plot (also known as Galbraith's radial plot or just radial plot), is one way of displaying several estimates of the same quantity that have different standard errors. It can be used to examine heterogeneity in a meta-analysis, as an alternative or supplement to a forest plot.
Heat map
Lollipop plot
Nichols plot : This is a graph used in signal processing in which the logarithm of the magnitude is plotted against the phase of a frequency response on orthogonal axes.
Normal probability plot : The normal probability plot is a graphical technique for assessing whether or not a data set is approximately normally distributed. The data are plotted against a theoretical normal distribution in such a way that the points should form an approximate straight line. Departures from this straight line indicate departures from normality. The normal probability plot is a special case of the probability plot.
Nyquist plot : Plot is used in automatic control and signal processing for assessing the stability of a system with feedback. It is represented by a graph in polar coordinates in which the gain and phase of a frequency response are plotted. The plot of these phasor quantities shows the phase as the angle and the magnitude as the distance from the origin.
Partial regression plot : In applied statistics, a partial regression plot attempts to show the effect of adding another variable to the model (given that one or more independent variables are already in the model). Partial regression plots are also referred to as added variable plots, adjusted variable plots, and individual coefficient plots.
Partial residual plot : In applied statistics, a partial residual plot is a graphical technique that attempts to show the relationship between a given independent variable and the response variable given that other independent variables are also in the model.
Probability plot : The probability plot is a graphical technique for assessing whether or not a data set follows a given distribution such as the normal or Weibull, and for visually estimating the location and scale parameters of the chosen distribution. The data are plotted against a theoretical distribution in such a way that the points should form approximately a straight line. Departures from this straight line indicate departures from the specified distribution.
Ridgeline plot: Several line plots, vertically stacked and slightly overlapping.
Q–Q plot : In statistics, a Q–Q plot (Q stands for quantile) is a graphical method for diagnosing differences between the probability distribution of a statistical population from which a random sample has been taken and a comparison distribution. An example of the kind of differences that can be tested for is non-normality of the population distribution.
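A minimal sketch of how Q–Q plot coordinates can be computed against a standard-normal comparison distribution (standard library only; the helper `qq_points` and the (i − 0.5)/n plotting positions are one common convention, not the only one):

```python
from statistics import NormalDist

def qq_points(sample):
    """Pair each ordered sample value with the matching theoretical
    standard-normal quantile, using (i + 0.5)/n plotting positions."""
    xs = sorted(sample)
    n = len(xs)
    nd = NormalDist()  # the comparison distribution
    return [(nd.inv_cdf((i + 0.5) / n), x) for i, x in enumerate(xs)]

# For data that really are standard-normal quantiles, the points lie on y = x.
sample = [NormalDist().inv_cdf((i + 0.5) / 100) for i in range(100)]
points = qq_points(sample)
```

Plotting these (theoretical, observed) pairs and looking for departures from a straight line is exactly the diagnostic described above.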
Recurrence plot : In descriptive statistics and chaos theory, a recurrence plot (RP) is a plot showing, for a given moment in time, the times at which a phase space trajectory visits roughly the same area in the phase space. In other words, it is a graph of
{\displaystyle {\vec {x}}(i)\approx {\vec {x}}(j),\,}
showing i on a horizontal axis and j on a vertical axis, where the vector x→ is a phase space trajectory.
Scatterplot : A scatter graph or scatter plot is a type of display using variables for a set of data. The data is displayed as a collection of points, each having the value of one variable determining the position on the horizontal axis and the value of the other variable determining the position on the vertical axis.
Shmoo plot : In electrical engineering, a shmoo plot is a graphical display of the response of a component or system varying over a range of conditions and inputs. Often used to represent the results of the testing of complex electronic systems such as computers, ASICs or microprocessors. The plot usually shows the range of conditions in which the device under test will operate.
Spaghetti plots are a method of viewing data to visualize possible flows through systems. Flows depicted in this manner appear like noodles, hence the coining of this term. This method of statistics was first used to track routing through factories. Visualizing flow in this manner can reduce inefficiency within the flow of a system.
Stemplot : A stemplot (or stem-and-leaf plot), in statistics, is a device for presenting quantitative data in a graphical format, similar to a histogram, to assist in visualizing the shape of a distribution. They evolved from Arthur Bowley's work in the early 1900s, and are useful tools in exploratory data analysis. Unlike histograms, stemplots retain the original data to at least two significant digits, and put the data in order, thereby easing the move to order-based inference and non-parametric statistics.
Star plot : A graphical method of displaying multivariate data. Each star represents a single observation. Typically, star plots are generated in a multi-plot format with many stars on each page and each star representing one observation.
Surface plot : In this visualization of the graph of a bivariate function, a surface is plotted to fit a set of data triplets (X, Y, Z), where Z is obtained from the function to be plotted, Z = f(X, Y). Usually, the set of X and Y values are equally spaced. Optionally, the plotted values can be color-coded.
Ternary plot : A ternary plot, ternary graph, triangle plot, simplex plot, or de Finetti diagram is a barycentric plot on three variables which sum to a constant. It graphically depicts the ratios of the three variables as positions in an equilateral triangle. It is used in petrology, mineralogy, metallurgy, and other physical sciences to show the compositions of systems composed of three species. In population genetics, it is often called a de Finetti diagram. In game theory, it is often called a simplex plot.
Vector field : Vector field plots (or quiver plots) show the direction and the strength of vectors associated with 2D or 3D points. They are typically used to show the strength of the gradient over the plane or a surface area.
Violin plot : Violin plots are a method of plotting numeric data. They are similar to box plots, except that they also show the probability density of the data at different values (in the simplest case this could be a histogram). Typically violin plots will include a marker for the median of the data and a box indicating the interquartile range, as in standard box plots. Overlaid on this box plot is a kernel density estimation. Violin plots are available as extensions to a number of software packages, including R through the vioplot library, and Stata through the vioplot add-in.
=== Plots for specific quantities ===
Arrhenius plot : This plot compares the logarithm of a reaction rate (ln(k), ordinate axis) against inverse temperature (1/T, abscissa). Arrhenius plots are often used to analyze the effect of temperature on the rates of chemical reactions.
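For illustration, the sketch below (Python; the rate constants are synthetic, generated from an assumed Arrhenius law k = A·exp(−Ea/(R·T)) with illustrative values A = 1e13 s⁻¹ and Ea = 50 kJ/mol) shows why the plot is useful: the least-squares slope of ln k against 1/T recovers −Ea/R.

```python
import math

# Synthetic rate constants from an assumed Arrhenius law (illustrative values).
R, A, Ea = 8.314, 1e13, 50_000.0
temps = [300.0, 320.0, 340.0, 360.0]
inv_T = [1.0 / T for T in temps]
ln_k = [math.log(A * math.exp(-Ea / (R * T))) for T in temps]

# Least-squares slope of ln(k) vs 1/T; for exact Arrhenius data it is -Ea/R.
n = len(temps)
mx, my = sum(inv_T) / n, sum(ln_k) / n
slope = sum((x - mx) * (y - my) for x, y in zip(inv_T, ln_k)) \
        / sum((x - mx) ** 2 for x in inv_T)
```

In practice the same fit applied to measured rate constants gives an estimate of the activation energy Ea.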
Dot plot (bioinformatics) : This plot compares two biological sequences and is a graphical method that allows the identification of regions of close similarity between them. It is a kind of recurrence plot.
Lineweaver–Burk plot : This plot compares the reciprocals of reaction rate and substrate concentration. It is used to represent and determine enzyme kinetics.
=== 3D plots ===
== Examples ==
Types of graphs and their uses vary very widely. A few typical examples are:
Simple graph: Supply and demand curves, simple graphs used in economics to relate supply and demand to price. The graphs can be used together to determine the economic equilibrium (essentially, to solve an equation).
Simple graph used for reading values: the bell-shaped normal or Gaussian probability distribution, from which, for example, the probability of a man's height being in a specified range can be derived, given data for the adult male population.
Very complex graph: the psychrometric chart, relating temperature, pressure, humidity, and other quantities.
Non-rectangular coordinates: the above all use two-dimensional rectangular coordinates; an example of a graph using polar coordinates, sometimes in three dimensions, is the antenna radiation pattern chart, which represents the power radiated in all directions by an antenna of specified type.
== See also ==
Chart
Diagram
Graph of a function
Line chart
List of charting software
List of graphical methods
Plotting software
Plotter
List of plotting programs
== References ==
This article incorporates public domain material from the National Institute of Standards and Technology
== External links ==
Dataplot gallery of some useful graphical techniques at itl.nist.gov.
The Method of Mechanical Theorems (Greek: Περὶ μηχανικῶν θεωρημάτων πρὸς Ἐρατοσθένη ἔφοδος), also referred to as The Method, is one of the major surviving works of the ancient Greek polymath Archimedes. The Method takes the form of a letter from Archimedes to Eratosthenes, the chief librarian at the Library of Alexandria, and contains the first attested explicit use of indivisibles (indivisibles are geometric versions of infinitesimals). The work was originally thought to be lost, but in 1906 was rediscovered in the celebrated Archimedes Palimpsest. The palimpsest includes Archimedes' account of the "mechanical method", so called because it relies on the center of weights of figures (centroid) and the law of the lever, which were demonstrated by Archimedes in On the Equilibrium of Planes.
Archimedes did not admit the method of indivisibles as part of rigorous mathematics, and therefore did not publish his method in the formal treatises that contain the results. In these treatises, he proves the same theorems by exhaustion, finding rigorous upper and lower bounds which both converge to the answer required. Nevertheless, the mechanical method was what he used to discover the relations for which he later gave rigorous proofs.
== Area of a parabola ==
Archimedes' idea is to use the law of the lever to determine the areas of figures from the known center of mass of other figures.: 8 The simplest example in modern language is the area of the parabola. A modern approach would be to find this area by calculating the integral
{\displaystyle \int _{0}^{1}x^{2}\,dx={\frac {1}{3}},}
which is an elementary result in integral calculus. Instead, the Archimedean method mechanically balances the parabola (the curved region being integrated above) with a certain triangle that is made of the same material. The parabola is the region in the (x, y) plane between the x-axis and the curve y = x² as x varies from 0 to 1. The triangle is the region in the same plane between the x-axis and the line y = x, also as x varies from 0 to 1.
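The modern integral above is easy to confirm numerically; a midpoint Riemann sum (a Python sketch, helper name ours) converges to 1/3:

```python
def riemann_midpoint(f, a, b, n):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Approximate the integral of x^2 from 0 to 1.
area = riemann_midpoint(lambda x: x * x, 0.0, 1.0, 10_000)
```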
Slice the parabola and triangle into vertical slices, one for each value of x. Imagine that the x-axis is a lever, with a fulcrum at x = 0. The law of the lever states that two objects on opposite sides of the fulcrum will balance if each has the same torque, where an object's torque equals its weight times its distance to the fulcrum. For each value of x, the slice of the triangle at position x has a mass equal to its height x, and is at a distance x from the fulcrum; so it would balance the corresponding slice of the parabola, of height x², if the latter were moved to x = −1, at a distance of 1 on the other side of the fulcrum.
Since each pair of slices balances, moving the whole parabola to x = −1 would balance the whole triangle. This means that if the original uncut parabola is hung by a hook from the point x = −1 (so that the whole mass of the parabola is attached to that point), it will balance the triangle sitting between x = 0 and x = 1.
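The balance can be checked with a discrete slice sum (a Python sketch of the argument, not Archimedes' own computation): the total torque of the triangle slices left in place equals the total mass of the parabola hung at distance 1.

```python
n = 100_000
h = 1.0 / n

# Torque of each triangle slice left in place: mass (height) x at distance x.
triangle_torque = sum(((i + 0.5) * h) * ((i + 0.5) * h) * h for i in range(n))

# Torque of the parabola hung at x = -1: its total mass (area) times distance 1.
parabola_torque = sum(((i + 0.5) * h) ** 2 * h for i in range(n)) * 1.0
```

Both sums approximate 1/3 and agree with each other, as the slice-by-slice balance predicts.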
The center of mass of a triangle can be easily found by the following method, also due to Archimedes.: 14 If a median line is drawn from any one of the vertices of a triangle to the opposite edge E, the triangle will balance on the median, considered as a fulcrum. The reason is that if the triangle is divided into infinitesimal line segments parallel to E, each segment has equal length on opposite sides of the median, so balance follows by symmetry. This argument can be easily made rigorous by exhaustion by using little rectangles instead of infinitesimal lines, and this is what Archimedes does in On the Equilibrium of Planes.
So the center of mass of a triangle must be at the intersection point of the medians. For the triangle in question, one median is the line y = x/2, while a second median is the line y = 1 − x. Solving these equations, we see that the intersection of these two medians is above the point x = 2/3, so that the total effect of the triangle on the lever is as if the total mass of the triangle were pushing down on (or hanging from) this point. The total torque exerted by the triangle is its area, 1/2, times the distance 2/3 of its center of mass from the fulcrum at x = 0. This torque of 1/3 balances the parabola, which is at a distance 1 from the fulcrum. Hence, the area of the parabola must be 1/3 to give it the opposite torque.
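The arithmetic in this paragraph can be verified exactly with rational numbers (a sketch using Python's `fractions` module):

```python
from fractions import Fraction

# Intersection of the medians y = x/2 and y = 1 - x: solve x/2 = 1 - x.
centroid_x = Fraction(1) / (Fraction(1, 2) + 1)   # x = 1 / (3/2) = 2/3
torque = Fraction(1, 2) * centroid_x              # area 1/2 times lever arm 2/3
```

The torque comes out to exactly 1/3, which is therefore the area of the parabola.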
This type of method can be used to find the area of an arbitrary section of a parabola, and similar arguments can be used to find the integral of any power of x, although higher powers become complicated without algebra. Archimedes only went as far as the integral of x³, which he used to find the center of mass of a hemisphere, and in other work, the center of mass of a parabola.
== First proposition in the palimpsest ==
Consider the parabola in the figure to the right. Pick two points on the parabola and call them A and B.
Suppose the line segment AC is parallel to the axis of symmetry of the parabola. Further suppose that the line segment BC lies on a line that is tangent to the parabola at B.
The first proposition states:: 14
The area of the triangle ABC is exactly three times the area bounded by the parabola and the secant line AB.
Proof:: 15–18
Let D be the midpoint of AC. Construct a line segment JB through D, where the distance from J to D is equal to the distance from B to D. We will think of the segment JB as a "lever" with D as its fulcrum. As Archimedes had previously shown, the center of mass of the triangle is at the point I on the "lever" where DI :DB = 1:3. Therefore, it suffices to show that if the whole weight of the interior of the triangle rests at I, and the whole weight of the section of the parabola at J, the lever is in equilibrium.
Consider an infinitely small cross-section of the triangle given by the segment HE, where the point H lies on BC, the point E lies on AB, and HE is parallel to the axis of symmetry of the parabola. Call the intersection of HE and the parabola F and the intersection of HE and the lever G. If the weight of all such segments HE rest at the points G where they intersect the lever, then they exert the same torque on the lever as does the whole weight of the triangle resting at I. Thus, we wish to show that if the weight of the cross-section HE rests at G and the weight of the cross-section EF of the section of the parabola rests at J, then the lever is in equilibrium. In other words, it suffices to show that EF :GD = EH :JD. But that is a routine consequence of the equation of the parabola. Q.E.D.
== Volume of a sphere ==
Again, to illuminate the mechanical method, it is convenient to use a little bit of coordinate geometry. If a sphere of radius 1 is placed with its center at x = 1, the vertical cross sectional radius ρS at any x between 0 and 2 is given by the following formula:
{\displaystyle \rho _{S}(x)={\sqrt {x(2-x)}}.}
The mass of this cross section, for purposes of balancing on a lever, is proportional to the area:
{\displaystyle \pi \rho _{S}(x)^{2}=2\pi x-\pi x^{2}.}
Archimedes then considered rotating the triangular region between y = 0 and y = x and x = 2 on the x-y plane around the x-axis, to form a cone.: 18–21 The cross section of this cone is a circle of radius ρC:
{\displaystyle \rho _{C}(x)=x}
and the area of this cross section is
{\displaystyle \pi \rho _{C}^{2}=\pi x^{2}.}
So if slices of the cone and the sphere both are to be weighed together, the combined cross-sectional area is:
{\displaystyle M(x)=2\pi x.}
If the two slices are placed together at distance 1 from the fulcrum, their total weight would be exactly balanced by a circle of area 2π at a distance x from the fulcrum on the other side. This means that the cone and the sphere together, if all their material were moved to x = 1, would balance a cylinder of cross-sectional area 2π and length 2 on the other side.
As x ranges from 0 to 2, the cylinder will have a center of gravity a distance 1 from the fulcrum, so all the weight of the cylinder can be considered to be at position 1. The condition of balance ensures that the volume of the cone plus the volume of the sphere is equal to the volume of the cylinder.
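Numerically, this balance condition is easy to confirm by summing the slices described above (a Python sketch; midpoint sums over 0 ≤ x ≤ 2):

```python
import math

n = 100_000
h = 2.0 / n
sphere_vol = 0.0
cone_vol = 0.0
for i in range(n):
    x = (i + 0.5) * h
    sphere_vol += math.pi * x * (2.0 - x) * h   # slice area pi*rho_S(x)^2
    cone_vol += math.pi * x * x * h             # slice area pi*rho_C(x)^2

cylinder_vol = 2.0 * math.pi * 2.0              # cross-section 2*pi times length 2
```

The sums give 4π/3 and 8π/3, and their total matches the cylinder's volume 4π.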
The volume of the cylinder is the cross section area, 2π, times the height, which is 2, or 4π. Archimedes could also find the volume of the cone using the mechanical method, since, in modern terms, the integral involved is exactly the same as the one for the area of the parabola. The volume of the cone is 1/3 its base area times the height. The base of the cone is a circle of radius 2, with area 4π, while the height is 2, so the volume is 8π/3. Subtracting the volume of the cone from the volume of the cylinder gives the volume of the sphere:
{\displaystyle V_{S}=4\pi -{8 \over 3}\pi ={4 \over 3}\pi .}
The dependence of the volume of the sphere on the radius is obvious from scaling, although that also was not trivial to make rigorous back then. The method then gives the familiar formula for the volume of a sphere. By scaling the dimensions linearly Archimedes easily extended the volume result to spheroids.: 21-23
Archimedes' argument is nearly identical to the argument above, but his cylinder had a bigger radius, so that the cone and the cylinder hung at a greater distance from the fulcrum. He considered this argument to be his greatest achievement, requesting that the accompanying figure of the balanced sphere, cone, and cylinder be engraved upon his tombstone.
== Surface area of a sphere ==
To find the surface area of the sphere, Archimedes argued that just as the area of the circle could be thought of as infinitely many infinitesimal right triangles going around the circumference (see Measurement of the Circle), the volume of the sphere could be thought of as divided into many cones with height equal to the radius and base on the surface. The cones all have the same height, so their volume is 1/3 the base area times the height.
Archimedes states that the total volume of the sphere is equal to the volume of a cone whose base has the same surface area as the sphere and whose height is the radius.: 20-21 There are no details given for the argument, but the obvious reason is that the cone can be divided into infinitesimal cones by splitting the base area up, and then each cone makes a contribution according to its base area, just the same as in the sphere.
Let the surface of the sphere be S. The volume of the cone with base area S and height r is Sr/3, which must equal the volume of the sphere: 4πr³/3. Therefore, the surface area of the sphere must be 4πr², or "four times its largest circle". Archimedes proves this rigorously in On the Sphere and Cylinder.
== Curvilinear shapes with rational volumes ==
One of the remarkable things about the Method is that Archimedes finds two shapes defined by sections of cylinders, whose volume does not involve π, despite the shapes having curvilinear boundaries. This is a central point of the investigation—certain curvilinear shapes could be rectified by ruler and compass, so that there are nontrivial rational relations between the volumes defined by the intersections of geometrical solids.
Archimedes emphasizes this in the beginning of the treatise, and invites the reader to try to reproduce the results by some other method. Unlike the other examples, the volume of these shapes is not rigorously computed in any of his other works. From fragments in the palimpsest, it appears that Archimedes did inscribe and circumscribe shapes to prove rigorous bounds for the volume, although the details have not been preserved.
The two shapes he considers are the intersection of two cylinders at right angles (the bicylinder), which is the region of (x, y, z) obeying:
{\displaystyle x^{2}+y^{2}<1,\quad y^{2}+z^{2}<1,}
and the circular prism, which is the region obeying:
{\displaystyle x^{2}+y^{2}<1,\quad 0<z<y.}
Both problems have a slicing which produces an easy integral for the mechanical method. For the circular prism, cut up the x-axis into slices. The region in the y-z plane at any x is the interior of a right triangle of side length √(1 − x²) whose area is (1 − x²)/2, so that the total volume is:
∫₋₁¹ (1 − x²)/2 dx,
which can be easily rectified using the mechanical method. Adding to each triangular section a section of a triangular pyramid with area x²/2 balances a prism whose cross section is constant.
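The prism integral evaluates to the rational value 2/3. A quick numerical check (a modern sketch, not the mechanical method itself):

```python
def prism_volume(n=100_000):
    """Midpoint-rule approximation of the circular prism's volume:
    the integral of (1 - x**2)/2 for x from -1 to 1."""
    dx = 2.0 / n
    return sum((1 - (-1 + (i + 0.5) * dx) ** 2) / 2 * dx for i in range(n))

# The exact value is 2/3 -- a rational number, with no factor of pi.
assert abs(prism_volume() - 2 / 3) < 1e-6
```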
For the intersection of two cylinders, the slicing is lost in the manuscript, but it can be reconstructed in an obvious way in parallel to the rest of the document: if the x-z plane is the slice direction, the equations for the cylinders give that x² < 1 − y² while z² < 1 − y², which defines a region which is a square in the x-z plane of side length 2√(1 − y²), so that the total volume is:
∫₋₁¹ 4(1 − y²) dy.
And this is the same integral as for the previous example. Jan Hogendijk argues that, besides the volume of the bicylinder, Archimedes knew its surface area, which is also rational.
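The bicylinder integral can likewise be checked numerically (a modern sketch; the value 16/3 is for unit cylinders):

```python
def bicylinder_volume(n=100_000):
    """Midpoint-rule approximation of the bicylinder volume: the integral
    of 4*(1 - y**2) for y from -1 to 1, from square cross sections of
    side 2*sqrt(1 - y**2)."""
    dy = 2.0 / n
    return sum(4 * (1 - (-1 + (i + 0.5) * dy) ** 2) * dy for i in range(n))

# 16/3 for unit cylinders: rational, despite the curved boundary.
assert abs(bicylinder_volume() - 16 / 3) < 1e-6
```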
== Other propositions in the palimpsest ==
A series of geometric propositions are proved in the palimpsest by similar arguments. One theorem is that the center of mass of a hemisphere is located 5/8 of the way from the pole to the center of the sphere. This problem is notable because it amounts to evaluating a cubic integral.
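The stated ratio can be verified with a modern slice integral over a unit hemisphere (a verification sketch, not the palimpsest's argument):

```python
import math

def hemisphere_centroid(n=100_000):
    """Centroid height (above the flat face) of a solid unit hemisphere,
    computed from disk slices of area pi*(1 - z**2) at height z."""
    dz = 1.0 / n
    moment = volume = 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        area = math.pi * (1 - z * z)
        moment += z * area * dz
        volume += area * dz
    return moment / volume

zbar = hemisphere_centroid()
# The centroid sits 3/8 above the center, hence 5/8 of the way down from the pole.
assert abs(zbar - 3 / 8) < 1e-6
assert abs((1 - zbar) - 5 / 8) < 1e-6
```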
== See also ==
Archimedes Palimpsest
Method of indivisibles
Method of exhaustion
== References == | Wikipedia/The_Method_of_Mechanical_Theorems |
In mathematics, an identity function, also called an identity relation, identity map or identity transformation, is a function that always returns the value that was used as its argument, unchanged. That is, when f is the identity function, the equality f(x) = x is true for all values of x to which f can be applied.
== Definition ==
Formally, if X is a set, the identity function f on X is defined to be a function with X as its domain and codomain, satisfying f(x) = x for all elements x in X.
In other words, the function value f(x) in the codomain X is always the same as the input element x in the domain X. The identity function on X is clearly an injective function as well as a surjective function (its codomain is also its range), so it is bijective.
The identity function f on X is often denoted by idX.
In set theory, where a function is defined as a particular kind of binary relation, the identity function is given by the identity relation, or diagonal of X.
== Algebraic properties ==
If f : X → Y is any function, then f ∘ idX = f = idY ∘ f, where "∘" denotes function composition. In particular, idX is the identity element of the monoid of all functions from X to X (under function composition).
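These identities are easy to exhibit concretely; the following sketch (function names are my own) checks f ∘ idX = f = idY ∘ f for a sample f:

```python
def identity(x):
    """The identity function: returns its argument unchanged."""
    return x

def compose(f, g):
    """Function composition: (f o g)(x) = f(g(x))."""
    return lambda x: f(g(x))

square = lambda x: x * x
for x in [-2, 0, 3.5]:
    # f o id == f == id o f
    assert compose(square, identity)(x) == square(x) == compose(identity, square)(x)
```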
Since the identity element of a monoid is unique, one can alternately define the identity function on a monoid M to be this identity element. Such a definition generalizes to the concept of an identity morphism in category theory, where the endomorphisms of M need not be functions.
== Properties ==
The identity function on a vector space is a linear operator.
In an n-dimensional vector space the identity function is represented by the identity matrix In, regardless of the basis chosen for the space.
The identity function on the positive integers is a completely multiplicative function (essentially multiplication by 1), considered in number theory.
In a metric space the identity function is trivially an isometry. An object without any symmetry has as its symmetry group the trivial group containing only this isometry (symmetry type C1).
In a topological space, the identity function is always continuous.
The identity function is idempotent.
== See also ==
Identity matrix
Inclusion map
Indicator function
== References == | Wikipedia/Identity_function |
A photograph (also known as a photo, or more generically referred to as an image or picture) is an image created by light falling on a photosensitive surface, usually photographic film or an electronic image sensor. The process and practice of creating such images is called photography.
Most photographs are now created using a smartphone or camera, which uses a lens to focus the scene's visible wavelengths of light into a reproduction of what the human eye would perceive.
== Etymology ==
The word photograph was coined in 1839 by Sir John Herschel and is based on the Greek φῶς (phos), meaning "light", and γραφή (graphê), meaning "drawing, writing", together meaning "drawing with light".
== History ==
The first permanent photograph, a contact-exposed copy of an engraving, was made in 1822 using the bitumen-based "heliography" process developed by Nicéphore Niépce. The first photographs of a real-world scene, made using a camera obscura, followed a few years later at Le Gras, France, in 1826, but Niépce's process was not sensitive enough to be practical for that application: a camera exposure lasting for hours or days was required. In 1829, Niépce entered into a partnership with Louis Daguerre, and the two collaborated to work out a similar, but more sensitive, and otherwise improved process.
After Niépce's death in 1833, Daguerre concentrated on silver halide-based alternatives. He exposed a silver-plated copper sheet to iodine vapor, creating a layer of light-sensitive silver iodide; exposed it in the camera for a few minutes; developed the resulting invisible latent image to visibility with mercury fumes; then bathed the plate in a hot salt solution to remove the remaining silver iodide, making the results light-fast. He named this first practical process for making photographs with a camera, the daguerreotype, after himself. Its existence was announced to the world on 7 January 1839, but working details were not made public until 19 August that year. Other inventors soon made drastic improvements that reduced the required amount of exposure time from a few minutes to just a few seconds, making portrait photography truly practical and widely popular during this time.
The daguerreotype had shortcomings, notably the fragility of the mirror-like image surface and the particular viewing conditions required to see the image properly. Each was a unique, opaque positive that could only be duplicated by copying it with a camera. Inventors set about working out improved processes that would be more practical. By the end of the 1850s, the daguerreotype had been replaced by the less expensive and more easily viewed ambrotype and tintype, which made use of the recently introduced collodion process. Glass plate collodion negatives used to make prints on albumen paper soon became the preferred photographic method and held that position for many years, even after the introduction of the more convenient gelatin process in 1871. Refinements of the gelatin process have remained the primary black-and-white photographic process to this day, differing primarily in the sensitivity of the emulsion and the support material used, which was originally glass, then a variety of flexible plastic films, along with various types of paper for the final prints.
Color photography is almost as old as black-and-white, with early experiments including John Herschel's Anthotype prints in 1842, the pioneering work of Louis Ducos du Hauron in the 1860s, and the Lippmann process unveiled in 1891, but for many years color photography remained little more than a laboratory curiosity. It first became a widespread commercial reality with the introduction of Autochrome plates in 1907, but the plates were very expensive and not suitable for casual snapshot-taking with hand-held cameras. The mid-1930s saw the introduction of Kodachrome and Agfacolor Neu, the first easy-to-use color films of the modern multi-layer chromogenic type. These early processes produced transparencies for use in slide projectors and viewing devices, but color prints became increasingly popular after the introduction of chromogenic color print paper in the 1940s. The needs of the motion picture industry generated a number of special processes and systems, perhaps the best-known being the now-obsolete three-strip Technicolor process.
== Types of photographs ==
Non-digital photographs are produced with a two-step chemical process. In the two-step process, the light-sensitive film captures a negative image (colors and lights/darks are inverted). To produce a positive image, the negative is most commonly transferred ('printed') onto photographic paper. Printing the negative onto transparent film stock is used to manufacture motion picture films.
Alternatively, the film is processed to invert the negative image, yielding a positive transparency. Such positive images are usually mounted in frames, called slides. Before recent advances in digital photography, transparencies were widely used by professionals because of their sharpness and accuracy of color rendition. Most photographs published in magazines were taken on color transparency film.
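The tonal inversion at the heart of the negative/positive process has a direct digital analogue. The sketch below (an illustration only, not a model of film chemistry) inverts 8-bit grayscale values:

```python
def invert_8bit(pixels):
    """Invert an 8-bit grayscale image: lights and darks swap, just as
    in a photographic negative. Applying it twice recovers the original."""
    return [255 - v for v in pixels]

scanline = [0, 64, 128, 255]
negative = invert_8bit(scanline)
assert negative == [255, 191, 127, 0]
assert invert_8bit(negative) == scanline   # "printing" the negative restores the positive
```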
Originally, all photographs were monochromatic or hand-painted in color. Although methods for developing color photos were available as early as 1861, they did not become widely available until the 1940s or 1950s, and even so, until the 1960s, most photographs were taken in black and white. Since then, color photography has dominated popular photography, although black-and-white is still used, being easier to develop than color.
Panoramic format images can be taken with cameras like the Hasselblad Xpan on standard film. Since the 1990s, panoramic photos have been available on the Advanced Photo System (APS) film. APS was developed by several of the major film manufacturers to provide a film with different formats and computerized options available, though APS panoramas were created using a mask in panorama-capable cameras, far less desirable than a true panoramic camera, which achieves its effect through a wider film format. APS has become less popular and has been discontinued.
The advent of the microcomputer and digital photography has led to the rise of digital prints. These prints are created from stored graphic formats such as JPEG, TIFF, and RAW. The types of printers used include inkjet printers, dye-sublimation printers, laser printers, and thermal printers. Inkjet prints are sometimes given the coined name "Giclée".
The Web has been a popular medium for storing and sharing photos ever since the first photograph was published on the web by Tim Berners-Lee in 1992 (an image of the CERN house band Les Horribles Cernettes). Today, popular sites such as Flickr, PhotoBucket, and 500px are used by millions of people to share their pictures.
The first "selfie", or photographic self-portrait, was taken by Robert Cornelius in 1839. Selfies have since become one of the most common kinds of photograph, especially among young female adults. Photography, and the selfie in particular, has been central to the rise of social media, where a single selfie of a favorite celebrity can receive millions of likes.
== Preservation ==
=== Paper folders ===
Ideal photograph storage involves placing each photo in an individual folder constructed from buffered or acid-free paper. Buffered paper folders are especially recommended when a photograph was previously mounted onto poor-quality material or with an adhesive that will lead to even more acid creation. Store photographs measuring 8x10 inches or smaller vertically along the longer edge of the photo in the buffered paper folder, within a larger archival box, and label each folder with relevant information to identify it. The rigid nature of the folder protects the photo from slumping or creasing, as long as the box is not packed too tightly or underfilled. Store larger or brittle photos flat within archival boxes, stacked with other materials of comparable size.
=== Polyester enclosures ===
The most stable of plastics used in photo preservation, polyester, does not generate any harmful chemical elements, nor does it have any capability to absorb acids generated by the photograph itself. Polyester sleeves and encapsulation have been praised for their ability to protect the photograph from humidity and environmental pollution, slowing the reaction between the item and the atmosphere. This is true; however, the polyester just as frequently traps these elements next to the material it is intended to protect. This is especially risky in a storage environment that experiences drastic fluctuations in humidity or temperature, leading to ferrotyping, or sticking of the photograph to the plastic. Photographs sleeved or encapsulated in polyester cannot be stored vertically in boxes because they will slide down next to each other within the box, bending and folding, nor can the archivist write directly onto the polyester to identify the photograph. Therefore, it is necessary to either stack polyester-protected photographs horizontally within a box, or bind them in a three-ring binder. Stacking the photos horizontally within a flat box will greatly reduce ease of access, and binders leave three sides of the photo exposed to the effects of light and do not support the photograph evenly on both sides, leading to slumping and bending within the binder. The plastic used for enclosures has been manufactured to be as frictionless as possible to prevent scratching photos during insertion to the sleeves. Unfortunately, the slippery nature of the enclosure generates a build-up of static electricity, which attracts dust and lint particles. The static can attract the dust to the inside of the sleeve, as well, where it can scratch the photograph. Likewise, these components that aid in insertion of the photo, referred to as slip agents, can break down and transfer from the plastic to the photograph, where they deposit as an oily film, attracting further lint and dust.
At this time, there is no test to evaluate the long-term effects of these components on photographs. In addition, the plastic sleeves can develop kinks or creases in the surface, which will scratch away at the emulsion during handling.
=== Handling and care ===
It is best to leave photographs lying flat on the table when viewing them. Do not pick it up from a corner, or even from two sides and hold it at eye level. Every time the photograph bends, even a little, this can break down the emulsion. The very nature of enclosing a photograph in plastic encourages users to pick it up; users tend to handle plastic enclosed photographs less gently than non-enclosed photographs, simply because they feel the plastic enclosure makes the photo impervious to all mishandling. As long as a photo is in its folder, there is no need to touch it; simply remove the folder from the box, lay it flat on the table, and open the folder. If for some reason the researchers or archivists do need to handle the actual photo, perhaps to examine the verso for writing, they can use gloves if there appears to be a risk from oils or dirt on the hands.
== Myths and beliefs ==
Because daguerreotypes were rendered on a mirrored surface, many spiritualists also became practitioners of the new art form. Spiritualists would claim that the human image on the mirrored surface was akin to looking into one's soul. The spiritualists also believed that it would open their souls and let demons in. Among some Muslims, it is makruh (disliked) to perform salah (worship) in a place decorated with photographs. Photography and darkroom anomalies and artifacts sometimes lead viewers to believe that spirits or demons have been captured in photos. Some have made a career out of taking pictures of "ghosts" or "spirits". There are many instances where people believe photos will bring bad luck either to the person taking the picture or people captured in the photo. For instance, a photograph taken of a pregnant woman will bring bad luck to the baby in the womb and photos taken of dead people will ensure that person is not successful in the afterlife.
== Legality ==
The production or distribution of certain types of photograph has been forbidden under modern laws, such as those of government buildings, highly classified regions, private property, copyrighted works, children's genitalia, child pornography and less commonly pornography overall. These laws vary greatly between jurisdictions.
In some public property owned by government, such as law courts, government buildings, libraries, civic centres and some of the museums in Hong Kong, photography is not allowed without permission from the government. It is also illegal to take photographs or make recordings in a place of public entertainment, such as cinemas and indoor theaters. In Hungary, from 15 March 2014, when the long-awaited new Civil Code was published, the law re-stated what had been normal practice: a person has the right to refuse to be photographed. However, implied consent exists: it is not illegal to photograph a person who does not actively object.
In South Africa photographing people in public is legal. Reproducing and selling photographs of people is legal for editorial and limited fair use commercial purposes. There exists no case law to define what the limits on commercial use are. In the United Kingdom there are no laws forbidding photography of private property from a public place. Persistent and aggressive photography of a single individual may come under the legal definition of harassment. A right to privacy came into existence in UK law as a consequence of the incorporation of the European Convention on Human Rights into domestic law through the Human Rights Act 1998. This can result in restrictions on the publication of photography.
== See also ==
Aerial photography
Archival science
Cinematographer
Conservation and restoration of photographs
Hand-colouring of photographs
List of largest photographs
List of most expensive photographs
List of photographs considered the most important
Photogram
Pseudo-photograph
Slide show
== References ==
== External links ==
Media related to Photographs at Wikimedia Commons
The dictionary definition of photograph at Wiktionary | Wikipedia/Photograph |
Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is helpful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics, and applied mathematics, as well as in physics, including the branches of hydrodynamics, thermodynamics, quantum mechanics, and twistor theory. By extension, use of complex analysis also has applications in engineering fields such as nuclear, aerospace, mechanical and electrical engineering.
As a differentiable function of a complex variable is equal to the sum function given by its Taylor series (that is, it is analytic), complex analysis is particularly concerned with analytic functions of a complex variable, that is, holomorphic functions.
The concept can be extended to functions of several complex variables.
Complex analysis is contrasted with real analysis, which deals with the study of real numbers and functions of a real variable.
== History ==
Complex analysis is one of the classical branches in mathematics, with roots in the 18th century and just prior. Important mathematicians associated with complex numbers include Euler, Gauss, Riemann, Cauchy, Weierstrass, and many more in the 20th century. Complex analysis, in particular the theory of conformal mappings, has many physical applications and is also used throughout analytic number theory. In modern times, it has become very popular through a new boost from complex dynamics and the pictures of fractals produced by iterating holomorphic functions. Another important application of complex analysis is in string theory which examines conformal invariants in quantum field theory.
== Complex functions ==
A complex function is a function from complex numbers to complex numbers. In other words, it is a function that has a (not necessarily proper) subset of the complex numbers as a domain and the complex numbers as a codomain. Complex functions are generally assumed to have a domain that contains a nonempty open subset of the complex plane.
For any complex function, the values
z
{\displaystyle z}
from the domain and their images
f
(
z
)
{\displaystyle f(z)}
in the range may be separated into real and imaginary parts:
z
=
x
+
i
y
and
f
(
z
)
=
f
(
x
+
i
y
)
=
u
(
x
,
y
)
+
i
v
(
x
,
y
)
,
{\displaystyle z=x+iy\quad {\text{ and }}\quad f(z)=f(x+iy)=u(x,y)+iv(x,y),}
where
x
,
y
,
u
(
x
,
y
)
,
v
(
x
,
y
)
{\displaystyle x,y,u(x,y),v(x,y)}
are all real-valued.
In other words, a complex function f : ℂ → ℂ may be decomposed into
u : ℝ² → ℝ and v : ℝ² → ℝ,
i.e., into two real-valued functions (u, v) of two real variables (x, y).
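As a concrete instance of this decomposition (my own example, not from the text), take f(z) = z², for which u(x, y) = x² − y² and v(x, y) = 2xy:

```python
def u(x, y):
    """Real part of f(z) = z**2: u(x, y) = x**2 - y**2."""
    return x * x - y * y

def v(x, y):
    """Imaginary part of f(z) = z**2: v(x, y) = 2*x*y."""
    return 2 * x * y

z = complex(1.5, -0.7)
f = z ** 2
# f(z) = u(x, y) + i*v(x, y) with z = x + i*y
assert abs(f.real - u(z.real, z.imag)) < 1e-12
assert abs(f.imag - v(z.real, z.imag)) < 1e-12
```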
Similarly, any complex-valued function f on an arbitrary set X can be considered as an ordered pair of two real-valued functions, (Re f, Im f), or, alternatively, as a vector-valued function from X into ℝ².
Some properties of complex-valued functions (such as continuity) are nothing more than the corresponding properties of vector-valued functions of two real variables. Other concepts of complex analysis, such as differentiability, are direct generalizations of the similar concepts for real functions, but may have very different properties. In particular, every differentiable complex function is analytic (see next section), and two differentiable functions that are equal in a neighborhood of a point are equal on the intersection of their domains (if the domains are connected). The latter property is the basis of the principle of analytic continuation, which allows extending every real analytic function in a unique way to a complex analytic function whose domain is the whole complex plane with a finite number of curve arcs removed. Many basic and special complex functions are defined in this way, including the complex exponential function, complex logarithm functions, and trigonometric functions.
== Holomorphic functions ==
Complex functions that are differentiable at every point of an open subset Ω of the complex plane are said to be holomorphic on Ω. In the context of complex analysis, the derivative of f at z₀ is defined to be
f′(z₀) = lim_{z→z₀} (f(z) − f(z₀)) / (z − z₀).
Superficially, this definition is formally analogous to that of the derivative of a real function. However, complex derivatives and differentiable functions behave in significantly different ways compared to their real counterparts. In particular, for this limit to exist, the value of the difference quotient must approach the same complex number, regardless of the manner in which we approach z₀ in the complex plane. Consequently, complex differentiability has much stronger implications than real differentiability. For instance, holomorphic functions are infinitely differentiable, whereas the existence of the nth derivative need not imply the existence of the (n + 1)th derivative for real functions. Furthermore, all holomorphic functions satisfy the stronger condition of analyticity, meaning that the function is, at every point in its domain, locally given by a convergent power series. In essence, this means that functions holomorphic on Ω can be approximated arbitrarily well by polynomials in some neighborhood of every point in Ω. This stands in sharp contrast to differentiable real functions; there are infinitely differentiable real functions that are nowhere analytic; see Non-analytic smooth function § A smooth function which is nowhere real analytic.
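The direction-independence of the limit can be probed numerically. In the sketch below (finite difference quotients; the function names and tolerances are my choices), the quotient of z² settles to 2z₀ from several directions, while that of the non-holomorphic conjugate map does not:

```python
def diff_quotient(f, z0, h):
    """Difference quotient (f(z0 + h) - f(z0)) / h for a complex step h."""
    return (f(z0 + h) - f(z0)) / h

z0 = complex(1.0, 2.0)
directions = [1, 1j, (1 + 1j) / abs(1 + 1j)]   # approach along several rays
eps = 1e-6

# f(z) = z**2 is holomorphic: the quotient tends to 2*z0 from every direction.
quots = [diff_quotient(lambda z: z * z, z0, eps * d) for d in directions]
assert all(abs(q - 2 * z0) < 1e-5 for q in quots)

# f(z) = conj(z) is not: the quotient equals conj(h)/h, which depends on direction.
quots = [diff_quotient(lambda z: z.conjugate(), z0, eps * d) for d in directions]
assert abs(quots[0] - quots[1]) > 1.0   # 1 along the real axis vs -1 along the imaginary axis
```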
Most elementary functions, including the exponential function, the trigonometric functions, and all polynomial functions, extended appropriately to complex arguments as functions ℂ → ℂ, are holomorphic over the entire complex plane, making them entire functions, while rational functions p/q, where p and q are polynomials, are holomorphic on domains that exclude points where q is zero. Such functions that are holomorphic everywhere except a set of isolated points are known as meromorphic functions. On the other hand, the functions z ↦ Re(z), z ↦ |z|, and z ↦ z̄ are not holomorphic anywhere on the complex plane, as can be shown by their failure to satisfy the Cauchy–Riemann conditions (see below).
An important property of holomorphic functions is the relationship between the partial derivatives of their real and imaginary components, known as the Cauchy–Riemann conditions. If f : ℂ → ℂ, defined by f(z) = f(x + iy) = u(x, y) + iv(x, y), where x, y, u(x, y), v(x, y) ∈ ℝ, is holomorphic on a region Ω, then for all z₀ ∈ Ω,
∂f/∂z̄ (z₀) = 0, where ∂/∂z̄ := (1/2)(∂/∂x + i ∂/∂y).
In terms of the real and imaginary parts of the function, u and v, this is equivalent to the pair of equations u_x = v_y and u_y = −v_x, where the subscripts indicate partial differentiation. However, the Cauchy–Riemann conditions do not characterize holomorphic functions without additional continuity conditions (see Looman–Menchoff theorem).
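A finite-difference check of the pair u_x = v_y and u_y = −v_x (my own sketch, using f(z) = z² and the non-holomorphic f(z) = z̄):

```python
def partials(g, x, y, h=1e-6):
    """Central-difference approximations of dg/dx and dg/dy."""
    gx = (g(x + h, y) - g(x - h, y)) / (2 * h)
    gy = (g(x, y + h) - g(x, y - h)) / (2 * h)
    return gx, gy

# f(z) = z**2: u = x**2 - y**2, v = 2*x*y.
u = lambda x, y: x * x - y * y
v = lambda x, y: 2 * x * y

x0, y0 = 0.3, -1.2
ux, uy = partials(u, x0, y0)
vx, vy = partials(v, x0, y0)
assert abs(ux - vy) < 1e-6 and abs(uy + vx) < 1e-6   # Cauchy-Riemann holds

# f(z) = conj(z): u = x, v = -y violates the first equation (1 != -1).
ux, uy = partials(lambda x, y: x, x0, y0)
vx, vy = partials(lambda x, y: -y, x0, y0)
assert abs(ux - vy) > 1.0
```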
Holomorphic functions exhibit some remarkable features. For instance, Picard's theorem asserts that the range of an entire function can take only three possible forms: ℂ, ℂ ∖ {z₀}, or {z₀} for some z₀ ∈ ℂ. In other words, if two distinct complex numbers z and w are not in the range of an entire function f, then f is a constant function. Moreover, a holomorphic function on a connected open set is determined by its restriction to any nonempty open subset.
== Conformal map ==
== Major results ==
One of the central tools in complex analysis is the line integral. The line integral around a closed path of a function that is holomorphic everywhere inside the area bounded by the closed path is always zero, as is stated by the Cauchy integral theorem. The values of such a holomorphic function inside a disk can be computed by a path integral on the disk's boundary (as shown in Cauchy's integral formula). Path integrals in the complex plane are often used to determine complicated real integrals, and here the theory of residues among others is applicable (see methods of contour integration). A "pole" (or isolated singularity) of a function is a point where the function's value becomes unbounded, or "blows up". If a function has such a pole, then one can compute the function's residue there, which can be used to compute path integrals involving the function; this is the content of the powerful residue theorem. The remarkable behavior of holomorphic functions near essential singularities is described by Picard's theorem. Functions that have only poles but no essential singularities are called meromorphic. Laurent series are the complex-valued equivalent to Taylor series, but can be used to study the behavior of functions near singularities through infinite sums of more well understood functions, such as polynomials.
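Two of these results, the Cauchy integral theorem and the residue theorem, can be illustrated numerically by parametrizing the unit circle (a sketch; the midpoint rule and tolerances are my choices):

```python
import cmath

def contour_integral(f, n=100_000):
    """Integrate f around the unit circle z = exp(i*t), dz = i*exp(i*t) dt,
    using the midpoint rule on t in [0, 2*pi]."""
    dt = 2 * cmath.pi / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * dt
        z = cmath.exp(1j * t)
        total += f(z) * 1j * z * dt   # f(z) dz with dz = i*exp(i*t) dt
    return total

# 1/z has a pole at 0 with residue 1: the integral is 2*pi*i (residue theorem).
assert abs(contour_integral(lambda z: 1 / z) - 2j * cmath.pi) < 1e-9
# z**2 is holomorphic inside the contour: the integral vanishes (Cauchy integral theorem).
assert abs(contour_integral(lambda z: z * z)) < 1e-9
```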
A bounded function that is holomorphic in the entire complex plane must be constant; this is Liouville's theorem. It can be used to provide a natural and short proof for the fundamental theorem of algebra which states that the field of complex numbers is algebraically closed.
If a function is holomorphic throughout a connected domain then its values are fully determined by its values on any smaller subdomain. The function on the larger domain is said to be analytically continued from its values on the smaller domain. This allows the extension of the definition of functions, such as the Riemann zeta function, which are initially defined in terms of infinite sums that converge only on limited domains to almost the entire complex plane. Sometimes, as in the case of the natural logarithm, it is impossible to analytically continue a holomorphic function to a non-simply connected domain in the complex plane but it is possible to extend it to a holomorphic function on a closely related surface known as a Riemann surface.
Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is helpful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics, and applied mathematics, as well as in physics, including the branches of hydrodynamics, thermodynamics, quantum mechanics, and twistor theory. By extension, use of complex analysis also has applications in engineering fields such as nuclear, aerospace, mechanical and electrical engineering.
As a differentiable function of a complex variable is equal to the sum function given by its Taylor series (that is, it is analytic), complex analysis is particularly concerned with analytic functions of a complex variable, that is, holomorphic functions.
The concept can be extended to functions of several complex variables.
Complex analysis is contrasted with real analysis, which deals with the study of real numbers and functions of a real variable.
== History ==
Complex analysis is one of the classical branches in mathematics, with roots in the 18th century and just prior. Important mathematicians associated with complex numbers include Euler, Gauss, Riemann, Cauchy, Weierstrass, and many more in the 20th century. Complex analysis, in particular the theory of conformal mappings, has many physical applications and is also used throughout analytic number theory. In modern times, it has become very popular through a new boost from complex dynamics and the pictures of fractals produced by iterating holomorphic functions. Another important application of complex analysis is in string theory which examines conformal invariants in quantum field theory.
== Complex functions ==
A complex function is a function from complex numbers to complex numbers. In other words, it is a function that has a (not necessarily proper) subset of the complex numbers as a domain and the complex numbers as a codomain. Complex functions are generally assumed to have a domain that contains a nonempty open subset of the complex plane.
For any complex function, the values z from the domain and their images f(z) in the range may be separated into real and imaginary parts:

z = x + iy and f(z) = f(x + iy) = u(x, y) + iv(x, y),

where x, y, u(x, y), and v(x, y) are all real-valued.
In other words, a complex function f : ℂ → ℂ may be decomposed into u : ℝ² → ℝ and v : ℝ² → ℝ, i.e., into two real-valued functions (u, v) of two real variables (x, y).
Similarly, any complex-valued function f on an arbitrary set X can be considered as an ordered pair of two real-valued functions, (Re f, Im f), or, alternatively, as a vector-valued function from X into ℝ².
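This decomposition of a complex function into its real-valued components can be sketched in code; a minimal illustration in which the function name `decompose` is ours, not a standard library API:

```python
def decompose(f):
    """Split a complex function f into its real-part and imaginary-part
    components u(x, y) and v(x, y), viewed as functions of two real variables."""
    def u(x, y):
        return f(complex(x, y)).real
    def v(x, y):
        return f(complex(x, y)).imag
    return u, v

# Example: f(z) = z**2 gives u(x, y) = x**2 - y**2 and v(x, y) = 2xy.
u, v = decompose(lambda z: z * z)
print(u(3.0, 2.0))  # 3**2 - 2**2 = 5.0
print(v(3.0, 2.0))  # 2*3*2 = 12.0
```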
Some properties of complex-valued functions (such as continuity) are nothing more than the corresponding properties of vector-valued functions of two real variables. Other concepts of complex analysis, such as differentiability, are direct generalizations of the similar concepts for real functions, but may have very different properties. In particular, every differentiable complex function is analytic (see next section), and two differentiable functions that are equal in a neighborhood of a point are equal on the intersection of their domains (if the domains are connected). The latter property is the basis of the principle of analytic continuation, which allows every real analytic function to be extended in a unique way to a complex analytic function whose domain is the whole complex plane with a finite number of curve arcs removed. Many basic and special complex functions are defined in this way, including the complex exponential function, complex logarithm functions, and trigonometric functions.
== Holomorphic functions ==
Complex functions that are differentiable at every point of an open subset Ω of the complex plane are said to be holomorphic on Ω. In the context of complex analysis, the derivative of f at z₀ is defined to be

f′(z₀) = lim_{z→z₀} [f(z) − f(z₀)] / (z − z₀).
Superficially, this definition is formally analogous to that of the derivative of a real function. However, complex derivatives and differentiable functions behave in significantly different ways compared to their real counterparts. In particular, for this limit to exist, the value of the difference quotient must approach the same complex number, regardless of the manner in which we approach z₀ in the complex plane. Consequently, complex differentiability has much stronger implications than real differentiability. For instance, holomorphic functions are infinitely differentiable, whereas the existence of the nth derivative need not imply the existence of the (n + 1)th derivative for real functions. Furthermore, all holomorphic functions satisfy the stronger condition of analyticity, meaning that the function is, at every point in its domain, locally given by a convergent power series. In essence, this means that functions holomorphic on Ω can be approximated arbitrarily well by polynomials in some neighborhood of every point in Ω. This stands in sharp contrast to differentiable real functions; there are infinitely differentiable real functions that are nowhere analytic; see Non-analytic smooth function § A smooth function which is nowhere real analytic.
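The direction-dependence of the complex difference quotient can be observed numerically. The sketch below (our own illustration, not part of any standard treatment) approaches the same point along the real and imaginary axes: for the holomorphic f(z) = z² both directions agree, while for the non-holomorphic f(z) = conj(z) they do not.

```python
def diff_quotient(f, z0, h):
    """Difference quotient (f(z0 + h) - f(z0)) / h for a complex step h."""
    return (f(z0 + h) - f(z0)) / h

z0 = 1 + 1j

sq_real = diff_quotient(lambda z: z * z, z0, 1e-6)       # step along real axis
sq_imag = diff_quotient(lambda z: z * z, z0, 1e-6j)      # step along imaginary axis
cj_real = diff_quotient(lambda z: z.conjugate(), z0, 1e-6)
cj_imag = diff_quotient(lambda z: z.conjugate(), z0, 1e-6j)

print(sq_real, sq_imag)  # both close to 2*z0 = 2 + 2j
print(cj_real, cj_imag)  # 1 versus -1: the limit does not exist
```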
Most elementary functions, including the exponential function, the trigonometric functions, and all polynomial functions, extended appropriately to complex arguments as functions ℂ → ℂ, are holomorphic over the entire complex plane, making them entire functions, while rational functions p/q, where p and q are polynomials, are holomorphic on domains that exclude points where q is zero. Such functions that are holomorphic everywhere except a set of isolated points are known as meromorphic functions. On the other hand, the functions z ↦ Re(z), z ↦ |z|, and z ↦ z̄ are not holomorphic anywhere on the complex plane, as can be shown by their failure to satisfy the Cauchy–Riemann conditions (see below).
An important property of holomorphic functions is the relationship between the partial derivatives of their real and imaginary components, known as the Cauchy–Riemann conditions. If f : ℂ → ℂ, defined by f(z) = f(x + iy) = u(x, y) + iv(x, y), where x, y, u(x, y), v(x, y) ∈ ℝ, is holomorphic on a region Ω, then for all z₀ ∈ Ω,

∂f/∂z̄(z₀) = 0, where ∂/∂z̄ := (1/2)(∂/∂x + i ∂/∂y).

In terms of the real and imaginary parts of the function, u and v, this is equivalent to the pair of equations u_x = v_y and u_y = −v_x, where the subscripts indicate partial differentiation. However, the Cauchy–Riemann conditions do not characterize holomorphic functions without additional continuity conditions (see Looman–Menchoff theorem).
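The pair of equations u_x = v_y and u_y = −v_x can be checked numerically by central differences. A minimal sketch (one sample point only, function names ours): the residuals vanish for the holomorphic f(z) = z² but not for f(z) = conj(z).

```python
def cauchy_riemann_residual(f, x, y, h=1e-6):
    """Return (u_x - v_y, u_y + v_x) at (x, y), estimated by central
    differences; both components are ~0 where f satisfies Cauchy-Riemann."""
    u = lambda a, b: f(complex(a, b)).real
    v = lambda a, b: f(complex(a, b)).imag
    u_x = (u(x + h, y) - u(x - h, y)) / (2 * h)
    u_y = (u(x, y + h) - u(x, y - h)) / (2 * h)
    v_x = (v(x + h, y) - v(x - h, y)) / (2 * h)
    v_y = (v(x, y + h) - v(x, y - h)) / (2 * h)
    return (u_x - v_y, u_y + v_x)

holo = cauchy_riemann_residual(lambda z: z * z, 1.0, 2.0)          # ~ (0, 0)
anti = cauchy_riemann_residual(lambda z: z.conjugate(), 1.0, 2.0)  # ~ (2, 0)
print(holo, anti)
```

For conj(z) we have u = x and v = −y, so u_x − v_y = 1 − (−1) = 2, matching the second result.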
Holomorphic functions exhibit some remarkable features. For instance, Picard's theorem asserts that the range of an entire function can take only three possible forms: ℂ, ℂ ∖ {z₀}, or {z₀} for some z₀ ∈ ℂ. In other words, if two distinct complex numbers z and w are not in the range of an entire function f, then f is a constant function. Moreover, a holomorphic function on a connected open set is determined by its restriction to any nonempty open subset.
== Major results ==
One of the central tools in complex analysis is the line integral. The line integral around a closed path of a function that is holomorphic everywhere inside the area bounded by the closed path is always zero, as is stated by the Cauchy integral theorem. The values of such a holomorphic function inside a disk can be computed by a path integral on the disk's boundary (as shown in Cauchy's integral formula). Path integrals in the complex plane are often used to determine complicated real integrals, and here the theory of residues among others is applicable (see methods of contour integration). A "pole" (or isolated singularity) of a function is a point where the function's value becomes unbounded, or "blows up". If a function has such a pole, then one can compute the function's residue there, which can be used to compute path integrals involving the function; this is the content of the powerful residue theorem. The remarkable behavior of holomorphic functions near essential singularities is described by Picard's theorem. Functions that have only poles but no essential singularities are called meromorphic. Laurent series are the complex-valued equivalent to Taylor series, but can be used to study the behavior of functions near singularities through infinite sums of better-understood functions, such as polynomials.
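Both the Cauchy integral theorem and the residue theorem can be illustrated with a crude Riemann-sum approximation of a contour integral over the unit circle, parametrized as z = e^{it}, dz = i e^{it} dt. This is an illustrative sketch, not a numerical-analysis recipe; the step count n and the function name are our choices.

```python
import cmath

def contour_integral(f, n=20000):
    """Riemann-sum approximation of the integral of f over the unit circle."""
    dt = 2 * cmath.pi / n
    total = 0.0
    for k in range(n):
        z = cmath.exp(1j * k * dt)   # point on the unit circle
        total += f(z) * 1j * z * dt  # f(z) dz with dz = i e^{it} dt
    return total

# 1/z has a pole of residue 1 at the origin, so the integral is 2*pi*i.
print(contour_integral(lambda z: 1 / z))
# z^2 is holomorphic inside the circle, so the Cauchy integral theorem gives 0.
print(contour_integral(lambda z: z * z))
```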
A bounded function that is holomorphic in the entire complex plane must be constant; this is Liouville's theorem. It can be used to provide a natural and short proof for the fundamental theorem of algebra which states that the field of complex numbers is algebraically closed.
If a function is holomorphic throughout a connected domain, then its values are fully determined by its values on any smaller subdomain. The function on the larger domain is said to be analytically continued from its values on the smaller domain. This allows the extension of the definition of functions, such as the Riemann zeta function, which are initially defined in terms of infinite sums that converge only on limited domains, to almost the entire complex plane. Sometimes, as in the case of the natural logarithm, it is impossible to analytically continue a holomorphic function to a non-simply connected domain in the complex plane, but it is possible to extend it to a holomorphic function on a closely related surface known as a Riemann surface.
All this refers to complex analysis in one variable. There is also a very rich theory of complex analysis in more than one complex dimension in which the analytic properties such as power series expansion carry over whereas most of the geometric properties of holomorphic functions in one complex dimension (such as conformality) do not carry over. The Riemann mapping theorem about the conformal relationship of certain domains in the complex plane, which may be the most important result in the one-dimensional theory, fails dramatically in higher dimensions.
A major application of certain complex spaces is in quantum mechanics as wave functions.
== See also ==
Complex geometry
Hypercomplex analysis
Vector calculus
List of complex analysis topics
Monodromy theorem
Riemann–Roch theorem
Runge's theorem
== References ==
== Sources ==
Ablowitz, M. J. & A. S. Fokas, Complex Variables: Introduction and Applications (Cambridge, 2003).
Ahlfors, L., Complex Analysis (McGraw-Hill, 1953).
Cartan, H., Théorie élémentaire des fonctions analytiques d'une ou plusieurs variables complexes. (Hermann, 1961). English translation, Elementary Theory of Analytic Functions of One or Several Complex Variables. (Addison-Wesley, 1963).
Carathéodory, C., Funktionentheorie. (Birkhäuser, 1950). English translation, Theory of Functions of a Complex Variable (Chelsea, 1954). [2 volumes.]
Carrier, G. F., M. Krook, & C. E. Pearson, Functions of a Complex Variable: Theory and Technique. (McGraw-Hill, 1966).
Conway, J. B., Functions of One Complex Variable. (Springer, 1973).
Fisher, S., Complex Variables. (Wadsworth & Brooks/Cole, 1990).
Forsyth, A., Theory of Functions of a Complex Variable (Cambridge, 1893).
Freitag, E. & R. Busam, Funktionentheorie. (Springer, 1995). English translation, Complex Analysis. (Springer, 2005).
Goursat, E., Cours d'analyse mathématique, tome 2. (Gauthier-Villars, 1905). English translation, A course of mathematical analysis, vol. 2, part 1: Functions of a complex variable. (Ginn, 1916).
Henrici, P., Applied and Computational Complex Analysis (Wiley). [Three volumes: 1974, 1977, 1986.]
Kreyszig, E., Advanced Engineering Mathematics. (Wiley, 1962).
Lavrentyev, M. & B. Shabat, Методы теории функций комплексного переменного. (Methods of the Theory of Functions of a Complex Variable). (1951, in Russian).
Markushevich, A. I., Theory of Functions of a Complex Variable, (Prentice-Hall, 1965). [Three volumes.]
Marsden & Hoffman, Basic Complex Analysis. (Freeman, 1973).
Needham, T., Visual Complex Analysis. (Oxford, 1997). http://usf.usfca.edu/vca/
Remmert, R., Theory of Complex Functions. (Springer, 1990).
Rudin, W., Real and Complex Analysis. (McGraw-Hill, 1966).
Shaw, W. T., Complex Analysis with Mathematica (Cambridge, 2006).
Stein, E. & R. Shakarchi, Complex Analysis. (Princeton, 2003).
Sveshnikov, A. G. & A. N. Tikhonov, Теория функций комплексной переменной. (Nauka, 1967). English translation, The Theory Of Functions Of A Complex Variable (MIR, 1978).
Titchmarsh, E. C., The Theory of Functions. (Oxford, 1932).
Wegert, E., Visual Complex Functions. (Birkhäuser, 2012).
Whittaker, E. T. & G. N. Watson, A Course of Modern Analysis. (Cambridge, 1902). 3rd ed. (1920)
== External links ==
Wolfram Research's MathWorld Complex Analysis Page
Imaging is the representation or reproduction of an object's form; especially a visual representation (i.e., the formation of an image).
Imaging technology is the application of materials and methods to create, preserve, or duplicate images.
Imaging science is a multidisciplinary field concerned with the generation, collection, duplication, analysis, modification, and visualization of images, including imaging things that the human eye cannot detect. As an evolving field it includes research and researchers from physics, mathematics, electrical engineering, computer vision, computer science, and perceptual psychology.
Imagers are imaging sensors.
== Imaging chain ==
The foundation of imaging science as a discipline is the "imaging chain" – a conceptual model describing all of the factors which must be considered when developing a system for creating visual renderings (images). In general, the links of the imaging chain include:
The human visual system. Designers must also consider the psychophysical processes which take place in human beings as they make sense of information received through the visual system.
The subject of the image. When developing an imaging system, designers must consider the observables associated with the subjects which will be imaged. These observables generally take the form of emitted or reflected energy, such as electromagnetic energy or mechanical energy.
The capture device. Once the observables associated with the subject are characterized, designers can then identify and integrate the technologies needed to capture those observables. For example, in the case of consumer digital cameras, those technologies include optics for collecting energy in the visible portion of the electromagnetic spectrum, and electronic detectors for converting the electromagnetic energy into an electronic signal.
The processor. For all digital imaging systems, the electronic signals produced by the capture device must be manipulated by an algorithm which formats the signals so they can be displayed as an image. In practice, there are often multiple processors involved in the creation of a digital image.
The display. The display takes the electronic signals which have been manipulated by the processor and renders them on some visual medium. Examples include paper (for printed, or "hard copy" images), television, computer monitor, or projector.
Note that some imaging scientists will include additional "links" in their description of the imaging chain. For example, some will include the "source" of the energy which "illuminates" or interacts with the subject of the image. Others will include storage and/or transmission systems.
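The links of the imaging chain can be caricatured as a toy pipeline: a synthetic "scene" of reflected energy is captured, processed (normalized and gamma-encoded), and "displayed" as 8-bit code values. All stage names and parameter choices here are invented for illustration; real pipelines involve many more processing steps (demosaicing, noise reduction, color management, and so on).

```python
def capture(scene, gain=1.0, offset=0.0):
    """Capture device: convert incident energy to a raw electronic signal."""
    return [gain * e + offset for e in scene]

def process(signal, gamma=1 / 2.2):
    """Processor: normalize the raw signal and apply a display gamma."""
    peak = max(signal) or 1.0  # avoid division by zero for an all-dark scene
    return [(s / peak) ** gamma for s in signal]

def display(processed):
    """Display: quantize to 8-bit code values for the visual medium."""
    return [round(255 * p) for p in processed]

scene = [0.0, 0.18, 0.5, 1.0]  # relative scene energies (0.18 ~ mid gray)
image = display(process(capture(scene)))
print(image)  # monotone ramp from 0 to 255
```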
== Subfields ==
Subfields within imaging science include: image processing, computer vision, 3D computer graphics, animations, atmospheric optics, astronomical imaging, biological imaging, digital image restoration, digital imaging, color science, digital photography, holography, magnetic resonance imaging, medical imaging, microdensitometry, optics, photography, remote sensing, radar imaging, radiometry, silver halide, ultrasound imaging, photoacoustic imaging, thermal imaging, visual perception, and various printing technologies.
== Methodologies ==
Acoustic imaging
Coherent imaging uses an active coherent illumination source, such as in radar, synthetic aperture radar (SAR), medical ultrasound and optical coherence tomography; non-coherent imaging systems include fluorescent microscopes, optical microscopes, and telescopes.
Chemical imaging, the simultaneous measurement of spectra and pictures
Digital imaging, creating digital images, generally by scanning or through digital photography
Disk image, a file which contains the exact content of a data storage medium
Document imaging, replicating documents commonly used in business
Geophysical imaging
Industrial process imaging
Medical imaging, creating images of the human body or parts of it, to diagnose or examine disease
Medical optical imaging
Magnetic resonance imaging
Magneto-Acousto-Electrical Tomography (MAET), imaging modality to image the electrical conductivity of biological tissues
Molecular imaging
Radar imaging, or imaging radar, for obtaining an image of an object, not just its location and speed
Range imaging, for obtaining images with depth information
Reprography, reproduction of graphics through electrical and mechanical means
Cinematography
Photography, the process of creating still images
Xerography, the method of photocopying
Speckle imaging, a method of shift-and-add for astronomical imaging
Stereo imaging, an aspect of sound recording and reproduction concerning spatial locations of the performers
Thermography, infrared imaging
Tactile imaging, also known as elastography
== Examples ==
Imaging technology materials and methods include:
Computer graphics
Virtual camera system used in computer and video games and virtual cinematography
Microfilm and Micrographics
Visual arts
Etching
Drawing and Technical drawing
Film
Painting
Photography
Multiple-camera setup enables stereoscopy and stereophotogrammetry
Light-field camera (basically refocusable photography)
Printmaking
Sculpture
Infrared
Radar imagery
Ultrasound
Multi-spectral image
Electro-optical sensor
Charge-coupled device
Ground-penetrating radar
Electron microscope
Imagery analysis
Medical radiography
Industrial radiography
LIDAR
Image scanner
Structured-light 3D scanner
== See also ==
Image development (disambiguation)
Image processing
Nonimaging optics
Society for Imaging Science and Technology
The Imaging Science Journal
== References ==
== Further reading ==
Harrison Hooker Barrett and Kyle J. Myers, Foundations of Image Science (John Wiley & Sons, 2004) ISBN 0471153001
Ronald N. Bracewell, Fourier Analysis and Imaging (Kluwer Academic, 2003) ISBN 0306481871
Roger L. Easton, Jr., Fourier Methods in Imaging (John Wiley & Sons, 2010) ISBN 9780470689837 DOI 10.1002/9780470660102
Robert D. Fiete, Modeling the Imaging Chain of Digital Cameras (SPIE Press, 2010) ISBN 9780819483393
== External links ==
Chester F. Carlson Center for Imaging Science at RIT, a research center that offers B.S., M.S., and Ph.D. degrees in imaging science.
The University of Arizona College of Optical Sciences offers an image science track for the M.S. and Ph.D. degrees in optical sciences.
Science de l'image et des médias numériques, a bachelor's program in image science and digital media that is unique in Canada.
Image Sciences Institute, Utrecht, Netherlands: a Utrecht University institute that focuses on fundamental and applied research, specifically in medical image processing and acquisition.
Vanderbilt University Institute of Imaging Science, dedicated to using imaging to improve health care and to advance knowledge in the biological sciences.
Information science is an academic field which is primarily concerned with analysis, collection, classification, manipulation, storage, retrieval, movement, dissemination, and protection of information. Practitioners within and outside the field study the application and the usage of knowledge in organizations in addition to the interaction between people, organizations, and any existing information systems with the aim of creating, replacing, improving, or understanding the information systems.
== Disciplines and related fields ==
Historically, information science has evolved as a transdisciplinary field, both drawing from and contributing to diverse domains.
=== Core foundations ===
Technical and computational: informatics, computer science, data science, network science, information theory, discrete mathematics, statistics and analytics
Information organization: library science, archival science, documentation science, knowledge representation, ontologies, organization studies
Human dimensions: human-computer interaction, cognitive psychology, information behavior, social epistemology, philosophy of information, information ethics and science and technology studies
=== Applied contexts ===
Information science methodologies are applied across numerous domains, reflecting the discipline's versatility and relevance. Key application areas include:
Health and life sciences: health informatics, bioinformatics
Cultural and social contexts: digital humanities, computational social science, social media analytics, social informatics, computational linguistics
Spatial information: geographic information science, environmental informatics
Organizational settings: knowledge management, business analytics, decision support systems, information economics
Security and governance: cybersecurity, intelligence analysis, information policy, IT law, legal informatics
Education and learning: educational technology, learning analytics
The interdisciplinary nature of information science continues to expand as new technological developments and social practices emerge, creating innovative research frontiers that bridge traditional disciplinary boundaries.
== Foundations ==
=== Scope and approach ===
Information science focuses on understanding problems from the perspective of the stakeholders involved and then applying information and other technologies as needed. In other words, it tackles systemic problems first rather than individual pieces of technology within that system. In this respect, one can see information science as a response to technological determinism, the belief that technology "develops by its own laws, that it realizes its own potential, limited only by the material resources available and the creativity of its developers. It must therefore be regarded as an autonomous system controlling and ultimately permeating all other subsystems of society."
Many universities have entire colleges, departments or schools devoted to the study of information science, while numerous information-science scholars work in disciplines such as communication, healthcare, computer science, law, and sociology. Several institutions have formed an I-School Caucus (see List of I-Schools), but numerous others besides these also have comprehensive information specializations.
Within information science, current issues as of 2013 include:
Human–computer interaction for science
Groupware
The Semantic Web
Value sensitive design
Iterative design processes
The ways people generate, use and find information
=== Definitions ===
The first known usage of the term "information science" was in 1955. An early definition of Information science (going back to 1968, the year when the American Documentation Institute renamed itself as the American Society for Information Science and Technology) states:
"Information science is that discipline that investigates the properties and behavior of information, the forces governing the flow of information, and the means of processing information for optimum accessibility and usability. It is concerned with that body of knowledge relating to the origination, collection, organization, storage, retrieval, interpretation, transmission, transformation, and utilization of information. This includes the authenticity of information representations in both natural and artificial systems, the use of codes for efficient message transmission, and the study of information processing devices and techniques such as computers and their programming systems. It is an interdisciplinary science derived from and related to such fields as mathematics, logic, linguistics, psychology, computer technology, operations research, the graphic arts, communications, management, and other similar fields. It has both a pure science component, which inquires into the subject without regard to its application, and an applied science component, which develops services and products." (Borko 1968, p. 3).
==== Related terms ====
Some authors use informatics as a synonym for information science. This is especially true when related to the concept developed by A. I. Mikhailov and other Soviet authors in the mid-1960s. The Mikhailov school saw informatics as a discipline related to the study of scientific information.
Informatics is difficult to precisely define because of the rapidly evolving and interdisciplinary nature of the field. Definitions reliant on the nature of the tools used for deriving meaningful information from data are emerging in Informatics academic programs.
Regional differences and international terminology complicate the problem. Some people note that much of what is called "Informatics" today was once called "Information Science" – at least in fields such as Medical Informatics. For example, when library scientists also began to use the phrase "Information Science" to refer to their work, the term "informatics" emerged:
in the United States as a response by computer scientists to distinguish their work from that of library science
in Britain as a term for a science of information that studies natural, as well as artificial or engineered, information-processing systems
Another term discussed as a synonym for "information studies" is "information systems". Brian Campbell Vickery's Information Systems (1973) placed information systems within IS. Ellis, Allen & Wilson (1999), on the other hand, provided a bibliometric investigation describing the relation between two different fields: "information science" and "information systems".
=== Philosophy of information ===
Philosophy of information studies conceptual issues arising at the intersection of psychology, computer science, information technology, and philosophy. It includes the investigation of the conceptual nature and basic principles of information, including its dynamics, utilisation and sciences, as well as the elaboration and application of information-theoretic and computational methodologies to its philosophical problems. Robert Hammarberg pointed out that there is no coherent distinction between information and data: "an Information Processing System (IPS) cannot process data except in terms of whatever representational language is inherent to it, [so] data could not even be apprehended by an IPS without becoming representational in nature, and thus losing their status of being raw, brute, facts."
=== Ontology ===
In science and information science, an ontology formally represents knowledge as a set of concepts within a domain, and the relationships between those concepts. It can be used to reason about the entities within that domain and may be used to describe the domain.
More specifically, an ontology is a model for describing the world that consists of a set of types, properties, and relationship types. Exactly what is provided around these varies, but they are the essentials of an ontology. There is also generally an expectation that there be a close resemblance between the real world and the features of the model in an ontology.
In theory, an ontology is a "formal, explicit specification of a shared conceptualisation". An ontology renders shared vocabulary and taxonomy which models a domain with the definition of objects and/or concepts and their properties and relations.
Ontologies are the structural frameworks for organizing information and are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, enterprise bookmarking, and information architecture as a form of knowledge representation about the world or some part of it. The creation of domain ontologies is also essential to the definition and use of an enterprise architecture framework.
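A minimal sketch of an ontology as a set of types, properties, and relationship types, with a naive reasoning step over the subclass hierarchy. The domain, names, and data layout here are invented for illustration; real ontology languages such as OWL are far richer.

```python
# A toy ontology: types, a subclass hierarchy, per-type properties,
# and typed relationship types (relation name -> (domain, range)).
ontology = {
    "types": {"Document", "Book", "Journal", "Person"},
    "subclass_of": {"Book": "Document", "Journal": "Document"},
    "properties": {"Book": ["title", "isbn"], "Person": ["name"]},
    "relations": {"authored": ("Person", "Document")},
}

def is_a(onto, child, ancestor):
    """Reason over the subclass hierarchy: does `child` inherit from `ancestor`?"""
    while child is not None:
        if child == ancestor:
            return True
        child = onto["subclass_of"].get(child)
    return False

print(is_a(ontology, "Book", "Document"))    # True: Book is a kind of Document
print(is_a(ontology, "Person", "Document"))  # False: no subclass path exists
```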
=== Science or discipline? ===
Authors such as Ingwersen argue that informatology has problems defining its own boundaries with other disciplines. According to Popper "Information science operates busily on an ocean of commonsense practical applications, which increasingly involve the computer ... and on commonsense views of language, of communication, of knowledge and Information, computer science is in little better state". Other authors, such as Furner, deny that information science is a true science.
== Careers ==
=== Information scientist ===
An information scientist is an individual, usually with a relevant subject degree or high level of subject knowledge, who provides focused information to scientific and technical research staff in industry or to subject faculty and students in academia. The industry information specialist/scientist and the academic information subject specialist/librarian have, in general, similar subject background training, but the academic position holder will be required to hold a second advanced degree—e.g. Master of Library Science (MLS), Military Intelligence (MI), Master of Arts (MA)—in information and library studies in addition to a subject master's. The title also applies to an individual carrying out research in information science.
=== Systems analyst ===
A systems analyst works on creating, designing, and improving information systems for a specific need. Often systems analysts work with one or more businesses to evaluate and implement organizational processes and techniques for accessing information in order to improve efficiency and productivity within the organization(s).
=== Information professional ===
An information professional is an individual who preserves, organizes, and disseminates information. Information professionals are skilled in the organization and retrieval of recorded knowledge. Traditionally, their work has been with print materials, but these skills are being increasingly used with electronic, visual, audio, and digital materials. Information professionals work in a variety of public, private, non-profit, and academic institutions. Information professionals can also be found within organisational and industrial contexts, and are performing roles that include system design and development and system analysis.
== History ==
=== Early beginnings ===
Information science, in studying the collection, classification, manipulation, storage, retrieval and dissemination of information, has origins in the common stock of human knowledge. Information analysis has been carried out by scholars at least as early as the time of the Assyrian Empire with the emergence of cultural depositories, what is today known as libraries and archives. Institutionally, information science emerged in the 19th century along with many other social science disciplines. As a science, however, it finds its institutional roots in the history of science, beginning with publication of the first issues of Philosophical Transactions, generally considered the first scientific journal, in 1665 by the Royal Society.
The institutionalization of science occurred throughout the 18th century. In 1731, Benjamin Franklin established the Library Company of Philadelphia, the first library owned by a group of public citizens, which quickly expanded beyond the realm of books and became a center of scientific experimentation, and which hosted public exhibitions of scientific experiments. Benjamin Franklin invested a town in Massachusetts with a collection of books that the town voted to make available to all free of charge, forming the first public library of the United States. Academie de Chirurgia (Paris) published Memoires pour les Chirurgiens, generally considered to be the first medical journal, in 1736. The American Philosophical Society, patterned on the Royal Society (London), was founded in Philadelphia in 1743. As numerous other scientific journals and societies were founded, Alois Senefelder developed the concept of lithography for use in mass printing work in Germany in 1796.
=== 19th century ===
By the 19th century the first signs of information science emerged as separate and distinct from other sciences and social sciences, but in conjunction with communication and computation. In 1801, Joseph Marie Jacquard invented a punched card system to control the operations of the cloth-weaving loom in France; it was the first use of a "memory storage of patterns" system. As chemistry journals emerged throughout the 1820s and 1830s, Charles Babbage developed his "difference engine", the first step towards the modern computer, in 1822, and his "analytical engine" by 1834. By 1843 Richard Hoe had developed the rotary press, and in 1844 Samuel Morse sent the first public telegraph message. By 1848 William F. Poole had begun the Index to Periodical Literature, the first general periodical literature index in the US.
In 1854 George Boole published An Investigation into Laws of Thought..., which lays the foundations for Boolean algebra, later used in information retrieval. In 1860 a congress was held at the Karlsruhe Technische Hochschule to discuss the feasibility of establishing a systematic and rational nomenclature for chemistry. The congress did not reach any conclusive results, but several key participants returned home with Stanislao Cannizzaro's outline (1858), which ultimately convinced them of the validity of his scheme for calculating atomic weights.
By 1865, the Smithsonian Institution began a catalog of current scientific papers, which became the International Catalogue of Scientific Papers in 1902. The following year the Royal Society began publication of its Catalogue of Papers in London. In 1868, Christopher Sholes, Carlos Glidden, and S. W. Soule produced the first practical typewriter. By 1872 Lord Kelvin had devised an analogue computer to predict the tides, and by 1875 Frank Stephen Baldwin was granted the first US patent for a practical calculating machine that performs four arithmetic functions. Alexander Graham Bell and Thomas Edison invented the telephone and phonograph in 1876 and 1877 respectively, and the American Library Association was founded in Philadelphia. In 1879 Index Medicus was first issued by the Library of the Surgeon General, U.S. Army, with John Shaw Billings as librarian; the library later issued the Index Catalogue, which achieved an international reputation as the most complete catalog of medical literature.
=== European documentation ===
The discipline of documentation science, which marks the earliest theoretical foundations of modern information science, emerged in the late part of the 19th century in Europe together with several more scientific indexes whose purpose was to organize scholarly literature. Many information science historians cite Paul Otlet and Henri La Fontaine as the fathers of information science with the founding of the International Institute of Bibliography (IIB) in 1895. A second generation of European Documentalists emerged after the Second World War, most notably Suzanne Briet. However, "information science" as a term is not popularly used in academia until sometime in the latter part of the 20th century.
Documentalists emphasized the utilitarian integration of technology and technique toward specific social goals. According to Ronald Day, "As an organized system of techniques and technologies, documentation was understood as a player in the historical development of global organization in modernity – indeed, a major player inasmuch as that organization was dependent on the organization and transmission of information." Otlet and Lafontaine (who won the Nobel Prize in 1913) not only envisioned later technical innovations but also projected a global vision for information and information technologies that speaks directly to postwar visions of a global "information society". Otlet and Lafontaine established numerous organizations dedicated to standardization, bibliography, international associations, and consequently, international cooperation. These organizations were fundamental for ensuring international production in commerce, information, communication and modern economic development, and they later found their global form in such institutions as the League of Nations and the United Nations. Otlet designed the Universal Decimal Classification, based on Melville Dewey's decimal classification system.
Although he lived decades before computers and networks emerged, what he discussed prefigured what ultimately became the World Wide Web. His vision of a great network of knowledge focused on documents and included the notions of hyperlinks, search engines, remote access, and social networks.
Otlet not only imagined that all the world's knowledge should be interlinked and made available remotely to anyone, but he also proceeded to build a structured document collection. This collection involved standardized paper sheets and cards filed in custom-designed cabinets according to a hierarchical index (which culled information worldwide from diverse sources) and a commercial information retrieval service (which answered written requests by copying relevant information from index cards). Users of this service were even warned if their query was likely to produce more than 50 results per search.
By 1937 documentation had formally been institutionalized, as evidenced by the founding of the American Documentation Institute (ADI), later called the American Society for Information Science and Technology.
=== Transition to modern information science ===
With the 1950s came increasing awareness of the potential of automatic devices for literature searching and information storage and retrieval. As these concepts grew in magnitude and potential, so did the variety of information science interests. By the 1960s and 70s, there was a move from batch processing to online modes, from mainframe to mini and microcomputers. Additionally, traditional boundaries among disciplines began to fade and many information science scholars joined with other programs. They further made themselves multidisciplinary by incorporating disciplines in the sciences, humanities and social sciences, as well as other professional programs, such as law and medicine in their curriculum.
Among the individuals who had distinct opportunities to facilitate interdisciplinary activity targeted at scientific communication was Foster E. Mohrhardt, director of the National Agricultural Library from 1954 to 1968.
By the 1980s, large databases, such as Grateful Med at the National Library of Medicine, and user-oriented services such as Dialog and Compuserve, were for the first time accessible by individuals from their personal computers. The 1980s also saw the emergence of numerous special interest groups to respond to the changes. By the end of the decade, special interest groups were available involving non-print media, social sciences, energy and the environment, and community information systems. Today, information science largely examines technical bases, social consequences, and theoretical understanding of online databases, widespread use of databases in government, industry, and education, and the development of the Internet and World Wide Web.
== Information dissemination in the 21st century ==
=== Changing definition ===
Dissemination has historically been interpreted as unilateral communication of information. With the advent of the internet and the explosion in popularity of online communities, social media has changed the information landscape in many respects, creating both new modes of communication and new types of information and changing the interpretation of the definition of dissemination. The nature of social networks allows for faster diffusion of information than through organizational sources. The internet has changed the way we view, use, create, and store information; now it is time to re-evaluate the way we share and spread it.
=== Impact of social media on people and industry ===
Social media networks provide an open information environment for the mass of people who have limited time or access to traditional outlets of information diffusion; this is an "increasingly mobile and social world [that] demands...new types of information skills". Social media integration as an access point is a very useful and mutually beneficial tool for users and providers. All major news providers have visibility and an access point through networks such as Facebook and Twitter, maximizing their breadth of audience. Through social media, people are directed to, or provided with, information by people they know. The ability to "share, like, and comment on...content" extends the reach farther and wider than traditional methods. People like to interact with information, and they enjoy including the people they know in their circle of knowledge. Sharing through social media has become so influential that publishers must "play nice" if they desire to succeed; indeed, it is often mutually beneficial for publishers and Facebook to "share, promote and uncover new content" to improve both user bases' experiences. The impact of popular opinion can spread in unimaginable ways. Social media allows interaction through simple-to-learn and easy-to-access tools: The Wall Street Journal offers an app through Facebook, and The Washington Post goes a step further and offers an independent social app that was downloaded by 19.5 million users in six months, showing how interested people are in this new way of being provided information.
=== Social media's power to facilitate topics ===
The connections and networks sustained through social media help information providers learn what is important to people. The connections people have throughout the world enable the exchange of information at an unprecedented rate, and it is for this reason that these networks have been recognized for the potential they provide. "Most news media monitor Twitter for breaking news", and news anchors frequently ask the audience to tweet pictures of events. The users and viewers of the shared information have earned "opinion-making and agenda-setting power". This channel has been valued for its usefulness in providing targeted information based on public demand.
== Research sectors and applications ==
The following areas are some of those that information science investigates and develops.
=== Information access ===
Information access is an area of research at the intersection of informatics, information science, information security, language technology, and computer science. The objectives of information access research are to automate the processing of large and unwieldy amounts of information and to simplify users' access to it, while also assigning privileges and restricting access by unauthorized users; the extent of access should be defined by the level of clearance granted for the information. Applicable technologies include information retrieval, text mining, text editing, machine translation, and text categorisation. In discussion, information access often concerns the assurance of free and open, or closed, access to information, and is brought up in discussions on copyright, patent law, and the public domain. Public libraries need resources to provide knowledge of information assurance.
=== Information architecture ===
Information architecture (IA) is the art and science of organizing and labelling websites, intranets, online communities and software to support usability. It is an emerging discipline and community of practice focused on bringing together principles of design and architecture to the digital landscape. Typically it involves a model or concept of information which is used and applied to activities that require explicit details of complex information systems. These activities include library systems and database development.
=== Information management ===
Information management (IM) is the collection and management of information from one or more sources and the distribution of that information to one or more audiences. This sometimes involves those who have a stake in, or a right to that information. Management means the organization of and control over the structure, processing and delivery of information. Throughout the 1970s this was largely limited to files, file maintenance, and the life cycle management of paper-based files, other media and records. With the proliferation of information technology starting in the 1970s, the job of information management took on a new light and also began to include the field of data maintenance.
=== Information retrieval ===
Information retrieval (IR) is the area of study concerned with searching for documents, for information within documents, and for metadata about documents, as well as that of searching structured storage, relational databases, and the World Wide Web. Automated information retrieval systems are used to reduce what has been called "information overload". Many universities and public libraries use IR systems to provide access to books, journals and other documents. Web search engines are the most visible IR applications.
An information retrieval process begins when a user enters a query into the system. Queries are formal statements of information needs, for example search strings in web search engines. In information retrieval a query does not uniquely identify a single object in the collection. Instead, several objects may match the query, perhaps with different degrees of relevancy.
An object is an entity that is represented by information in a database. User queries are matched against the database information. Depending on the application the data objects may be, for example, text documents, images, audio, mind maps or videos. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates or metadata.
Most IR systems compute a numeric score on how well each object in the database matches the query, and rank the objects according to this value. The top-ranking objects are then shown to the user. The process may then be iterated if the user wishes to refine the query.
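The match–score–rank loop described above can be sketched as a toy retrieval system. The term-frequency scoring used here is an illustrative assumption, standing in for whatever ranking model (e.g. tf–idf or a probabilistic model) a real IR system would use:

```python
# Toy information retrieval: score each document against a query and
# return the top-ranking objects. The term-frequency score below is an
# illustrative assumption, not a specific production ranking model.

def score(query_terms, document):
    """Numeric score: how many times query terms appear in the document."""
    words = document.lower().split()
    return sum(words.count(term) for term in query_terms)

def search(query, documents, top_k=2):
    """Rank documents by score and return the best matches."""
    terms = query.lower().split()
    ranked = sorted(documents, key=lambda d: score(terms, d), reverse=True)
    return ranked[:top_k]

docs = [
    "stirling approximation of the factorial",
    "notes on ranking",
    "ranking documents in retrieval systems by ranking score",
]
print(search("retrieval ranking", docs))
```

Real systems score document surrogates or an inverted index rather than raw text, but the score–rank–refine loop is the same.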
=== Information seeking ===
Information seeking is the process or activity of attempting to obtain information in both human and technological contexts. Information seeking is related to, but different from, information retrieval (IR).
Much library and information science (LIS) research has focused on the information-seeking practices of practitioners within various fields of professional work. Studies have been carried out into the information-seeking behaviors of librarians, academics, medical professionals, engineers and lawyers (among others). Much of this research has drawn on the work done by Leckie, Pettigrew (now Fisher) and Sylvain, who in 1996 conducted an extensive review of the LIS literature (as well as the literature of other academic fields) on professionals' information seeking. The authors proposed an analytic model of professionals' information seeking behaviour, intended to be generalizable across the professions, thus providing a platform for future research in the area. The model was intended to "prompt new insights... and give rise to more refined and applicable theories of information seeking" (Leckie, Pettigrew & Sylvain 1996, p. 188). The model has been adapted by Wilkinson (2001) who proposes a model of the information seeking of lawyers. Recent studies in this topic address the concept of information-gathering that "provides a broader perspective that adheres better to professionals' work-related reality and desired skills." (Solomon & Bronstein 2021).
=== Information society ===
An information society is a society where the creation, distribution, diffusion, uses, integration and manipulation of information is a significant economic, political, and cultural activity. The aim of an information society is to gain competitive advantage internationally, through using IT in a creative and productive way. The knowledge economy is its economic counterpart, whereby wealth is created through the economic exploitation of understanding. People who have the means to partake in this form of society are sometimes called digital citizens.
Basically, an information society is the means of getting information from one place to another (Wark 1997, p. 22). As technology has become more advanced over time so too has the way we have adapted in sharing this information with each other.
Information society theory discusses the role of information and information technology in society, the question of which key concepts should be used for characterizing contemporary society, and how to define such concepts. It has become a specific branch of contemporary sociology.
=== Knowledge representation and reasoning ===
Knowledge representation (KR) is an area of artificial intelligence research aimed at representing knowledge in symbols to facilitate inferencing from those knowledge elements, creating new elements of knowledge. The KR can be made to be independent of the underlying knowledge model or knowledge base system (KBS) such as a semantic network.
Knowledge representation (KR) research involves analysis of how to reason accurately and effectively and how best to use a set of symbols to represent a set of facts within a knowledge domain. A symbol vocabulary and a system of logic are combined to enable inferences about elements in the KR to create new KR sentences. Logic is used to supply formal semantics for how reasoning functions should be applied to the symbols in the KR system, and to define how operators can process and reshape the knowledge. Examples of operators and operations include negation, conjunction, adverbs, adjectives, quantifiers and modal operators. The logic's interpretation theory supplies the meaning. These elements—symbols, operators, and interpretation theory—are what give sequences of symbols meaning within a KR.
== See also ==
Computer and information science
Category:Information science journals
List of computer science awards § Information science awards
Outline of information science
Outline of information technology
== References ==
== Sources ==
Borko, H. (1968). "Information science: What is it?". American Documentation. 19 (1). Wiley: 3–5. doi:10.1002/asi.5090190103. ISSN 0096-946X.
Leckie, Gloria J.; Pettigrew, Karen E.; Sylvain, Christian (1996). "Modeling the information seeking of professionals: A general model derived from research on engineers, health care professionals, and lawyers". Library Quarterly. 66 (2): 161–193. doi:10.1086/602864. S2CID 7829155.
Wark, McKenzie (1997). The Virtual Republic. Allen & Unwin, St Leonards.
Wilkinson, Margaret A (2001). "Information sources used by lawyers in problem-solving: An empirical exploration". Library & Information Science Research. 23 (3): 257–276. doi:10.1016/s0740-8188(01)00082-2. S2CID 59067811.
== Further reading ==
Khosrow-Pour, Mehdi (22 March 2005). Encyclopedia of Information Science and Technology. Idea Group Reference. ISBN 978-1-59140-553-5.
== External links ==
American Society for Information Science and Technology, a "professional association that bridges the gap between information science practice and research. ASIS&T members represent the fields of information science, computer science, linguistics, management, librarianship, engineering, data science, information architecture, law, medicine, chemistry, education, and related technology".
iSchools
Knowledge Map of Information Science
Journal of Information Science
Digital Library of Information Science and Technology open access archive for the Information Sciences
Current Information Science Research at U.S. Geological Survey
Introduction to Information Science Archived 2021-05-13 at the Wayback Machine
The Nitecki Trilogy
Information science at the University of California at Berkeley in the 1960s: a memoir of student days
Chronology of Information Science and Technology Archived 2011-05-14 at the Wayback Machine
LIBRES – Library and Information Science Research Electronic Journal -
Shared decision-making
In mathematics, Stirling's approximation (or Stirling's formula) is an asymptotic approximation for factorials. It is a good approximation, leading to accurate results even for small values of {\displaystyle n}. It is named after James Stirling, though a related but less precise result was first stated by Abraham de Moivre.
One way of stating the approximation involves the logarithm of the factorial:
{\displaystyle \ln(n!)=n\ln n-n+O(\ln n),}
where the big O notation means that, for all sufficiently large values of {\displaystyle n}, the difference between {\displaystyle \ln(n!)} and {\displaystyle n\ln n-n} will be at most proportional to the logarithm of {\displaystyle n}. In computer science applications such as the worst-case lower bound for comparison sorting, it is convenient to instead use the binary logarithm, giving the equivalent form
{\displaystyle \log _{2}(n!)=n\log _{2}n-n\log _{2}e+O(\log _{2}n).}
The error term in either base can be expressed more precisely as {\displaystyle {\tfrac {1}{2}}\log(2\pi n)+O({\tfrac {1}{n}})}, corresponding to an approximate formula for the factorial itself,
{\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}.}
Here the sign {\displaystyle \sim } means that the two quantities are asymptotic, that is, their ratio tends to 1 as {\displaystyle n} tends to infinity.
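A quick numerical sketch of this asymptotic relation, using only the standard library:

```python
import math

# Stirling's formula: n! ~ sqrt(2*pi*n) * (n/e)^n.
# The ratio n!/approximation should tend to 1 as n grows.
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (1, 5, 10, 20):
    print(n, math.factorial(n) / stirling(n))
```

Even at n = 1 the ratio is already within about 8% of 1, and it shrinks steadily as n increases.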
== Derivation ==
Roughly speaking, the simplest version of Stirling's formula can be quickly obtained by approximating the sum
{\displaystyle \ln(n!)=\sum _{j=1}^{n}\ln j}
with an integral:
{\displaystyle \sum _{j=1}^{n}\ln j\approx \int _{1}^{n}\ln x\,{\rm {d}}x=n\ln n-n+1.}
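The quality of this integral approximation is easy to check numerically (a small standard-library sketch):

```python
import math

# Compare the exact sum ln(n!) = ln 1 + ln 2 + ... + ln n with the
# integral approximation n*ln(n) - n + 1; the gap grows only like log n.
n = 100
exact = sum(math.log(j) for j in range(1, n + 1))
approx = n * math.log(n) - n + 1
print(exact, approx, exact - approx)
```

For n = 100 the two values agree to within a few units even though both are in the hundreds, consistent with the O(ln n) error term.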
The full formula, together with precise estimates of its error, can be derived as follows. Instead of approximating {\displaystyle n!}, one considers its natural logarithm, as this is a slowly varying function:
{\displaystyle \ln(n!)=\ln 1+\ln 2+\cdots +\ln n.}
The right-hand side of this equation minus
{\displaystyle {\tfrac {1}{2}}(\ln 1+\ln n)={\tfrac {1}{2}}\ln n}
is the approximation by the trapezoid rule of the integral
{\displaystyle \ln(n!)-{\tfrac {1}{2}}\ln n\approx \int _{1}^{n}\ln x\,{\rm {d}}x=n\ln n-n+1,}
and the error in this approximation is given by the Euler–Maclaurin formula:
{\displaystyle {\begin{aligned}\ln(n!)-{\tfrac {1}{2}}\ln n&={\tfrac {1}{2}}\ln 1+\ln 2+\ln 3+\cdots +\ln(n-1)+{\tfrac {1}{2}}\ln n\\&=n\ln n-n+1+\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)}}\left({\frac {1}{n^{k-1}}}-1\right)+R_{m,n},\end{aligned}}}
where {\displaystyle B_{k}} is a Bernoulli number, and Rm,n is the remainder term in the Euler–Maclaurin formula. Take limits to find that
{\displaystyle \lim _{n\to \infty }\left(\ln(n!)-n\ln n+n-{\tfrac {1}{2}}\ln n\right)=1-\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)}}+\lim _{n\to \infty }R_{m,n}.}
Denote this limit as {\displaystyle y}. Because the remainder Rm,n in the Euler–Maclaurin formula satisfies
{\displaystyle R_{m,n}=\lim _{n\to \infty }R_{m,n}+O\left({\frac {1}{n^{m}}}\right),}
where big-O notation is used, combining the equations above yields the approximation formula in its logarithmic form:
{\displaystyle \ln(n!)=n\ln \left({\frac {n}{e}}\right)+{\tfrac {1}{2}}\ln n+y+\sum _{k=2}^{m}{\frac {(-1)^{k}B_{k}}{k(k-1)n^{k-1}}}+O\left({\frac {1}{n^{m}}}\right).}
Taking the exponential of both sides and choosing any positive integer {\displaystyle m}, one obtains a formula involving an unknown quantity {\displaystyle e^{y}}. For m = 1, the formula is
{\displaystyle n!=e^{y}{\sqrt {n}}\left({\frac {n}{e}}\right)^{n}\left(1+O\left({\frac {1}{n}}\right)\right).}
The quantity {\displaystyle e^{y}} can be found by taking the limit on both sides as {\displaystyle n} tends to infinity and using Wallis' product, which shows that {\displaystyle e^{y}={\sqrt {2\pi }}}. Therefore, one obtains Stirling's formula:
{\displaystyle n!={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+O\left({\frac {1}{n}}\right)\right).}
== Alternative derivations ==
An alternative formula for {\displaystyle n!} using the gamma function is
{\displaystyle n!=\int _{0}^{\infty }x^{n}e^{-x}\,{\rm {d}}x.}
(as can be seen by repeated integration by parts). Rewriting and changing variables x = ny, one obtains
{\displaystyle n!=\int _{0}^{\infty }e^{n\ln x-x}\,{\rm {d}}x=e^{n\ln n}n\int _{0}^{\infty }e^{n(\ln y-y)}\,{\rm {d}}y.}
Applying Laplace's method one has
{\displaystyle \int _{0}^{\infty }e^{n(\ln y-y)}\,{\rm {d}}y\sim {\sqrt {\frac {2\pi }{n}}}e^{-n},}
which recovers Stirling's formula:
{\displaystyle n!\sim e^{n\ln n}n{\sqrt {\frac {2\pi }{n}}}e^{-n}={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}.}
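The gamma-function representation underlying this derivation is also easy to check numerically. This sketch integrates {\displaystyle x^{n}e^{-x}} with a composite Simpson rule; the cutoff and step count are arbitrary choices, adequate for small n:

```python
import math

def factorial_integral(n, upper=100.0, steps=100_000):
    """Approximate n! = integral of x^n * e^(-x) over [0, inf)
    by Simpson's rule on the truncated interval [0, upper]."""
    h = upper / steps
    f = lambda x: x ** n * math.exp(-x)
    total = f(0.0) + f(upper)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(i * h)
    return total * h / 3

print(factorial_integral(10), math.factorial(10))
```

For n = 10 the truncation error beyond x = 100 is negligible, and the quadrature reproduces 10! to many digits.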
=== Higher orders ===
In fact, further corrections can also be obtained using Laplace's method. From the previous result, we know that {\displaystyle \Gamma (x)\sim x^{x}e^{-x}}, so we "peel off" this dominant term, then perform two changes of variables, to obtain:
{\displaystyle x^{-x}e^{x}\Gamma (x)=\int _{\mathbb {R} }e^{x(1+t-e^{t})}dt}
To verify this:
{\displaystyle \int _{\mathbb {R} }e^{x(1+t-e^{t})}dt{\overset {t\mapsto \ln t}{=}}e^{x}\int _{0}^{\infty }t^{x-1}e^{-xt}dt{\overset {t\mapsto t/x}{=}}x^{-x}e^{x}\int _{0}^{\infty }e^{-t}t^{x-1}dt=x^{-x}e^{x}\Gamma (x).}
Now the function {\displaystyle t\mapsto 1+t-e^{t}} is unimodal, with maximum value zero. Locally around zero, it looks like {\displaystyle -t^{2}/2}, which is why we are able to perform Laplace's method. In order to extend Laplace's method to higher orders, we perform another change of variables by {\displaystyle 1+t-e^{t}=-\tau ^{2}/2}. This equation cannot be solved in closed form, but it can be solved by series expansion, which gives us {\displaystyle t=\tau -\tau ^{2}/6+\tau ^{3}/36+a_{4}\tau ^{4}+O(\tau ^{5})}. Now plug back into the equation to obtain
{\displaystyle x^{-x}e^{x}\Gamma (x)=\int _{\mathbb {R} }e^{-x\tau ^{2}/2}(1-\tau /3+\tau ^{2}/12+4a_{4}\tau ^{3}+O(\tau ^{4}))d\tau ={\sqrt {2\pi }}(x^{-1/2}+x^{-3/2}/12)+O(x^{-5/2}).}
Notice that we do not need to actually find {\displaystyle a_{4}}, since it is cancelled out by the integral. Higher orders can be achieved by computing more terms in {\displaystyle t=\tau +\cdots }, which can be obtained programmatically.
Thus we get Stirling's formula to two orders:
{\displaystyle n!={\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+{\frac {1}{12n}}+O\left({\frac {1}{n^{2}}}\right)\right).}
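The improvement from the {\displaystyle 1/(12n)} correction is easy to see numerically (a standard-library sketch):

```python
import math

# Stirling's approximation, optionally with the first correction term
# 1/(12n); compare relative errors against the exact factorial.
def stirling(n, corrected=False):
    s = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    return s * (1 + 1 / (12 * n)) if corrected else s

n = 10
exact = math.factorial(n)
err_plain = abs(stirling(n) - exact) / exact
err_corr = abs(stirling(n, corrected=True) - exact) / exact
print(err_plain, err_corr)
```

At n = 10 the plain formula is off by slightly under 1%, while the corrected one gains roughly two more digits of accuracy.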
=== Complex-analytic version ===
A complex-analysis version of this method is to consider {\displaystyle {\frac {1}{n!}}} as a Taylor coefficient of the exponential function {\displaystyle e^{z}=\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}}, computed by Cauchy's integral formula as
{\displaystyle {\frac {1}{n!}}={\frac {1}{2\pi i}}\oint \limits _{|z|=r}{\frac {e^{z}}{z^{n+1}}}\,\mathrm {d} z.}
This line integral can then be approximated using the saddle-point method with an appropriate choice of contour radius {\displaystyle r=r_{n}}. The dominant portion of the integral near the saddle point is then approximated by a real integral and Laplace's method, while the remaining portion of the integral can be bounded above to give an error term.
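A small numerical sketch of this contour representation: sample the circle |z| = r (here with the saddle-point choice r = n, and an arbitrary sample count) and approximate the integral by the trapezoid rule, which converges very quickly for periodic integrands:

```python
import cmath
import math

def inv_factorial(n, samples=512):
    """Approximate 1/n! = (1/2*pi*i) * contour integral of e^z / z^(n+1)
    over |z| = r, using the trapezoid rule on the circle with r = n."""
    r = n  # contour radius at the saddle point z = n
    total = 0.0
    for k in range(samples):
        theta = 2 * math.pi * k / samples
        z = r * cmath.exp(1j * theta)
        # dz = i*z dtheta, so the integrand reduces to e^z / z^n
        total += (cmath.exp(z) / z ** n).real
    return total / samples

n = 10
print(inv_factorial(n), 1 / math.factorial(n))
```

Because the integrand is analytic and periodic in θ, a few hundred samples already reproduce 1/10! essentially to machine precision.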
=== Using the Central Limit Theorem and the Poisson distribution ===
An alternative version uses the fact that the Poisson distribution converges to a normal distribution by the Central Limit Theorem.
Since the Poisson distribution with parameter
{\displaystyle \lambda }
converges to a normal distribution with mean
{\displaystyle \lambda }
and variance
{\displaystyle \lambda }
, their density functions will be approximately the same:
{\displaystyle {\frac {\exp(-\mu )\mu ^{x}}{x!}}\approx {\frac {1}{\sqrt {2\pi \mu }}}\exp \left(-{\frac {1}{2}}\left({\frac {x-\mu }{\sqrt {\mu }}}\right)^{2}\right)}
Evaluating this expression at the mean, at which the approximation is particularly accurate, simplifies this expression to:
{\displaystyle {\frac {\exp(-\mu )\mu ^{\mu }}{\mu !}}\approx {\frac {1}{\sqrt {2\pi \mu }}}}
Taking logs then results in:
{\displaystyle -\mu +\mu \ln \mu -\ln \mu !\approx -{\frac {1}{2}}\ln 2\pi \mu }
which can easily be rearranged to give:
{\displaystyle \ln \mu !\approx \mu \ln \mu -\mu +{\frac {1}{2}}\ln 2\pi \mu }
Evaluating at
{\displaystyle \mu =n}
gives the usual, more precise form of Stirling's approximation.
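This log-form approximation is easy to check numerically; the sketch below (an illustration, not part of the argument) compares it against `math.lgamma`, and the residual is close to the next Stirling correction, 1/(12n):

```python
import math

def ln_factorial_clt(mu):
    """ln(mu!) ~ mu*ln(mu) - mu + 0.5*ln(2*pi*mu), from the Poisson/CLT argument."""
    return mu * math.log(mu) - mu + 0.5 * math.log(2 * math.pi * mu)

exact = math.lgamma(101)          # ln(100!)
approx = ln_factorial_clt(100)
print(exact - approx)             # residual close to 1/1200
```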
== Speed of convergence and error estimates ==
Stirling's formula is in fact the first approximation to the following series (now called the Stirling series):
{\displaystyle n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\left(1+{\frac {1}{12n}}+{\frac {1}{288n^{2}}}-{\frac {139}{51840n^{3}}}-{\frac {571}{2488320n^{4}}}+\cdots \right).}
An explicit formula for the coefficients in this series was given by G. Nemes. Further terms are listed in the On-Line Encyclopedia of Integer Sequences as A001163 and A001164. The first graph in this section shows the relative error vs.
{\displaystyle n}
, for 1 through all 5 terms listed above. (Bender and Orszag p. 218) gives the asymptotic formula for the coefficients:
{\displaystyle A_{2j+1}\sim (-1)^{j}2(2j)!/(2\pi )^{2(j+1)}}
which shows that the coefficients grow superexponentially, and that by the ratio test the radius of convergence is zero.
As n → ∞, the error in the truncated series is asymptotically equal to the first omitted term. This is an example of an asymptotic expansion. It is not a convergent series; for any particular value of
{\displaystyle n}
there are only so many terms of the series that improve accuracy, after which accuracy worsens. This is shown in the next graph, which shows the relative error versus the number of terms in the series, for larger numbers of terms. More precisely, let S(n, t) be the Stirling series to
{\displaystyle t}
terms evaluated at
{\displaystyle n}
. The graphs show
{\displaystyle \left|\ln \left({\frac {S(n,t)}{n!}}\right)\right|,}
which, when small, is essentially the relative error.
Writing Stirling's series in the form
{\displaystyle \ln(n!)\sim n\ln n-n+{\tfrac {1}{2}}\ln(2\pi n)+{\frac {1}{12n}}-{\frac {1}{360n^{3}}}+{\frac {1}{1260n^{5}}}-{\frac {1}{1680n^{7}}}+\cdots ,}
it is known that the error in truncating the series is always of the opposite sign and at most the same magnitude as the first omitted term.
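This bounding property can be illustrated numerically. In the sketch below (an illustration, using the four correction terms listed above), the residual ln(n!) − S carries the sign of the first omitted term and is no larger in magnitude; equivalently, the truncation error S − ln(n!) has the opposite sign:

```python
import math

# Correction terms of the ln(n!) series, as (coefficient, power of 1/n) pairs.
TERMS = [(1 / 12, 1), (-1 / 360, 3), (1 / 1260, 5), (-1 / 1680, 7)]

def ln_stirling(n, k):
    """ln(n!) series truncated after the first k correction terms."""
    s = n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)
    for c, p in TERMS[:k]:
        s += c / n ** p
    return s

n = 10
for k in range(len(TERMS)):
    residual = math.lgamma(n + 1) - ln_stirling(n, k)
    omitted = TERMS[k][0] / n ** TERMS[k][1]
    # residual and omitted share a sign, and |residual| <= |omitted|
    print(k, residual, omitted)
```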
Other bounds, due to Robbins, valid for all positive integers
{\displaystyle n}
are
{\displaystyle {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n+1}}<n!<{\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}e^{\frac {1}{12n}}.}
This upper bound corresponds to stopping the above series for
{\displaystyle \ln(n!)}
after the
{\displaystyle {\frac {1}{n}}}
term. The lower bound is weaker than that obtained by stopping the series after the
{\displaystyle {\frac {1}{n^{3}}}}
term. A looser version of this bound is that
{\displaystyle {\frac {n!e^{n}}{n^{n+{\frac {1}{2}}}}}\in ({\sqrt {2\pi }},e]}
for all
{\displaystyle n\geq 1}
.
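Robbins' inequality is simple enough to verify directly; the sketch below (standard library only) checks both bounds for a few values of n:

```python
import math

def robbins_bounds(n):
    """Robbins' two-sided bounds on n!, valid for every positive integer n."""
    base = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    return base * math.exp(1 / (12 * n + 1)), base * math.exp(1 / (12 * n))

for n in (1, 5, 50):
    lo, hi = robbins_bounds(n)
    print(n, lo < math.factorial(n) < hi)
```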
== Stirling's formula for the gamma function ==
For all positive integers,
{\displaystyle n!=\Gamma (n+1),}
where Γ denotes the gamma function.
However, the gamma function, unlike the factorial, is more broadly defined for all complex numbers other than non-positive integers; nevertheless, Stirling's formula may still be applied. If Re(z) > 0, then
{\displaystyle \ln \Gamma (z)=z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+\int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{z}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t.}
Repeated integration by parts gives
{\displaystyle {\begin{aligned}\ln \Gamma (z)&\sim z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+\sum _{n=1}^{N-1}{\frac {B_{2n}}{2n(2n-1)z^{2n-1}}}\\&=z\ln z-z+{\tfrac {1}{2}}\ln {\frac {2\pi }{z}}+{\frac {1}{12z}}-{\frac {1}{360z^{3}}}+{\frac {1}{1260z^{5}}}+\dots ,\end{aligned}}}
where
{\displaystyle B_{n}}
is the
{\displaystyle n}
th Bernoulli number (note that the limit of the sum as
{\displaystyle N\to \infty }
is not convergent, so this formula is just an asymptotic expansion). The formula is valid for
{\displaystyle z}
large enough in absolute value, when |arg(z)| < π − ε, where ε is positive, with an error term of O(z^{−2N+1}). The corresponding approximation may now be written:
{\displaystyle \Gamma (z)={\sqrt {\frac {2\pi }{z}}}\,{\left({\frac {z}{e}}\right)}^{z}\left(1+O\left({\frac {1}{z}}\right)\right).}
where the expansion is identical to that of Stirling's series above for
{\displaystyle n!}
, except that
{\displaystyle n}
is replaced with z − 1.
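The truncated Bernoulli-number series can be compared against `math.lgamma` for real arguments; the sketch below (an illustration with the first two correction terms, matching the 1/(12z) and −1/(360z³) terms above) shows the error is tiny for moderately large z:

```python
import math

def lngamma_asym(z, N=3):
    """Truncated asymptotic series for ln(Gamma(z)) using Bernoulli numbers B_2, B_4."""
    B = {2: 1 / 6, 4: -1 / 30}            # first even Bernoulli numbers
    s = z * math.log(z) - z + 0.5 * math.log(2 * math.pi / z)
    for n in range(1, N):
        s += B[2 * n] / (2 * n * (2 * n - 1) * z ** (2 * n - 1))
    return s

# Error is of order 1/(1260 z^5): already ~1e-8 at z = 10.
print(abs(lngamma_asym(10.0) - math.lgamma(10.0)))
```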
A further application of this asymptotic expansion is for complex argument z with constant Re(z). See for example the Stirling formula applied in Im(z) = t of the Riemann–Siegel theta function on the straight line 1/4 + it.
== A convergent version of Stirling's formula ==
Thomas Bayes showed, in a letter to John Canton published by the Royal Society in 1763, that Stirling's formula did not give a convergent series. Obtaining a convergent version of Stirling's formula entails evaluating Binet's formula:
{\displaystyle \int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{x}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t=\ln \Gamma (x)-x\ln x+x-{\tfrac {1}{2}}\ln {\frac {2\pi }{x}}.}
One way to do this is by means of a convergent series of inverted rising factorials. If
{\displaystyle z^{\bar {n}}=z(z+1)\cdots (z+n-1),}
then
{\displaystyle \int _{0}^{\infty }{\frac {2\arctan \left({\frac {t}{x}}\right)}{e^{2\pi t}-1}}\,{\rm {d}}t=\sum _{n=1}^{\infty }{\frac {c_{n}}{(x+1)^{\bar {n}}}},}
where
{\displaystyle c_{n}={\frac {1}{n}}\int _{0}^{1}x^{\bar {n}}\left(x-{\tfrac {1}{2}}\right)\,{\rm {d}}x={\frac {1}{2n}}\sum _{k=1}^{n}{\frac {k|s(n,k)|}{(k+1)(k+2)}},}
where s(n, k) denotes the Stirling numbers of the first kind. From this one obtains a version of Stirling's series
{\displaystyle {\begin{aligned}\ln \Gamma (x)&=x\ln x-x+{\tfrac {1}{2}}\ln {\frac {2\pi }{x}}+{\frac {1}{12(x+1)}}+{\frac {1}{12(x+1)(x+2)}}\\&\quad +{\frac {59}{360(x+1)(x+2)(x+3)}}+{\frac {29}{60(x+1)(x+2)(x+3)(x+4)}}+\cdots ,\end{aligned}}}
which converges when Re(x) > 0.
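A partial sum of this convergent series is easy to evaluate; the sketch below (an illustration using only the four terms displayed above) shows it already lands close to `math.lgamma` at a small argument, though convergence is slow:

```python
import math

def ln_gamma_convergent(x, terms=((1, 12), (1, 12), (59, 360), (29, 60))):
    """Partial sum of the convergent Stirling-type series (four terms from the text)."""
    s = x * math.log(x) - x + 0.5 * math.log(2 * math.pi / x)
    prod = 1.0
    for k, (num, den) in enumerate(terms, start=1):
        prod *= x + k                    # running product (x+1)(x+2)...(x+k)
        s += num / (den * prod)
    return s

print(abs(ln_gamma_convergent(3.0) - math.lgamma(3.0)))   # small residual
```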
Stirling's formula may also be given in convergent form as
{\displaystyle \Gamma (x)={\sqrt {2\pi }}x^{x-{\frac {1}{2}}}e^{-x+\mu (x)}}
where
{\displaystyle \mu \left(x\right)=\sum _{n=0}^{\infty }\left(\left(x+n+{\frac {1}{2}}\right)\ln \left(1+{\frac {1}{x+n}}\right)-1\right).}
== Versions suitable for calculators ==
The approximation
{\displaystyle \Gamma (z)\approx {\sqrt {\frac {2\pi }{z}}}\left({\frac {z}{e}}{\sqrt {z\sinh {\frac {1}{z}}+{\frac {1}{810z^{6}}}}}\right)^{z}}
and its equivalent form
{\displaystyle 2\ln \Gamma (z)\approx \ln(2\pi )-\ln z+z\left(2\ln z+\ln \left(z\sinh {\frac {1}{z}}+{\frac {1}{810z^{6}}}\right)-2\right)}
can be obtained by rearranging Stirling's extended formula and observing a coincidence between the resultant power series and the Taylor series expansion of the hyperbolic sine function. This approximation is good to more than 8 decimal digits for z with a real part greater than 8. Robert H. Windschitl suggested it in 2002 for computing the gamma function with fair accuracy on calculators with limited program or register memory.
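The Windschitl formula translates directly into code; the sketch below (an illustration, compared against `math.gamma`) checks the claimed accuracy at a real argument above 8:

```python
import math

def gamma_windschitl(z):
    """Windschitl's approximation to Gamma(z), good to ~8 digits for Re(z) > 8."""
    return math.sqrt(2 * math.pi / z) * (
        (z / math.e) * math.sqrt(z * math.sinh(1 / z) + 1 / (810 * z ** 6))
    ) ** z

z = 10.0
print(abs(gamma_windschitl(z) - math.gamma(z)) / math.gamma(z))  # below 1e-8
```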
Gergő Nemes proposed in 2007 an approximation which gives the same number of exact digits as the Windschitl approximation but is much simpler:
{\displaystyle \Gamma (z)\approx {\sqrt {\frac {2\pi }{z}}}\left({\frac {1}{e}}\left(z+{\frac {1}{12z-{\frac {1}{10z}}}}\right)\right)^{z},}
or equivalently,
{\displaystyle \ln \Gamma (z)\approx {\tfrac {1}{2}}\left(\ln(2\pi )-\ln z\right)+z\left(\ln \left(z+{\frac {1}{12z-{\frac {1}{10z}}}}\right)-1\right).}
An alternative approximation for the gamma function stated by Srinivasa Ramanujan in Ramanujan's lost notebook is
{\displaystyle \Gamma (1+x)\approx {\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{30}}\right)^{\frac {1}{6}}}
for x ≥ 0. The equivalent approximation for ln n! has an asymptotic error of 1/1400n3 and is given by
{\displaystyle \ln n!\approx n\ln n-n+{\tfrac {1}{6}}\ln(8n^{3}+4n^{2}+n+{\tfrac {1}{30}})+{\tfrac {1}{2}}\ln \pi .}
The approximation may be made precise by giving paired upper and lower bounds; one such inequality is
{\displaystyle {\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{100}}\right)^{1/6}<\Gamma (1+x)<{\sqrt {\pi }}\left({\frac {x}{e}}\right)^{x}\left(8x^{3}+4x^{2}+x+{\frac {1}{30}}\right)^{1/6}.}
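The paired Ramanujan-style bounds can be checked numerically; in this sketch (an illustration against `math.gamma`) the only difference between the two bounds is the constant term, 1/100 versus 1/30:

```python
import math

def ramanujan_gamma(x, c):
    """Ramanujan-style approximation to Gamma(1+x); c = 1/30 gives the upper bound,
    c = 1/100 the lower bound from the inequality in the text."""
    return math.sqrt(math.pi) * (x / math.e) ** x * (8 * x**3 + 4 * x**2 + x + c) ** (1 / 6)

x = 7.0
lower = ramanujan_gamma(x, 1 / 100)
upper = ramanujan_gamma(x, 1 / 30)
print(lower < math.gamma(1 + x) < upper)
```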
== History ==
The formula was first discovered by Abraham de Moivre in the form
{\displaystyle n!\sim [{\rm {constant}}]\cdot n^{n+{\frac {1}{2}}}e^{-n}.}
De Moivre gave an approximate rational-number expression for the natural logarithm of the constant. Stirling's contribution consisted of showing that the constant is precisely
{\displaystyle {\sqrt {2\pi }}}
.
== See also ==
Lanczos approximation
Spouge's approximation
== References ==
== Further reading ==
Abramowitz, M. & Stegun, I. (2002), Handbook of Mathematical Functions
Paris, R. B. & Kaminski, D. (2001), Asymptotics and Mellin–Barnes Integrals, New York: Cambridge University Press, ISBN 978-0-521-79001-7
Whittaker, E. T. & Watson, G. N. (1996), A Course in Modern Analysis (4th ed.), New York: Cambridge University Press, ISBN 978-0-521-58807-2
Romik, Dan (2000), "Stirling's approximation for
n
!
{\displaystyle n!}
: the ultimate short proof?", The American Mathematical Monthly, 107 (6): 556–557, doi:10.2307/2589351, JSTOR 2589351, MR 1767064
Li, Yuan-Chuan (July 2006), "A note on an identity of the gamma function and Stirling's formula", Real Analysis Exchange, 32 (1): 267–271, MR 2329236
== External links ==
"Stirling_formula", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Peter Luschny, Approximation formulas for the factorial function n!
Weisstein, Eric W., "Stirling's Approximation", MathWorld
Stirling's approximation at PlanetMath. | Wikipedia/Stirling's_approximation |
In mathematical analysis, and applications in geometry, applied mathematics, engineering, and natural sciences, a function of a real variable is a function whose domain is the real numbers
{\displaystyle \mathbb {R} }
, or a subset of
{\displaystyle \mathbb {R} }
that contains an interval of positive length. Most real functions that are considered and studied are differentiable in some interval.
The most widely considered such functions are the real functions, which are the real-valued functions of a real variable, that is, the functions of a real variable whose codomain is the set of real numbers.
Nevertheless, the codomain of a function of a real variable may be any set. However, it is often assumed to have a structure of
{\displaystyle \mathbb {R} }
-vector space over the reals. That is, the codomain may be a Euclidean space, a coordinate vector, the set of matrices of real numbers of a given size, or an
{\displaystyle \mathbb {R} }
-algebra, such as the complex numbers or the quaternions. The
{\displaystyle \mathbb {R} }
-vector space structure of the codomain induces an
{\displaystyle \mathbb {R} }
-vector space structure on the functions. If the codomain has a structure of
{\displaystyle \mathbb {R} }
-algebra, the same is true for the functions.
The image of a function of a real variable is a curve in the codomain. In this context, a function that defines curve is called a parametric equation of the curve.
When the codomain of a function of a real variable is a finite-dimensional vector space, the function may be viewed as a sequence of real functions. This is often used in applications.
== Real function ==
A real function is a function from a subset of
{\displaystyle \mathbb {R} }
to
{\displaystyle \mathbb {R} ,}
where
{\displaystyle \mathbb {R} }
denotes as usual the set of real numbers. That is, the domain of a real function is a subset of
{\displaystyle \mathbb {R} }
, and its codomain is
{\displaystyle \mathbb {R} .}
It is generally assumed that the domain contains an interval of positive length.
=== Basic examples ===
For many commonly used real functions, the domain is the whole set of real numbers, and the function is continuous and differentiable at every point of the domain. One says that these functions are defined, continuous and differentiable everywhere. This is the case of:
All polynomial functions, including constant functions and linear functions
Sine and cosine functions
Exponential function
Some functions are defined everywhere, but not continuous at some points. For example
The Heaviside step function is defined everywhere, but not continuous at zero.
Some functions are defined and continuous everywhere, but not everywhere differentiable. For example
The absolute value is defined and continuous everywhere, and is differentiable everywhere, except for zero.
The cubic root is defined and continuous everywhere, and is differentiable everywhere, except for zero.
Many common functions are not defined everywhere, but are continuous and differentiable everywhere where they are defined. For example:
A rational function is a quotient of two polynomial functions, and is not defined at the zeros of the denominator.
The tangent function is not defined for
{\displaystyle {\frac {\pi }{2}}+k\pi ,}
where k is any integer.
The logarithm function is defined only for positive values of the variable.
Some functions are continuous in their whole domain, and not differentiable at some points. This is the case of:
The square root is defined only for nonnegative values of the variable, and not differentiable at 0 (it is differentiable for all positive values of the variable).
== General definition ==
A real-valued function of a real variable is a function that takes as input a real number, commonly represented by the variable x, for producing another real number, the value of the function, commonly denoted f(x). For simplicity, in this article a real-valued function of a real variable will be simply called a function. To avoid any ambiguity, the other types of functions that may occur will be explicitly specified.
Some functions are defined for all real values of the variables (one says that they are everywhere defined), but some other functions are defined only if the value of the variable is taken in a subset X of
{\displaystyle \mathbb {R} }
, the domain of the function, which is always supposed to contain an interval of positive length. In other words, a real-valued function of a real variable is a function
{\displaystyle f:X\to \mathbb {R} }
such that its domain X is a subset of
{\displaystyle \mathbb {R} }
that contains an interval of positive length.
A simple example of a function in one variable could be:
{\displaystyle f:X\to \mathbb {R} }
{\displaystyle X=\{x\in \mathbb {R} \,:\,x\geq 0\}}
{\displaystyle f(x)={\sqrt {x}}}
which is the square root of x.
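This example can be made concrete in code; the sketch below (an illustrative choice, modeling the domain restriction with an explicit check) implements f on X = [0, ∞):

```python
import math

def f(x: float) -> float:
    """The real function f(x) = sqrt(x), with domain X = {x in R : x >= 0}."""
    if x < 0:
        raise ValueError("x is outside the domain X = [0, inf)")
    return math.sqrt(x)

print(f(9.0))   # 3.0
```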
=== Image ===
The image of a function
{\displaystyle f(x)}
is the set of all values of f when the variable x runs in the whole domain of f. For a continuous (see below for a definition) real-valued function with a connected domain, the image is either an interval or a single value. In the latter case, the function is a constant function.
The preimage of a given real number y is the set of the solutions of the equation y = f(x).
=== Domain ===
The domain of a function of a real variable is a subset of
{\displaystyle \mathbb {R} }
that is sometimes explicitly defined. In fact, if one restricts the domain X of a function f to a subset Y ⊂ X, one gets formally a different function, the restriction of f to Y, which is denoted f|Y. In practice, it is often not harmful to identify f and f|Y, and to omit the subscript |Y.
Conversely, it is sometimes possible to enlarge naturally the domain of a given function, for example by continuity or by analytic continuation. This means that it is not always worthwhile to explicitly define the domain of a function of a real variable.
=== Algebraic structure ===
The arithmetic operations may be applied to the functions in the following way:
For every real number r, the constant function
{\displaystyle (x)\mapsto r}
is everywhere defined.
For every real number r and every function f, the function
{\displaystyle rf:(x)\mapsto rf(x)}
has the same domain as f (or is everywhere defined if r = 0).
If f and g are two functions of respective domains X and Y such that X∩Y contains an open subset of
{\displaystyle \mathbb {R} }
, then
{\displaystyle f+g:(x)\mapsto f(x)+g(x)}
and
{\displaystyle f\,g:(x)\mapsto f(x)\,g(x)}
are functions that have a domain containing X∩Y.
It follows that the functions of a real variable that are everywhere defined and those that are defined in some neighbourhood of a given point both form commutative algebras over the reals (
{\displaystyle \mathbb {R} }
-algebras).
One may similarly define
{\displaystyle 1/f:(x)\mapsto 1/f(x),}
which is a function only if the set of the points (x) in the domain of f such that f(x) ≠ 0 contains an open subset of
{\displaystyle \mathbb {R} }
. This constraint implies that the above two algebras are not fields.
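The pointwise operations above can be sketched in code. This is an illustration only: the domain of each function is modeled implicitly by where the callable is valid, and the result's domain is the intersection of the operands' domains:

```python
# Pointwise algebra of real functions: (f+g)(x) = f(x)+g(x), (f*g)(x) = f(x)*g(x).

def add(f, g):
    return lambda x: f(x) + g(x)

def mul(f, g):
    return lambda x: f(x) * g(x)

f = lambda x: x ** 2        # everywhere defined
g = lambda x: 1 / x         # defined for x != 0

h = add(f, g)               # defined on the intersection: x != 0
print(h(2.0))               # 4.0 + 0.5 = 4.5
```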
=== Continuity and limit ===
Until the second part of the 19th century, only continuous functions were considered by mathematicians. At that time, the notion of continuity was elaborated for the functions of one or several real variables, a rather long time before the formal definition of a topological space and a continuous map between topological spaces. As continuous functions of a real variable are ubiquitous in mathematics, it is worth defining this notion without reference to the general notion of continuous maps between topological spaces.
For defining the continuity, it is useful to consider the distance function of
{\displaystyle \mathbb {R} }
, which is an everywhere defined function of 2 real variables:
{\displaystyle d(x,y)=|x-y|}
A function f is continuous at a point
{\displaystyle a}
which is interior to its domain, if, for every positive real number ε, there is a positive real number φ such that
{\displaystyle |f(x)-f(a)|<\varepsilon }
for all
{\displaystyle x}
such that
{\displaystyle d(x,a)<\varphi .}
In other words, φ may be chosen small enough for having the image by f of the interval of radius φ centered at
{\displaystyle a}
contained in the interval of length 2ε centered at
{\displaystyle f(a).}
A function is continuous if it is continuous at every point of its domain.
The limit of a real-valued function of a real variable is defined as follows. Let a be a point in the topological closure of the domain X of the function f. The function f has a limit L when x tends toward a, denoted
{\displaystyle L=\lim _{x\to a}f(x),}
if the following condition is satisfied:
For every positive real number ε > 0, there is a positive real number δ > 0 such that
{\displaystyle |f(x)-L|<\varepsilon }
for all x in the domain such that
{\displaystyle d(x,a)<\delta .}
If the limit exists, it is unique. If a is in the interior of the domain, the limit exists if and only if the function is continuous at a. In this case, we have
{\displaystyle f(a)=\lim _{x\to a}f(x).}
When a is in the boundary of the domain of f, and if f has a limit at a, the latter formula allows one to "extend by continuity" the domain of f to a.
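A standard instance of such an extension (this example is ours, not from the text) is f(x) = sin(x)/x, whose natural domain excludes 0 but whose limit there is 1:

```python
import math

def f(x):
    """f(x) = sin(x)/x, with domain R \\ {0}."""
    return math.sin(x) / x

# The limit as x -> 0 is 1, so f extends by continuity with f(0) := 1.
for x in (0.1, 0.01, 0.001):
    print(x, f(x))
```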
== Calculus ==
One can collect a number of functions each of a real variable, say
{\displaystyle y_{1}=f_{1}(x)\,,\quad y_{2}=f_{2}(x)\,,\ldots ,y_{n}=f_{n}(x)}
into a vector parametrized by x:
{\displaystyle \mathbf {y} =(y_{1},y_{2},\ldots ,y_{n})=[f_{1}(x),f_{2}(x),\ldots ,f_{n}(x)]}
The derivative of the vector y is the vector of the derivatives of fi(x) for i = 1, 2, ..., n:
{\displaystyle {\frac {d\mathbf {y} }{dx}}=\left({\frac {dy_{1}}{dx}},{\frac {dy_{2}}{dx}},\ldots ,{\frac {dy_{n}}{dx}}\right)}
One can also perform line integrals along a space curve parametrized by x, with position vector r = r(x), by integrating with respect to the variable x:
{\displaystyle \int _{a}^{b}\mathbf {y} (x)\cdot d\mathbf {r} =\int _{a}^{b}\mathbf {y} (x)\cdot {\frac {d\mathbf {r} (x)}{dx}}dx}
where · is the dot product, and x = a and x = b are the start and endpoints of the curve.
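The line-integral formula can be sketched numerically. The example below is illustrative (our own choice of curve and field): r(x) traces the unit circle and y(x) equals its unit tangent, so the integrand y·(dr/dx) is identically 1 and the integral over [0, π] is π:

```python
import math

def line_integral(y, dr, a, b, steps=100000):
    """Approximate the line integral of y along r via the midpoint rule,
    using the parametrization integral of y(x) . (dr/dx) dx."""
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * h
        total += sum(yc * rc for yc, rc in zip(y(x), dr(x))) * h
    return total

# Curve r(x) = (cos x, sin x); field y(x) equal to the unit tangent (-sin x, cos x).
y = lambda x: (-math.sin(x), math.cos(x))
dr = lambda x: (-math.sin(x), math.cos(x))
print(line_integral(y, dr, 0.0, math.pi))   # close to pi
```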
=== Theorems ===
With the definitions of integration and derivatives, key theorems can be formulated, including the fundamental theorem of calculus, integration by parts, and Taylor's theorem. Evaluating a mixture of integrals and derivatives can be done by using differentiation under the integral sign.
== Implicit functions ==
A real-valued implicit function of a real variable is not written in the form "y = f(x)". Instead, the mapping is from the space
{\displaystyle \mathbb {R} ^{2}}
to the zero element in
{\displaystyle \mathbb {R} }
(just the ordinary zero 0):
{\displaystyle \phi :\mathbb {R} ^{2}\to \{0\}}
and
{\displaystyle \phi (x,y)=0}
is an equation in the variables. Implicit functions are a more general way to represent functions, since if:
{\displaystyle y=f(x)}
then we can always define:
{\displaystyle \phi (x,y)=y-f(x)=0}
but the converse is not always possible, i.e. not all implicit functions have the form of this equation.
== One-dimensional space curves in {\displaystyle \mathbb {R} ^{n}} ==
=== Formulation ===
Given the functions r1 = r1(t), r2 = r2(t), ..., rn = rn(t) all of a common variable t, so that:
{\displaystyle {\begin{aligned}r_{1}:\mathbb {R} \rightarrow \mathbb {R} &\quad r_{2}:\mathbb {R} \rightarrow \mathbb {R} &\cdots &\quad r_{n}:\mathbb {R} \rightarrow \mathbb {R} \\r_{1}=r_{1}(t)&\quad r_{2}=r_{2}(t)&\cdots &\quad r_{n}=r_{n}(t)\\\end{aligned}}}
or taken together:
{\displaystyle \mathbf {r} :\mathbb {R} \rightarrow \mathbb {R} ^{n}\,,\quad \mathbf {r} =\mathbf {r} (t)}
then the parametrized n-tuple,
{\displaystyle \mathbf {r} (t)=[r_{1}(t),r_{2}(t),\ldots ,r_{n}(t)]}
describes a one-dimensional space curve.
=== Tangent line to curve ===
At a point r(t = c) = a = (a1, a2, ..., an) for some constant t = c, the equations of the one-dimensional tangent line to the curve at that point are given in terms of the ordinary derivatives of r1(t), r2(t), ..., rn(t), and r with respect to t:
{\displaystyle {\frac {r_{1}(t)-a_{1}}{dr_{1}(t)/dt}}={\frac {r_{2}(t)-a_{2}}{dr_{2}(t)/dt}}=\cdots ={\frac {r_{n}(t)-a_{n}}{dr_{n}(t)/dt}}}
=== Normal plane to curve ===
The equation of the n-dimensional hyperplane normal to the tangent line at r = a is:
{\displaystyle (p_{1}-a_{1}){\frac {dr_{1}(t)}{dt}}+(p_{2}-a_{2}){\frac {dr_{2}(t)}{dt}}+\cdots +(p_{n}-a_{n}){\frac {dr_{n}(t)}{dt}}=0}
or in terms of the dot product:
{\displaystyle (\mathbf {p} -\mathbf {a} )\cdot {\frac {d\mathbf {r} (t)}{dt}}=0}
where p = (p1, p2, ..., pn) are points in the plane, not on the space curve.
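The dot-product form of the normal-plane equation is easy to check on a concrete curve. The helix below is our own illustrative example:

```python
import math

# Space curve r(t) = (cos t, sin t, t), a helix, with derivative dr/dt.
r  = lambda t: (math.cos(t), math.sin(t), t)
dr = lambda t: (-math.sin(t), math.cos(t), 1.0)

c = 0.0
a = r(c)                     # point of tangency: (1, 0, 0)
v = dr(c)                    # tangent direction at t = c: (0, 1, 1)

# Any point p in the normal plane at r(c) satisfies (p - a) . v = 0.
p = (1.0, -1.0, 1.0)         # p - a = (0, -1, 1), orthogonal to v
dot = sum((pi - ai) * vi for pi, ai, vi in zip(p, a, v))
print(dot)                   # 0.0, so p lies in the normal plane
```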
=== Relation to kinematics ===
The physical and geometric interpretation of dr(t)/dt is the "velocity" of a point-like particle moving along the path r(t), treating r as the spatial position vector coordinates parametrized by time t, and is a vector tangent to the space curve for all t in the instantaneous direction of motion. At t = c, the space curve has a tangent vector dr(t)/dt|t = c, and the hyperplane normal to the space curve at t = c is also normal to the tangent at t = c. Any vector in this plane (p − a) must be normal to dr(t)/dt|t = c.
Similarly, d2r(t)/dt2 is the "acceleration" of the particle, and is a vector normal to the curve directed along the radius of curvature.
== Matrix valued functions ==
A matrix can also be a function of a single variable. For example, the rotation matrix in 2d:
{\displaystyle R(\theta )={\begin{bmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \\\end{bmatrix}}}
is a matrix-valued function of the rotation angle θ about the origin. Similarly, in special relativity, the Lorentz transformation matrix for a pure boost (without rotations):
{\displaystyle \Lambda (\beta )={\begin{bmatrix}{\frac {1}{\sqrt {1-\beta ^{2}}}}&-{\frac {\beta }{\sqrt {1-\beta ^{2}}}}&0&0\\-{\frac {\beta }{\sqrt {1-\beta ^{2}}}}&{\frac {1}{\sqrt {1-\beta ^{2}}}}&0&0\\0&0&1&0\\0&0&0&1\\\end{bmatrix}}}
is a function of the boost parameter β = v/c, in which v is the relative velocity between the frames of reference (a continuous variable), and c is the speed of light, a constant.
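A matrix-valued function can be written directly as a function returning a matrix; the sketch below (an illustration using plain nested lists) checks the rotation-matrix group property R(a)R(b) = R(a+b):

```python
import math

def R(theta):
    """2D rotation matrix as a function of the angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Composition of rotations adds the angles: R(0.3) R(0.4) == R(0.7).
A = matmul(R(0.3), R(0.4))
B = R(0.7)
print(max(abs(A[i][j] - B[i][j]) for i in range(2) for j in range(2)))  # ~0
```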
== Banach and Hilbert spaces and quantum mechanics ==
Generalizing the previous section, the output of a function of a real variable can also lie in a Banach space or a Hilbert space. In these spaces, addition, scalar multiplication, and limits are all defined, so notions such as derivative and integral still apply. This occurs especially often in quantum mechanics, where one takes the derivative of a ket or an operator. This occurs, for instance, in the general time-dependent Schrödinger equation:
{\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi ={\hat {H}}\Psi }
where one takes the derivative of a wave function, which can be an element of several different Hilbert spaces.
== Complex-valued function of a real variable ==
A complex-valued function of a real variable may be defined by relaxing, in the definition of the real-valued functions, the restriction of the codomain to the real numbers, and allowing complex values.
If f(x) is such a complex valued function, it may be decomposed as
f(x) = g(x) + ih(x),
where g and h are real-valued functions. In other words, the study of the complex valued functions reduces easily to the study of the pairs of real valued functions.
== Cardinality of sets of functions of a real variable ==
The cardinality of the set of real-valued functions of a real variable,
{\displaystyle \mathbb {R} ^{\mathbb {R} }=\{f:\mathbb {R} \to \mathbb {R} \}}
, is
{\displaystyle \beth _{2}=2^{\mathfrak {c}}}
, which is strictly larger than the cardinality of the continuum (i.e., set of all real numbers). This fact is easily verified by cardinal arithmetic:
{\displaystyle \mathrm {card} (\mathbb {R} ^{\mathbb {R} })=\mathrm {card} (\mathbb {R} )^{\mathrm {card} (\mathbb {R} )}={\mathfrak {c}}^{\mathfrak {c}}=(2^{\aleph _{0}})^{\mathfrak {c}}=2^{\aleph _{0}\cdot {\mathfrak {c}}}=2^{\mathfrak {c}}.}
Furthermore, if
{\displaystyle X}
is a set such that
{\displaystyle 2\leq \mathrm {card} (X)\leq {\mathfrak {c}}}
, then the cardinality of the set
{\displaystyle X^{\mathbb {R} }=\{f:\mathbb {R} \to X\}}
is also
{\displaystyle 2^{\mathfrak {c}}}
, since
{\displaystyle 2^{\mathfrak {c}}=\mathrm {card} (2^{\mathbb {R} })\leq \mathrm {card} (X^{\mathbb {R} })\leq \mathrm {card} (\mathbb {R} ^{\mathbb {R} })=2^{\mathfrak {c}}.}
However, the set of continuous functions
{\displaystyle C^{0}(\mathbb {R} )=\{f:\mathbb {R} \to \mathbb {R} :f\ \mathrm {continuous} \}}
has a strictly smaller cardinality, the cardinality of the continuum,
{\displaystyle {\mathfrak {c}}}
. This follows from the fact that a continuous function is completely determined by its values on a dense subset of its domain. Thus, the cardinality of the set of continuous real-valued functions on the reals is no greater than the cardinality of the set of real-valued functions of a rational variable. By cardinal arithmetic:
{\displaystyle \mathrm {card} (C^{0}(\mathbb {R} ))\leq \mathrm {card} (\mathbb {R} ^{\mathbb {Q} })=(2^{\aleph _{0}})^{\aleph _{0}}=2^{\aleph _{0}\cdot \aleph _{0}}=2^{\aleph _{0}}={\mathfrak {c}}.}
On the other hand, since there is a clear bijection between
{\displaystyle \mathbb {R} }
and the set of constant functions
{\displaystyle \{f:\mathbb {R} \to \mathbb {R} :f(x)\equiv x_{0}\}}
, which forms a subset of
{\displaystyle C^{0}(\mathbb {R} )}
,
{\displaystyle \mathrm {card} (C^{0}(\mathbb {R} ))\geq {\mathfrak {c}}}
must also hold. Hence,
{\displaystyle \mathrm {card} (C^{0}(\mathbb {R} ))={\mathfrak {c}}}
.
== See also ==
Real analysis
Function of several real variables
Complex analysis
Function of several complex variables
== References ==
F. Ayres, E. Mendelson (2009). Calculus. Schaum's outline series (5th ed.). McGraw Hill. ISBN 978-0-07-150861-2.
R. Wrede, M. R. Spiegel (2010). Advanced calculus. Schaum's outline series (3rd ed.). McGraw Hill. ISBN 978-0-07-162366-7.
N. Bourbaki (2004). Functions of a Real Variable: Elementary Theory. Springer. ISBN 354-065-340-6.
== External links ==
Multivariable Calculus
L. A. Talman (2007) Differentiability for Multivariable Functions
In calculus, the integral of the secant function can be evaluated using a variety of methods and there are multiple ways of expressing the antiderivative, all of which can be shown to be equivalent via trigonometric identities,
{\displaystyle \int \sec \theta \,d\theta ={\begin{cases}{\dfrac {1}{2}}\ln {\dfrac {1+\sin \theta }{1-\sin \theta }}+C\\[15mu]\ln {{\bigl |}\sec \theta +\tan \theta \,{\bigr |}}+C\\[15mu]\ln {\left|\,{\tan }{\biggl (}{\dfrac {\theta }{2}}+{\dfrac {\pi }{4}}{\biggr )}\right|}+C\end{cases}}}
This formula is useful for evaluating various trigonometric integrals. In particular, it can be used to evaluate the integral of the secant cubed, which, though seemingly special, comes up rather frequently in applications.
The definite integral of the secant function starting from {\displaystyle 0} is the inverse Gudermannian function, {\textstyle \operatorname {gd} ^{-1}.}
For numerical applications, all of the above expressions result in loss of significance for some arguments. An alternative expression in terms of the inverse hyperbolic sine arsinh is numerically well behaved for real arguments
{\textstyle |\phi |<{\tfrac {1}{2}}\pi }:
{\displaystyle \operatorname {gd} ^{-1}\phi =\int _{0}^{\phi }\sec \theta \,d\theta =\operatorname {arsinh} (\tan \phi ).}
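As a quick numerical illustration (a Python sketch added here, not part of the original article), the arsinh form can be checked against the logarithmic form, and differentiating it numerically should recover the secant:

```python
import math

def gd_inv_log(phi):
    # ln|sec(phi) + tan(phi)| -- can lose significance when sec + tan nearly cancels
    return math.log(abs(1 / math.cos(phi) + math.tan(phi)))

def gd_inv_stable(phi):
    # arsinh(tan(phi)) -- numerically well behaved for |phi| < pi/2
    return math.asinh(math.tan(phi))

# The two expressions agree on (-pi/2, pi/2)...
for phi in [-1.2, -0.3, 0.0, 0.8, 1.4]:
    assert abs(gd_inv_log(phi) - gd_inv_stable(phi)) < 1e-9

# ...and a central-difference derivative of the stable form recovers sec(phi).
h = 1e-6
phi = 0.7
deriv = (gd_inv_stable(phi + h) - gd_inv_stable(phi - h)) / (2 * h)
assert abs(deriv - 1 / math.cos(phi)) < 1e-5
```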
The integral of the secant function was historically one of the first integrals of its type ever evaluated, before most of the development of integral calculus. It is important because it is the vertical coordinate of the Mercator projection, used for marine navigation with constant compass bearing.
== Proof that the different antiderivatives are equivalent ==
=== Trigonometric forms ===
Three common expressions for the integral of the secant,
{\displaystyle {\begin{aligned}\int \sec \theta \,d\theta &={\dfrac {1}{2}}\ln {\dfrac {1+\sin \theta }{1-\sin \theta }}+C\\[5mu]&=\ln {{\bigl |}\sec \theta +\tan \theta \,{\bigr |}}+C\\[5mu]&=\ln {\left|\,{\tan }{\biggl (}{\frac {\theta }{2}}+{\frac {\pi }{4}}{\biggr )}\right|}+C,\end{aligned}}}
are equivalent because
{\displaystyle {\sqrt {\dfrac {1+\sin \theta }{1-\sin \theta }}}={\bigl |}\sec \theta +\tan \theta \,{\bigr |}=\left|\,{\tan }{\biggl (}{\frac {\theta }{2}}+{\frac {\pi }{4}}{\biggr )}\right|.}
Proof: we can separately apply the tangent half-angle substitution {\displaystyle t=\tan {\tfrac {1}{2}}\theta } to each of the three forms, and show them equivalent to the same expression in terms of {\displaystyle t.} Under this substitution {\displaystyle \cos \theta =(1-t^{2}){\big /}(1+t^{2})} and {\displaystyle \sin \theta =2t{\big /}(1+t^{2}).}
First,
{\displaystyle {\begin{aligned}{\sqrt {\dfrac {1+\sin \theta }{1-\sin \theta }}}&={\sqrt {\frac {1+{\dfrac {2t}{1+t^{2}}}}{1-{\dfrac {2t}{1+t^{2}}}}}}={\sqrt {\frac {1+t^{2}+2t}{1+t^{2}-2t}}}={\sqrt {\frac {(1+t)^{2}}{(1-t)^{2}}}}\\[5mu]&=\left|{\frac {1+t}{1-t}}\right|.\end{aligned}}}
Second,
{\displaystyle {\begin{aligned}{\bigl |}\sec \theta +\tan \theta \,{\bigr |}&=\left|{\frac {1}{\cos \theta }}+{\frac {\sin \theta }{\cos \theta }}\right|=\left|{\frac {1+t^{2}}{1-t^{2}}}+{\frac {2t}{1-t^{2}}}\right|=\left|{\frac {(1+t)^{2}}{(1+t)(1-t)}}\right|\\[5mu]&=\left|{\frac {1+t}{1-t}}\right|.\end{aligned}}}
Third, using the tangent addition identity
{\displaystyle \tan(\phi +\psi )=(\tan \phi +\tan \psi ){\big /}(1-\tan \phi \,\tan \psi ),}
{\displaystyle {\begin{aligned}\left|\,{\tan }{\biggl (}{\frac {\theta }{2}}+{\frac {\pi }{4}}{\biggr )}\right|&=\left|{\frac {\tan {\tfrac {1}{2}}\theta +\tan {\tfrac {1}{4}}\pi }{1-\tan {\tfrac {1}{2}}\theta \,\tan {\tfrac {1}{4}}\pi }}\right|=\left|{\frac {t+1}{1-t\cdot 1}}\right|\\[5mu]&=\left|{\frac {1+t}{1-t}}\right|.\end{aligned}}}
So all three expressions describe the same quantity.
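These equalities can also be spot-checked numerically; the following Python sketch (added for illustration, not from the article) evaluates all three antiderivative forms at several angles and confirms they coincide:

```python
import math

def half_log_form(t):    # (1/2) ln((1 + sin t)/(1 - sin t))
    return 0.5 * math.log((1 + math.sin(t)) / (1 - math.sin(t)))

def sec_tan_form(t):     # ln|sec t + tan t|
    return math.log(abs(1 / math.cos(t) + math.tan(t)))

def half_angle_form(t):  # ln|tan(t/2 + pi/4)|
    return math.log(abs(math.tan(t / 2 + math.pi / 4)))

for t in [-1.3, -0.5, 0.0, 0.7, 1.2]:
    a, b, c = half_log_form(t), sec_tan_form(t), half_angle_form(t)
    assert abs(a - b) < 1e-12 and abs(b - c) < 1e-12
```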
The conventional solution for the Mercator projection ordinate may be written without the absolute value signs since the latitude {\displaystyle \varphi } lies between {\textstyle -{\tfrac {1}{2}}\pi } and {\textstyle {\tfrac {1}{2}}\pi }:
{\displaystyle y=\ln \,{\tan }{\biggl (}{\frac {\varphi }{2}}+{\frac {\pi }{4}}{\biggr )}.}
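This ordinate is easy to compute directly; the Python sketch below (the helper name `mercator_y` is an illustrative choice, not from the article) evaluates it for latitudes given in degrees:

```python
import math

def mercator_y(latitude_deg):
    # y = ln tan(phi/2 + pi/4), for latitude phi in (-pi/2, pi/2), unit sphere
    phi = math.radians(latitude_deg)
    return math.log(math.tan(phi / 2 + math.pi / 4))

assert abs(mercator_y(0.0)) < 1e-15                       # the equator maps to y = 0
assert abs(mercator_y(45.0) - math.asinh(1.0)) < 1e-12    # y(45 deg) = ln(1 + sqrt(2))
assert abs(mercator_y(-30.0) + mercator_y(30.0)) < 1e-12  # odd symmetry about the equator
```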
=== Hyperbolic forms ===
Let
{\displaystyle {\begin{aligned}\psi &=\ln(\sec \theta +\tan \theta ),\\[4pt]e^{\psi }&=\sec \theta +\tan \theta ,\\[4pt]\sinh \psi &={\frac {e^{\psi }-e^{-\psi }}{2}}=\tan \theta ,\\[4pt]\cosh \psi &={\sqrt {1+\sinh ^{2}\psi }}=|\sec \theta \,|,\\[4pt]\tanh \psi &=\sin \theta .\end{aligned}}}
Therefore,
{\displaystyle {\begin{aligned}\int \sec \theta \,d\theta &=\operatorname {artanh} \left(\sin \theta \right)+C\\[-2mu]&=\operatorname {sgn}(\cos \theta )\operatorname {arsinh} \left(\tan \theta \right)+C\\[7mu]&=\operatorname {sgn}(\sin \theta )\operatorname {arcosh} {\left|\sec \theta \right|}+C.\end{aligned}}}
== History ==
The integral of the secant function was one of the "outstanding open problems of the mid-seventeenth century", solved in 1668 by James Gregory. He applied his result to a problem concerning nautical tables. In 1599, Edward Wright evaluated the integral by numerical methods – what today we would call Riemann sums. He wanted the solution for the purposes of cartography – specifically for constructing an accurate Mercator projection. In the 1640s, Henry Bond, a teacher of navigation, surveying, and other mathematical topics, compared Wright's numerically computed table of values of the integral of the secant with a table of logarithms of the tangent function, and consequently conjectured that
{\displaystyle \int _{0}^{\varphi }\sec \theta \,d\theta =\ln \tan \left({\frac {\varphi }{2}}+{\frac {\pi }{4}}\right).}
This conjecture became widely known, and in 1665, Isaac Newton was aware of it.
== Evaluations ==
=== By a standard substitution (Gregory's approach) ===
A standard method of evaluating the secant integral presented in various references involves multiplying the numerator and denominator by sec θ + tan θ and then using the substitution u = sec θ + tan θ. This substitution can be obtained from the derivatives of secant and tangent added together, which have secant as a common factor.
Starting with
{\displaystyle {\frac {d}{d\theta }}\sec \theta =\sec \theta \tan \theta \quad {\text{and}}\quad {\frac {d}{d\theta }}\tan \theta =\sec ^{2}\theta ,}
adding them gives
{\displaystyle {\begin{aligned}{\frac {d}{d\theta }}(\sec \theta +\tan \theta )&=\sec \theta \tan \theta +\sec ^{2}\theta \\&=\sec \theta (\tan \theta +\sec \theta ).\end{aligned}}}
The derivative of the sum is thus equal to the sum multiplied by sec θ. This enables multiplying sec θ by sec θ + tan θ in the numerator and denominator and performing the following substitutions:
{\displaystyle {\begin{aligned}u&=\sec \theta +\tan \theta \\du&=\left(\sec \theta \tan \theta +\sec ^{2}\theta \right)\,d\theta .\end{aligned}}}
The integral is evaluated as follows:
{\displaystyle {\begin{aligned}\int \sec \theta \,d\theta &=\int {\frac {\sec \theta (\sec \theta +\tan \theta )}{\sec \theta +\tan \theta }}\,d\theta \\[6pt]&=\int {\frac {\sec ^{2}\theta +\sec \theta \tan \theta }{\sec \theta +\tan \theta }}\,d\theta &u&=\sec \theta +\tan \theta \\[6pt]&=\int {\frac {1}{u}}\,du&du&=\left(\sec \theta \tan \theta +\sec ^{2}\theta \right)\,d\theta \\[6pt]&=\ln |u|+C\\[4pt]&=\ln |\sec \theta +\tan \theta |+C,\end{aligned}}}
as claimed. This was the formula discovered by James Gregory.
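As a sanity check on Gregory's formula (a Python sketch added for illustration, not part of the historical material), differentiating ln|sec θ + tan θ| numerically should return sec θ:

```python
import math

def gregory_antiderivative(t):
    # ln|sec(t) + tan(t)|, the antiderivative found by Gregory
    return math.log(abs(1 / math.cos(t) + math.tan(t)))

h = 1e-6
for t in [-1.0, 0.2, 1.1]:
    # central-difference approximation to the derivative
    deriv = (gregory_antiderivative(t + h) - gregory_antiderivative(t - h)) / (2 * h)
    assert abs(deriv - 1 / math.cos(t)) < 1e-5
```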
=== By partial fractions and a substitution (Barrow's approach) ===
Although Gregory proved the conjecture in 1668 in his Exercitationes Geometricae, the proof was presented in a form that renders it nearly impossible for modern readers to comprehend; Isaac Barrow, in his Lectiones Geometricae of 1670, gave the first "intelligible" proof, though even that was "couched in the geometric idiom of the day." Barrow's proof of the result was the earliest use of partial fractions in integration. Adapted to modern notation, Barrow's proof began as follows:
{\displaystyle \int \sec \theta \,d\theta =\int {\frac {1}{\cos \theta }}\,d\theta =\int {\frac {\cos \theta }{\cos ^{2}\theta }}\,d\theta =\int {\frac {\cos \theta }{1-\sin ^{2}\theta }}\,d\theta }
Substituting u = sin θ, du = cos θ dθ, reduces the integral to
{\displaystyle {\begin{aligned}\int {\frac {1}{1-u^{2}}}\,du&=\int {\frac {1}{(1+u)(1-u)}}\,du\\[6pt]&=\int {\frac {1}{2}}\!\left({\frac {1}{1+u}}+{\frac {1}{1-u}}\right)du&&{\text{partial fraction decomposition}}\\[6pt]&={\frac {1}{2}}{\bigl (}\ln \left|1+u\right|-\ln \left|1-u\right|{\bigr )}+C\\[6pt]&={\frac {1}{2}}\ln \left|{\frac {1+u}{1-u}}\right|+C\end{aligned}}}
Therefore,
{\displaystyle \int \sec \theta \,d\theta ={\frac {1}{2}}\ln {\frac {1+\sin \theta }{1-\sin \theta }}+C,}
as expected. Taking the absolute value is not necessary because {\displaystyle 1+\sin \theta } and {\displaystyle 1-\sin \theta } are always non-negative for real values of {\displaystyle \theta .}
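The partial fraction decomposition at the heart of Barrow's approach is easy to verify numerically; this small Python check was added for illustration:

```python
# 1/(1 - u^2) = (1/2) * (1/(1 + u) + 1/(1 - u)) for u != +/-1
for u in [-0.9, -0.3, 0.0, 0.5, 0.8]:
    lhs = 1 / (1 - u**2)
    rhs = 0.5 * (1 / (1 + u) + 1 / (1 - u))
    assert abs(lhs - rhs) < 1e-12
```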
=== By the tangent half-angle substitution ===
==== Standard ====
Under the tangent half-angle substitution {\textstyle t=\tan {\tfrac {1}{2}}\theta ,}
{\displaystyle {\begin{aligned}&\sin \theta ={\frac {2t}{1+t^{2}}},\quad \cos \theta ={\frac {1-t^{2}}{1+t^{2}}},\quad d\theta ={\frac {2}{1+t^{2}}}\,dt,\\[10mu]&\tan \theta ={\frac {\sin \theta }{\cos \theta }}={\frac {2t}{1-t^{2}}},\quad \sec \theta ={\frac {1}{\cos \theta }}={\frac {1+t^{2}}{1-t^{2}}},\\[10mu]&\sec \theta +\tan \theta ={\frac {1+2t+t^{2}}{1-t^{2}}}={\frac {1+t}{1-t}}.\end{aligned}}}
Therefore the integral of the secant function is
{\displaystyle {\begin{aligned}\int \sec \theta \,d\theta &=\int \left({\frac {1+t^{2}}{1-t^{2}}}\right)\!\left({\frac {2}{1+t^{2}}}\right)dt&&t=\tan {\frac {\theta }{2}}\\[6pt]&=\int {\frac {2}{(1-t)(1+t)}}\,dt\\[6pt]&=\int \left({\frac {1}{1+t}}+{\frac {1}{1-t}}\right)dt&&{\text{partial fraction decomposition}}\\[6pt]&=\ln |1+t|-\ln |1-t|+C\\[6pt]&=\ln \left|{\frac {1+t}{1-t}}\right|+C\\[6pt]&=\ln |\sec \theta +\tan \theta |+C,\end{aligned}}}
as before.
==== Non-standard ====
The integral can also be derived by using a somewhat non-standard version of the tangent half-angle substitution, published in 2013, which is simpler in the case of this particular integral:
{\displaystyle {\begin{aligned}x&=\tan \left({\frac {\pi }{4}}+{\frac {\theta }{2}}\right)\\[10pt]{\frac {2x}{1+x^{2}}}&={\frac {2\tan \left({\frac {\pi }{4}}+{\frac {\theta }{2}}\right)}{\sec ^{2}\left({\frac {\pi }{4}}+{\frac {\theta }{2}}\right)}}=2\sin \left({\frac {\pi }{4}}+{\frac {\theta }{2}}\right)\cos \left({\frac {\pi }{4}}+{\frac {\theta }{2}}\right)\\[6pt]&=\sin \left({\frac {\pi }{2}}+\theta \right)=\cos \theta &&{\text{by the double-angle formula}}\\[10pt]dx&={\frac {1}{2}}\sec ^{2}\left({\frac {\pi }{4}}+{\frac {\theta }{2}}\right)d\theta ={\frac {1}{2}}\left(1+x^{2}\right)d\theta \\[10pt]d\theta &={\frac {2}{1+x^{2}}}\,dx.\end{aligned}}}
Substituting:
{\displaystyle {\begin{aligned}\int \sec \theta \,d\theta =\int {\frac {1}{\cos \theta }}\,d\theta &=\int {\frac {1+x^{2}}{2x}}\cdot {\frac {2}{1+x^{2}}}\,dx\\[6pt]&=\int {\frac {1}{x}}\,dx\\[6pt]&=\ln |x|+C\\[6pt]&=\ln \left|\tan \left({\frac {\pi }{4}}+{\frac {\theta }{2}}\right)\right|+C.\end{aligned}}}
=== By two successive substitutions ===
The integral can also be solved by manipulating the integrand and substituting twice. Using the definition sec θ = 1/cos θ and the identity cos2 θ + sin2 θ = 1, the integral can be rewritten as
{\displaystyle \int \sec \theta \,d\theta =\int {\frac {1}{\cos \theta }}\,d\theta =\int {\frac {\cos \theta }{\cos ^{2}\theta }}\,d\theta =\int {\frac {\cos \theta }{1-\sin ^{2}\theta }}\,d\theta .}
Substituting u = sin θ, du = cos θ dθ reduces the integral to
{\displaystyle \int {\frac {1}{1-u^{2}}}\,du.}
The reduced integral can be evaluated by substituting u = tanh t, du = sech2 t dt, and then using the identity 1 − tanh2 t = sech2 t.
{\displaystyle \int {\frac {\operatorname {sech} ^{2}t}{1-\tanh ^{2}t}}\,dt=\int {\frac {\operatorname {sech} ^{2}t}{\operatorname {sech} ^{2}t}}\,dt=\int dt.}
The integral is now reduced to a simple integral, and back-substituting gives
{\displaystyle {\begin{aligned}\int dt&=t+C\\&=\operatorname {artanh} u+C\\[4pt]&=\operatorname {artanh} (\sin \theta )+C,\end{aligned}}}
which is one of the hyperbolic forms of the integral.
A similar strategy can be used to integrate the cosecant, hyperbolic secant, and hyperbolic cosecant functions.
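The artanh form can be verified numerically as well; this Python sketch (added for illustration, not from the article) checks that the derivative of artanh(sin θ) is sec θ:

```python
import math

def artanh_form(t):
    # artanh(sin t), one of the hyperbolic antiderivative forms
    return math.atanh(math.sin(t))

h = 1e-6
for t in [-1.2, 0.4, 1.0]:
    deriv = (artanh_form(t + h) - artanh_form(t - h)) / (2 * h)
    assert abs(deriv - 1 / math.cos(t)) < 1e-5
```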
=== Other hyperbolic forms ===
It is also possible to find the other two hyperbolic forms directly, by again multiplying and dividing by a convenient term:
{\displaystyle \int \sec \theta \,d\theta =\int {\frac {\sec ^{2}\theta }{\sec \theta }}\,d\theta =\int {\frac {\sec ^{2}\theta }{\pm {\sqrt {1+\tan ^{2}\theta }}}}\,d\theta ,}
where {\displaystyle \pm } stands for {\displaystyle \operatorname {sgn}(\cos \theta )} because {\displaystyle {\sqrt {1+\tan ^{2}\theta }}=|\sec \theta \,|.}
Substituting u = tan θ, du = sec2 θ dθ, reduces to a standard integral:
{\displaystyle {\begin{aligned}\int {\frac {1}{\pm {\sqrt {1+u^{2}}}}}\,du&=\pm \operatorname {arsinh} u+C\\&=\operatorname {sgn}(\cos \theta )\operatorname {arsinh} \left(\tan \theta \right)+C,\end{aligned}}}
where sgn is the sign function.
Likewise:
{\displaystyle \int \sec \theta \,d\theta =\int {\frac {\sec \theta \tan \theta }{\tan \theta }}\,d\theta =\int {\frac {\sec \theta \tan \theta }{\pm {\sqrt {\sec ^{2}\theta -1}}}}\,d\theta .}
Substituting u = |sec θ|, du = |sec θ| tan θ dθ, reduces to a standard integral:
{\displaystyle {\begin{aligned}\int {\frac {1}{\pm {\sqrt {u^{2}-1}}}}\,du&=\pm \operatorname {arcosh} u+C\\&=\operatorname {sgn}(\sin \theta )\operatorname {arcosh} \left|\sec \theta \right|+C.\end{aligned}}}
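A short numerical check (a Python sketch added for illustration) confirms that the three hyperbolic forms agree, sign factors included, even outside the first quadrant:

```python
import math

def artanh_form(t):
    return math.atanh(math.sin(t))

def arsinh_form(t):
    # sgn(cos t) * arsinh(tan t)
    return math.copysign(1.0, math.cos(t)) * math.asinh(math.tan(t))

def arcosh_form(t):
    # sgn(sin t) * arcosh|sec t|
    return math.copysign(1.0, math.sin(t)) * math.acosh(abs(1 / math.cos(t)))

for t in [0.5, 1.2, 2.0, -2.6]:  # angles in several quadrants
    a, b, c = artanh_form(t), arsinh_form(t), arcosh_form(t)
    assert abs(a - b) < 1e-9 and abs(a - c) < 1e-9
```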
=== Using complex exponential form ===
Under the substitution {\displaystyle z=e^{i\theta },}
{\displaystyle {\begin{aligned}&\theta =-i\ln z,\quad d\theta ={\frac {-i}{z}}dz,\quad \cos \theta ={\frac {z+z^{-1}}{2}},\quad \sin \theta ={\frac {z-z^{-1}}{2i}},\quad \\[5mu]&\sec \theta ={\frac {2}{z+z^{-1}}},\quad \tan \theta =-i{\frac {z-z^{-1}}{z+z^{-1}}},\quad \\[5mu]&\sec \theta +\tan \theta =-i{\frac {2i+z-z^{-1}}{z+z^{-1}}}=-i{\frac {(z+i)(1+iz^{-1})}{(z-i)(1+iz^{-1})}}=-i{\frac {z+i}{z-i}}\end{aligned}}}
So the integral can be solved as:
{\displaystyle {\begin{aligned}\int \sec \theta \,d\theta &=\int {\frac {2}{z+z^{-1}}}\,{\frac {-i}{z}}dz&&z=e^{i\theta }\\[5mu]&=\int {\frac {-2i}{z^{2}+1}}dz\\&=\int {\frac {1}{z+i}}-{\frac {1}{z-i}}\,dz&&{\text{partial fraction decomposition}}\\[5mu]&=\ln(z+i)-\ln(z-i)+C\\[5mu]&=\ln {\frac {z+i}{z-i}}+C\\[5mu]&=\ln {\bigl (}i(\sec \theta +\tan \theta ){\bigr )}+C\\[5mu]&=\ln(\sec \theta +\tan \theta )+\ln i+C\end{aligned}}}
Because the constant of integration can be anything, the additional constant term can be absorbed into it. Finally, if {\displaystyle \theta } is real-valued, we can indicate this with absolute value brackets in order to get the equation into its most familiar form:
{\displaystyle \int \sec \theta \,d\theta =\ln \left|\tan \theta +\sec \theta \right|+C}
== Gudermannian and Lambertian ==
The integral of the hyperbolic secant function defines the Gudermannian function:
{\displaystyle \int _{0}^{\psi }\operatorname {sech} u\,du=\operatorname {gd} \psi .}
The integral of the secant function defines the Lambertian function, which is the inverse of the Gudermannian function:
{\displaystyle \int _{0}^{\varphi }\sec t\,dt=\operatorname {lam} \varphi =\operatorname {gd} ^{-1}\varphi .}
These functions are encountered in the theory of map projections: the Mercator projection of a point on the sphere with longitude λ and latitude ϕ may be written as:
{\displaystyle (x,y)={\bigl (}\lambda ,\operatorname {lam} \varphi {\bigr )}.}
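A brief Python sketch (added for illustration, using the standard closed forms gd ψ = arctan(sinh ψ) and gd⁻¹ φ = arsinh(tan φ)) shows the two functions inverting each other:

```python
import math

def gd(psi):
    # Gudermannian: gd(psi) = integral of sech from 0 to psi = arctan(sinh psi)
    return math.atan(math.sinh(psi))

def gd_inv(phi):
    # inverse Gudermannian ("lam"): integral of sec from 0 to phi = arsinh(tan phi)
    return math.asinh(math.tan(phi))

for phi in [-1.0, 0.0, 0.7, 1.4]:
    assert abs(gd(gd_inv(phi)) - phi) < 1e-12
```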
== See also ==
Integral of secant cubed
== Notes ==
== References ==
Maseres, Francis, ed. (1791). Scriptores Logarithmici; Or, A Collection of Several Curious Tracts on the Nature and Construction of Logarithms. Vol. 2. J. Davis.
Strauss, Simon W. (1980). "The Integrals {\textstyle \int \sec x\,dx} and {\textstyle \int \csc x\,dx} Revisited". Journal of the Washington Academy of Sciences. 70 (4): 137–143. JSTOR 24537231.
In differential calculus, related rates problems involve finding a rate at which a quantity changes by relating that quantity to other quantities whose rates of change are known. The rate of change is usually with respect to time. Because science and engineering often relate quantities to each other, the methods of related rates have broad applications in these fields. Differentiation with respect to time or one of the other variables requires application of the chain rule, since most problems involve several variables.
Fundamentally, if a function {\displaystyle F} is defined such that {\displaystyle F=f(x)}, then the derivative of the function {\displaystyle F} can be taken with respect to another variable. We assume {\displaystyle x} is a function of {\displaystyle t}, i.e. {\displaystyle x=g(t)}. Then {\displaystyle F=f(g(t))}, so
{\displaystyle F'(t)=f'(g(t))\cdot g'(t)}
Written in Leibniz notation, this is:
{\displaystyle {\frac {dF}{dt}}={\frac {df}{dx}}\cdot {\frac {dx}{dt}}.}
Thus, if it is known how {\displaystyle x} changes with respect to {\displaystyle t}, then we can determine how {\displaystyle F} changes with respect to {\displaystyle t} and vice versa. We can extend this application of the chain rule with the sum, difference, product and quotient rules of calculus, etc.
For example, if {\displaystyle F(x)=G(y)+H(z)} then
{\displaystyle {\frac {dF}{dx}}\cdot {\frac {dx}{dt}}={\frac {dG}{dy}}\cdot {\frac {dy}{dt}}+{\frac {dH}{dz}}\cdot {\frac {dz}{dt}}.}
== Procedure ==
The most common way to approach related rates problems is the following:
Identify the known variables, including rates of change and the rate of change that is to be found. (Drawing a picture or representation of the problem can help to keep everything in order)
Construct an equation relating the quantities whose rates of change are known to the quantity whose rate of change is to be found.
Differentiate both sides of the equation with respect to time (or other rate of change). Often, the chain rule is employed at this step.
Substitute the known rates of change and the known quantities into the equation.
Solve for the wanted rate of change.
Errors in this procedure are often caused by plugging in the known values for the variables before (rather than after) finding the derivative with respect to time. Doing so will yield an incorrect result, since if those values are substituted for the variables before differentiation, those variables will become constants; and when the equation is differentiated, zeroes appear in places of all variables for which the values were plugged in.
== Example ==
A 10-meter ladder is leaning against the wall of a building, and the base of the ladder is sliding away from the building at a rate of 3 meters per second. How fast is the top of the ladder sliding down the wall when the base of the ladder is 6 meters from the wall?
The distance between the base of the ladder and the wall, x, and the height of the ladder on the wall, y, represent the sides of a right triangle with the ladder as the hypotenuse, h. The objective is to find dy/dt, the rate of change of y with respect to time, t, when h, x and dx/dt, the rate of change of x, are known.
Step 1:
{\displaystyle x=6}
{\displaystyle h=10}
{\displaystyle {\frac {dx}{dt}}=3}
{\displaystyle {\frac {dh}{dt}}=0}
{\displaystyle {\frac {dy}{dt}}={\text{?}}}
Step 2:
From the Pythagorean theorem, the equation
{\displaystyle x^{2}+y^{2}=h^{2},}
describes the relationship between x, y and h, for a right triangle. Differentiating both sides of this equation with respect to time, t, yields
{\displaystyle {\frac {d}{dt}}\left(x^{2}+y^{2}\right)={\frac {d}{dt}}\left(h^{2}\right)}
Step 3:
Differentiating term by term and solving for the wanted rate of change, dy/dt, gives
{\displaystyle {\frac {d}{dt}}\left(x^{2}\right)+{\frac {d}{dt}}\left(y^{2}\right)={\frac {d}{dt}}\left(h^{2}\right)}
{\displaystyle (2x){\frac {dx}{dt}}+(2y){\frac {dy}{dt}}=(2h){\frac {dh}{dt}}}
{\displaystyle x{\frac {dx}{dt}}+y{\frac {dy}{dt}}=h{\frac {dh}{dt}}}
{\displaystyle {\frac {dy}{dt}}={\frac {h{\frac {dh}{dt}}-x{\frac {dx}{dt}}}{y}}.}
Step 4 & 5:
Using the variables from step 1 gives:
{\displaystyle {\frac {dy}{dt}}={\frac {h{\frac {dh}{dt}}-x{\frac {dx}{dt}}}{y}}.}
{\displaystyle {\frac {dy}{dt}}={\frac {10\times 0-6\times 3}{y}}=-{\frac {18}{y}}.}
Solving for y using the Pythagorean theorem gives:
{\displaystyle x^{2}+y^{2}=h^{2}}
{\displaystyle 6^{2}+y^{2}=10^{2}}
{\displaystyle y=8}
Plugging y = 8 into the equation:
{\displaystyle -{\frac {18}{y}}=-{\frac {18}{8}}=-{\frac {9}{4}}}
Negative values are taken to represent the downward direction; thus the top of the ladder is sliding down the wall at a rate of 9/4 meters per second.
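The arithmetic of the ladder example can be replayed in a few lines of Python (an illustrative check, not part of the original problem statement):

```python
import math

x, hyp, dxdt = 6.0, 10.0, 3.0          # base distance, ladder length, base speed
y = math.sqrt(hyp**2 - x**2)           # height on the wall: y = 8
dydt = (hyp * 0.0 - x * dxdt) / y      # dh/dt = 0 since the ladder length is fixed
assert abs(dydt - (-9 / 4)) < 1e-12

# Cross-check with a small time step: move the base and recompute the height.
dt = 1e-6
y_later = math.sqrt(hyp**2 - (x + dxdt * dt)**2)
assert abs((y_later - y) / dt - dydt) < 1e-4
```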
== Physics examples ==
Because one physical quantity often depends on another, which in turn depends on others, such as time, related-rates methods have broad applications in physics. This section presents examples of related rates in kinematics and in electromagnetic induction.
=== Relative kinematics of two vehicles ===
For example, one can consider the kinematics problem where one vehicle is heading West toward an intersection at 80 miles per hour while another is heading North away from the intersection at 60 miles per hour. One can ask whether the vehicles are getting closer or further apart and at what rate at the moment when the North bound vehicle is 3 miles North of the intersection and the West bound vehicle is 4 miles East of the intersection.
Big idea: use the chain rule to compute the rate of change of the distance between the two vehicles.
Plan:
Choose coordinate system
Identify variables
Draw picture
Express c in terms of x and y via the Pythagorean theorem
Express dc/dt using the chain rule in terms of dx/dt and dy/dt
Substitute in x, y, dx/dt, dy/dt
Simplify.
Choose coordinate system:
Let the y-axis point North and the x-axis point East.
Identify variables:
Define y(t) to be the distance of the vehicle heading North from the origin and x(t) to be the distance of the vehicle heading West from the origin.
Express c in terms of x and y via the Pythagorean theorem:
{\displaystyle c=\left(x^{2}+y^{2}\right)^{1/2}}
Express dc/dt using chain rule in terms of dx/dt and dy/dt:
Substitute in x = 4 mi, y = 3 mi, dx/dt = −80 mi/hr, dy/dt = 60 mi/hr and simplify
{\displaystyle {\begin{aligned}{\frac {dc}{dt}}&={\frac {4{\text{ mi}}\cdot (-80{\text{ mi}}/{\text{hr}})+3{\text{ mi}}\cdot (60){\text{mi}}/{\text{hr}}}{\sqrt {(4{\text{ mi}})^{2}+(3{\text{ mi}})^{2}}}}\\&={\frac {-320{\text{ mi}}^{2}/{\text{hr}}+180{\text{ mi}}^{2}/{\text{hr}}}{5{\text{ mi}}}}\\&={\frac {-140{\text{ mi}}^{2}/{\text{hr}}}{5{\text{ mi}}}}\\&=-28{\text{ mi}}/{\text{hr}}\end{aligned}}}
Consequently, the two vehicles are getting closer together at a rate of 28 mi/hr.
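The same computation in Python (an illustrative sketch):

```python
import math

x, y = 4.0, 3.0              # miles east and north of the intersection
dxdt, dydt = -80.0, 60.0     # mi/hr; westbound motion decreases x
c = math.hypot(x, y)         # separation distance: 5 mi
dcdt = (x * dxdt + y * dydt) / c
assert abs(dcdt - (-28.0)) < 1e-12   # closing at 28 mi/hr
```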
=== Electromagnetic induction of conducting loop spinning in magnetic field ===
The magnetic flux through a loop of area A whose normal is at an angle θ to a magnetic field of strength B is
{\displaystyle \Phi _{B}=BA\cos(\theta ),}
Faraday's law of electromagnetic induction states that the induced electromotive force {\displaystyle {\mathcal {E}}} is the negative rate of change of the magnetic flux {\displaystyle \Phi _{B}} through a conducting loop:
{\displaystyle {\mathcal {E}}=-{\frac {d\Phi _{B}}{dt}}.}
If the loop area A and magnetic field B are held constant, but the loop is rotated so that the angle θ is a known function of time, the rate of change of θ can be related to the rate of change of {\displaystyle \Phi _{B}} (and therefore the electromotive force) by taking the time derivative of the flux relation:
{\displaystyle {\mathcal {E}}=-{\frac {d\Phi _{B}}{dt}}=BA\sin \theta {\frac {d\theta }{dt}}}
If, for example, the loop is rotating at a constant angular velocity ω, so that θ = ωt, then
{\displaystyle {\mathcal {E}}=\omega BA\sin \omega t}
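For a concrete check, the following Python sketch (with illustrative, made-up values for B, A, and ω) differentiates the flux numerically and compares against the closed-form EMF:

```python
import math

B, A, omega = 0.5, 0.2, 10.0     # field (T), loop area (m^2), angular velocity (rad/s)

def flux(t):
    # magnetic flux through the loop at time t, with theta = omega * t
    return B * A * math.cos(omega * t)

def emf(t):
    # closed-form induced EMF: omega * B * A * sin(omega * t)
    return omega * B * A * math.sin(omega * t)

h = 1e-7
for t in [0.1, 0.5, 1.3]:
    numeric_emf = -(flux(t + h) - flux(t - h)) / (2 * h)
    assert abs(numeric_emf - emf(t)) < 1e-5
```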
== References ==
Method of Fluxions (Latin: De Methodis Serierum et Fluxionum) is a mathematical treatise by Sir Isaac Newton which served as the earliest written formulation of modern calculus. The book was completed in 1671 and posthumously published in 1736.
== Background ==
Fluxion is Newton's term for a derivative. He originally developed the method at Woolsthorpe Manor during the closing of Cambridge due to the Great Plague of London from 1665 to 1667. Newton did not choose to make his findings known (similarly, his findings which eventually became the Philosophiae Naturalis Principia Mathematica were developed at this time and hidden from the world in Newton's notes for many years). Gottfried Leibniz developed his form of calculus independently around 1673, seven years after Newton had developed the basis for differential calculus, as seen in surviving documents like “the method of fluxions and fluents..." from 1666. Leibniz, however, published his discovery of differential calculus in 1684, nine years before Newton formally published his fluxion notation form of calculus in part during 1693.
== Impact ==
The calculus notation in use today is mostly that of Leibniz, although Newton's dot notation for differentiation {\displaystyle {\dot {x}}} is frequently used to denote derivatives with respect to time.
== Rivalry with Leibniz ==
Newton's Method of Fluxions was formally published posthumously, but following Leibniz's publication of the calculus a bitter rivalry erupted between the two mathematicians over who had developed the calculus first, provoking Newton to reveal his work on fluxions.
== Newton's development of analysis ==
For a period of time encompassing Newton's working life, the discipline of analysis was a subject of controversy in the mathematical community. Although analytic techniques provided solutions to long-standing problems, including problems of quadrature and the finding of tangents, the proofs of these solutions were not known to be reducible to the synthetic rules of Euclidean geometry. Instead, analysts were often forced to invoke infinitesimal, or "infinitely small", quantities to justify their algebraic manipulations. Some of Newton's mathematical contemporaries, such as Isaac Barrow, were highly skeptical of such techniques, which had no clear geometric interpretation. Although in his early work Newton also used infinitesimals in his derivations without justifying them, he later developed something akin to the modern definition of limits in order to justify his work.
== See also ==
== References and notes ==
== External links ==
Method of Fluxions at the Internet Archive
In science, engineering, and other quantitative disciplines, order of approximation refers to formal or informal expressions for how accurate an approximation is.
== Usage in science and engineering ==
In formal expressions, the ordinal number used before the word order refers to the highest power in the series expansion used in the approximation. The expressions: a zeroth-order approximation, a first-order approximation, a second-order approximation, and so forth are used as fixed phrases. The expression a zero-order approximation is also common. Cardinal numerals are occasionally used in expressions like an order-zero approximation, an order-one approximation, etc.
The omission of the word order leads to phrases that have less formal meaning. Phrases like first approximation or to a first approximation may refer to a roughly approximate value of a quantity. The phrase to a zeroth approximation indicates a wild guess. The expression order of approximation is sometimes informally used to mean the number of significant figures, in increasing order of accuracy, or to the order of magnitude. However, this may be confusing, as these formal expressions do not directly refer to the order of derivatives.
The choice of series expansion depends on the scientific method used to investigate a phenomenon. The expression order of approximation is expected to indicate progressively more refined approximations of a function in a specified interval. The choice of order of approximation depends on the research purpose. One may wish to simplify a known analytic expression to devise a new application or, on the contrary, try to fit a curve to data points. Higher order of approximation is not always more useful than the lower one. For example, if a quantity is constant within the whole interval, approximating it with a second-order Taylor series will not increase the accuracy.
In the case of a smooth function, the nth-order approximation is a polynomial of degree n, which is obtained by truncating the Taylor series to this degree. The formal usage of order of approximation corresponds to the omission of some terms of the series used in the expansion. This affects accuracy. The error usually varies within the interval. Thus the terms (zeroth, first, second, etc.) used above do not directly give information about percent error or significant figures. For example, in the Taylor series expansion of the exponential function,
{\displaystyle e^{x}=\underbrace {1} _{0^{\text{th}}}+\underbrace {x} _{1^{\text{st}}}+\underbrace {\frac {x^{2}}{2!}} _{2^{\text{nd}}}+\underbrace {\frac {x^{3}}{3!}} _{3^{\text{rd}}}+\underbrace {\frac {x^{4}}{4!}} _{4^{\text{th}}}+\ldots \;,}
the zeroth-order term is {\displaystyle 1,} the first-order term is {\displaystyle x,} second-order is {\displaystyle x^{2}/2,} and so forth. If {\displaystyle |x|<1,} each higher-order term is smaller than the previous. If {\displaystyle |x|\ll 1,} then the first-order approximation, {\displaystyle e^{x}\approx 1+x,} is often sufficient. But at {\displaystyle x=1,} the first-order term, {\displaystyle x,} is not smaller than the zeroth-order term, {\displaystyle 1.} And at {\displaystyle x=2,} even the third-order term, {\displaystyle 2^{3}/3!=4/3,} is greater than the zeroth-order term.
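These magnitudes can be checked directly by summing truncated Taylor series; the following short Python sketch (illustrative, not part of the article) compares partial sums of the exponential series at small and large arguments:

```python
import math

def exp_taylor(x, order):
    """Partial sum of the Taylor series of e**x up to the given order."""
    return sum(x**k / math.factorial(k) for k in range(order + 1))

# For |x| << 1 the first-order approximation 1 + x is already close:
small = exp_taylor(0.1, 1)   # 1.1, near e**0.1
# At x = 2 the low-order terms no longer dominate, so more terms are needed:
crude = exp_taylor(2, 1)     # 3.0, far from e**2
better = exp_taylor(2, 10)   # close to e**2
```
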
=== Zeroth-order ===
Zeroth-order approximation is the term scientists use for a first rough answer. Many simplifying assumptions are made, and when a number is needed, an order-of-magnitude answer (or zero significant figures) is often given. For example, "the town has a few thousand residents", when it has 3,914 people in actuality. This is also sometimes referred to as an order-of-magnitude approximation. The zero of "zeroth-order" represents the fact that even the only number given, "a few", is itself loosely defined.
A zeroth-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be constant, or a flat line with no slope: a polynomial of degree 0. For example,
{\displaystyle x=[0,1,2],}
{\displaystyle y=[3,3,5],}
{\displaystyle y\sim f(x)=3.67}
could be – if data point accuracy were reported – an approximate fit to the data, obtained by simply averaging the y values. However, data points represent results of measurements and they do differ from points in Euclidean geometry. Thus quoting an average value containing three significant digits in the output with just one significant digit in the input data could be recognized as an example of false precision. With the implied accuracy of the data points of ±0.5, the zeroth-order approximation could at best yield the result for y of ~3.7 ± 2.0 in the interval of x from −0.5 to 2.5, considering the standard deviation.
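As a quick sketch, the constant (zeroth-order) fit is simply the mean of the observed y values from the example:

```python
# Zeroth-order fit: a constant equal to the mean of the observed y values.
# Data from the example in the text: y = [3, 3, 5] at x = [0, 1, 2].
ys = [3.0, 3.0, 5.0]
zeroth_order = sum(ys) / len(ys)   # 11/3, approximately 3.67
```
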
If the data points are reported as
{\displaystyle x=[0.00,1.00,2.00],}
{\displaystyle y=[3.00,3.00,5.00],}
the zeroth-order approximation results in
{\displaystyle y\sim f(x)=3.67.}
The accuracy of the result justifies an attempt to derive a multiplicative function for that average, for example,
{\displaystyle y\sim x+2.67.}
One should be careful though, because the multiplicative function will be defined for the whole interval. If only three data points are available, one has no knowledge about the rest of the interval, which may be a large part of it. This means that y could have another component which equals 0 at the ends and in the middle of the interval. A number of functions having this property are known, for example y = sin πx. Taylor series are useful and help predict analytic solutions, but the approximations alone do not provide conclusive evidence.
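The point about the hidden component can be checked numerically: sin πx vanishes at all three sample points, so the data cannot distinguish a fitted f(x) from f(x) + sin πx (an illustrative sketch):

```python
import math

# y = sin(pi * x) vanishes at the three sample points x = 0, 1, 2, so adding
# any multiple of it to a fitted function leaves the fit at those points unchanged.
values = [math.sin(math.pi * x) for x in (0, 1, 2)]
```
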
=== First-order ===
First-order approximation is the term scientists use for a slightly better answer. Some simplifying assumptions are made, and when a number is needed, an answer with only one significant figure is often given ("the town has 4×10³, or four thousand, residents"). In the case of a first-order approximation, at least one number given is exact. In the zeroth-order example above, the quantity "a few" was given, but in the first-order example, the number "4" is given.
A first-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be a linear approximation, straight line with a slope: a polynomial of degree 1. For example:
{\displaystyle x=[0.00,1.00,2.00],}
{\displaystyle y=[3.00,3.00,5.00],}
{\displaystyle y\sim f(x)=x+2.67}
is an approximate fit to the data.
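The line above is what ordinary least squares produces for these three points; a minimal pure-Python sketch using the closed-form slope and intercept formulas (no external libraries assumed):

```python
# Ordinary least-squares line y = slope*x + intercept for the three data points.
xs = [0.0, 1.0, 2.0]
ys = [3.0, 3.0, 5.0]
n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
# Closed-form simple linear regression: slope = cov(x, y) / var(x).
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean   # gives y = 1.0*x + 8/3, i.e. x + 2.67
```
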
In this example there is a zeroth-order approximation that is the same as the first-order, but the method of getting there is different; i.e. a wild stab in the dark at a relationship happened to be as good as an "educated guess".
=== Second-order ===
Second-order approximation is the term scientists use for a decent-quality answer. Few simplifying assumptions are made, and when a number is needed, an answer with two or more significant figures ("the town has 3.9×10³, or thirty-nine hundred, residents") is generally given. As in the examples above, the term "2nd order" refers to the number of exact numerals given for the imprecise quantity. In this case, "3" and "9" are given as the two successive levels of precision, instead of simply the "4" from the first order, or "a few" from the zeroth order found in the examples above.
A second-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be a quadratic polynomial, geometrically, a parabola: a polynomial of degree 2. For example:
{\displaystyle x=[0.00,1.00,2.00],}
{\displaystyle y=[3.00,3.00,5.00],}
{\displaystyle y\sim f(x)=x^{2}-x+3}
is an approximate fit to the data. In this case, with only three data points, a parabola is an exact fit based on the data provided. However, the data points for most of the interval are not available, which advises caution (see "zeroth order").
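Because three points determine a parabola, the coefficients of y = ax² + bx + c can be found by simple substitution; a short sketch (variable names are illustrative):

```python
# Fitting y = a*x**2 + b*x + c exactly through (0, 3), (1, 3), (2, 5).
# With x = [0, 1, 2] the linear system solves by substitution:
#   c           = 3   (from x = 0)
#   a + b + c   = 3   (from x = 1)
#   4a + 2b + c = 5   (from x = 2)
c = 3.0
a_plus_b = 3.0 - c              # a + b = 0
four_a_plus_2b = 5.0 - c        # 4a + 2b = 2
a = (four_a_plus_2b - 2 * a_plus_b) / 2   # a = 1
b = a_plus_b - a                          # b = -1

def f(x):
    return a * x**2 + b * x + c           # x**2 - x + 3
```
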
=== Higher-order ===
While higher-order approximations exist and are crucial to a better understanding and description of reality, they are not typically referred to by number.
Continuing the above, a third-order approximation would be required to perfectly fit four data points, and so on. See polynomial interpolation.
== Colloquial usage ==
These terms are also used colloquially by scientists and engineers to describe phenomena that can be neglected as not significant (e.g. "Of course the rotation of the Earth affects our experiment, but it's such a high-order effect that we wouldn't be able to measure it." or "At these velocities, relativity is a fourth-order effect that we only worry about at the annual calibration.") In this usage, the ordinality of the approximation is not exact, but is used to emphasize its insignificance; the higher the number used, the less important the effect. The terminology, in this context, represents a high level of precision required to account for an effect which is inferred to be very small when compared to the overall subject matter. The higher the order, the more precision is required to measure the effect, and therefore the smallness of the effect in comparison to the overall measurement.
== See also ==
Linearization
Perturbation theory
Taylor series
Chapman–Enskog method
Big O notation
Order of accuracy
== References ==
In mathematics, an integro-differential equation is an equation that involves both integrals and derivatives of a function.
== General first order linear equations ==
The general first-order, linear (only with respect to the term involving derivative) integro-differential equation is of the form
{\displaystyle {\frac {d}{dx}}u(x)+\int _{x_{0}}^{x}f(t,u(t))\,dt=g(x,u(x)),\qquad u(x_{0})=u_{0},\qquad x_{0}\geq 0.}
As is typical with differential equations, obtaining a closed-form solution can often be difficult. In the relatively few cases where a solution can be found, it is often by some kind of integral transform, where the problem is first transformed into an algebraic setting. In such situations, the solution of the problem may be derived by applying the inverse transform to the solution of this algebraic equation.
=== Example ===
Consider the following second-order problem,
{\displaystyle u'(x)+2u(x)+5\int _{0}^{x}u(t)\,dt=\theta (x)\qquad {\text{with}}\qquad u(0)=0,}
where
{\displaystyle \theta (x)=\left\{{\begin{array}{ll}1,\qquad x\geq 0\\0,\qquad x<0\end{array}}\right.}
is the Heaviside step function. The Laplace transform is defined by,
{\displaystyle U(s)={\mathcal {L}}\left\{u(x)\right\}=\int _{0}^{\infty }e^{-sx}u(x)\,dx.}
Upon taking term-by-term Laplace transforms, and utilising the rules for derivatives and integrals, the integro-differential equation is converted into the following algebraic equation,
{\displaystyle sU(s)-u(0)+2U(s)+{\frac {5}{s}}U(s)={\frac {1}{s}}.}
Thus,
{\displaystyle U(s)={\frac {1}{s^{2}+2s+5}}.}
Inverting the Laplace transform using contour integral methods then gives
{\displaystyle u(x)={\frac {1}{2}}e^{-x}\sin(2x)\theta (x).}
Alternatively, one can complete the square and use a table of Laplace transforms ("exponentially decaying sine wave") or recall from memory to proceed:
{\displaystyle U(s)={\frac {1}{s^{2}+2s+5}}={\frac {1}{2}}{\frac {2}{(s+1)^{2}+4}}\Rightarrow u(x)={\mathcal {L}}^{-1}\left\{U(s)\right\}={\frac {1}{2}}e^{-x}\sin(2x)\theta (x).}
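The solution can also be verified numerically by substituting it back into the original integro-differential equation, approximating the integral with the trapezoidal rule (an illustrative sketch; the step count n is an arbitrary choice):

```python
import math

def u(x):
    """Solution obtained above: u(x) = (1/2) e**(-x) sin(2x) for x >= 0."""
    return 0.5 * math.exp(-x) * math.sin(2 * x)

def u_prime(x):
    return 0.5 * math.exp(-x) * (2 * math.cos(2 * x) - math.sin(2 * x))

def residual(x, n=20000):
    """u'(x) + 2 u(x) + 5 * integral_0^x u(t) dt - 1, via the trapezoidal rule.

    Should be close to 0 for x > 0 if u really solves the equation."""
    h = x / n
    integral = h * (0.5 * u(0) + sum(u(i * h) for i in range(1, n)) + 0.5 * u(x))
    return u_prime(x) + 2 * u(x) + 5 * integral - 1.0
```
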
== Applications ==
Integro-differential equations model many situations from science and engineering, such as in circuit analysis. By Kirchhoff's second law, the net voltage drop across a closed loop equals the voltage impressed
{\displaystyle E(t)}
. (It is essentially an application of energy conservation.) An RLC circuit therefore obeys
{\displaystyle L{\frac {d}{dt}}I(t)+RI(t)+{\frac {1}{C}}\int _{0}^{t}I(\tau )d\tau =E(t),}
where {\displaystyle I(t)} is the current as a function of time, {\displaystyle R} is the resistance, {\displaystyle L} the inductance, and {\displaystyle C} the capacitance.
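As an illustrative sketch (with hypothetical component values, not taken from the article), the circuit equation can be integrated numerically by tracking the capacitor charge Q with I = dQ/dt, which turns the integral term into Q/C:

```python
# Forward-Euler sketch of the RLC integro-differential equation, rewritten
# in terms of the capacitor charge Q (with I = dQ/dt), so that
#   L dI/dt + R I + Q/C = E(t).
# Component values are hypothetical, chosen for illustration.
L, R, C, E = 1.0, 2.0, 0.5, 1.0   # constant impressed voltage E(t) = 1
dt, steps = 1e-4, 200000          # simulate 20 seconds
Q, I = 0.0, 0.0
for _ in range(steps):
    dI = (E - R * I - Q / C) / L  # from the circuit equation
    Q += dt * I
    I += dt * dI
# In steady state the current decays to 0 and the capacitor holds Q = C*E.
```
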
The activity of interacting inhibitory and excitatory neurons can be described by a system of integro-differential equations, see for example the Wilson-Cowan model.
The Whitham equation is used to model nonlinear dispersive waves in fluid dynamics.
=== Epidemiology ===
Integro-differential equations have found applications in epidemiology, the mathematical modeling of epidemics, particularly when the models contain age-structure or describe spatial epidemics. The Kermack-McKendrick theory of infectious disease transmission is one particular example where age-structure in the population is incorporated into the modeling framework.
== See also ==
Delay differential equation
Differential equation
Integral equation
Integrodifference equation
== References ==
== Further reading ==
Vangipuram Lakshmikantham, M. Rama Mohana Rao, “Theory of Integro-Differential Equations”, CRC Press, 1995
== External links ==
Interactive Mathematics
Numerical solution of the example using Chebfun
In mathematics, an implicit equation is a relation of the form
{\displaystyle R(x_{1},\dots ,x_{n})=0,}
where R is a function of several variables (often a polynomial). For example, the implicit equation of the unit circle is
{\displaystyle x^{2}+y^{2}-1=0.}
An implicit function is a function that is defined by an implicit equation, that relates one of the variables, considered as the value of the function, with the others considered as the arguments.: 204–206 For example, the equation
{\displaystyle x^{2}+y^{2}-1=0}
of the unit circle defines y as an implicit function of x if −1 ≤ x ≤ 1, and y is restricted to nonnegative values.
The implicit function theorem provides conditions under which some kinds of implicit equations define implicit functions, namely those that are obtained by equating to zero multivariable functions that are continuously differentiable.
== Examples ==
=== Inverse functions ===
A common type of implicit function is an inverse function. Not all functions have a unique inverse function. If g is a function of x that has a unique inverse, then the inverse function of g, called g−1, is the unique function giving a solution of the equation
{\displaystyle y=g(x)}
for x in terms of y. This solution can then be written as
{\displaystyle x=g^{-1}(y)\,.}
Defining g−1 as the inverse of g is an implicit definition. For some functions g, g−1(y) can be written out explicitly as a closed-form expression; for instance, if g(x) = 2x − 1, then g−1(y) = (y + 1)/2. However, this is often not possible, or only by introducing a new notation (as in the product log example below).
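The closed-form inverse pair g(x) = 2x − 1, g⁻¹(y) = (y + 1)/2 mentioned above can be sketched directly:

```python
def g(x):
    """g(x) = 2x - 1, a function with a unique closed-form inverse."""
    return 2 * x - 1

def g_inv(y):
    """The explicit inverse: g_inv(y) = (y + 1) / 2."""
    return (y + 1) / 2
```

The round trip g⁻¹(g(x)) = x holds for every x, which is exactly what the implicit definition of an inverse requires.
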
Intuitively, an inverse function is obtained from g by interchanging the roles of the dependent and independent variables.
Example: The product log is an implicit function giving the solution for x of the equation y − xex = 0.
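Although the product log has no expression in elementary closed form, its values can be computed by applying Newton's method to f(x) = x eˣ − y (an illustrative sketch assuming y ≥ 0; the starting guess is an arbitrary choice):

```python
import math

def product_log(y, tol=1e-12):
    """Solve y = x * e**x for x >= 0 by Newton's method (sketch; assumes y >= 0)."""
    x = math.log(1 + y)          # rough starting guess
    for _ in range(100):
        f = x * math.exp(x) - y
        fp = (x + 1) * math.exp(x)   # derivative of x*e**x
        step = f / fp
        x -= step
        if abs(step) < tol:
            break
    return x
```
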
=== Algebraic functions ===
An algebraic function is a function that satisfies a polynomial equation whose coefficients are themselves polynomials. For example, an algebraic function in one variable x gives a solution for y of an equation
{\displaystyle a_{n}(x)y^{n}+a_{n-1}(x)y^{n-1}+\cdots +a_{0}(x)=0\,,}
where the coefficients ai(x) are polynomial functions of x. This algebraic function can be written as the right side of the solution equation y = f(x). Written like this, f is a multi-valued implicit function.
Algebraic functions play an important role in mathematical analysis and algebraic geometry. A simple example of an algebraic function is given by the left side of the unit circle equation:
{\displaystyle x^{2}+y^{2}-1=0\,.}
Solving for y gives an explicit solution:
{\displaystyle y=\pm {\sqrt {1-x^{2}}}\,.}
But even without specifying this explicit solution, it is possible to refer to the implicit solution of the unit circle equation as y = f(x), where f is the multi-valued implicit function.
While explicit solutions can be found for equations that are quadratic, cubic, and quartic in y, the same is not in general true for quintic and higher degree equations, such as
{\displaystyle y^{5}+2y^{4}-7y^{3}+3y^{2}-6y-x=0\,.}
Nevertheless, one can still refer to the implicit solution y = f(x) involving the multi-valued implicit function f.
== Caveats ==
Not every equation R(x, y) = 0 implies a graph of a single-valued function, the circle equation being one prominent example. Another example is an implicit function given by x − C(y) = 0 where C is a cubic polynomial having a "hump" in its graph. Thus, for an implicit function to be a true (single-valued) function it might be necessary to use just part of the graph. An implicit function can sometimes be successfully defined as a true function only after "zooming in" on some part of the x-axis and "cutting away" some unwanted function branches. Then an equation expressing y as an implicit function of the other variables can be written.
The defining equation R(x, y) = 0 can also have other pathologies. For example, the equation x = 0 does not imply a function f(x) giving solutions for y at all; it is a vertical line. In order to avoid a problem like this, various constraints are frequently imposed on the allowable sorts of equations or on the domain. The implicit function theorem provides a uniform way of handling these sorts of pathologies.
== Implicit differentiation ==
In calculus, a method called implicit differentiation makes use of the chain rule to differentiate implicitly defined functions.
To differentiate an implicit function y(x), defined by an equation R(x, y) = 0, it is not generally possible to solve it explicitly for y and then differentiate. Instead, one can totally differentiate R(x, y) = 0 with respect to x and y and then solve the resulting linear equation for dy/dx to explicitly get the derivative in terms of x and y. Even when it is possible to explicitly solve the original equation, the formula resulting from total differentiation is, in general, much simpler and easier to use.
=== Examples ===
==== Example 1 ====
Consider
{\displaystyle y+x+5=0\,.}
This equation is easy to solve for y, giving
{\displaystyle y=-x-5\,,}
where the right side is the explicit form of the function y(x). Differentiation then gives dy/dx = −1.
Alternatively, one can totally differentiate the original equation:
{\displaystyle {\begin{aligned}{\frac {dy}{dx}}+{\frac {dx}{dx}}+{\frac {d}{dx}}(5)&=0\,;\\[6px]{\frac {dy}{dx}}+1+0&=0\,.\end{aligned}}}
Solving for dy/dx gives
{\displaystyle {\frac {dy}{dx}}=-1\,,}
the same answer as obtained previously.
==== Example 2 ====
An example of an implicit function for which implicit differentiation is easier than using explicit differentiation is the function y(x) defined by the equation
{\displaystyle x^{4}+2y^{2}=8\,.}
To differentiate this explicitly with respect to x, one has first to get
{\displaystyle y(x)=\pm {\sqrt {\frac {8-x^{4}}{2}}}\,,}
and then differentiate this function. This creates two derivatives: one for y ≥ 0 and another for y < 0.
It is substantially easier to implicitly differentiate the original equation:
{\displaystyle 4x^{3}+4y{\frac {dy}{dx}}=0\,,}
giving
{\displaystyle {\frac {dy}{dx}}={\frac {-4x^{3}}{4y}}=-{\frac {x^{3}}{y}}\,.}
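The implicit result can be checked against the explicit upper branch with a central finite difference (an illustrative sketch; the sample point x = 1 is arbitrary):

```python
import math

def y_explicit(x):
    """Upper branch (y >= 0) of x**4 + 2*y**2 = 8."""
    return math.sqrt((8 - x**4) / 2)

def dydx_implicit(x, y):
    """Result of implicit differentiation: dy/dx = -x**3 / y."""
    return -x**3 / y

x0 = 1.0
h = 1e-6
# Central finite difference of the explicit branch:
numeric = (y_explicit(x0 + h) - y_explicit(x0 - h)) / (2 * h)
implicit = dydx_implicit(x0, y_explicit(x0))
```
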
==== Example 3 ====
Often, it is difficult or impossible to solve explicitly for y, and implicit differentiation is the only feasible method of differentiation. An example is the equation
{\displaystyle y^{5}-y=x\,.}
It is impossible to algebraically express y explicitly as a function of x, and therefore one cannot find dy/dx by explicit differentiation. Using the implicit method, dy/dx can be obtained by differentiating the equation to obtain
{\displaystyle 5y^{4}{\frac {dy}{dx}}-{\frac {dy}{dx}}={\frac {dx}{dx}}\,,}
where dx/dx = 1. Factoring out dy/dx shows that
{\displaystyle \left(5y^{4}-1\right){\frac {dy}{dx}}=1\,,}
which yields the result
{\displaystyle {\frac {dy}{dx}}={\frac {1}{5y^{4}-1}}\,,}
which is defined for
{\displaystyle y\neq \pm {\frac {1}{\sqrt[{4}]{5}}}\quad {\text{and}}\quad y\neq \pm {\frac {i}{\sqrt[{4}]{5}}}\,.}
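Since y cannot be written explicitly here, one can instead solve y⁵ − y = x numerically and compare a finite-difference derivative with the implicit formula (a sketch; the branch near y ≈ 1.5 and the sample point x = 6 are arbitrary choices):

```python
def solve_y(x, tol=1e-12):
    """Solve y**5 - y = x for the branch near y = 1.5, by Newton's method (sketch)."""
    y = 1.5
    for _ in range(100):
        step = (y**5 - y - x) / (5 * y**4 - 1)
        y -= step
        if abs(step) < tol:
            break
    return y

def dydx(y):
    """Implicit derivative from the text: dy/dx = 1 / (5*y**4 - 1)."""
    return 1 / (5 * y**4 - 1)

x0 = 6.0                      # arbitrary sample point on this branch
y0 = solve_y(x0)
h = 1e-6
# Central finite difference of the numerically solved branch:
numeric = (solve_y(x0 + h) - solve_y(x0 - h)) / (2 * h)
```
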
=== General formula for derivative of implicit function ===
If R(x, y) = 0, the derivative of the implicit function y(x) is given by: §11.5
{\displaystyle {\frac {dy}{dx}}=-{\frac {\,{\frac {\partial R}{\partial x}}\,}{\frac {\partial R}{\partial y}}}=-{\frac {R_{x}}{R_{y}}}\,,}
where Rx and Ry indicate the partial derivatives of R with respect to x and y.
The above formula comes from using the generalized chain rule to obtain the total derivative — with respect to x — of both sides of R(x, y) = 0:
{\displaystyle {\frac {\partial R}{\partial x}}{\frac {dx}{dx}}+{\frac {\partial R}{\partial y}}{\frac {dy}{dx}}=0\,,}
hence
{\displaystyle {\frac {\partial R}{\partial x}}+{\frac {\partial R}{\partial y}}{\frac {dy}{dx}}=0\,,}
which, when solved for dy/dx, gives the expression above.
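For the unit circle, R(x, y) = x² + y² − 1, the general formula gives dy/dx = −Rx/Ry = −x/y, which a finite-difference check confirms (an illustrative sketch):

```python
import math

def dydx_formula(x, y):
    """General implicit-derivative formula -Rx/Ry for R(x, y) = x**2 + y**2 - 1."""
    Rx = 2 * x
    Ry = 2 * y
    return -Rx / Ry   # simplifies to -x/y

x0 = 0.6
y0 = math.sqrt(1 - x0**2)     # 0.8, on the upper half of the unit circle
h = 1e-6
# Central finite difference of the explicit upper branch sqrt(1 - x**2):
numeric = (math.sqrt(1 - (x0 + h)**2) - math.sqrt(1 - (x0 - h)**2)) / (2 * h)
```
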
== Implicit function theorem ==
Let R(x, y) be a differentiable function of two variables, and (a, b) be a pair of real numbers such that R(a, b) = 0. If ∂R/∂y ≠ 0, then R(x, y) = 0 defines an implicit function that is differentiable in some small enough neighbourhood of (a, b); in other words, there is a differentiable function f that is defined and differentiable in some neighbourhood of a, such that R(x, f(x)) = 0 for x in this neighbourhood.
The condition ∂R/∂y ≠ 0 means that (a, b) is a regular point of the implicit curve of implicit equation R(x, y) = 0 where the tangent is not vertical.
In a less technical language, implicit functions exist and can be differentiated, if the curve has a non-vertical tangent.: §11.5
== In algebraic geometry ==
Consider a relation of the form R(x1, …, xn) = 0, where R is a multivariable polynomial. The set of the values of the variables that satisfy this relation is called an implicit curve if n = 2 and an implicit surface if n = 3. The implicit equations are the basis of algebraic geometry, whose basic subjects of study are the simultaneous solutions of several implicit equations whose left-hand sides are polynomials. These sets of simultaneous solutions are called affine algebraic sets.
== In differential equations ==
The solutions of differential equations generally appear expressed by an implicit function.
== Applications in economics ==
=== Marginal rate of substitution ===
In economics, when the level set R(x, y) = 0 is an indifference curve for the quantities x and y consumed of two goods, the absolute value of the implicit derivative dy/dx is interpreted as the marginal rate of substitution of the two goods: how much more of y one must receive in order to be indifferent to a loss of one unit of x.
=== Marginal rate of technical substitution ===
Similarly, sometimes the level set R(L, K) is an isoquant showing various combinations of utilized quantities L of labor and K of physical capital each of which would result in the production of the same given quantity of output of some good. In this case the absolute value of the implicit derivative dK/dL is interpreted as the marginal rate of technical substitution between the two factors of production: how much more capital the firm must use to produce the same amount of output with one less unit of labor.
=== Optimization ===
Often in economic theory, some function such as a utility function or a profit function is to be maximized with respect to a choice vector x even though the objective function has not been restricted to any specific functional form. The implicit function theorem guarantees that the first-order conditions of the optimization define an implicit function for each element of the optimal vector x* of the choice vector x. When profit is being maximized, typically the resulting implicit functions are the labor demand function and the supply functions of various goods. When utility is being maximized, typically the resulting implicit functions are the labor supply function and the demand functions for various goods.
Moreover, the influence of the problem's parameters on x* — the partial derivatives of the implicit function — can be expressed as total derivatives of the system of first-order conditions found using total differentiation.
== See also ==
== References ==
== Further reading ==
Binmore, K. G. (1983). "Implicit Functions". Calculus. New York: Cambridge University Press. pp. 198–211. ISBN 0-521-28952-1.
Rudin, Walter (1976). Principles of Mathematical Analysis. Boston: McGraw-Hill. pp. 223–228. ISBN 0-07-054235-X.
Simon, Carl P.; Blume, Lawrence (1994). "Implicit Functions and Their Derivatives". Mathematics for Economists. New York: W. W. Norton. pp. 334–371. ISBN 0-393-95733-0.
== External links ==
Archived at Ghostarchive and the Wayback Machine: "Implicit Differentiation, What's Going on Here?". 3Blue1Brown. Essence of Calculus. May 3, 2017 – via YouTube.
In the history of mathematics, the generality of algebra was a phrase used by Augustin-Louis Cauchy to describe a method of argument that was used in the 18th century by mathematicians such as Leonhard Euler and Joseph-Louis Lagrange, particularly in manipulating infinite series. According to Koetsier, the generality of algebra principle assumed, roughly, that the algebraic rules that hold for a certain class of expressions can be extended to hold more generally on a larger class of objects, even if the rules are no longer obviously valid. As a consequence, 18th century mathematicians believed that they could derive meaningful results by applying the usual rules of algebra and calculus that hold for finite expansions even when manipulating infinite expansions.
In works such as Cours d'Analyse, Cauchy rejected the use of "generality of algebra" methods and sought a more rigorous foundation for mathematical analysis.
== Example ==
An example is Euler's derivation of the series
{\displaystyle {\frac {\pi -x}{2}}=\sin x+{\frac {\sin 2x}{2}}+{\frac {\sin 3x}{3}}+\cdots \qquad (1)}
for {\displaystyle 0<x<\pi }. He first evaluated the identity
{\displaystyle {\frac {1-r^{2}}{1-2r\cos x+r^{2}}}=1+2r\cos x+2r^{2}\cos 2x+2r^{3}\cos 3x+\cdots \qquad (2)}
at {\displaystyle r=1} to obtain
{\displaystyle 0=1+2\cos x+2\cos 2x+2\cos 3x+\cdots .\qquad (3)}
The infinite series on the right-hand side of (3) diverges for all real {\displaystyle x}. But nevertheless integrating it term-by-term gives (1), an identity which is known to be true by Fourier analysis.
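The convergent Fourier identity that Euler obtained, sin x + (sin 2x)/2 + (sin 3x)/3 + ⋯ = (π − x)/2 for 0 < x < π, can be checked numerically by partial sums (an illustrative sketch):

```python
import math

def partial_sum(x, n):
    """Partial sum of sin(kx)/k, which approaches (pi - x)/2 on (0, pi)."""
    return sum(math.sin(k * x) / k for k in range(1, n + 1))
```

With a few thousand terms the partial sums agree with (π − x)/2 to within a small tolerance at interior points of (0, π).
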
== See also ==
Principle of permanence
Transfer principle
== References ==
Science is a systematic discipline that builds and organises knowledge in the form of testable hypotheses and predictions about the universe. Modern science is typically divided into two or three major branches: the natural sciences (e.g., physics, chemistry, and biology), which study the physical world; and the social sciences (e.g., economics, psychology, and sociology), which study individuals and societies. Applied sciences are disciplines that use scientific knowledge for practical purposes, such as engineering and medicine. While sometimes referred to as the formal sciences, the study of logic, mathematics, and theoretical computer science (which study formal systems governed by axioms and rules) are typically regarded as separate because they rely on deductive reasoning instead of the scientific method or empirical evidence as their main methodology.
The history of science spans the majority of the historical record, with the earliest identifiable predecessors to modern science dating to the Bronze Age in Egypt and Mesopotamia (c. 3000–1200 BCE). Their contributions to mathematics, astronomy, and medicine entered and shaped the Greek natural philosophy of classical antiquity, whereby formal attempts were made to provide explanations of events in the physical world based on natural causes, while further advancements, including the introduction of the Hindu–Arabic numeral system, were made during the Golden Age of India.: 12 Scientific research deteriorated in these regions after the fall of the Western Roman Empire during the Early Middle Ages (400–1000 CE), but in the Medieval renaissances (Carolingian Renaissance, Ottonian Renaissance and the Renaissance of the 12th century) scholarship flourished again. Some Greek manuscripts lost in Western Europe were preserved and expanded upon in the Middle East during the Islamic Golden Age. Later, Byzantine Greek scholars contributed to their transmission by bringing Greek manuscripts from the declining Byzantine Empire to Western Europe at the beginning of the Renaissance.
The recovery and assimilation of Greek works and Islamic inquiries into Western Europe from the 10th to 13th centuries revived natural philosophy, which was later transformed by the Scientific Revolution that began in the 16th century as new ideas and discoveries departed from previous Greek conceptions and traditions. The scientific method soon played a greater role in knowledge creation and in the 19th century many of the institutional and professional features of science began to take shape, along with the changing of "natural philosophy" to "natural science".
New knowledge in science is advanced by research from scientists who are motivated by curiosity about the world and a desire to solve problems. Contemporary scientific research is highly collaborative and is usually done by teams in academic and research institutions, government agencies, and companies. The practical impact of their work has led to the emergence of science policies that seek to influence the scientific enterprise by prioritising the ethical and moral development of commercial products, armaments, health care, public infrastructure, and environmental protection.
== Etymology ==
The word science has been used in Middle English since the 14th century in the sense of "the state of knowing". The word was borrowed from the Anglo-Norman language as the suffix -cience, which was borrowed from the Latin word scientia, meaning "knowledge, awareness, understanding", a noun derivative of sciens meaning "knowing", itself the present active participle of sciō, "to know".
There are many hypotheses for science's ultimate word origin. According to Michiel de Vaan, Dutch linguist and Indo-Europeanist, sciō may have its origin in the Proto-Italic language as *skije- or *skijo- meaning "to know", which may originate from Proto-Indo-European language as *skh1-ie, *skh1-io, meaning "to incise". The Lexikon der indogermanischen Verben proposed sciō is a back-formation of nescīre, meaning "to not know, be unfamiliar with", which may derive from Proto-Indo-European *sekH- in Latin secāre, or *skh2-, from *sḱʰeh2(i)- meaning "to cut".
In the past, science was a synonym for "knowledge" or "study", in keeping with its Latin origin. A person who conducted scientific research was called a "natural philosopher" or "man of science". In 1834, William Whewell introduced the term scientist in a review of Mary Somerville's book On the Connexion of the Physical Sciences, crediting it to "some ingenious gentleman" (possibly himself).
== History ==
=== Early history ===
Science has no single origin. Rather, scientific thinking emerged gradually over the course of tens of thousands of years, taking different forms around the world, and few details are known about the very earliest developments. Women likely played a central role in prehistoric science, as did religious rituals. Some scholars use the term "protoscience" to label activities in the past that resemble modern science in some but not all features; however, this label has also been criticised as denigrating, or too suggestive of presentism, thinking about those activities only in relation to modern categories.
Direct evidence for scientific processes becomes clearer with the advent of writing systems in the Bronze Age civilisations of Ancient Egypt and Mesopotamia (c. 3000–1200 BCE), creating the earliest written records in the history of science. Although the words and concepts of "science" and "nature" were not part of the conceptual landscape at the time, the ancient Egyptians and Mesopotamians made contributions that would later find a place in Greek and medieval science: mathematics, astronomy, and medicine. From the 3rd millennium BCE, the ancient Egyptians developed a non-positional decimal numbering system, solved practical problems using geometry, and developed a calendar. Their healing therapies involved drug treatments and the supernatural, such as prayers, incantations, and rituals.
The ancient Mesopotamians used knowledge about the properties of various natural chemicals for manufacturing pottery, faience, glass, soap, metals, lime plaster, and waterproofing. They studied animal physiology, anatomy, behaviour, and astrology for divinatory purposes. The Mesopotamians had an intense interest in medicine and the earliest medical prescriptions appeared in Sumerian during the Third Dynasty of Ur. They seem to have studied scientific subjects which had practical or religious applications and had little interest in satisfying curiosity.
=== Classical antiquity ===
In classical antiquity, there is no real ancient analogue of a modern scientist. Instead, well-educated, usually upper-class, and almost universally male individuals performed various investigations into nature whenever they could afford the time. Before the invention or discovery of the concept of phusis or nature by the pre-Socratic philosophers, the same words tended to be used to describe the natural "way" in which a plant grows and the "way" in which, for example, one tribe worships a particular god. For this reason, these men are claimed to have been the first philosophers in the strict sense and the first to clearly distinguish "nature" from "convention".
The early Greek philosophers of the Milesian school, which was founded by Thales of Miletus and later continued by his successors Anaximander and Anaximenes, were the first to attempt to explain natural phenomena without relying on the supernatural. The Pythagoreans developed a complex number philosophy and contributed significantly to the development of mathematical science. The theory of atoms was developed by the Greek philosopher Leucippus and his student Democritus. Later, Epicurus would develop a full natural cosmology based on atomism, and would adopt a "canon" (ruler, standard) which established physical criteria or standards of scientific truth. The Greek doctor Hippocrates established the tradition of systematic medical science and is known as "The Father of Medicine".
A turning point in the history of early philosophical science was Socrates' example of applying philosophy to the study of human matters, including human nature, the nature of political communities, and human knowledge itself. The Socratic method as documented by Plato's dialogues is a dialectic method of hypothesis elimination: better hypotheses are found by steadily identifying and eliminating those that lead to contradictions. The Socratic method searches for general commonly-held truths that shape beliefs and scrutinises them for consistency. Socrates criticised the older type of study of physics as too purely speculative and lacking in self-criticism.
In the 4th century BCE, Aristotle created a systematic programme of teleological philosophy. In the 3rd century BCE, Greek astronomer Aristarchus of Samos was the first to propose a heliocentric model of the universe, with the Sun at the centre and all the planets orbiting it. Aristarchus's model was widely rejected because it was believed to violate the laws of physics, while Ptolemy's Almagest, which contains a geocentric description of the Solar System, was accepted through the early Renaissance instead. The inventor and mathematician Archimedes of Syracuse made major contributions to the beginnings of calculus. Pliny the Elder was a Roman writer and polymath, who wrote the seminal encyclopaedia Natural History.
Positional notation for representing numbers likely emerged between the 3rd and 5th centuries CE along Indian trade routes. This numeral system made efficient arithmetic operations more accessible and would eventually become standard for mathematics worldwide.
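The efficiency that positional notation lends to arithmetic can be illustrated with a short sketch in Python (a modern illustration, not part of the historical record): each digit contributes its value times a power of the base, which Horner's rule accumulates using one multiplication and one addition per digit.

```python
def positional_value(digits, base=10):
    """Evaluate a digit sequence in positional notation via Horner's rule.

    `digits` lists the digits most-significant first, e.g. [4, 0, 7] -> 407.
    """
    value = 0
    for d in digits:
        value = value * base + d  # shift left one place, then add the digit
    return value

print(positional_value([4, 0, 7]))        # base 10: 407
print(positional_value([1, 0, 1, 1], 2))  # base 2: 11
```

The same loop works in any base, which is one reason positional systems displaced additive ones such as Roman numerals.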
=== Middle Ages ===
Due to the collapse of the Western Roman Empire, the 5th century saw an intellectual decline, with knowledge of classical Greek conceptions of the world deteriorating in Western Europe. Latin encyclopaedists of the period, such as Isidore of Seville, preserved the majority of general ancient knowledge. In contrast, because the Byzantine Empire resisted attacks from invaders, it was able to preserve and improve upon prior learning. John Philoponus, a Byzantine scholar in the 6th century, began to question Aristotle's teaching of physics, introducing the theory of impetus. His criticism served as an inspiration to medieval scholars and to Galileo Galilei, who extensively cited his works ten centuries later.
During late antiquity and the Early Middle Ages, natural phenomena were mainly examined via the Aristotelian approach. The approach includes Aristotle's four causes: material, formal, moving, and final cause. Many Greek classical texts were preserved by the Byzantine Empire and Arabic translations were made by Christians, mainly Nestorians and Miaphysites. Under the Abbasids, these Arabic translations were later improved and developed by Arabic scientists. By the 6th and 7th centuries, the neighbouring Sasanian Empire established the medical Academy of Gondishapur, which was considered by Greek, Syriac, and Persian physicians as the most important medical hub of the ancient world.
Islamic study of Aristotelianism flourished in the House of Wisdom, established in the Abbasid capital of Baghdad, Iraq, and continued until the Mongol invasions in the 13th century. Ibn al-Haytham, better known as Alhazen, used controlled experiments in his optical study. Avicenna's compilation of The Canon of Medicine, a medical encyclopaedia, is considered to be one of the most important publications in medicine and was used until the 18th century.
By the 11th century most of Europe had become Christian,: 204 and in 1088, the University of Bologna emerged as the first university in Europe. As such, demand for Latin translation of ancient and scientific texts grew,: 204 a major contributor to the Renaissance of the 12th century. Renaissance scholasticism in western Europe flourished, with experiments done by observing, describing, and classifying subjects in nature. In the 13th century, medical teachers and students at Bologna began opening human bodies, leading to the first anatomy textbook based on human dissection by Mondino de Luzzi.
=== Renaissance ===
New developments in optics played a role in the inception of the Renaissance, both by challenging long-held metaphysical ideas on perception and by contributing to the improvement and development of technology such as the camera obscura and the telescope. At the start of the Renaissance, Roger Bacon, Vitello, and John Peckham each built up a scholastic ontology upon a causal chain beginning with sensation, perception, and finally apperception of the individual and universal forms of Aristotle. A model of vision later known as perspectivism was exploited and studied by the artists of the Renaissance. This theory uses only three of Aristotle's four causes: formal, material, and final.
In the 16th century, Nicolaus Copernicus formulated a heliocentric model of the Solar System, stating that the planets revolve around the Sun, instead of the geocentric model where the planets and the Sun revolve around the Earth. This was based on a theorem that the orbital periods of the planets are longer as their orbs are farther from the centre of motion, which he found not to agree with Ptolemy's model.
Johannes Kepler and others challenged the notion that the only function of the eye is perception, and shifted the main focus in optics from the eye to the propagation of light. Kepler is best known, however, for improving Copernicus' heliocentric model through the discovery of Kepler's laws of planetary motion. Kepler did not reject Aristotelian metaphysics and described his work as a search for the Harmony of the Spheres. Galileo had made significant contributions to astronomy, physics and engineering. However, he became persecuted after Pope Urban VIII sentenced him for writing about the heliocentric model.
The printing press was widely used to publish scholarly arguments, including some that disagreed widely with contemporary ideas of nature. Francis Bacon and René Descartes published philosophical arguments in favour of a new type of non-Aristotelian science. Bacon emphasised the importance of experiment over contemplation, questioned the Aristotelian concepts of formal and final cause, and promoted the idea that science should study the laws of nature and work toward the improvement of all human life. Descartes emphasised individual thought and argued that mathematics rather than geometry should be used to study nature.
=== Age of Enlightenment ===
At the start of the Age of Enlightenment, Isaac Newton formed the foundation of classical mechanics by his Philosophiæ Naturalis Principia Mathematica, greatly influencing future physicists. Gottfried Wilhelm Leibniz incorporated terms from Aristotelian physics, now used in a new non-teleological way. This implied a shift in the view of objects: objects were now considered as having no innate goals. Leibniz assumed that different types of things all work according to the same general laws of nature, with no special formal or final causes.
During this time the declared purpose and value of science became producing wealth and inventions that would improve human lives, in the materialistic sense of having more food, clothing, and other things. In Bacon's words, "the real and legitimate goal of sciences is the endowment of human life with new inventions and riches", and he discouraged scientists from pursuing intangible philosophical or spiritual ideas, which he believed contributed little to human happiness beyond "the fume of subtle, sublime or pleasing [speculation]".
Science during the Enlightenment was dominated by scientific societies and academies, which had largely replaced universities as centres of scientific research and development. Societies and academies were the backbones of the maturation of the scientific profession. Another important development was the popularisation of science among an increasingly literate population. Enlightenment philosophers turned to a few of their scientific predecessors – Galileo, Kepler, Boyle, and Newton principally – as the guides to every physical and social field of the day.
The 18th century saw significant advancements in the practice of medicine and physics; the development of biological taxonomy by Carl Linnaeus; a new understanding of magnetism and electricity; and the maturation of chemistry as a discipline. Ideas on human nature, society, and economics evolved during the Enlightenment. Hume and other Scottish Enlightenment thinkers developed a "science of man", which was expressed historically in works by authors including James Burnett, Adam Ferguson, John Millar and William Robertson, all of whom merged a scientific study of how humans behaved in ancient and primitive cultures with a strong awareness of the determining forces of modernity. Modern sociology largely originated from this movement. In 1776, Adam Smith published The Wealth of Nations, which is often considered the first work on modern economics.
=== 19th century ===
During the 19th century, many distinguishing characteristics of contemporary modern science began to take shape. These included the transformation of the life and physical sciences; the frequent use of precision instruments; the emergence of terms such as "biologist", "physicist", and "scientist"; an increased professionalisation of those studying nature; scientists gaining cultural authority over many dimensions of society; the industrialisation of numerous countries; the thriving of popular science writings; and the emergence of science journals. During the late 19th century, psychology emerged as a separate discipline from philosophy when Wilhelm Wundt founded the first laboratory for psychological research in 1879.
In 1858, Charles Darwin and Alfred Russel Wallace independently proposed the theory of evolution by natural selection, which explained how different plants and animals originated and evolved. Their theory was set out in detail in Darwin's book On the Origin of Species, published in 1859. Separately, Gregor Mendel presented his paper "Experiments on Plant Hybridisation" in 1865, which outlined the principles of biological inheritance, serving as the basis for modern genetics.
Early in the 19th century John Dalton suggested the modern atomic theory, based on Democritus's original idea of indivisible particles called atoms. The laws of conservation of energy, conservation of momentum and conservation of mass suggested a highly stable universe where there could be little loss of resources. However, with the advent of the steam engine and the Industrial Revolution there was an increased understanding that not all forms of energy have the same quality, that is, the same ease of conversion to useful work or to another form of energy. This realisation led to the development of the laws of thermodynamics, in which the free energy of the universe is seen as constantly declining: the entropy of a closed universe increases over time.
The electromagnetic theory was established in the 19th century by the works of Hans Christian Ørsted, André-Marie Ampère, Michael Faraday, James Clerk Maxwell, Oliver Heaviside, and Heinrich Hertz. The new theory raised questions that could not easily be answered using Newton's framework. The discovery of X-rays inspired the discovery of radioactivity by Henri Becquerel and Marie Curie in 1896; Marie Curie later became the first person to win two Nobel Prizes. The following year came the discovery of the first subatomic particle, the electron.
=== 20th century ===
In the first half of the century the development of antibiotics and artificial fertilisers improved human living standards globally. Harmful environmental issues such as ozone depletion, ocean acidification, eutrophication, and climate change came to the public's attention and caused the onset of environmental studies.
During this period scientific experimentation became increasingly larger in scale and funding. The extensive technological innovation stimulated by World War I, World War II, and the Cold War led to competitions between global powers, such as the Space Race and nuclear arms race. Substantial international collaborations were also made, despite armed conflicts.
In the late 20th century active recruitment of women and elimination of sex discrimination greatly increased the number of women scientists, but large gender disparities remained in some fields. The discovery of the cosmic microwave background in 1964 led to a rejection of the steady-state model of the universe in favour of the Big Bang theory of Georges Lemaître.
The century saw fundamental changes within science disciplines. Evolution became a unified theory in the early 20th century when the modern synthesis reconciled Darwinian evolution with classical genetics. Albert Einstein's theory of relativity and the development of quantum mechanics complemented classical mechanics, describing physics at extreme scales of length, time, and gravity. Widespread use of integrated circuits in the last quarter of the 20th century, combined with communications satellites, led to a revolution in information technology and the rise of the global internet and mobile computing, including smartphones. The need for mass systematisation of long, intertwined causal chains and large amounts of data led to the rise of the fields of systems theory and computer-assisted scientific modelling.
=== 21st century ===
The Human Genome Project was completed in 2003 by identifying and mapping all of the genes of the human genome. The first induced pluripotent human stem cells were made in 2006, allowing adult cells to be transformed into stem cells that can turn into any cell type found in the body. With the confirmation of the Higgs boson discovery in 2013, the last particle predicted by the Standard Model of particle physics was found. In 2015, gravitational waves, predicted by general relativity a century before, were first observed. In 2019, the international collaboration Event Horizon Telescope presented the first direct image of a black hole's accretion disc.
== Branches ==
Modern science is commonly divided into three major branches: natural science, social science, and formal science. Each of these branches comprises various specialised yet overlapping scientific disciplines that often possess their own nomenclature and expertise. Both natural and social sciences are empirical sciences, as their knowledge is based on empirical observations and is capable of being tested for its validity by other researchers working under the same conditions.
=== Natural ===
Natural science is the study of the physical world. It can be divided into two main branches: life science and physical science. These two branches may be further divided into more specialised disciplines. For example, physical science can be subdivided into physics, chemistry, astronomy, and earth science. Modern natural science is the successor to the natural philosophy that began in Ancient Greece. Galileo, Descartes, Bacon, and Newton debated the benefits of using approaches that were more mathematical and more experimental in a methodical way. Still, philosophical perspectives, conjectures, and presuppositions, often overlooked, remain necessary in natural science. Systematic data collection, including discovery science, succeeded natural history, which emerged in the 16th century by describing and classifying plants, animals, minerals, and other natural objects. Today, "natural history" suggests observational descriptions aimed at popular audiences.
=== Social ===
Social science is the study of human behaviour and the functioning of societies. It has many disciplines that include, but are not limited to anthropology, economics, history, human geography, political science, psychology, and sociology. In the social sciences, there are many competing theoretical perspectives, many of which are extended through competing research programmes such as the functionalists, conflict theorists, and interactionists in sociology. Due to the limitations of conducting controlled experiments involving large groups of individuals or complex situations, social scientists may adopt other research methods such as the historical method, case studies, and cross-cultural studies. Moreover, if quantitative information is available, social scientists may rely on statistical approaches to better understand social relationships and processes.
=== Formal ===
Formal science is an area of study that generates knowledge using formal systems. A formal system is an abstract structure used for inferring theorems from axioms according to a set of rules. It includes mathematics, systems theory, and theoretical computer science. The formal sciences share similarities with the other two branches by relying on objective, careful, and systematic study of an area of knowledge. They are, however, different from the empirical sciences as they rely exclusively on deductive reasoning, without the need for empirical evidence, to verify their abstract concepts. The formal sciences are therefore a priori disciplines and because of this, there is disagreement on whether they constitute a science. Nevertheless, the formal sciences play an important role in the empirical sciences. Calculus, for example, was initially invented to understand motion in physics. Natural and social sciences that rely heavily on mathematical applications include mathematical physics, chemistry, biology, finance, and economics.
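The idea of a formal system, inferring theorems from axioms according to a set of rules, can be made concrete with a toy sketch in Python (the string encoding of statements here is purely illustrative):

```python
def derive_theorems(axioms, rules):
    """Close a set of axioms under a list of inference rules.

    Each rule maps a pair of known statements to a new statement, or None
    if it does not apply. Returns every statement derivable in finitely
    many steps.
    """
    theorems = set(axioms)
    changed = True
    while changed:
        changed = False
        for a in list(theorems):
            for b in list(theorems):
                for rule in rules:
                    t = rule(a, b)
                    if t is not None and t not in theorems:
                        theorems.add(t)
                        changed = True
    return theorems

# Modus ponens over statements encoded as strings:
# from "p" and "p->q", infer "q".
def modus_ponens(a, b):
    if b.startswith(a + "->"):
        return b[len(a) + 2:]
    return None

print(derive_theorems({"p", "p->q", "q->r"}, [modus_ponens]))
# derives "q" and then "r" from the three axioms
```

The derivation proceeds entirely by rule application, with no appeal to observation, which is the sense in which the formal sciences are a priori.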
=== Applied ===
Applied science is the use of the scientific method and knowledge to attain practical goals and includes a broad range of disciplines such as engineering and medicine. Engineering is the use of scientific principles to invent, design and build machines, structures and technologies. Science may contribute to the development of new technologies. Medicine is the practice of caring for patients by maintaining and restoring health through the prevention, diagnosis, and treatment of injury or disease.
=== Basic ===
The applied sciences are often contrasted with the basic sciences, which are focused on advancing scientific theories and laws that explain and predict events in the natural world.
=== Blue skies ===
Blue skies research is scientific research in domains where practical applications are not immediately apparent; it is driven by curiosity rather than a specific goal and is closely related to basic research.
=== Computational ===
Computational science applies computer simulations to science, enabling a better understanding of scientific problems than formal mathematics alone can achieve. The use of machine learning and artificial intelligence is becoming a central feature of computational contributions to science, for example in agent-based computational economics, random forests, topic modeling and various forms of prediction. However, machines alone rarely advance knowledge as they require human guidance and capacity to reason; and they can introduce bias against certain social groups or sometimes underperform against humans.
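As a minimal illustration of simulation in science (a standard textbook exercise, not drawn from any particular study), a Monte Carlo sketch in Python can estimate π by sampling random points in the unit square and counting how many fall inside the quarter circle:

```python
import random

def estimate_pi(samples=100_000, seed=0):
    """Monte Carlo estimate of pi: the fraction of random points in the
    unit square that land inside the quarter circle of radius 1, times 4."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

print(estimate_pi())  # close to 3.14159; accuracy improves with more samples
```

The estimate converges slowly (the error shrinks roughly as the inverse square root of the sample count), which is why large-scale simulations demand substantial computing power.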
=== Interdisciplinary ===
Interdisciplinary science involves the combination of two or more disciplines into one, such as bioinformatics, a combination of biology and computer science, or cognitive science. The concept has existed since the ancient Greek period and became popular again in the 20th century.
== Research ==
Scientific research can be labelled as either basic or applied research. Basic research is the search for knowledge and applied research is the search for solutions to practical problems using this knowledge. Most understanding comes from basic research, though sometimes applied research targets specific practical problems. This leads to technological advances that were not previously imaginable.
=== Scientific method ===
Scientific research involves using the scientific method, which seeks to objectively explain the events of nature in a reproducible way. Scientists usually take for granted a set of basic assumptions that are needed to justify the scientific method: there is an objective reality shared by all rational observers; this objective reality is governed by natural laws; and these laws can be discovered by means of systematic observation and experimentation. Mathematics is essential in the formation of hypotheses, theories, and laws, because it is used extensively in quantitative modelling, observing, and collecting measurements. Statistics is used to summarise and analyse data, which allows scientists to assess the reliability of experimental results.
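How statistics summarises measurements and indicates their reliability can be sketched with Python's standard library (the readings below are hypothetical):

```python
import statistics

def summarise(measurements):
    """Return the sample mean and its standard error.

    The standard error indicates how much the mean would be expected to
    vary if the experiment were repeated.
    """
    mean = statistics.mean(measurements)
    sem = statistics.stdev(measurements) / len(measurements) ** 0.5
    return mean, sem

readings = [9.79, 9.82, 9.81, 9.80, 9.78]  # hypothetical repeated measurements
mean, sem = summarise(readings)
print(f"{mean:.3f} ± {sem:.3f}")  # prints 9.800 ± 0.007
```

A reported uncertainty of this kind is what lets independent researchers judge whether their own replication agrees with the original result.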
In the scientific method an explanatory thought experiment or hypothesis is put forward as an explanation using parsimony principles and is expected to seek consilience – fitting with other accepted facts related to an observation or scientific question. This tentative explanation is used to make falsifiable predictions, which are typically posted before being tested by experimentation. Disproof of a prediction is evidence of progress. Experimentation is especially important in science to help establish causal relationships to avoid the correlation fallacy, though in some sciences such as astronomy or geology, a predicted observation might be more appropriate.
When a hypothesis proves unsatisfactory it is modified or discarded. If the hypothesis survives testing, it may become adopted into the framework of a scientific theory, a validly reasoned, self-consistent model or framework for describing the behaviour of certain natural events. A theory typically describes the behaviour of much broader sets of observations than a hypothesis; commonly, a large number of hypotheses can be logically bound together by a single theory. Thus, a theory is a hypothesis explaining various other hypotheses. In that vein, theories are formulated according to most of the same scientific principles as hypotheses. Scientists may generate a model, an attempt to describe or depict an observation in terms of a logical, physical or mathematical representation, and to generate new hypotheses that can be tested by experimentation.
While performing experiments to test hypotheses, scientists may have a preference for one outcome over another. Eliminating the bias can be achieved through transparency, careful experimental design, and a thorough peer review process of the experimental results and conclusions. After the results of an experiment are announced or published, it is normal practice for independent researchers to double-check how the research was performed, and to follow up by performing similar experiments to determine how dependable the results might be. Taken in its entirety, the scientific method allows for highly creative problem solving while minimising the effects of subjective and confirmation bias. Intersubjective verifiability, the ability to reach a consensus and reproduce results, is fundamental to the creation of all scientific knowledge.
=== Literature ===
Scientific research is published in a range of literature. Scientific journals communicate and document the results of research carried out in universities and various other research institutions, serving as an archival record of science. The first scientific journals, Journal des sçavans followed by Philosophical Transactions, began publication in 1665. Since that time the total number of active periodicals has steadily increased. In 1981, one estimate for the number of scientific and technical journals in publication was 11,500.
Most scientific journals cover a single scientific field and publish the research within that field; the research is normally expressed in the form of a scientific paper. Science has become so pervasive in modern societies that it is considered necessary to communicate the achievements, news, and ambitions of scientists to a wider population.
=== Challenges ===
The replication crisis is an ongoing methodological crisis that affects parts of the social and life sciences. In subsequent investigations, the results of many scientific studies have proven difficult or impossible to reproduce. The crisis has long-standing roots; the phrase was coined in the early 2010s as part of a growing awareness of the problem. The replication crisis represents an important body of research in metascience, which aims to improve the quality of all scientific research while reducing waste.
An area of study or speculation that masquerades as science in an attempt to claim legitimacy that it would not otherwise be able to achieve is sometimes referred to as pseudoscience, fringe science, or junk science. Physicist Richard Feynman coined the term "cargo cult science" for cases in which researchers believe they are doing science, and at a glance appear to be doing so, but lack the honesty that would allow their results to be rigorously evaluated. Various types of commercial advertising, ranging from hype to fraud, may fall into these categories. Science has been described as "the most important tool" for separating valid claims from invalid ones.
There can also be an element of political bias or ideological bias on all sides of scientific debates. Sometimes, research may be characterised as "bad science", research that may be well-intended but is incorrect, obsolete, incomplete, or over-simplified expositions of scientific ideas. The term scientific misconduct refers to situations such as where researchers have intentionally misrepresented their published data or have purposely given credit for a discovery to the wrong person.
== Philosophy ==
There are different schools of thought in the philosophy of science. The most popular position is empiricism, which holds that knowledge is created by a process involving observation; scientific theories generalise observations. Empiricism generally encompasses inductivism, a position that explains how general theories can be made from the finite amount of empirical evidence available. Many versions of empiricism exist, with the predominant ones being Bayesianism and the hypothetico-deductive method.
Empiricism has stood in contrast to rationalism, the position originally associated with Descartes, which holds that knowledge is created by the human intellect, not by observation. Critical rationalism is a contrasting 20th-century approach to science, first defined by Austrian-British philosopher Karl Popper. Popper rejected the way that empiricism describes the connection between theory and observation. He claimed that theories are not generated by observation, but that observation is made in the light of theories, and that the only way theory A can be affected by observation is when theory A conflicts with an observation while a rival theory B survives it.
Popper proposed replacing verifiability with falsifiability as the landmark of scientific theories, replacing induction with falsification as the empirical method. Popper further claimed that there is actually only one universal method, not specific to science: the negative method of criticism, trial and error, covering all products of the human mind, including science, mathematics, philosophy, and art.
Another approach, instrumentalism, emphasises the utility of theories as instruments for explaining and predicting phenomena. It views scientific theories as black boxes, with only their input (initial conditions) and output (predictions) being relevant. Consequences, theoretical entities, and logical structure are claimed to be things that should be ignored. Close to instrumentalism is constructive empiricism, according to which the main criterion for the success of a scientific theory is whether what it says about observable entities is true.
Thomas Kuhn argued that the process of observation and evaluation takes place within a paradigm, a logically consistent "portrait" of the world that is consistent with observations made from its framing. He characterised normal science as the process of observation and "puzzle solving", which takes place within a paradigm, whereas revolutionary science occurs when one paradigm overtakes another in a paradigm shift. Each paradigm has its own distinct questions, aims, and interpretations. The choice between paradigms involves setting two or more "portraits" against the world and deciding which likeness is most promising. A paradigm shift occurs when a significant number of observational anomalies arise in the old paradigm and a new paradigm makes sense of them. That is, the choice of a new paradigm is based on observations, even though those observations are made against the background of the old paradigm. For Kuhn, acceptance or rejection of a paradigm is a social process as much as a logical process. Kuhn's position, however, is not one of relativism.
Another approach often cited in debates of scientific scepticism against controversial movements like "creation science" is methodological naturalism. Naturalists maintain that a distinction should be made between natural and supernatural, and that science should be restricted to natural explanations. Methodological naturalism maintains that science requires strict adherence to empirical study and independent verification.
== Community ==
The scientific community is a network of interacting scientists who conduct scientific research. The community consists of smaller groups working in scientific fields. By having peer review, through discussion and debate within journals and conferences, scientists maintain the quality of research methodology and objectivity when interpreting results.
=== Scientists ===
Scientists are individuals who conduct scientific research to advance knowledge in an area of interest. Scientists may exhibit a strong curiosity about reality and a desire to apply scientific knowledge for the benefit of public health, nations, the environment, or industries; other motivations include recognition by peers and prestige. In modern times, many scientists study within specific areas of science in academic institutions, often obtaining advanced degrees in the process. Many scientists pursue careers in various fields such as academia, industry, government, and nonprofit organisations.
Science has historically been a male-dominated field, with notable exceptions. Women have faced considerable discrimination in science, much as they have in other areas of male-dominated societies. For example, women were frequently passed over for job opportunities and denied credit for their work. The achievements of women in science have been attributed to the defiance of their traditional role as labourers within the domestic sphere.
=== Learned societies ===
Learned societies for the communication and promotion of scientific thought and experimentation have existed since the Renaissance. Many scientists belong to a learned society that promotes their respective scientific discipline, profession, or group of related disciplines. Membership may be open to all, may require possession of scientific credentials, or may be conferred by election. Most scientific societies are nonprofit organisations, and many are professional associations. Their activities typically include holding regular conferences for the presentation and discussion of new research results and publishing or sponsoring academic journals in their discipline. Some societies act as professional bodies, regulating the activities of their members in the public interest, or the collective interest of the membership.
The professionalisation of science, begun in the 19th century, was partly enabled by the creation of national distinguished academies of sciences such as the Italian Accademia dei Lincei in 1603, the British Royal Society in 1660, the French Academy of Sciences in 1666, the American National Academy of Sciences in 1863, the German Kaiser Wilhelm Society in 1911, and the Chinese Academy of Sciences in 1949. International scientific organisations, such as the International Science Council, are devoted to international cooperation for science advancement.
=== Awards ===
Science awards are usually given to individuals or organisations that have made significant contributions to a discipline. They are often given by prestigious institutions; thus, receiving one is considered a great honour for a scientist. Since the early Renaissance, scientists have often been awarded medals, money, and titles. The Nobel Prize, a widely regarded prestigious award, is awarded annually to those who have achieved scientific advances in the fields of medicine, physics, and chemistry.
== Society ==
=== Funding and policies ===
Funding of science is often through a competitive process in which potential research projects are evaluated and only the most promising receive funding. Such processes, which are run by government, corporations, or foundations, allocate scarce funds. Total research funding in most developed countries is between 1.5% and 3% of GDP. In the OECD, around two-thirds of research and development in scientific and technical fields is carried out by industry, and 20% and 10%, respectively, by universities and government. The government funding proportion in certain fields is higher, and it dominates research in social science and the humanities. In less developed nations, the government provides the bulk of the funds for their basic scientific research.
Many governments have dedicated agencies to support scientific research, such as the National Science Foundation in the United States, the National Scientific and Technical Research Council in Argentina, Commonwealth Scientific and Industrial Research Organisation in Australia, National Centre for Scientific Research in France, the Max Planck Society in Germany, and National Research Council in Spain. In commercial research and development, all but the most research-orientated corporations focus more heavily on near-term commercialisation possibilities than research driven by curiosity.
Science policy is concerned with policies that affect the conduct of the scientific enterprise, including research funding, often in pursuance of other national policy goals such as technological innovation to promote commercial product development, weapons development, health care, and environmental monitoring. Science policy sometimes refers to the act of applying scientific knowledge and consensus to the development of public policies. In accordance with public policy being concerned about the well-being of its citizens, science policy's goal is to consider how science and technology can best serve the public. Public policy can directly affect the funding of capital equipment and intellectual infrastructure for industrial research by providing tax incentives to those organisations that fund research.
=== Education and awareness ===
Science education for the general public is embedded in the school curriculum, and is supplemented by online pedagogical content (for example, YouTube and Khan Academy), museums, and science magazines and blogs. Major organisations of scientists such as the American Association for the Advancement of Science (AAAS) consider the sciences to be a part of the liberal arts traditions of learning, along with philosophy and history. Scientific literacy is chiefly concerned with an understanding of the scientific method, units and methods of measurement, empiricism, a basic understanding of statistics (correlations, qualitative versus quantitative observations, aggregate statistics), and a basic understanding of core scientific fields such as physics, chemistry, biology, ecology, geology, and computation. As a student advances into higher stages of formal education, the curriculum becomes more in depth. Traditional subjects usually included in the curriculum are natural and formal sciences, although recent movements include social and applied science as well.
The mass media face pressures that can prevent them from accurately depicting competing scientific claims in terms of their credibility within the scientific community as a whole. Determining how much weight to give different sides in a scientific debate may require considerable expertise regarding the matter. Few journalists have real scientific knowledge, and even beat reporters who are knowledgeable about certain scientific issues may be ignorant about other scientific issues that they are suddenly asked to cover.
Science magazines such as New Scientist, Science & Vie, and Scientific American cater to the needs of a much wider readership and provide a non-technical summary of popular areas of research, including notable discoveries and advances in certain fields of research. The science fiction genre, primarily speculative fiction, can transmit the ideas and methods of science to the general public. Recent efforts to intensify or develop links between science and non-scientific disciplines, such as literature or poetry, include the Creative Writing Science resource developed through the Royal Literary Fund.
=== Anti-science attitudes ===
While the scientific method is broadly accepted in the scientific community, some segments of society reject certain scientific positions or are sceptical about science. Examples are the common notion that COVID-19 is not a major health threat to the US (held by 39% of Americans in August 2021) or the belief that climate change is not a major threat to the US (held by 40% of Americans in late 2019 and early 2020). Psychologists have pointed to four factors driving rejection of scientific results:
Scientific authorities are sometimes seen as inexpert, untrustworthy, or biased.
Some marginalised social groups hold anti-science attitudes, in part because these groups have often been exploited in unethical experiments.
Messages from scientists may contradict deeply held existing beliefs or morals.
The delivery of a scientific message may not be appropriately targeted to a recipient's learning style.
Anti-science attitudes often seem to be caused by fear of rejection in social groups. For instance, climate change is perceived as a threat by only 22% of Americans on the right side of the political spectrum, but by 85% on the left. That is, if someone on the left did not consider climate change a threat, they might face contempt and rejection within that social group. In fact, people may rather deny a scientifically accepted fact than lose or jeopardise their social status.
=== Politics ===
Attitudes towards science are often determined by political opinions and goals. Government, business and advocacy groups have been known to use legal and economic pressure to influence scientific researchers. Many factors can act as facets of the politicisation of science such as anti-intellectualism, perceived threats to religious beliefs, and fear for business interests. Politicisation of science is usually accomplished when scientific information is presented in a way that emphasises the uncertainty associated with the scientific evidence. Tactics such as shifting conversation, failing to acknowledge facts, and capitalising on doubt of scientific consensus have been used to gain more attention for views that have been undermined by scientific evidence. Examples of issues that have involved the politicisation of science include the global warming controversy, health effects of pesticides, and health effects of tobacco.
== See also ==
List of scientific occupations
List of years in science
Logology (science)
Science (Wikiversity)
Scientific integrity
== Notes ==
== References ==
== External links == | Wikipedia/Science |
Scientific modelling is an activity that produces models representing empirical objects, phenomena, and physical processes, to make a particular part or feature of the world easier to understand, define, quantify, visualize, or simulate. It requires selecting and identifying relevant aspects of a situation in the real world and then developing a model to replicate a system with those features. Different types of models may be used for different purposes, such as conceptual models to better understand, operational models to operationalize, mathematical models to quantify, computational models to simulate, and graphical models to visualize the subject.
Modelling is an essential and inseparable part of many scientific disciplines, each of which has its own ideas about specific types of modelling. The following was said by John von Neumann.
... the sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work—that is, correctly to describe phenomena from a reasonably wide area.
There is also an increasing attention to scientific modelling in fields such as science education, philosophy of science, systems theory, and knowledge visualization. There is a growing collection of methods, techniques and meta-theory about all kinds of specialized scientific modelling.
== Overview ==
A scientific model seeks to represent empirical objects, phenomena, and physical processes in a logical and objective way. All models are in simulacra, that is, simplified reflections of reality that, despite being approximations, can be extremely useful. Building and disputing models is fundamental to the scientific enterprise. Complete and true representation may be impossible, but scientific debate often concerns which is the better model for a given task, e.g., which is the more accurate climate model for seasonal forecasting.
Attempts to formalize the principles of the empirical sciences use an interpretation to model reality, in the same way logicians axiomatize the principles of logic. The aim of these attempts is to construct a formal system that will not produce theoretical consequences that are contrary to what is found in reality. Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific models are true.
For the scientist, a model is also a way in which the human thought processes can be amplified. For instance, models that are rendered in software allow scientists to leverage computational power to simulate, visualize, manipulate and gain intuition about the entity, phenomenon, or process being represented. Such computer models are in silico. Other types of scientific models are in vivo (living models, such as laboratory rats) and in vitro (in glassware, such as tissue culture).
== Basics ==
=== Modelling as a substitute for direct measurement and experimentation ===
Models are typically used when it is either impossible or impractical to create experimental conditions in which scientists can directly measure outcomes. Direct measurement of outcomes under controlled conditions (see Scientific method) will always be more reliable than modeled estimates of outcomes.
Within modeling and simulation, a model is a task-driven, purposeful simplification and abstraction of a perception of reality, shaped by physical, legal, and cognitive constraints. It is task-driven because a model is captured with a certain question or task in mind. Simplification leaves out all known and observed entities and their relations that are not important for the task. Abstraction aggregates information that is important but not needed in the same detail as the object of interest. Both activities, simplification and abstraction, are done purposefully. However, they are done based on a perception of reality. This perception is already a model in itself, as it comes with physical constraints. There are also legal constraints on what we are able to observe with our current tools and methods, and cognitive constraints that limit what we are able to explain with our current theories. The resulting model comprises the concepts, their behavior, and their relations in informal form and is often referred to as a conceptual model. In order to execute the model, it needs to be implemented as a computer simulation. This requires more choices, such as numerical approximations or the use of heuristics. Despite all these epistemological and computational constraints, simulation has been recognized as the third pillar of scientific methods: theory building, simulation, and experimentation.
=== Simulation ===
A simulation is a way to implement the model, often employed when the model is too complex for an analytical solution. A steady-state simulation provides information about the system at a specific instant in time (usually at equilibrium, if such a state exists). A dynamic simulation provides information over time. A simulation shows how a particular object or phenomenon will behave. Such a simulation can be useful for testing, analysis, or training in those cases where real-world systems or concepts can be represented by models.
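As a minimal sketch of the steady-state/dynamic distinction (the cooling model and every parameter value here are our own illustration, not drawn from the text), a dynamic simulation of Newton's law of cooling steps the state through time, while a steady-state question asks only about the equilibrium the system settles into:

```python
# Dynamic simulation of Newton's law of cooling, dT/dt = -k * (T - T_env),
# by explicit Euler steps. All parameter values are illustrative.
def simulate_cooling(T0, T_env, k, dt, steps):
    """Return the full temperature trajectory over time (dynamic simulation)."""
    T = T0
    trajectory = [T]
    for _ in range(steps):
        T += -k * (T - T_env) * dt   # one Euler update
        trajectory.append(T)
    return trajectory

traj = simulate_cooling(T0=90.0, T_env=20.0, k=0.1, dt=0.5, steps=200)
# A steady-state analysis would report only the equilibrium, T -> T_env:
print(traj[0], traj[-1])
```

The dynamic run yields the whole cooling curve; the steady-state answer is just its limit, which here can also be read off analytically (T_env).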
=== Structure ===
Structure is a fundamental and sometimes intangible notion covering the recognition, observation, nature, and stability of patterns and relationships of entities. From a child's verbal description of a snowflake, to the detailed scientific analysis of the properties of magnetic fields, the concept of structure is an essential foundation of nearly every mode of inquiry and discovery in science, philosophy, and art.
=== Systems ===
A system is a set of interacting or interdependent entities, real or abstract, forming an integrated whole. In general, a system is a construct or collection of different elements that together can produce results not obtainable by the elements alone. The concept of an 'integrated whole' can also be stated in terms of a system embodying a set of relationships which are differentiated from relationships of the set to other elements, and from relationships between an element of the set and elements not a part of the relational regime. There are two types of system models: 1) discrete, in which the variables change instantaneously at separate points in time, and 2) continuous, where the state variables change continuously with respect to time.
=== Generating a model ===
Modelling is the process of generating a model as a conceptual representation of some phenomenon. Typically a model will deal with only some aspects of the phenomenon in question, and two models of the same phenomenon may be essentially different—that is to say, that the differences between them comprise more than just a simple renaming of components.
Such differences may be due to differing requirements of the model's end users, or to conceptual or aesthetic differences among the modelers and to contingent decisions made during the modelling process. Considerations that may influence the structure of a model might be the modeler's preference for a reduced ontology, preferences regarding statistical models versus deterministic models, discrete versus continuous time, etc. In any case, users of a model need to understand the assumptions made that are pertinent to its validity for a given use.
Building a model requires abstraction. Assumptions are used in modelling in order to specify the domain of application of the model. For example, the special theory of relativity assumes an inertial frame of reference. This assumption was contextualized and further explained by the general theory of relativity. A model makes accurate predictions when its assumptions are valid, and might well not make accurate predictions when its assumptions do not hold. Such assumptions are often the point with which older theories are succeeded by new ones (the general theory of relativity works in non-inertial reference frames as well).
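The point about assumption domains can be made numeric with standard physics formulas (this comparison is our own illustration): Newtonian momentum p = m·v closely tracks relativistic momentum p = γ·m·v only while v is far below c, the speed of light; outside that assumed domain its predictions degrade sharply.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def newtonian_momentum(m, v):
    """Momentum in the Newtonian model (assumes v << c)."""
    return m * v

def relativistic_momentum(m, v):
    """Momentum with the relativistic correction factor gamma."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * m * v

for v in (30.0, 0.5 * C, 0.99 * C):  # a car's speed vs relativistic speeds
    n, r = newtonian_momentum(1.0, v), relativistic_momentum(1.0, v)
    print(f"v = {v:.3e} m/s  relative error of Newtonian model: {abs(n - r) / r:.2e}")
```

At everyday speeds the two models are indistinguishable; near c the Newtonian prediction is off by more than half, which is exactly the "domain of application" the assumption delimits.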
=== Evaluating a model ===
A model is evaluated first and foremost by its consistency with empirical data; any model inconsistent with reproducible observations must be modified or rejected. One way to modify the model is by restricting the domain over which it is credited with having high validity. A case in point is Newtonian physics, which is highly useful except for the very small, the very fast, and the very massive phenomena of the universe. However, a fit to empirical data alone is not sufficient for a model to be accepted as valid. Factors important in evaluating a model include:
Ability to explain past observations
Ability to predict future observations
Cost of use, especially in combination with other models
Refutability, enabling estimation of the degree of confidence in the model
Simplicity, or even aesthetic appeal
People may attempt to quantify the evaluation of a model using a utility function.
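One hedged way to make such a utility function concrete is a weighted sum over the criteria listed above; the weights and per-criterion scores below are invented for illustration and are not a standard scheme.

```python
def model_utility(scores, weights):
    """Weighted sum of per-criterion scores (each score in [0, 1])."""
    assert scores.keys() == weights.keys()
    return sum(weights[c] * scores[c] for c in scores)

# Hypothetical weights mirroring the evaluation factors in the text.
criteria_weights = {
    "explains_past": 0.3,
    "predicts_future": 0.3,
    "cost_of_use": 0.15,
    "refutability": 0.15,
    "simplicity": 0.1,
}
# Hypothetical scores for a Newtonian-style model within its valid domain.
newtonian = {"explains_past": 0.9, "predicts_future": 0.85,
             "cost_of_use": 0.9, "refutability": 0.95, "simplicity": 0.9}

print(model_utility(newtonian, criteria_weights))
```

Changing the weights encodes a different trade-off between, say, predictive power and simplicity, which is precisely why such quantification remains a judgment call.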
=== Visualization ===
Visualization is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of man. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes.
=== Space mapping ===
Space mapping refers to a methodology that employs a "quasi-global" modelling formulation to link companion "coarse" (ideal or low-fidelity) models with "fine" (practical or high-fidelity) models of different complexities. In engineering optimization, space mapping aligns (maps) a very fast coarse model with its related expensive-to-compute fine model so as to avoid direct expensive optimization of the fine model. The alignment process iteratively refines a "mapped" coarse model (surrogate model).
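A one-dimensional toy sketch of the idea (both models, the input-shift mapping, and the brute-force parameter extraction are our own illustration, not a production space-mapping algorithm): align the cheap coarse model with the expensive fine model through an input shift, then optimize only the mapped surrogate.

```python
def fine(x):
    """Expensive high-fidelity model; its optimum (x = 2.3) is unknown to us."""
    return (x - 2.3) ** 2

def coarse(x):
    """Cheap low-fidelity model with an analytically known optimum at x = 2.0."""
    return (x - 2.0) ** 2

def extract_shift(x):
    """Parameter extraction: brute-force the input shift p that best aligns
    coarse(s + p) with fine(s) at two sample points near the current iterate
    (two points pin down the sign of the shift)."""
    samples = (x, x + 0.1)
    best_p, best_err = 0.0, float("inf")
    for i in range(-1000, 1001):
        p = i / 1000.0
        err = sum((coarse(s + p) - fine(s)) ** 2 for s in samples)
        if err < best_err:
            best_p, best_err = p, err
    return best_p

x = 2.0                    # start from the coarse model's optimum
for _ in range(3):         # a few space-mapping iterations
    p = extract_shift(x)   # align coarse with fine near the current iterate
    x = 2.0 - p            # optimum of the mapped surrogate coarse(x + p)
print(x)
```

Only a handful of fine-model evaluations are spent (inside the extraction), while all optimization happens on the cheap surrogate, which is the economic point of the method.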
== Types ==
== Applications ==
=== Modelling and simulation ===
One application of scientific modelling is the field of modelling and simulation, generally referred to as "M&S". M&S has a spectrum of applications which range from concept development and analysis, through experimentation, measurement, and verification, to disposal analysis. Projects and programs may use hundreds of different simulations, simulators and model analysis tools.
The figure shows how modelling and simulation is used as a central part of an integrated program in a defence capability development process.
== See also ==
Abductive reasoning – Inference seeking the simplest and most likely explanation
All models are wrong – Aphorism in statistics
Data and information visualization – Visual representation of data
Heuristic – Problem-solving method
Inverse problem – Process of calculating the causal factors that produced a set of observations
Scientific visualization – Interdisciplinary branch of science concerned with presenting scientific data visually
Statistical model – Type of mathematical model
== References ==
== Further reading ==
Nowadays there are some 40 journals about scientific modelling which offer all kinds of international forums. Since the 1960s there has been a strongly growing number of books and journals about specific forms of scientific modelling, and there is also much discussion of scientific modelling in the philosophy-of-science literature. A selection:
Rainer Hegselmann, Ulrich Müller and Klaus Troitzsch (eds.) (1996). Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View. Theory and Decision Library. Dordrecht: Kluwer.
Paul Humphreys (2004). Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford: Oxford University Press.
Johannes Lenhard, Günter Küppers and Terry Shinn (eds.) (2006). Simulation: Pragmatic Constructions of Reality. Berlin: Springer.
Tom Ritchey (2012). "Outline for a Morphology of Modelling Methods: Contribution to a General Theory of Modelling". In: Acta Morphologica Generalis, Vol. 1, No. 1, pp. 1–20.
William Silvert (2001). "Modelling as a Discipline". In: Int. J. General Systems, Vol. 30(3), p. 261.
Sergio Sismondo and Snait Gissis (eds.) (1999). Modeling and Simulation. Special issue of Science in Context 12.
Eric Winsberg (2018). Philosophy and Climate Science. Cambridge: Cambridge University Press.
Eric Winsberg (2010). Science in the Age of Computer Simulation. Chicago: University of Chicago Press.
Eric Winsberg (2003). "Simulated Experiments: Methodology for a Virtual World". In: Philosophy of Science 70: 105–125.
Tomáš Helikar, Jim A. Rogers (2009). "ChemChains: a platform for simulation and analysis of biochemical networks aimed to laboratory scientists". BioMed Central.
== External links ==
Models. Entry in the Internet Encyclopedia of Philosophy
Models in Science. Entry in the Stanford Encyclopedia of Philosophy
The World as a Process: Simulations in the Natural and Social Sciences, in: R. Hegselmann et al. (eds.), Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View, Theory and Decision Library. Dordrecht: Kluwer 1996, 77-100.
Research in simulation and modelling of various physical systems
Modelling Water Quality Information Center, U.S. Department of Agriculture
Ecotoxicology & Models
A Morphology of Modelling Methods. Acta Morphologica Generalis, Vol 1. No 1. pp. 1–20. | Wikipedia/Scientific_modelling |
Infographics (a clipped compound of "information" and "graphics") are graphic visual representations of information, data, or knowledge intended to present information quickly and clearly. They can improve cognition by using graphics to enhance the human visual system's ability to see patterns and trends. Similar pursuits are information visualization, data visualization, statistical graphics, information design, or information architecture. Infographics have evolved in recent years to be for mass communication, and thus are designed with fewer assumptions about the readers' knowledge base than other types of visualizations. Isotypes are an early example of infographics conveying information quickly and easily to the masses.
== Overview ==
Infographics have been around for many years, and recently the increase in the number of easy-to-use, free tools has made the creation of infographics available to a large segment of the population. Social media sites such as Facebook and Twitter have also allowed individual infographics to be spread among many people around the world. Infographics are widely used in the age of short attention spans.
In newspapers, infographics are commonly used to show the weather, as well as maps, site plans, and graphs for summaries of data. Some books are almost entirely made up of information graphics, such as David Macaulay's The Way Things Work. The Snapshots in USA Today are also an example of simple infographics used to convey news and current events.
Modern maps, especially route maps for transit systems, use infographic techniques to integrate a variety of information, such as the conceptual layout of the transit network, transfer points, and local landmarks. Public transportation maps, such as those for the Washington Metro and the London Underground, are well-known infographics. Public places such as transit terminals usually have some sort of integrated "signage system" with standardized icons and stylized maps.
In his 1983 "landmark book" The Visual Display of Quantitative Information, Edward Tufte defines "graphical displays" in the following passage:
Graphical displays should
show the data
induce the viewer to think about the substance rather than about methodology, graphic design, the technology of graphic production, or something else
avoid distorting what the data has to say
present many numbers in a small space
make large data sets coherent
encourage the eye to compare different pieces of data
reveal the data at several levels of detail, from a broad overview to the fine structure
serve a reasonably clear purpose: description, exploration, tabulation, or decoration
be closely integrated with the statistical and verbal descriptions of a data set.
Graphics reveal data. Indeed graphics can be more precise and revealing than conventional statistical computations.
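Tufte's injunctions to "present many numbers in a small space" and avoid decorative clutter can be illustrated even in plain text (this example is ours, not Tufte's): a one-line "sparkline" conveys the shape of a data series with no axes, frames, or other non-data ink.

```python
# Unicode block elements from one-eighth to full height.
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    """Render a numeric series as a compact one-line bar graphic."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1           # avoid division by zero for flat series
    return "".join(BARS[int((v - lo) / span * (len(BARS) - 1))] for v in values)

print(sparkline([1, 5, 22, 13, 53, 61, 19, 31]))
```

Every character is data; nothing is spent on methodology or decoration, which is the spirit of the principles quoted above.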
== History ==
=== Early history ===
In 1626, Christoph Scheiner published the Rosa Ursina sive Sol, a book that revealed his research about the rotation of the sun. Infographics appeared in the form of illustrations demonstrating the Sun's rotation patterns.
In 1786, William Playfair, an engineer and political economist, published the first data graphs in his book The Commercial and Political Atlas. To represent the economy of 18th century England, Playfair used statistical graphs, bar charts, line graphs, area charts, and histograms. In his work, Statistical Breviary, he is credited with introducing the first pie chart.
Around 1820, modern geography was established by Carl Ritter. His maps included shared frames, agreed map legends, scales, repeatability, and fidelity. Such a map can be considered a "supersign" which combines sign systems—as defined by Charles Sanders Peirce—consisting of symbols, icons, indexes as representations. Other examples can be seen in the works of geographers Ritter and Alexander von Humboldt.
In 1857, English nurse Florence Nightingale used information graphics to persuade Queen Victoria to improve conditions in military hospitals. The principal one she used was the Coxcomb chart, a combination of stacked bar and pie charts, depicting the number and causes of deaths during each month of the Crimean War.
1861 saw the release of an influential information graphic on the subject of Napoleon's disastrous march on Moscow. The graphic's creator, Charles Joseph Minard, captured four different changing variables that contributed to Napoleon's downfall in a single two-dimensional image: the army's direction as they traveled, the location the troops passed through, the size of the army as troops died from hunger and wounds, and the freezing temperatures they experienced.
James Joseph Sylvester introduced the term "graph" in 1878 in the scientific journal Nature and published a set of diagrams showing the relationship between chemical bonds and mathematical properties. These were also some of the first mathematical graphs.
=== 20th century ===
In 1900, the African-American historian, sociologist, writer, and Black rights activist, W.E.B. Du Bois presented data visualizations at the Exposition Universelle (1900) in Paris, France. In addition to curating 500 photographs of the lives of Black Americans, Du Bois and his Atlanta University team of students and scholars created 60 handmade data visualizations to document the ways Black Americans were being denied access to education, housing, employment, and household wealth.
The Cologne Progressives developed an aesthetic approach to art that focused on communicating information. Gerd Arntz, Peter Alma and Augustin Tschinkel, all participants in this movement were recruited by Otto Neurath for the Gesellschafts- und Wirtschaftsmuseum, where they developed the Vienna Method from 1926 to 1934. Here simple images were used to represent data in a structured way. Following the victory of Austrofascism in the Austrian Civil War, the team moved to the Netherlands where they continued their work rebranding it Isotypes (International System of Typographic Picture Education). The method was also applied by IZOSTAT (ИЗОСТАТ) in the Soviet Union.
In 1942 Isidore Isou published the Lettrist manifesto, a document covering art, culture, poetry, film, and political theory. The included works, also called metagraphics and hypergraphics, are a synthesis of writing and visual art.
In 1958 Stephen Toulmin proposed a graphical argument model, called The Toulmin Model of Argumentation. The diagram contained six interrelated components used for analyzing arguments and was considered Toulmin's most influential work, particularly in the field of rhetoric, communication, and computer science. The Toulmin Model of Argumentation became influential in argumentation theory and its applications.
In 1972 and 1973, respectively, the Pioneer 10 and Pioneer 11 spacecraft included on their vessels the Pioneer Plaques, a pair of gold-anodized aluminum plaques, each featuring a pictorial message. The pictorial messages included nude male and female figures as well as symbols that were intended to provide information about the origin of the spacecraft. The images were designed by Carl Sagan and Frank Drake and were unique in that their graphical meanings were to be understandable to extraterrestrial beings, who would have no conception of human language.
A pioneer in data visualization, Edward Tufte, wrote a series of books – Visual Explanations, The Visual Display of Quantitative Information, and Envisioning Information – on the subject of information graphics. Referred to by The New York Times as the "da Vinci of Data", Tufte began to give day-long lectures and workshops on the subject of infographics starting in 1993. As of 2012, Tufte still gives these lectures. To Tufte, good data visualizations represent every data point accurately and enable a viewer to see trends and patterns in the data. Tufte's contribution to the field of data visualization and infographics is considered immense, and his design principles can be seen in many websites, magazines, and newspapers today.
The infographics created by Peter Sullivan for The Sunday Times in the 1970s, 1980s, and 1990s were some of the key factors in encouraging newspapers to use more infographics. Sullivan is also one of the few authors who have written about information graphics in newspapers. Likewise, the staff artists at USA Today, the United States newspaper that debuted in 1982, established the goal of using graphics to make information easier to comprehend. However, the paper has received criticism for oversimplifying news stories and for creating infographics that some find emphasize entertainment over content and data. Tufte coined the term chartjunk to refer to graphics that are visually appealing to the point of losing the information contained within them.
With vector graphics and raster graphics becoming ubiquitous in computing in the 21st century, data visualizations have been applied to commonly used computer systems, including desktop publishing and Geographic Information Systems (GIS).
Closely related to the field of information graphics is information design, which is the creation of infographics.
=== 21st century ===
By the year 2000, Adobe Flash-based animations on the Internet had made use of many key practices in creating infographics in order to create a variety of products and games.
Likewise, television began to incorporate infographics into the viewers' experiences in the early 2000s. One example of infographics usage in television and in pop culture is the 2002 music video by the Norwegian duo Röyksopp for their song "Remind Me". The video was composed entirely of animated infographics. Similarly, in 2004, a television commercial for the French nuclear technology company Areva used animated infographics as an advertising tactic. Both of these videos and the attention they received have conveyed to other fields the potential value of using information graphics to describe complex information efficiently.
With the rise of alternatives to Adobe Flash, such as HTML 5 and CSS3, infographics are now created in a variety of media with a number of software tools.
The field of journalism has also incorporated and applied information graphics to news stories. For stories that intend to include text, images, and graphics, the system called the maestro concept allows entire newsrooms to collaborate and organize a story to successfully incorporate all components. Across many newsrooms, this teamwork-integrated system is applied to improve time management. The maestro system is designed to improve the presentation of stories for busy readers of media. Many news-based websites have also used interactive information graphics in which the user can extract information on a subject as they explore the graphic.
Many businesses use infographics as a medium for communicating with and attracting potential customers. Information graphics are a form of content marketing and have become a tool for internet marketers and companies to create content that others will link to, thus possibly boosting a company's reputation and online presence.
Religious denominations have also started using infographics. For example, The Church of Jesus Christ of Latter-day Saints has made numerous infographics to help people learn about their faith, missionaries, temples, lay ministry, and family history efforts.
Infographics are finding a home in the classroom as well. Courses that teach students to create their own infographics using a variety of tools may encourage engagement in the classroom and may lead to a better understanding of the concepts they are mapping onto the graphics.
With the popularity of social media, infographics have become popular, often as static images or simple web interfaces, covering any number of topics. Such infographics are often shared between users of social networks such as Facebook, Twitter, Pinterest, Google+ and Reddit. The hashtag #infographic was tweeted 56,765 times in March 2012 and at its peak 3,365 times in a span of 24 hours.
== Analysis ==
The three parts of all infographics are the visual, the content, and the knowledge. The visual consists of colors and graphics. There are two different types of graphics – theme and reference. Theme graphics are included in all infographics and represent the underlying visual representation of the data. Reference graphics are generally icons that can be used to point to certain data, although they are not always found in infographics. Statistics and facts usually serve as the content for infographics and can be obtained from any number of sources, including census data and news reports. One of the most important aspects of infographics is that they contain some sort of insight into the data that they are presenting – this is the knowledge.
Infographics are effective because of their visual element. Humans receive input from all five of their senses (sight, touch, hearing, smell, taste), but they receive significantly more information from vision than from any of the other four. Fifty percent of the human brain is dedicated to visual functions, and images are processed faster than text. The brain processes pictures all at once, but processes text in a linear fashion, meaning it takes much longer to obtain information from text. Entire business processes or industry sectors can be made relevant to a new audience through a guidance design technique that leads the eye. The page may link to a complete report, but the infographic primes the reader, making the subject matter more accessible. Online trends, such as the increasingly short attention span of Internet users, have also contributed to the increasing popularity and effectiveness of infographics.
When designing the visual aspect of an infographic, a number of considerations must be made to optimize the effectiveness of the visualization. The six components of visual encoding are spatial, marks, connection, enclosure, retinal properties, and temporal encoding. Each of these can be utilized in its own way to represent relationships between different types of data. However, studies have shown that spatial position is the most effective way to represent numerical data and leads to the fastest and easiest understanding by viewers. Therefore, the designers often spatially represent the most important relationship being depicted in an infographic.
There are also three basic provisions of communication that need to be assessed when designing an infographic – appeal, comprehension, and retention. "Appeal" is the idea that communication needs to engage its audience. Comprehension implies that the viewer should be able to easily understand the information that is presented to them. And finally, "retention" means that the viewer should remember the data presented by the infographic. The order of importance of these provisions depends on the purpose of the infographic. If the infographic is meant to convey information in an unbiased way, such as in the domains of academia or science, comprehension should be considered first, then retention, and finally, appeal. However, if the infographic is being used for commercial purposes, then appeal becomes most important, followed by retention and comprehension. When infographics are being used for editorial purposes, such as in a newspaper, the appeal is again most important but is followed first by comprehension and then retention.
In practice, however, appeal and retention can be combined with the aid of a comprehensible layout design. Recently, in an attempt to study the effect of an infographic's layout on viewers' comprehension, a new neural-network-based cognitive load estimation method was applied to different types of common layouts for infographic design. When the variety of factors listed above is taken into consideration, infographics can be a highly efficient and effective way to convey large amounts of information in a visual manner.
== Data visualization ==
Data visualizations are often used in infographics and may make up the entire infographic. There are many types of visualizations that can be used to represent the same set of data. Therefore, it is crucial to identify the appropriate visualization for the data set and infographic by taking into consideration graphical features such as position, size, shape, and color. There are primarily five types of visualization categories – time-series data, statistical distributions, maps, hierarchies, and networking.
=== Time-series ===
Time-series data is one of the most common forms of data visualization. It documents sets of values over time. Examples of graphics in this category include index charts, stacked graphs, small multiples, and horizon graphs. Index charts are ideal to use when raw values are less important than relative changes. An index chart is an interactive line chart that shows percentage changes for a collection of time-series data based on a selected index point. For example, stock investors could use this because they are less concerned with the specific price and more concerned with the rate of growth. Stacked graphs are area charts that are stacked on top of each other, and depict aggregate patterns. They allow viewers to see overall patterns and individual patterns. However, they do not support negative numbers and make it difficult to accurately interpret trends. An alternative to stacked graphs is small multiples. Instead of stacking each area chart, each series is shown individually so the overall trends of each sector are more easily interpreted. Horizon graphs are a space-efficient method to increase the data density of a time-series while preserving resolution.
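The rebasing behind an index chart can be sketched in a few lines of Python (the function name and sample data are illustrative, not taken from any particular charting library):

```python
def index_values(series, index_point=0):
    """Rebase a time series so each value becomes the percentage
    change relative to the value at the selected index point."""
    base = series[index_point]
    if base == 0:
        raise ValueError("the value at the index point must be nonzero")
    return [100.0 * (v - base) / base for v in series]

# Two stocks at very different price levels become directly comparable:
stock_a = [150.0, 165.0, 180.0]
stock_b = [10.0, 11.0, 12.0]
print(index_values(stock_a))  # [0.0, 10.0, 20.0]
print(index_values(stock_b))  # [0.0, 10.0, 20.0]
```

Plotting the rebased series as lines yields the index chart; moving the index point re-anchors every line at zero, which is what makes the chart interactive.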
=== Statistical ===
Statistical distributions reveal trends based on how numbers are distributed. Common examples include histograms and box-and-whisker plots, which convey statistical features such as mean, median, and outliers. In addition to these common infographics, alternatives include stem-and-leaf plots, Q–Q plots, scatter plot matrices (SPLOM) and parallel coordinates. For assessing a collection of numbers and focusing on frequency distribution, stem-and-leaf plots can be helpful. The numbers are binned based on the first significant digit, and within each stack binned again based on the second significant digit. On the other hand, Q–Q plots compare two probability distributions by graphing quantiles against each other. This allows the viewer to see if the plot values are similar and if the two are linearly related. SPLOM is a technique that represents the relationships among multiple variables. It uses multiple scatter plots to represent a pairwise relation among variables. Another statistical distribution approach to visualize multivariate data is parallel coordinates. Rather than graphing every pair of variables in two dimensions, the data is repeatedly plotted on a parallel axis, and corresponding points are then connected with a line. The advantage of parallel coordinates is that they are relatively compact, allowing many variables to be shown simultaneously.
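The two-level binning of a stem-and-leaf plot described above can be sketched as follows (illustrative code, using the tens digit of two-digit integers as the stem):

```python
from collections import defaultdict

def stem_and_leaf(values):
    """Bin two-digit integers by their tens digit (the stem) and keep
    the units digit (the leaf), so the length of each row shows the
    frequency distribution while the raw digits stay readable."""
    plot = defaultdict(list)
    for v in sorted(values):
        stem, leaf = divmod(v, 10)
        plot[stem].append(leaf)
    return dict(plot)

print(stem_and_leaf([12, 15, 21, 25, 25, 37]))
# {1: [2, 5], 2: [1, 5, 5], 3: [7]}
```

Printed with each stem on its own line, the rows form a sideways histogram that still preserves every original value.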
=== Maps ===
Maps are a natural way to represent geographical data. Time and space can be depicted through the use of flow maps. Line strokes are used with various widths and colors to help encode information. Choropleth maps, which encode data through color and geographical region, are also commonly used. Graduated symbol maps are another method to represent geographical data. They are an alternative to choropleth maps and use symbols, such as pie charts for each area, over a map. This approach allows more dimensions to be represented using various shapes, sizes, and colors. Cartograms, on the other hand, completely distort the shape of a region and directly encode a data variable. Instead of using a geographic map, regions are redrawn proportionally to the data. For example, each region can be represented by a circle whose size/color is directly proportional to other information, such as population size.
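The proportional sizing behind graduated symbol maps and circular cartograms can be sketched as below. Scaling each circle's area (hence its radius by the square root) keeps large values from being visually over-weighted; the region names and data are illustrative:

```python
import math

def circle_radii(data, max_radius=50.0):
    """Give each region a circle whose *area* is proportional to its
    data value; the largest region receives max_radius."""
    biggest = max(data.values())
    return {region: max_radius * math.sqrt(value / biggest)
            for region, value in data.items()}

radii = circle_radii({"A": 4_000_000, "B": 1_000_000})
print(radii)  # {'A': 50.0, 'B': 25.0} -- a quarter of the value, half the radius
```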
=== Hierarchies ===
Many data sets, such as spatial entities of countries or common structures for governments, can be organized into natural hierarchies. Node-link diagrams, adjacency diagrams, and enclosure diagrams are all types of infographics that effectively communicate hierarchical data. Node-link diagrams are a popular method due to their tidy and space-efficient results. A node-link diagram is similar to a tree, where each node branches off into multiple sub-sections. An alternative is the adjacency diagram, which is a space-filling variant of the node-link diagram. Instead of drawing a link between hierarchies, nodes are drawn as solid areas with sub-sections inside each section. This method allows size to be represented more easily than in node-link diagrams. Enclosure diagrams are also a space-filling visualization method. However, they use containment rather than adjacency to represent the hierarchy. Similar to the adjacency diagram, the size of the node is easily represented in this model.
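The space-filling idea behind adjacency and enclosure diagrams can be sketched as a "slice" layout: divide a rectangle among children in proportion to their sizes (recursing on each child with the direction flipped gives the classic slice-and-dice treemap). The code is an illustrative sketch, not a production layout algorithm:

```python
def slice_layout(weights, x, y, w, h, horizontal=True):
    """Split the rectangle (x, y, w, h) into side-by-side strips whose
    widths (or heights, if horizontal=False) are proportional to the
    given weights. Returns one (x, y, w, h) rectangle per child."""
    total = sum(weights)
    rects, offset = [], 0.0
    for wt in weights:
        frac = wt / total
        if horizontal:
            rects.append((x + offset, y, w * frac, h))
            offset += w * frac
        else:
            rects.append((x, y + offset, w, h * frac))
            offset += h * frac
    return rects

print(slice_layout([1, 1, 2], 0.0, 0.0, 100.0, 50.0))
# [(0.0, 0.0, 25.0, 50.0), (25.0, 0.0, 25.0, 50.0), (50.0, 0.0, 50.0, 50.0)]
```

Because every child is drawn as a solid area, a node's size is read directly from the area of its rectangle, which is exactly the advantage over node-link diagrams noted above.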
=== Networks ===
Network visualization explores relationships, such as friendships and cliques. Three common types are force-directed layout, arc diagrams, and matrix view. Force-directed layouts are a common and intuitive approach to network layout. In this system, nodes are similar to charged particles, which repel each other. Links are used to pull related nodes together. Arc diagrams are one-dimensional layouts of nodes with circular arcs linking each node. When used properly, with good order in nodes, cliques and bridges are easily identified in this layout. Alternatively, mathematicians and computer scientists more often use matrix views. Each node is assigned a row and a column of the matrix, and the cell at position (x, y) encodes the link between nodes x and y. By using color and saturation instead of text, values associated with the links can be perceived rapidly. While this method makes it hard to trace a path through the nodes, there are no line crossings, which in a large and highly connected network can quickly become too cluttered.
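The matrix view described above amounts to building an adjacency matrix; a minimal sketch (node labels and links are illustrative):

```python
def adjacency_matrix(nodes, links):
    """Return the matrix view of an undirected network: cell [i][j]
    is 1 when the i-th and j-th nodes are linked. Colored cells take
    the place of drawn lines, so line crossings cannot occur."""
    index = {n: i for i, n in enumerate(nodes)}
    matrix = [[0] * len(nodes) for _ in nodes]
    for a, b in links:
        matrix[index[a]][index[b]] = 1
        matrix[index[b]][index[a]] = 1  # mirror for the undirected case
    return matrix

print(adjacency_matrix(["a", "b", "c"], [("a", "b"), ("b", "c")]))
# [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```

Replacing the 0/1 entries with link weights, and rendering each cell's value as color saturation, gives the rapid perception of link values mentioned above.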
While all of these visualizations can be effectively used on their own, many modern infographics combine multiple types into one graphic, along with other features, such as illustrations and text. Some modern infographics do not even contain data visualization and are instead simply colorful and succinct ways to present knowledge. Fifty-three percent of the 30 most-viewed infographics on the infographic sharing site visual.ly did not contain actual data.
=== Comparison infographics ===
Comparison infographics are a type of visual representation that focuses on comparing and contrasting different elements, such as products, services, options, or features. These infographics are designed to help viewers make informed decisions by presenting information in a clear and concise manner. Comparison infographics can be highly effective in simplifying complex data and highlighting key differences between multiple items.
== Tools ==
Infographics can be created by hand using simple everyday tools such as graph paper, pencils, markers, and rulers. However, today they are more often created using computer software, which is often both faster and easier. They can be created with general illustration software.
Diagrams can be manually created and drawn using software, which can be downloaded for the desktop or used online. Templates can be used to get users started on their diagrams. Additionally, the software allows users to collaborate on diagrams in real time over the Internet.
There are also numerous tools to create very specific types of visualizations, such as creating a visualization based on embedded data in the photos on a user's smartphone. Users can create an infographic of their resume or a "picture of their digital life."
== See also ==
== References ==
== Further reading ==
Heiner Benking (1981–1988). Requisite Inquiry and Time-Line: Computer Graphics – Infographics. http://benking.de/infographics/ See there: Computer Graphics in the Environmental Sector – Possibilities and Limitations of Data-visualisation, chapter 3 (technical possibilities and human potentials and capacities): "a picture is more than 10.000 words" and "10.000 miles equal 10.000 books".
Sullivan, Peter. (1987) Newspaper Graphics. IFRA, Darmstadt.
Jacques Bertin (1983). Semiology of Graphics. Madison, WI: University of Wisconsin Press. Translation by William Berg of Semiologie Graphique. Paris: Mouton/Gauthier-Villars, 1967.
William S. Cleveland (1985). The Elements of Graphing Data. Summit, NJ: Hobart Press. ISBN 978-1-58465-512-1
Heiner Benking (1993), Visual Access Strategies for Multi-Dimensional Objects and Issues / "Our View of Life is too Flat", WFSF, Turku, FAW Report TR-93019
William S. Cleveland (1993). Visualizing Data. Summit, NJ: Hobart Press. ISBN 978-0-9634884-0-4
Sullivan, Peter. (1993) Information Graphics in Colour. IFRA, Darmstadt.
John Emerson (2008). Visualizing Information for Advocacy: An Introduction to Information Design. New York: OSI.
Paul Lewi (2006). "Speaking of Graphics".
Hankins, Thomas L. (1999). "Blood, Dirt, and Nomograms: A Particular History of Graphs". Isis. 90 (1): 50–80. doi:10.1086/384241. JSTOR 237474. S2CID 144376938.
Robert L. Harris (1999). Information Graphics: A Comprehensive Illustrated Reference. Oxford University Press.
Eric K. Meyer (1997). Designing Infographics. Hayden Books.
Edward R. Tufte (1983). The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.
Edward R. Tufte (1990). Envisioning Information. Cheshire, CT: Graphics Press.
Edward R. Tufte (1997). Visual Explanations: Images and Quantities, Evidence and Narrative. Cheshire, CT: Graphics Press.
Edward R. Tufte (2006). Beautiful Evidence. Cheshire, CT: Graphics Press.
John Wilder Tukey (1977). Exploratory Data Analysis. Addison-Wesley.
Veszelszki, Ágnes (2014). Information visualization: Infographics from a linguistic point of view. In: Benedek, András − Nyíri, Kristóf (eds.): The Power of the Image Series Visual Learning, vol. 4. Frankfurt: Peter Lang, pp. 99−109.
Sandra Rendgen, Julius Wiedemann (2012). Information Graphics. Taschen Publishing. ISBN 978-3-8365-2879-5
Jason Lankow, Josh Ritchie, Ross Crooks (2012). Infographics: The Power of Visual Storytelling. Wiley. ISBN 978-1-118-31404-3
Murray Dick (2020). The Infographic: A History of Data Graphics in News and Communications. The MIT Press. ISBN 9780262043823
== External links ==
Milestones in the History of Thematic Cartography, Statistical Graphics and Data Visualization
Visual Display of Quantitative Information | Wikipedia/Infographic |
User interface (UI) design or user interface engineering is the design of user interfaces for machines and software, such as computers, home appliances, mobile devices, and other electronic devices, with the focus on maximizing usability and the user experience. In computer or software design, user interface (UI) design primarily focuses on information architecture. It is the process of building interfaces that clearly communicate to the user what's important. UI design refers to graphical user interfaces and other forms of interface design. The goal of user interface design is to make the user's interaction as simple and efficient as possible, in terms of accomplishing user goals (user-centered design). User-centered design is typically accomplished through the execution of modern design thinking which involves empathizing with the target audience, defining a problem statement, ideating potential solutions, prototyping wireframes, and testing prototypes in order to refine final interface mockups.
User interfaces are the points of interaction between users and designs.
== Three types of user interfaces ==
Graphical user interfaces (GUIs)
Users interact with visual representations on a computer's screen. The desktop is an example of a GUI.
Interfaces controlled through voice
Users interact with these through their voices. Most smart assistants, such as Siri on smartphones or Alexa on Amazon devices, use voice control.
Interactive interfaces utilizing gestures
Users interact with 3D design environments through their bodies, e.g., in virtual reality (VR) games.
Interface design is involved in a wide range of projects, from computer systems, to cars, to commercial planes; all of these projects involve much of the same basic human interactions yet also require some unique skills and knowledge. As a result, designers tend to specialize in certain types of projects and have skills centered on their expertise, whether it is software design, user research, web design, or industrial design.
Good user interface design facilitates finishing the task at hand without drawing unnecessary attention to itself. Graphic design and typography are utilized to support its usability, influencing how the user performs certain interactions and improving the aesthetic appeal of the design; design aesthetics may enhance or detract from the ability of users to use the functions of the interface. The design process must balance technical functionality and visual elements (e.g., mental model) to create a system that is not only operational but also usable and adaptable to changing user needs.
== UI design vs. UX design ==
Compared to UX design, UI design is more about the surface and overall look of a design. User interface design is a craft in which designers perform an important function in creating the user experience. UI design should keep users informed about what is happening, giving appropriate feedback in a timely manner. The visual look and feel of UI design sets the tone for the user experience. On the other hand, the term UX design refers to the entire process of creating a user experience.
Don Norman and Jakob Nielsen said: "It's important to distinguish the total user experience from the user interface (UI), even though the UI is obviously an extremely important part of the design. As an example, consider a website with movie reviews. Even if the UI for finding a film is perfect, the UX will be poor for a user who wants information about a small independent release if the underlying database only contains movies from the major studios."
== Design thinking ==
User interface design requires a good understanding of user needs. It mainly focuses on the needs of the platform and its user expectations. There are several phases and processes in user interface design, some of which receive more attention than others, depending on the project. The modern design thinking framework was created in 2004 by David M. Kelley, the founder of Stanford’s d.school, formally known as the Hasso Plattner Institute of Design. EDIPT is a common acronym used to describe Kelley’s design thinking framework: it stands for empathize, define, ideate, prototype, and test. Notably, the EDIPT framework is non-linear, so a UI designer may jump from one stage to another when developing a user-centric solution. Iteration is a common practice in the design thinking process; successful solutions often require testing and tweaking to ensure that the product fulfills user needs.
=== EDIPT ===
Empathize
Conducting user research to better understand the needs and pain points of the target audience. UI designers should avoid developing solutions based on personal beliefs and instead seek to understand the unique perspectives of various users. Qualitative data is often gathered in the form of semi-structured interviews.
Common areas of interest include:
What would the user want the system to do?
How would the system fit in with the user's normal workflow or daily activities?
How technically savvy is the user and what similar systems does the user already use?
What interface aesthetics and functionality styles appeal to the user?
Define
Solidifying a problem statement that focuses on user needs and desires; effective problem statements are typically one sentence long and include the user, their specific need, and their desired outcome or goal.
Ideate
Brainstorming potential solutions to address the refined problem statement. The proposed solutions should ideally align with the stakeholders' feasibility and viability criteria while maintaining user desirability standards.
Prototype
Designing potential solutions of varying fidelity (low, mid, and high) while applying user experience principles and methodologies. Prototyping is an iterative process where UI designers should explore multiple design solutions rather than settling on the initial concept.
Test
Presenting the prototypes to the target audience to gather feedback and gain insights for improvement. Based on the results, UI designers may need to revisit earlier stages of the design process to enhance the prototype and user experience.
== Usability testing ==
The Nielsen Norman Group, co-founded by Jakob Nielsen and Don Norman in 1998, promotes user experience and interface design education. Jakob Nielsen pioneered the interface usability movement and created the "10 Usability Heuristics for User Interface Design." Usability is aimed at defining an interface’s quality when considering ease of use; an interface with low usability will burden a user and hinder them from achieving their goals, resulting in the dismissal of the interface. To enhance usability, user experience researchers may conduct usability testing—a process that evaluates how users interact with an interface. Usability testing can provide insight into user pain points by illustrating how efficiently a user can complete a task without error, highlighting areas for design improvement.
Usability inspection
Letting an evaluator inspect a user interface. This is generally considered to be cheaper to implement than usability testing (see step below), and can be used early in the development process since it can evaluate prototypes or specifications for the system, which usually cannot be tested on users. Some common usability inspection methods include the cognitive walkthrough, which focuses on how simply new users can accomplish tasks with the system; heuristic evaluation, in which a set of heuristics is used to identify usability problems in the UI design; and the pluralistic walkthrough, in which a selected group of people step through a task scenario and discuss usability issues.
Usability testing
Testing of the prototypes on an actual user—often using a technique called think aloud protocol where the user is asked to talk about their thoughts during the experience. User interface design testing allows the designer to understand the reception of the design from the viewer's standpoint, and thus facilitates creating successful applications.
== Requirements ==
The dynamic characteristics of a system are described in terms of the dialogue requirements contained in the seven principles of Part 10 of the ergonomics standard ISO 9241. This standard establishes a framework of ergonomic "principles" for dialogue techniques, with high-level definitions and illustrative applications and examples of the principles. The principles of the dialogue represent the dynamic aspects of the interface and can mostly be regarded as the "feel" of the interface.
=== Seven dialogue principles ===
Suitability for the task
The dialogue is suitable for a task when it supports the user in the effective and efficient completion of the task.
Self-descriptiveness
The dialogue is self-descriptive when each dialogue step is immediately comprehensible through feedback from the system or is explained to the user on request.
Controllability
The dialogue is controllable when the user is able to initiate and control the direction and pace of the interaction until the point at which the goal has been met.
Conformity with user expectations
The dialogue conforms with user expectations when it is consistent and corresponds to the user characteristics, such as task knowledge, education, experience, and to commonly accepted conventions.
Error tolerance
The dialogue is error-tolerant if, despite evident errors in input, the intended result may be achieved with either no or minimal action by the user.
Suitability for individualization
The dialogue is capable of individualization when the interface software can be modified to suit the task needs, individual preferences, and skills of the user.
Suitability for learning
The dialogue is suitable for learning when it supports and guides the user in learning to use the system.
The concept of usability is defined in the ISO 9241 standard by the effectiveness, efficiency, and satisfaction of the user.
Part 11 gives the following definition of usability:
Usability is measured by the extent to which the intended goals of use of the overall system are achieved (effectiveness).
The resources that have to be expended to achieve the intended goals (efficiency).
The extent to which the user finds the overall system acceptable (satisfaction).
Effectiveness, efficiency, and satisfaction can be seen as quality factors of usability. To evaluate these factors, they need to be decomposed into sub-factors, and finally, into usability measures.
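As one possible decomposition (a sketch only; the field names and the specific sub-measures are illustrative choices, not prescribed by the standard), each quality factor can be reduced to a simple measure over a set of usability-test sessions:

```python
def usability_measures(sessions):
    """Reduce the three ISO 9241-11 quality factors to one simple
    measure each: task success rate (effectiveness), mean time on
    task in seconds (efficiency), and mean questionnaire rating
    (satisfaction). Each session dict is a hypothetical record."""
    n = len(sessions)
    effectiveness = sum(1 for s in sessions if s["completed"]) / n
    efficiency = sum(s["seconds"] for s in sessions) / n
    satisfaction = sum(s["rating"] for s in sessions) / n
    return effectiveness, efficiency, satisfaction

sessions = [
    {"completed": True,  "seconds": 40, "rating": 4},
    {"completed": True,  "seconds": 60, "rating": 5},
    {"completed": False, "seconds": 80, "rating": 3},
]
print(usability_measures(sessions))
```

In practice each factor would be decomposed into several such sub-measures (error counts, assists, per-task ratings), but the principle is the same: observable quantities stand in for the abstract quality factors.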
The information presented is described in Part 12 of the ISO 9241 standard for the organization of information (arrangement, alignment, grouping, labels, location), for the display of graphical objects, and for the coding of information (abbreviation, colour, size, shape, visual cues) by seven attributes. The "attributes of presented information" represent the static aspects of the interface and can be generally regarded as the "look" of the interface. The attributes are detailed in the recommendations given in the standard. Each of the recommendations supports one or more of the seven attributes.
=== Seven presentation attributes ===
Clarity
The information content is conveyed quickly and accurately.
Discriminability
The displayed information can be distinguished accurately.
Conciseness
Users are not overloaded with extraneous information.
Consistency
A unique design and conformity with users' expectations.
Detectability
The user's attention is directed towards information required.
Legibility
Information is easy to read.
Comprehensibility
The meaning is clearly understandable, unambiguous, interpretable, and recognizable.
=== Usability ===
The user guidance in Part 13 of the ISO 9241 standard describes that the user guidance information should be readily distinguishable from other displayed information and should be specific for the current context of use.
User guidance can be given by the following five means:
Prompts indicating explicitly (specific prompts) or implicitly (generic prompts) that the system is available for input.
Feedback informing the user about their input in a timely, perceptible, and non-intrusive way.
Status information indicating the continuing state of the application, the system's hardware and software components, and the user's activities.
Error management including error prevention, error correction, user support for error management, and error messages.
On-line help for system-initiated and user-initiated requests with specific information for the current context of use.
== Research ==
User interface design has been a topic of considerable research, including on its aesthetics. Standards have been developed as far back as the 1980s for defining the usability of software products.
One of the structural bases has become the IFIP user interface reference model.
The model proposes four dimensions to structure the user interface:
The input/output dimension (the look)
The dialogue dimension (the feel)
The technical or functional dimension (the access to tools and services)
The organizational dimension (the communication and co-operation support)
This model has greatly influenced the development of the international standard ISO 9241 describing the interface design requirements for usability.
The desire to understand application-specific UI issues early in software development, even as an application was being developed, led to research on GUI rapid prototyping tools that might offer convincing simulations of how an actual application might behave in production use. Some of this research has shown that a wide variety of programming tasks for GUI-based software can, in fact, be specified through means other than writing program code.
Research in recent years is strongly motivated by the increasing variety of devices that can, by virtue of Moore's law, host very complex interfaces.
== See also ==
== References ==
In mathematical analysis and its applications, a function of several real variables or real multivariate function is a function with more than one argument, with all arguments being real variables. This concept extends the idea of a function of a real variable to several variables. The "input" variables take real values, while the "output", also called the "value of the function", may be real or complex. However, the study of the complex-valued functions may be easily reduced to the study of the real-valued functions, by considering the real and imaginary parts of the complex function; therefore, unless explicitly specified, only real-valued functions will be considered in this article.
The domain of a function of n variables is the subset of {\displaystyle \mathbb {R} ^{n}} for which the function is defined. As usual, the domain of a function of several real variables is supposed to contain a nonempty open subset of {\displaystyle \mathbb {R} ^{n}}.
== General definition ==
A real-valued function of n real variables is a function that takes as input n real numbers, commonly represented by the variables x1, x2, …, xn, for producing another real number, the value of the function, commonly denoted f(x1, x2, …, xn). For simplicity, in this article a real-valued function of several real variables will be simply called a function. To avoid any ambiguity, the other types of functions that may occur will be explicitly specified.
Some functions are defined for all real values of the variables (one says that they are everywhere defined), but some other functions are defined only if the values of the variables are taken in a subset X of Rn, the domain of the function, which is always supposed to contain an open subset of Rn. In other words, a real-valued function of n real variables is a function
{\displaystyle f:X\to \mathbb {R} }
such that its domain X is a subset of Rn that contains a nonempty open set.
An element of X being an n-tuple (x1, x2, …, xn) (usually delimited by parentheses), the general notation for denoting functions would be f((x1, x2, …, xn)). The common usage, much older than the general definition of functions between sets, is to not use double parentheses and to simply write f(x1, x2, …, xn).
It is also common to abbreviate the n-tuple (x1, x2, …, xn) by using a notation similar to that for vectors, like boldface x, underline x, or overarrow x→. This article will use bold.
A simple example of a function in two variables could be:
{\displaystyle {\begin{aligned}&V:X\to \mathbb {R} \\&X=\left\{(A,h)\in \mathbb {R} ^{2}\mid A>0,h>0\right\}\\&V(A,h)={\frac {1}{3}}Ah\end{aligned}}}
which is the volume V of a cone with base area A and height h measured perpendicularly from the base. The domain restricts all variables to be positive since lengths and areas must be positive.
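A minimal sketch of this function in Python, enforcing the domain restriction explicitly (the function name and the error handling are illustrative choices, not part of the mathematical definition):

```python
def cone_volume(A: float, h: float) -> float:
    """V(A, h) = (1/3) * A * h on the domain X = {(A, h) : A > 0, h > 0}."""
    if not (A > 0 and h > 0):
        # (A, h) lies outside the domain X, where V is undefined
        raise ValueError("(A, h) must satisfy A > 0 and h > 0")
    return A * h / 3

# A cone with base area 6 and height 4 has volume (1/3)*6*4 = 8
print(cone_volume(6, 4))  # → 8.0
```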
For an example of a function in two variables:
{\displaystyle {\begin{aligned}&z:\mathbb {R} ^{2}\to \mathbb {R} \\&z(x,y)=ax+by\end{aligned}}}
where a and b are real non-zero constants. Using the three-dimensional Cartesian coordinate system, where the xy plane is the domain R2 and the z axis is the codomain R, one can visualize the image to be a two-dimensional plane, with a slope of a in the positive x direction and a slope of b in the positive y direction. The function is well-defined at all points (x, y) in R2. The previous example can be extended easily to higher dimensions:
{\displaystyle {\begin{aligned}&z:\mathbb {R} ^{p}\to \mathbb {R} \\&z(x_{1},x_{2},\ldots ,x_{p})=a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{p}x_{p}\end{aligned}}}
for p non-zero real constants a1, a2, …, ap, which describes a p-dimensional hyperplane.
The Euclidean norm:
{\displaystyle f({\boldsymbol {x}})=\|{\boldsymbol {x}}\|={\sqrt {x_{1}^{2}+\cdots +x_{n}^{2}}}}
is also a function of n variables which is everywhere defined, while
{\displaystyle g({\boldsymbol {x}})={\frac {1}{f({\boldsymbol {x}})}}}
is defined only for x ≠ (0, 0, …, 0).
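The pair of functions above can be sketched in Python, with g raising an error on the single point where it is undefined (the names f and g follow the text; the exception type is an illustrative choice):

```python
import math

def f(x):
    """Euclidean norm of an n-tuple x: defined for every x in R^n."""
    return math.sqrt(sum(xi * xi for xi in x))

def g(x):
    """g(x) = 1/f(x): defined only for x != (0, ..., 0)."""
    norm = f(x)
    if norm == 0:
        raise ZeroDivisionError("g is undefined at the origin")
    return 1 / norm

print(f((3.0, 4.0)))  # → 5.0
print(g((3.0, 4.0)))  # → 0.2
```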
For a non-linear example function in two variables:
{\displaystyle {\begin{aligned}&z:X\to \mathbb {R} \\&X=\left\{(x,y)\in \mathbb {R} ^{2}\,:\,x^{2}+y^{2}\leq 8\,,\,x\neq 0\,,\,y\neq 0\right\}\\&z(x,y)={\frac {1}{2xy}}{\sqrt {x^{2}+y^{2}}}\end{aligned}}}
which takes in all points in X, a disk of radius √8 "punctured" at the origin (x, y) = (0, 0) in the plane R2, and returns a point in R. The domain excludes the origin (x, y) = (0, 0), since z would be ill-defined there. Using a 3D Cartesian coordinate system with the xy-plane as the domain R2, and the z axis as the codomain R, the image can be visualized as a curved surface.
The function can be evaluated at the point (x, y) = (2, √3) in X:
{\displaystyle z\left(2,{\sqrt {3}}\right)={\frac {1}{2\cdot 2\cdot {\sqrt {3}}}}{\sqrt {\left(2\right)^{2}+\left({\sqrt {3}}\right)^{2}}}={\frac {1}{4{\sqrt {3}}}}{\sqrt {7}}\,,}
However, the function cannot be evaluated at, say,
{\displaystyle (x,y)=(65,{\sqrt {10}})\,\Rightarrow \,x^{2}+y^{2}=(65)^{2}+({\sqrt {10}})^{2}>8}
since these values of x and y do not satisfy the domain's rule.
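The two evaluations above can be checked with a small sketch that enforces the domain rule explicitly (the function name and exception are illustrative):

```python
import math

def z(x: float, y: float) -> float:
    """z(x, y) = sqrt(x^2 + y^2) / (2*x*y) on the punctured disk X."""
    if x ** 2 + y ** 2 > 8 or x == 0 or y == 0:
        raise ValueError("(x, y) is not in the domain X")
    return math.sqrt(x ** 2 + y ** 2) / (2 * x * y)

# z(2, sqrt(3)) = sqrt(7) / (4*sqrt(3)), as computed in the text
print(math.isclose(z(2, math.sqrt(3)), math.sqrt(7) / (4 * math.sqrt(3))))  # → True
```

Calling `z(65, math.sqrt(10))` raises `ValueError`, matching the rejected point above.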
=== Image ===
The image of a function f(x1, x2, …, xn) is the set of all values of f when the n-tuple (x1, x2, …, xn) runs in the whole domain of f. For a continuous (see below for a definition) real-valued function which has a connected domain, the image is either an interval or a single value. In the latter case, the function is a constant function.
The preimage of a given real number c is called a level set. It is the set of the solutions of the equation f(x1, x2, …, xn) = c.
=== Domain ===
The domain of a function of several real variables is a subset of Rn that is sometimes, but not always, explicitly defined. In fact, if one restricts the domain X of a function f to a subset Y ⊂ X, one gets formally a different function, the restriction of f to Y, which is denoted
{\displaystyle f|_{Y}}. In practice, it is often (but not always) harmless to identify f and {\displaystyle f|_{Y}}, and to omit the restrictor |Y.
Conversely, it is sometimes possible to enlarge naturally the domain of a given function, for example by continuity or by analytic continuation.
Moreover, many functions are defined in such a way that it is difficult to specify explicitly their domain. For example, given a function f, it may be difficult to specify the domain of the function
{\displaystyle g({\boldsymbol {x}})=1/f({\boldsymbol {x}}).}
If f is a multivariate polynomial (which has {\displaystyle \mathbb {R} ^{n}} as its domain), it is even difficult to test whether the domain of g is also {\displaystyle \mathbb {R} ^{n}}. This is equivalent to testing whether a polynomial is always positive, which is the object of an active research area (see Positive polynomial).
=== Algebraic structure ===
The usual operations of arithmetic on the reals may be extended to real-valued functions of several real variables in the following way:
For every real number r, the constant function
{\displaystyle (x_{1},\ldots ,x_{n})\mapsto r}
is everywhere defined.
For every real number r and every function f, the function:
{\displaystyle rf:(x_{1},\ldots ,x_{n})\mapsto rf(x_{1},\ldots ,x_{n})}
has the same domain as f (or is everywhere defined if r = 0).
If f and g are two functions of respective domains X and Y such that X ∩ Y contains a nonempty open subset of Rn, then
{\displaystyle f\,g:(x_{1},\ldots ,x_{n})\mapsto f(x_{1},\ldots ,x_{n})\,g(x_{1},\ldots ,x_{n})}
and
{\displaystyle g\,f:(x_{1},\ldots ,x_{n})\mapsto g(x_{1},\ldots ,x_{n})\,f(x_{1},\ldots ,x_{n})}
are functions that have a domain containing X ∩ Y.
It follows that the functions of n variables that are everywhere defined and the functions of n variables that are defined in some neighbourhood of a given point both form commutative algebras over the reals (R-algebras). This is a prototypical example of a function space.
One may similarly define
{\displaystyle 1/f:(x_{1},\ldots ,x_{n})\mapsto 1/f(x_{1},\ldots ,x_{n}),}
which is a function only if the set of the points (x1, …,xn) in the domain of f such that f(x1, …, xn) ≠ 0 contains an open subset of Rn. This constraint implies that the above two algebras are not fields.
=== Univariable functions associated with a multivariable function ===
One can easily obtain a function in one real variable by giving a constant value to all but one of the variables. For example, if (a1, …, an) is a point of the interior of the domain of the function f, we can fix the values of x2, …, xn to a2, …, an respectively, to get a univariable function
{\displaystyle x\mapsto f(x,a_{2},\ldots ,a_{n}),}
whose domain contains an interval centered at a1. This function may also be viewed as the restriction of the function f to the line defined by the equations xi = ai for i = 2, …, n.
Other univariable functions may be defined by restricting f to any line passing through (a1, …, an). These are the functions
{\displaystyle x\mapsto f(a_{1}+c_{1}x,a_{2}+c_{2}x,\ldots ,a_{n}+c_{n}x),}
where the ci are real numbers that are not all zero.
In the next section, we will show that, if the multivariable function is continuous, then so are all these univariable functions; the converse, however, is not necessarily true.
=== Continuity and limit ===
Until the second half of the 19th century, only continuous functions were considered by mathematicians. At that time, the notion of continuity was elaborated for functions of one or several real variables a rather long time before the formal definition of a topological space and of a continuous map between topological spaces. As continuous functions of several real variables are ubiquitous in mathematics, it is worth defining this notion without reference to the general notion of continuous maps between topological spaces.
For defining the continuity, it is useful to consider the distance function of Rn, which is an everywhere defined function of 2n real variables:
{\displaystyle d({\boldsymbol {x}},{\boldsymbol {y}})=d(x_{1},\ldots ,x_{n},y_{1},\ldots ,y_{n})={\sqrt {(x_{1}-y_{1})^{2}+\cdots +(x_{n}-y_{n})^{2}}}}
A function f is continuous at a point a = (a1, …, an) which is interior to its domain, if, for every positive real number ε, there is a positive real number φ such that |f(x) − f(a)| < ε for all x such that d(x, a) < φ. In other words, φ may be chosen small enough for having the image by f of the ball of radius φ centered at a contained in the interval of length 2ε centered at f(a). A function is continuous if it is continuous at every point of its domain.
If a function is continuous at a, then all the univariate functions that are obtained by fixing all the variables xi except one at the value ai are continuous at ai. The converse is false; all these univariate functions may be continuous at a point where the function itself is not continuous. For an example, consider the function f such that f(0, 0) = 0, and is otherwise defined by
{\displaystyle f(x,y)={\frac {x^{2}y}{x^{4}+y^{2}}}.}
The functions x ↦ f(x, 0) and y ↦ f(0, y) are both constant and equal to zero, and are therefore continuous. The function f is not continuous at (0, 0), because, if ε < 1/2 and y = x2 ≠ 0, we have f(x, y) = 1/2, even if |x| is very small. Although not continuous, this function has the further property that all the univariate functions obtained by restricting it to a line passing through (0, 0) are also continuous. In fact, we have
{\displaystyle f(x,\lambda x)={\frac {\lambda x}{x^{2}+\lambda ^{2}}}}
for λ ≠ 0.
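This counterexample can be checked numerically (a sketch; the sample point is chosen as an exact power of two so that the floating-point arithmetic below is exact):

```python
def f(x: float, y: float) -> float:
    """f(0, 0) = 0 and f(x, y) = x^2*y / (x^4 + y^2) otherwise."""
    if x == 0 and y == 0:
        return 0.0
    return (x * x * y) / (x * x * x * x + y * y)

x0 = 2.0 ** -20  # a small value near 0 (an exact power of two)
# Along the straight line y = 2x the values approach f(0, 0) = 0 ...
print(abs(f(x0, 2 * x0)) < 1e-3)  # → True
# ... but along the parabola y = x^2 the value is constantly 1/2,
# so f is not continuous at (0, 0).
print(f(x0, x0 * x0))  # → 0.5
```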
The limit at a point of a real-valued function of several real variables is defined as follows. Let a = (a1, a2, …, an) be a point in the topological closure of the domain X of the function f. The function f has a limit L when x tends toward a, denoted
{\displaystyle L=\lim _{{\boldsymbol {x}}\to {\boldsymbol {a}}}f({\boldsymbol {x}}),}
if the following condition is satisfied:
For every positive real number ε > 0, there is a positive real number δ > 0 such that
{\displaystyle |f({\boldsymbol {x}})-L|<\varepsilon }
for all x in the domain such that
{\displaystyle d({\boldsymbol {x}},{\boldsymbol {a}})<\delta .}
If the limit exists, it is unique. If a is in the interior of the domain, the limit exists if and only if the function is continuous at a. In this case, we have
{\displaystyle f({\boldsymbol {a}})=\lim _{{\boldsymbol {x}}\to {\boldsymbol {a}}}f({\boldsymbol {x}}).}
When a is in the boundary of the domain of f, and if f has a limit at a, the latter formula allows one to "extend by continuity" the domain of f to a.
== Symmetry ==
A symmetric function is a function f that is unchanged when two variables xi and xj are interchanged:
{\displaystyle f(\ldots ,x_{i},\ldots ,x_{j},\ldots )=f(\ldots ,x_{j},\ldots ,x_{i},\ldots )}
where i and j are each one of 1, 2, …, n. For example:
{\displaystyle f(x,y,z,t)=t^{2}-x^{2}-y^{2}-z^{2}}
is symmetric in x, y, z since interchanging any pair of x, y, z leaves f unchanged, but is not symmetric in all of x, y, z, t, since interchanging t with x or y or z gives a different function.
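A quick numerical check of this partial symmetry (a sketch with illustrative sample values):

```python
def f(x, y, z, t):
    """f(x, y, z, t) = t^2 - x^2 - y^2 - z^2 from the example above."""
    return t * t - x * x - y * y - z * z

# Swapping x and y (or any pair among x, y, z) leaves f unchanged ...
print(f(1, 2, 3, 4) == f(2, 1, 3, 4))  # → True
# ... but swapping t with x gives a different value in general.
print(f(1, 2, 3, 4) == f(4, 2, 3, 1))  # → False
```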
== Function composition ==
Suppose the functions
{\displaystyle \xi _{1}=\xi _{1}(x_{1},x_{2},\ldots ,x_{n}),\quad \xi _{2}=\xi _{2}(x_{1},x_{2},\ldots ,x_{n}),\ldots \xi _{m}=\xi _{m}(x_{1},x_{2},\ldots ,x_{n}),}
or more compactly ξ = ξ(x), are all defined on a domain X. As the n-tuple x = (x1, x2, …, xn) varies in X, a subset of Rn, the m-tuple ξ = (ξ1, ξ2, …, ξm) varies in another region Ξ, a subset of Rm. To restate this:
{\displaystyle {\boldsymbol {\xi }}:X\to \Xi .}
Then, a function ζ of the functions ξ(x) defined on Ξ,
{\displaystyle {\begin{aligned}&\zeta :\Xi \to \mathbb {R} ,\\&\zeta =\zeta (\xi _{1},\xi _{2},\ldots ,\xi _{m}),\end{aligned}}}
is a function composition defined on X, in other terms the mapping
{\displaystyle {\begin{aligned}&\zeta :X\to \mathbb {R} ,\\&\zeta =\zeta (\xi _{1},\xi _{2},\ldots ,\xi _{m})=f(x_{1},x_{2},\ldots ,x_{n}).\end{aligned}}}
Note the numbers m and n do not need to be equal.
For example, the function
{\displaystyle f(x,y)=e^{xy}[\sin 3(x-y)-\cos 2(x+y)]}
defined everywhere on R2 can be rewritten by introducing
{\displaystyle (\alpha ,\beta ,\gamma )=(\alpha (x,y),\beta (x,y),\gamma (x,y))=(xy,x-y,x+y)}
which is also everywhere defined in R3 to obtain
{\displaystyle f(x,y)=\zeta (\alpha (x,y),\beta (x,y),\gamma (x,y))=\zeta (\alpha ,\beta ,\gamma )=e^{\alpha }[\sin(3\beta )-\cos(2\gamma )]\,.}
Function composition can be used to simplify functions, which is useful for carrying out multiple integrals and solving partial differential equations.
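The rewriting in the example above can be verified numerically (a sketch; the sample point is arbitrary):

```python
import math

def f(x: float, y: float) -> float:
    """Direct evaluation: f(x, y) = e^{xy} * [sin(3(x-y)) - cos(2(x+y))]."""
    return math.exp(x * y) * (math.sin(3 * (x - y)) - math.cos(2 * (x + y)))

def zeta(alpha: float, beta: float, gamma: float) -> float:
    """Outer function: zeta = e^alpha * [sin(3*beta) - cos(2*gamma)]."""
    return math.exp(alpha) * (math.sin(3 * beta) - math.cos(2 * gamma))

def f_composed(x: float, y: float) -> float:
    """Composition with (alpha, beta, gamma) = (x*y, x - y, x + y)."""
    return zeta(x * y, x - y, x + y)

# The composition agrees with the direct formula everywhere on R^2.
print(math.isclose(f(0.7, -1.2), f_composed(0.7, -1.2)))  # → True
```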
== Calculus ==
Elementary calculus is the calculus of real-valued functions of one real variable, and the principal ideas of differentiation and integration of such functions can be extended to functions of more than one real variable; this extension is multivariable calculus.
=== Partial derivatives ===
Partial derivatives can be defined with respect to each variable:
{\displaystyle {\frac {\partial }{\partial x_{1}}}f(x_{1},x_{2},\ldots ,x_{n})\,,\quad {\frac {\partial }{\partial x_{2}}}f(x_{1},x_{2},\ldots x_{n})\,,\ldots ,{\frac {\partial }{\partial x_{n}}}f(x_{1},x_{2},\ldots ,x_{n}).}
Partial derivatives themselves are functions, each of which represents the rate of change of f parallel to one of the x1, x2, …, xn axes at all points in the domain (if the derivatives exist and are continuous—see also below). A first derivative is positive if the function increases along the direction of the relevant axis, negative if it decreases, and zero if there is no increase or decrease. Evaluating a partial derivative at a particular point in the domain gives the rate of change of the function at that point in the direction parallel to a particular axis, a real number.
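Partial derivatives can be illustrated numerically with central finite differences (a sketch; the example function and the step h are illustrative choices, not part of the definition):

```python
def partial(f, i, x, h=1e-6):
    """Central-difference estimate of the partial derivative of f
    with respect to the i-th variable, at the point x."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(*xp) - f(*xm)) / (2 * h)

# Example: f(x1, x2) = x1^2 * x2, whose exact partials are
# df/dx1 = 2*x1*x2 and df/dx2 = x1^2.
f = lambda x1, x2: x1 ** 2 * x2
print(partial(f, 0, (3.0, 5.0)))  # ≈ 30.0
print(partial(f, 1, (3.0, 5.0)))  # ≈ 9.0
```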
For real-valued functions of a real variable, y = f(x), the ordinary derivative dy/dx is geometrically the slope of the tangent line to the curve y = f(x) at each point in the domain. Partial derivatives extend this idea to the tangent hyperplanes of a hypersurface.
The second order partial derivatives can be calculated for every pair of variables:
{\displaystyle {\frac {\partial ^{2}}{\partial x_{1}^{2}}}f(x_{1},x_{2},\ldots ,x_{n})\,,\quad {\frac {\partial ^{2}}{\partial x_{1}\partial x_{2}}}f(x_{1},x_{2},\ldots x_{n})\,,\ldots ,{\frac {\partial ^{2}}{\partial x_{n}^{2}}}f(x_{1},x_{2},\ldots ,x_{n}).}
Geometrically, they are related to the local curvature of the function's image at all points in the domain. At any point where the function is well-defined, the function could be increasing along some axes, and/or decreasing along other axes, and/or not increasing or decreasing at all along other axes.
This leads to a variety of possible stationary points: global or local maxima, global or local minima, and saddle points—the multidimensional analogue of inflection points for real functions of one real variable. The Hessian matrix is a matrix of all the second order partial derivatives, which are used to investigate the stationary points of the function, important for mathematical optimization.
In general, partial derivatives of higher order p have the form:
{\displaystyle {\frac {\partial ^{p}}{\partial x_{1}^{p_{1}}\partial x_{2}^{p_{2}}\cdots \partial x_{n}^{p_{n}}}}f(x_{1},x_{2},\ldots ,x_{n})\equiv {\frac {\partial ^{p_{1}}}{\partial x_{1}^{p_{1}}}}{\frac {\partial ^{p_{2}}}{\partial x_{2}^{p_{2}}}}\cdots {\frac {\partial ^{p_{n}}}{\partial x_{n}^{p_{n}}}}f(x_{1},x_{2},\ldots ,x_{n})}
where p1, p2, …, pn are each integers between 0 and p such that p1 + p2 + ⋯ + pn = p, using the definitions of zeroth partial derivatives as identity operators:
{\displaystyle {\frac {\partial ^{0}}{\partial x_{1}^{0}}}f(x_{1},x_{2},\ldots ,x_{n})=f(x_{1},x_{2},\ldots ,x_{n})\,,\quad \ldots ,\,{\frac {\partial ^{0}}{\partial x_{n}^{0}}}f(x_{1},x_{2},\ldots ,x_{n})=f(x_{1},x_{2},\ldots ,x_{n})\,.}
The number of possible partial derivatives increases with p, although some mixed partial derivatives (those with respect to more than one variable) are superfluous, because of the symmetry of second order partial derivatives. This reduces the number of partial derivatives to calculate for some p.
=== Multivariable differentiability ===
A function f(x) is differentiable in a neighborhood of a point a if there is an n-tuple of numbers dependent on a in general, A(a) = (A1(a), A2(a), …, An(a)), so that:
{\displaystyle f({\boldsymbol {x}})=f({\boldsymbol {a}})+{\boldsymbol {A}}({\boldsymbol {a}})\cdot ({\boldsymbol {x}}-{\boldsymbol {a}})+\alpha ({\boldsymbol {x}})|{\boldsymbol {x}}-{\boldsymbol {a}}|}
where {\displaystyle \alpha ({\boldsymbol {x}})\to 0} as {\displaystyle |{\boldsymbol {x}}-{\boldsymbol {a}}|\to 0}. This means that if f is differentiable at a point a, then f is continuous at x = a, although the converse is not true: continuity in the domain does not imply differentiability in the domain. If f is differentiable at a then the first order partial derivatives exist at a and:
{\displaystyle \left.{\frac {\partial f({\boldsymbol {x}})}{\partial x_{i}}}\right|_{{\boldsymbol {x}}={\boldsymbol {a}}}=A_{i}({\boldsymbol {a}})}
for i = 1, 2, …, n, which can be found from the definitions of the individual partial derivatives, so the partial derivatives of f exist.
Assuming an n-dimensional analogue of a rectangular Cartesian coordinate system, these partial derivatives can be used to form a vectorial linear differential operator, called the gradient (also known as "nabla" or "del") in this coordinate system:
{\displaystyle \nabla f({\boldsymbol {x}})=\left({\frac {\partial }{\partial x_{1}}},{\frac {\partial }{\partial x_{2}}},\ldots ,{\frac {\partial }{\partial x_{n}}}\right)f({\boldsymbol {x}})}
used extensively in vector calculus, because it is useful for constructing other differential operators and compactly formulating theorems in vector calculus.
Then substituting the gradient ∇f (evaluated at x = a) with a slight rearrangement gives:
{\displaystyle f({\boldsymbol {x}})-f({\boldsymbol {a}})=\nabla f({\boldsymbol {a}})\cdot ({\boldsymbol {x}}-{\boldsymbol {a}})+\alpha |{\boldsymbol {x}}-{\boldsymbol {a}}|}
where · denotes the dot product. This equation represents the best linear approximation of the function f at all points x within a neighborhood of a. For infinitesimal changes in f and x as x → a:
{\displaystyle df=\left.{\frac {\partial f({\boldsymbol {x}})}{\partial x_{1}}}\right|_{{\boldsymbol {x}}={\boldsymbol {a}}}dx_{1}+\left.{\frac {\partial f({\boldsymbol {x}})}{\partial x_{2}}}\right|_{{\boldsymbol {x}}={\boldsymbol {a}}}dx_{2}+\dots +\left.{\frac {\partial f({\boldsymbol {x}})}{\partial x_{n}}}\right|_{{\boldsymbol {x}}={\boldsymbol {a}}}dx_{n}=\nabla f({\boldsymbol {a}})\cdot d{\boldsymbol {x}}}
which is defined as the total differential, or simply differential, of f, at a. This expression corresponds to the total infinitesimal change of f, by adding all the infinitesimal changes of f in all the xi directions. Also, df can be construed as a covector with basis vectors as the infinitesimals dxi in each direction and partial derivatives of f as the components.
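The linear-approximation formula f(x) − f(a) ≈ ∇f(a) · (x − a) can be checked numerically with a finite-difference gradient (a sketch; the example function and points are illustrative):

```python
def grad(f, a, h=1e-6):
    """Central-difference estimate of the gradient of f at the point a."""
    g = []
    for i in range(len(a)):
        ap, am = list(a), list(a)
        ap[i] += h
        am[i] -= h
        g.append((f(ap) - f(am)) / (2 * h))
    return g

def f(x):
    # an illustrative function of two variables
    return x[0] ** 2 + 3 * x[0] * x[1]

a = [1.0, 2.0]    # base point; grad f(a) = (2*1 + 3*2, 3*1) = (8, 3)
x = [1.01, 2.02]  # a nearby point
lhs = f(x) - f(a)
rhs = sum(g_i * (x_i - a_i) for g_i, x_i, a_i in zip(grad(f, a), x, a))
# The difference between the two sides is o(|x - a|)
print(abs(lhs - rhs) < 1e-3)  # → True
```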
Geometrically ∇f is perpendicular to the level sets of f, given by f(x) = c which for some constant c describes an (n − 1)-dimensional hypersurface. The differential of a constant is zero:
{\displaystyle df=(\nabla f)\cdot d{\boldsymbol {x}}=0}
in which dx is an infinitesimal change in x in the hypersurface f(x) = c, and since the dot product of ∇f and dx is zero, this means ∇f is perpendicular to dx.
In arbitrary curvilinear coordinate systems in n dimensions, the explicit expression for the gradient would not be so simple: there would be scale factors in terms of the metric tensor for that coordinate system. For the above case used throughout this article, the metric is just the Kronecker delta and the scale factors are all 1.
=== Differentiability classes ===
If all first order partial derivatives evaluated at a point a in the domain:
{\displaystyle \left.{\frac {\partial }{\partial x_{1}}}f({\boldsymbol {x}})\right|_{{\boldsymbol {x}}={\boldsymbol {a}}}\,,\quad \left.{\frac {\partial }{\partial x_{2}}}f({\boldsymbol {x}})\right|_{{\boldsymbol {x}}={\boldsymbol {a}}}\,,\ldots ,\left.{\frac {\partial }{\partial x_{n}}}f({\boldsymbol {x}})\right|_{{\boldsymbol {x}}={\boldsymbol {a}}}}
exist and are continuous for all a in the domain, f has differentiability class C1. In general, if all order p partial derivatives evaluated at a point a:
{\displaystyle \left.{\frac {\partial ^{p}}{\partial x_{1}^{p_{1}}\partial x_{2}^{p_{2}}\cdots \partial x_{n}^{p_{n}}}}f({\boldsymbol {x}})\right|_{{\boldsymbol {x}}={\boldsymbol {a}}}}
exist and are continuous, where p1, p2, …, pn, and p are as above, for all a in the domain, then f is differentiable to order p throughout the domain and has differentiability class C p.
If f is of differentiability class C∞, f has continuous partial derivatives of all order and is called smooth. If f is an analytic function and equals its Taylor series about any point in the domain, the notation Cω denotes this differentiability class.
=== Multiple integration ===
Definite integration can be extended to multiple integration over the several real variables with the notation:
{\displaystyle \int _{R_{n}}\cdots \int _{R_{2}}\int _{R_{1}}f(x_{1},x_{2},\ldots ,x_{n})\,dx_{1}dx_{2}\cdots dx_{n}\equiv \int _{R}f({\boldsymbol {x}})\,d^{n}{\boldsymbol {x}}}
where each region R1, R2, …, Rn is a subset of or all of the real line:
{\displaystyle R_{1}\subseteq \mathbb {R} \,,\quad R_{2}\subseteq \mathbb {R} \,,\ldots ,R_{n}\subseteq \mathbb {R} ,}
and their Cartesian product gives the region to integrate over as a single set:
{\displaystyle R=R_{1}\times R_{2}\times \dots \times R_{n}\,,\quad R\subseteq \mathbb {R} ^{n}\,,}
an n-dimensional hypervolume. When evaluated, a definite integral is a real number if the integral converges in the region R of integration (the result of a definite integral may diverge to infinity for a given region; in such cases the integral remains ill-defined). The variables are treated as "dummy" or "bound" variables which are substituted for numbers in the process of integration.
The integral of a real-valued function of a real variable y = f(x) with respect to x has geometric interpretation as the area bounded by the curve y = f(x) and the x-axis. Multiple integrals extend the dimensionality of this concept: assuming an n-dimensional analogue of a rectangular Cartesian coordinate system, the above definite integral has the geometric interpretation as the n-dimensional hypervolume bounded by f(x) and the x1, x2, …, xn axes, which may be positive, negative, or zero, depending on the function being integrated (if the integral is convergent).
While bounded hypervolume is a useful insight, the more important idea of definite integrals is that they represent total quantities within space. This has significance in applied mathematics and physics: if f is some scalar density field and x are the position vector coordinates, i.e. some scalar quantity per unit n-dimensional hypervolume, then integrating over the region R gives the total amount of quantity in R. The more formal notion of hypervolume is the subject of measure theory. Above we used the Lebesgue measure; see Lebesgue integration for more on this topic.
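A multiple integral over a rectangle can be sketched with a midpoint Riemann sum (an illustrative approximation, not a production quadrature routine):

```python
def double_integral(f, x_range, y_range, n=200):
    """Midpoint Riemann-sum estimate of the integral of f over the
    rectangle R = x_range x y_range, using an n-by-n grid of cells."""
    (x0, x1), (y0, y1) = x_range, y_range
    dx, dy = (x1 - x0) / n, (y1 - y0) / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            # sample f at the midpoint of cell (i, j)
            total += f(x0 + (i + 0.5) * dx, y0 + (j + 0.5) * dy)
    return total * dx * dy

# The integral of f(x, y) = x*y over [0, 1] x [0, 2] is (1/2)*(2) = 1.
approx = double_integral(lambda x, y: x * y, (0.0, 1.0), (0.0, 2.0))
print(abs(approx - 1.0) < 1e-6)  # → True
```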
=== Theorems ===
With the definitions of multiple integration and partial derivatives, key theorems can be formulated, including the fundamental theorem of calculus in several real variables (namely Stokes' theorem), integration by parts in several real variables, the symmetry of higher partial derivatives and Taylor's theorem for multivariable functions. Evaluating a mixture of integrals and partial derivatives can be done by using differentiation under the integral sign (the Leibniz integral rule).
=== Vector calculus ===
One can collect a number of functions each of several real variables, say
{\displaystyle y_{1}=f_{1}(x_{1},x_{2},\ldots ,x_{n})\,,\quad y_{2}=f_{2}(x_{1},x_{2},\ldots ,x_{n})\,,\ldots ,y_{m}=f_{m}(x_{1},x_{2},\ldots ,x_{n})}
into an m-tuple, or sometimes as a column vector or row vector, respectively:
{\displaystyle (y_{1},y_{2},\ldots ,y_{m})\leftrightarrow {\begin{bmatrix}f_{1}(x_{1},x_{2},\ldots ,x_{n})\\f_{2}(x_{1},x_{2},\ldots ,x_{n})\\\vdots \\f_{m}(x_{1},x_{2},\ldots ,x_{n})\end{bmatrix}}\leftrightarrow {\begin{bmatrix}f_{1}(x_{1},x_{2},\ldots ,x_{n})&f_{2}(x_{1},x_{2},\ldots ,x_{n})&\cdots &f_{m}(x_{1},x_{2},\ldots ,x_{n})\end{bmatrix}}}
all treated on the same footing as an m-component vector field, and use whichever form is convenient. All the above notations have a common compact notation y = f(x). The calculus of such vector fields is vector calculus. For more on the treatment of row vectors and column vectors of multivariable functions, see matrix calculus.
== Implicit functions ==
A real-valued implicit function of several real variables is not written in the form "y = f(…)". Instead, the mapping is from the space Rn + 1 to the zero element in R (just the ordinary zero 0):
{\displaystyle {\begin{aligned}&\phi :\mathbb {R} ^{n+1}\to \{0\}\\&\phi (x_{1},x_{2},\ldots ,x_{n},y)=0\end{aligned}}}
is an equation in all the variables. Implicit functions are a more general way to represent functions, since if:
{\displaystyle y=f(x_{1},x_{2},\ldots ,x_{n})}
then we can always define:
{\displaystyle \phi (x_{1},x_{2},\ldots ,x_{n},y)=y-f(x_{1},x_{2},\ldots ,x_{n})=0}
but the converse is not always possible, i.e. not all implicit functions have an explicit form.
For example, using interval notation, let
{\displaystyle {\begin{aligned}&\phi :X\to \{0\}\\&\phi (x,y,z)=\left({\frac {x}{a}}\right)^{2}+\left({\frac {y}{b}}\right)^{2}+\left({\frac {z}{c}}\right)^{2}-1=0\\&X=[-a,a]\times [-b,b]\times [-c,c]=\left\{(x,y,z)\in \mathbb {R} ^{3}\,:\,-a\leq x\leq a,-b\leq y\leq b,-c\leq z\leq c\right\}.\end{aligned}}}
Choosing a 3-dimensional (3D) Cartesian coordinate system, this function describes the surface of a 3D ellipsoid centered at the origin (x, y, z) = (0, 0, 0) with constant semi-axes a, b, c, along the positive x, y and z axes respectively. In the case a = b = c = r, we have a sphere of radius r centered at the origin. Other quadric surfaces which can be described similarly include the hyperboloid and paraboloid; more generally, so can any 2D surface in 3D Euclidean space. The above example can be solved for x, y or z; however it is much tidier to write it in an implicit form.
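The relation between the implicit and explicit forms can be sketched numerically (the semi-axis values a = 2, b = 3, c = 4 are illustrative choices):

```python
import math

def phi(x, y, z, a=2.0, b=3.0, c=4.0):
    """Implicit ellipsoid: phi(x, y, z) = (x/a)^2 + (y/b)^2 + (z/c)^2 - 1."""
    return (x / a) ** 2 + (y / b) ** 2 + (z / c) ** 2 - 1

# A point on the surface satisfies phi = 0; here (2, 0, 0) is a vertex.
print(phi(2.0, 0.0, 0.0))  # → 0.0

def z_upper(x, y, a=2.0, b=3.0, c=4.0):
    """One explicit branch, solving for z >= 0:
    z = c * sqrt(1 - (x/a)^2 - (y/b)^2)."""
    return c * math.sqrt(1 - (x / a) ** 2 - (y / b) ** 2)

# The explicit branch satisfies the implicit equation (up to rounding).
print(abs(phi(1.0, 1.0, z_upper(1.0, 1.0))) < 1e-12)  # → True
```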
For a more sophisticated example:
{\displaystyle {\begin{aligned}&\phi :\mathbb {R} ^{4}\to \{0\}\\&\phi (t,x,y,z)=Ctze^{tx-yz}+A\sin(3\omega t)\left(x^{2}z-By^{6}\right)=0\end{aligned}}}
for non-zero real constants A, B, C, ω, this function is well-defined for all (t, x, y, z), but it cannot be solved explicitly for these variables and written as "t =", "x =", etc.
The implicit function theorem for more than two real variables deals with the continuity and differentiability of the function, as follows. Let ϕ(x1, x2, …, xn, y) be a continuous function with continuous first-order partial derivatives, and let ϕ evaluated at a point (a, b) = (a1, a2, …, an, b) be zero:
{\displaystyle \phi ({\boldsymbol {a}},b)=0;}
and let the first partial derivative of ϕ with respect to y evaluated at (a, b) be non-zero:
{\displaystyle \left.{\frac {\partial \phi ({\boldsymbol {x}},y)}{\partial y}}\right|_{({\boldsymbol {x}},y)=({\boldsymbol {a}},b)}\neq 0.}
Then, there is an interval [y1, y2] containing b, and a region R containing a, such that for every x in R there is exactly one value of y in [y1, y2] satisfying ϕ(x, y) = 0, and y is a continuous function of x so that ϕ(x, y(x)) = 0. The total differentials of the functions are:
{\displaystyle dy={\frac {\partial y}{\partial x_{1}}}dx_{1}+{\frac {\partial y}{\partial x_{2}}}dx_{2}+\dots +{\frac {\partial y}{\partial x_{n}}}dx_{n};}
{\displaystyle d\phi ={\frac {\partial \phi }{\partial x_{1}}}dx_{1}+{\frac {\partial \phi }{\partial x_{2}}}dx_{2}+\dots +{\frac {\partial \phi }{\partial x_{n}}}dx_{n}+{\frac {\partial \phi }{\partial y}}dy.}
Substituting dy into the latter differential and equating coefficients of the differentials gives the first order partial derivatives of y with respect to xi in terms of the derivatives of the original function, each as a solution of the linear equation
{\displaystyle {\frac {\partial \phi }{\partial x_{i}}}+{\frac {\partial \phi }{\partial y}}{\frac {\partial y}{\partial x_{i}}}=0}
for i = 1, 2, …, n.
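This rule, ∂y/∂xi = −(∂ϕ/∂xi)/(∂ϕ/∂y), can be checked numerically on the ellipsoid example above, where an explicit solution for z exists on the upper sheet (semi-axes and evaluation point below are arbitrary):

```python
import math

a, b, c = 2.0, 3.0, 4.0  # arbitrary semi-axes

def z_explicit(x, y):
    # Upper sheet of the ellipsoid, solved explicitly for z
    return c * math.sqrt(1.0 - (x / a)**2 - (y / b)**2)

x0, y0 = 0.5, 1.0
z0 = z_explicit(x0, y0)

# From the linear equation: dz/dx = -(dphi/dx) / (dphi/dz)
dphi_dx = 2.0 * x0 / a**2
dphi_dz = 2.0 * z0 / c**2
dzdx_implicit = -dphi_dx / dphi_dz

# A central finite difference of the explicit solution agrees:
h = 1e-6
dzdx_numeric = (z_explicit(x0 + h, y0) - z_explicit(x0 - h, y0)) / (2.0 * h)
assert abs(dzdx_implicit - dzdx_numeric) < 1e-6
```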
== Complex-valued function of several real variables ==
A complex-valued function of several real variables may be defined by relaxing, in the definition of the real-valued functions, the restriction of the codomain to the real numbers, and allowing complex values.
If f(x1, …, xn) is such a complex valued function, it may be decomposed as
{\displaystyle f(x_{1},\ldots ,x_{n})=g(x_{1},\ldots ,x_{n})+ih(x_{1},\ldots ,x_{n}),}
where g and h are real-valued functions. In other words, the study of complex-valued functions reduces easily to the study of pairs of real-valued functions.
This reduction works for the general properties. However, for an explicitly given function, such as:
{\displaystyle z(x,y,\alpha ,a,q)={\frac {q}{2\pi }}\left[\ln \left(x+iy-ae^{i\alpha }\right)-\ln \left(x+iy+ae^{-i\alpha }\right)\right]}
the computation of the real and the imaginary part may be difficult.
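Numerically, however, the decomposition at any given point is immediate with complex arithmetic, even when the symbolic real and imaginary parts are unwieldy (the parameter values below are arbitrary):

```python
import cmath
import math

def z(x, y, alpha, a, q):
    # Direct complex-arithmetic evaluation of the expression above
    w = x + 1j * y
    return (q / (2.0 * math.pi)) * (
        cmath.log(w - a * cmath.exp(1j * alpha))
        - cmath.log(w + a * cmath.exp(-1j * alpha))
    )

val = z(1.0, 2.0, 0.3, 0.5, 1.0)   # arbitrary test point and constants
g, h = val.real, val.imag          # the decomposition z = g + i*h
assert val == complex(g, h)
```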
== Applications ==
Multivariable functions of real variables arise inevitably in engineering and physics, because observable physical quantities are real numbers (with associated units and dimensions), and any one physical quantity will generally depend on a number of other quantities.
=== Examples of real-valued functions of several real variables ===
Examples in continuum mechanics include the local mass density ρ of a mass distribution, a scalar field which depends on the spatial position coordinates (here Cartesian to exemplify), r = (x, y, z), and time t:
{\displaystyle \rho =\rho (\mathbf {r} ,t)=\rho (x,y,z,t)}
Similarly for electric charge density for electrically charged objects, and numerous other scalar potential fields.
Another example is the velocity field, a vector field, which has components of velocity v = (vx, vy, vz) that are each multivariable functions of spatial coordinates and time similarly:
{\displaystyle \mathbf {v} (\mathbf {r} ,t)=\mathbf {v} (x,y,z,t)=[v_{x}(x,y,z,t),v_{y}(x,y,z,t),v_{z}(x,y,z,t)]}
Similarly for other physical vector fields such as electric fields and magnetic fields, and vector potential fields.
Another important example is the equation of state in thermodynamics, an equation relating the pressure P, temperature T, and volume V of a fluid; in general it has an implicit form:
{\displaystyle f(P,V,T)=0}
The simplest example is the ideal gas law:
{\displaystyle f(P,V,T)=PV-nRT=0}
where n is the number of moles, constant for a fixed amount of substance, and R the gas constant. Much more complicated equations of state have been empirically derived, but they all have the above implicit form.
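A minimal sketch of the ideal gas law as an implicit relation (SI units; the state values are arbitrary):

```python
R_GAS = 8.314  # molar gas constant, J/(mol*K)

def f(P, V, T, n=1.0):
    # Implicit ideal-gas equation of state: f(P, V, T) = P*V - n*R*T
    return P * V - n * R_GAS * T

# A consistent state: pick T, V, n, solve explicitly for P, then
# confirm the triple satisfies the implicit form.
T, V, n = 300.0, 0.0248, 1.0
P = n * R_GAS * T / V
assert abs(f(P, V, T, n)) < 1e-8
```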
Real-valued functions of several real variables appear pervasively in economics. In the underpinnings of consumer theory, utility is expressed as a function of the amounts of various goods consumed, each amount being an argument of the utility function. The result of maximizing utility is a set of demand functions, each expressing the amount demanded of a particular good as a function of the prices of the various goods and of income or wealth. In producer theory, a firm is usually assumed to maximize profit as a function of the quantities of various goods produced and of the quantities of various factors of production employed. The result of the optimization is a set of demand functions for the various factors of production and a set of supply functions for the various products; each of these functions has as its arguments the prices of the goods and of the factors of production.
=== Examples of complex-valued functions of several real variables ===
Some "physical quantities" may be actually complex valued - such as complex impedance, complex permittivity, complex permeability, and complex refractive index. These are also functions of real variables, such as frequency or time, as well as temperature.
In two-dimensional fluid mechanics, specifically in the theory of the potential flows used to describe fluid motion in 2d, the complex potential
{\displaystyle F(x,y,\ldots )=\varphi (x,y,\ldots )+i\psi (x,y,\ldots )}
is a complex valued function of the two spatial coordinates x and y, and other real variables associated with the system. The real part is the velocity potential and the imaginary part is the stream function.
The spherical harmonics occur in physics and engineering as the solution to Laplace's equation, as well as the eigenfunctions of the z-component angular momentum operator, which are complex-valued functions of real-valued spherical polar angles:
{\displaystyle Y_{\ell }^{m}=Y_{\ell }^{m}(\theta ,\phi )}
In quantum mechanics, the wavefunction is necessarily complex-valued, but is a function of real spatial coordinates (or momentum components), as well as time t:
{\displaystyle \Psi =\Psi (\mathbf {r} ,t)=\Psi (x,y,z,t)\,,\quad \Phi =\Phi (\mathbf {p} ,t)=\Phi (p_{x},p_{y},p_{z},t)}
where each is related by a Fourier transform.
== See also ==
Real coordinate space
Real analysis
Complex analysis
Function of several complex variables
Multivariate interpolation
Scalar fields
== References ==
F. Ayres, E. Mendelson (2009). Calculus. Schaum's outline series (5th ed.). McGraw Hill. ISBN 978-0-07-150861-2.
R. Wrede, M. R. Spiegel (2010). Advanced calculus. Schaum's outline series (3rd ed.). McGraw Hill. ISBN 978-0-07-162366-7.
W. F. Hughes, J. A. Brighton (1999). Fluid Dynamics. Schaum's outline series (3rd ed.). McGraw Hill. p. 160. ISBN 978-0-07-031118-3.
R. Penrose (2005). The Road to Reality. Vintage books. ISBN 978-0-09-944068-0.
S. Dineen (2001). Multivariate Calculus and Geometry. Springer Undergraduate Mathematics Series (2nd ed.). Springer. ISBN 1-85233-472-X.
N. Bourbaki (2004). Functions of a Real Variable: Elementary Theory. Springer. ISBN 3-540-65340-6.
M. A. Moskowitz, F. Paliogiannis (2011). Functions of Several Real Variables. World Scientific. ISBN 978-981-429-927-5.
W. Fleming (1977). Functions of Several Variables. Undergraduate Texts in Mathematics (2nd ed.). Springer. ISBN 0-387-90206-6.
In mathematics, an integer-valued function is a function whose values are integers. In other words, it is a function that assigns an integer to each member of its domain.
The floor and ceiling functions are examples of integer-valued functions of a real variable, but on real numbers and, generally, on (non-disconnected) topological spaces integer-valued functions are not especially useful. Any such function on a connected space either has discontinuities or is constant. On the other hand, on discrete and other totally disconnected spaces integer-valued functions have roughly the same importance as real-valued functions have on non-discrete spaces.
Any function with natural, or non-negative integer, values is a special case of an integer-valued function.
== Examples ==
Integer-valued functions defined on the domain of all real numbers include the floor and ceiling functions, the Dirichlet function, the sign function and the Heaviside step function (except possibly at 0).
Integer-valued functions defined on the domain of non-negative real numbers include the integer square root function and the prime-counting function.
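Several of the functions named above are available directly in Python's standard library; a small sketch:

```python
import math

# Floor and ceiling on real inputs:
assert math.floor(2.7) == 2 and math.ceil(2.7) == 3
assert math.floor(-2.7) == -3 and math.ceil(-2.7) == -2

# Integer square root (the floor of the exact square root):
assert math.isqrt(10) == 3

# Sign function, integer-valued on the reals:
def sign(x):
    return (x > 0) - (x < 0)

assert [sign(v) for v in (-5.0, 0.0, 5.0)] == [-1, 0, 1]
```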
== Algebraic properties ==
On an arbitrary set X, integer-valued functions form a ring with pointwise operations of addition and multiplication, and also an algebra over the ring Z of integers. Since the latter is an ordered ring, the functions form a partially ordered ring:
{\displaystyle f\leq g\quad \iff \quad \forall x:f(x)\leq g(x).}
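A sketch of the pointwise ring operations and the partial order over a small finite domain (the particular functions are illustrative choices):

```python
# Integer-valued functions on X form a ring under pointwise operations.
X = range(-3, 4)
f = lambda x: x * x
g = lambda x: x * x + 1

add = lambda x: f(x) + g(x)   # pointwise sum
mul = lambda x: f(x) * g(x)   # pointwise product
assert all(isinstance(add(x), int) and isinstance(mul(x), int) for x in X)

# f <= g pointwise, so f <= g in the partially ordered ring:
assert all(f(x) <= g(x) for x in X)

# The order is only partial: identity and negation are incomparable on X.
p, q = (lambda x: x), (lambda x: -x)
assert not all(p(x) <= q(x) for x in X)
assert not all(q(x) <= p(x) for x in X)
```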
== Uses ==
=== Graph theory and algebra ===
Integer-valued functions are ubiquitous in graph theory. They also have similar uses in geometric group theory, where the length function plays the role of a norm, and the word metric plays the role of a metric.
Integer-valued polynomials are important in ring theory.
=== Mathematical logic and computability theory ===
In mathematical logic, such concepts as primitive recursive functions and μ-recursive functions represent integer-valued functions of several natural variables or, in other words, functions on Nn. Gödel numbering, defined on well-formed formulae of some formal language, is a natural-valued function.
Computability theory is essentially based on natural numbers and natural (or integer) functions on them.
=== Number theory ===
In number theory, many arithmetic functions are integer-valued.
=== Computer science ===
In computer programming, many functions return values of integer type due to simplicity of implementation.
== See also ==
Integer-valued polynomial
Semi-continuity
Rank (disambiguation)#Mathematics
Grade (disambiguation)#In mathematics
== References ==
== Further reading ==
https://webusers.imj-prg.fr/~michel.waldschmidt/articles/pdf/SurveyIntegerValuedEntireFunctions.pdf
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape, and algebra is the study of generalizations of arithmetic operations.
Originally called infinitesimal calculus or "the calculus of infinitesimals", it has two major branches, differential calculus and integral calculus. The former concerns instantaneous rates of change, and the slopes of curves, while the latter concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus. They make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. It is the "mathematical backbone" for dealing with problems where variables change with time or another reference variable.
Infinitesimal calculus was formulated separately in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Later work, including codifying the idea of limits, put these developments on a more solid conceptual footing. The concepts and techniques found in calculus have diverse applications in science, engineering, and other branches of mathematics.
== Etymology ==
In mathematics education, calculus is an abbreviation of both infinitesimal calculus and integral calculus, which denotes courses of elementary mathematical analysis.
In Latin, the word calculus means "small pebble" (the diminutive of calx, meaning "stone"), a meaning which still persists in medicine. Because such pebbles were used for counting out distances, tallying votes, and doing abacus arithmetic, the word came to be the Latin word for calculation. In this sense, it was used in English at least as early as 1672, several years before the publications of Leibniz and Newton, who wrote their mathematical texts in Latin.
In addition to differential calculus and integral calculus, the term is also used for naming specific methods of computation or theories that imply some sort of computation. Examples of this usage include propositional calculus, Ricci calculus, calculus of variations, lambda calculus, sequent calculus, and process calculus. Furthermore, the term "calculus" has variously been applied in ethics and philosophy, for such systems as Bentham's felicific calculus, and the ethical calculus.
== History ==
Modern calculus was developed in 17th-century Europe by Isaac Newton and Gottfried Wilhelm Leibniz (independently of each other, first publishing around the same time) but elements of it first appeared in ancient Egypt and later Greece, then in China and the Middle East, and still later again in medieval Europe and India.
=== Ancient precursors ===
==== Egypt ====
Calculations of volume and area, one goal of integral calculus, can be found in the Egyptian Moscow papyrus (c. 1820 BC), but the formulae are simple instructions, with no indication as to how they were obtained.
==== Greece ====
Laying the foundations for integral calculus and foreshadowing the concept of the limit, ancient Greek mathematician Eudoxus of Cnidus (c. 390–337 BC) developed the method of exhaustion to prove the formulas for cone and pyramid volumes.
During the Hellenistic period, this method was further developed by Archimedes (c. 287 – c. 212 BC), who combined it with a concept of the indivisibles—a precursor to infinitesimals—allowing him to solve several problems now treated by integral calculus. In The Method of Mechanical Theorems he describes, for example, calculating the center of gravity of a solid hemisphere, the center of gravity of a frustum of a circular paraboloid, and the area of a region bounded by a parabola and one of its secant lines.
==== China ====
The method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD to find the area of a circle. In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method that would later be called Cavalieri's principle to find the volume of a sphere.
=== Medieval ===
==== Middle East ====
In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965 – c. 1040 AD) derived a formula for the sum of fourth powers. He determined the equations to calculate the area enclosed by the curve represented by
{\displaystyle y=x^{k}}
(which translates to the integral
{\displaystyle \int x^{k}\,dx}
in contemporary notation), for any given non-negative integer value of
k
{\displaystyle k}
.He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid.
==== India ====
Bhāskara II (c. 1114–1185) was acquainted with some ideas of differential calculus and suggested that the "differential coefficient" vanishes at an extremum value of the function. In his astronomical work, he gave a procedure that looked like a precursor to infinitesimal methods. Namely, if
{\displaystyle x\approx y}
then
{\displaystyle \sin(y)-\sin(x)\approx (y-x)\cos(y).}
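This approximation can be checked numerically; the error is of second order in the separation y − x (the angles below are arbitrary):

```python
import math

x, y = 0.500, 0.501          # two nearby angles
lhs = math.sin(y) - math.sin(x)
rhs = (y - x) * math.cos(y)

# The discrepancy is of second order in (y - x):
assert abs(lhs - rhs) < (y - x)**2
```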
This can be interpreted as the discovery that cosine is the derivative of sine. In the 14th century, Indian mathematicians gave a non-rigorous method, resembling differentiation, applicable to some trigonometric functions. Madhava of Sangamagrama and the Kerala School of Astronomy and Mathematics stated components of calculus. They studied series equivalent to the Maclaurin expansions of
{\displaystyle \sin(x)}, {\displaystyle \cos(x)}, and {\displaystyle \arctan(x)}
more than two hundred years before their introduction in Europe. According to Victor J. Katz they were not able to "combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, and turn calculus into the great problem-solving tool we have today".
=== Modern ===
Johannes Kepler's work Stereometria Doliorum (1615) formed the basis of integral calculus. Kepler developed a method to calculate the area of an ellipse by adding up the lengths of many radii drawn from a focus of the ellipse.
A significant work was a treatise inspired by Kepler's methods, written by Bonaventura Cavalieri, who argued that volumes and areas should be computed as the sums of the volumes and areas of infinitesimally thin cross-sections. The ideas were similar to Archimedes' in The Method, but this treatise is believed to have been lost in the 13th century and was only rediscovered in the early 20th century, and so would have been unknown to Cavalieri. Cavalieri's work was not well respected since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first.
The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term. The combination was achieved by John Wallis, Isaac Barrow, and James Gregory, the latter two proving predecessors to the second fundamental theorem of calculus around 1670.
The product rule and chain rule, the notions of higher derivatives and Taylor series, and of analytic functions were used by Isaac Newton in an idiosyncratic notation which he applied to solve problems of mathematical physics. In his works, Newton rephrased his ideas to suit the mathematical idiom of the time, replacing calculations with infinitesimals by equivalent geometrical arguments which were considered beyond reproach. He used the methods of calculus to solve the problem of planetary motion, the shape of the surface of a rotating fluid, the oblateness of the earth, the motion of a weight sliding on a cycloid, and many other problems discussed in his Principia Mathematica (1687). In other work, he developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, and at this time infinitesimal methods were still considered disreputable.
These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who was originally accused of plagiarism by Newton. He is now regarded as an independent inventor of and contributor to calculus. His contribution was to provide a clear set of rules for working with infinitesimal quantities, allowing the computation of second and higher derivatives, and providing the product rule and chain rule, in their differential and integral forms. Unlike Newton, Leibniz put painstaking effort into his choices of notation.
Today, Leibniz and Newton are usually both given credit for independently inventing and developing calculus. Newton was the first to apply calculus to general physics. Leibniz developed much of the notation used in calculus today.: 51–52 The basic insights that both Newton and Leibniz provided were the laws of differentiation and integration, emphasizing that differentiation and integration are inverse processes, second and higher derivatives, and the notion of an approximating polynomial series.
When Newton and Leibniz first published their results, there was great controversy over which mathematician (and therefore which country) deserved credit. Newton derived his results first (later to be published in his Method of Fluxions), but Leibniz published his "Nova Methodus pro Maximis et Minimis" first. Newton claimed Leibniz stole ideas from his unpublished notes, which Newton had shared with a few members of the Royal Society. This controversy divided English-speaking mathematicians from continental European mathematicians for many years, to the detriment of English mathematics. A careful examination of the papers of Leibniz and Newton shows that they arrived at their results independently, with Leibniz starting first with integration and Newton with differentiation. It is Leibniz, however, who gave the new discipline its name. Newton called his calculus "the science of fluxions", a term that endured in English schools into the 19th century.: 100 The first complete treatise on calculus to be written in English and use the Leibniz notation was not published until 1815.
Since the time of Leibniz and Newton, many mathematicians have contributed to the continuing development of calculus. One of the first and most complete works on both infinitesimal and integral calculus was written in 1748 by Maria Gaetana Agnesi.
=== Foundations ===
In calculus, foundations refers to the rigorous development of the subject from axioms and definitions. In early calculus, the use of infinitesimal quantities was thought unrigorous and was fiercely criticized by several authors, most notably Michel Rolle and Bishop Berkeley. Berkeley famously described infinitesimals as the ghosts of departed quantities in his book The Analyst in 1734. Working out a rigorous foundation for calculus occupied mathematicians for much of the century following Newton and Leibniz, and is still to some extent an active area of research today.
Several mathematicians, including Maclaurin, tried to prove the soundness of using infinitesimals, but it would not be until 150 years later when, due to the work of Cauchy and Weierstrass, a way was finally found to avoid mere "notions" of infinitely small quantities. The foundations of differential and integral calculus had been laid. In Cauchy's Cours d'Analyse, we find a broad range of foundational approaches, including a definition of continuity in terms of infinitesimals, and a (somewhat imprecise) prototype of an (ε, δ)-definition of limit in the definition of differentiation. In his work, Weierstrass formalized the concept of limit and eliminated infinitesimals (although his definition can validate nilsquare infinitesimals). Following the work of Weierstrass, it eventually became common to base calculus on limits instead of infinitesimal quantities, though the subject is still occasionally called "infinitesimal calculus". Bernhard Riemann used these ideas to give a precise definition of the integral. It was also during this period that the ideas of calculus were generalized to the complex plane with the development of complex analysis.
In modern mathematics, the foundations of calculus are included in the field of real analysis, which contains full definitions and proofs of the theorems of calculus. The reach of calculus has also been greatly extended. Henri Lebesgue invented measure theory, based on earlier developments by Émile Borel, and used it to define integrals of all but the most pathological functions. Laurent Schwartz introduced distributions, which can be used to take the derivative of any function whatsoever.
Limits are not the only rigorous approach to the foundation of calculus. Another way is to use Abraham Robinson's non-standard analysis. Robinson's approach, developed in the 1960s, uses technical machinery from mathematical logic to augment the real number system with infinitesimal and infinite numbers, as in the original Newton-Leibniz conception. The resulting numbers are called hyperreal numbers, and they can be used to give a Leibniz-like development of the usual rules of calculus. There is also smooth infinitesimal analysis, which differs from non-standard analysis in that it mandates neglecting higher-power infinitesimals during derivations. Based on the ideas of F. W. Lawvere and employing the methods of category theory, smooth infinitesimal analysis views all functions as being continuous and incapable of being expressed in terms of discrete entities. One aspect of this formulation is that the law of excluded middle does not hold. The law of excluded middle is also rejected in constructive mathematics, a branch of mathematics that insists that proofs of the existence of a number, function, or other mathematical object should give a construction of the object. Reformulations of calculus in a constructive framework are generally part of the subject of constructive analysis.
=== Significance ===
While many of the ideas of calculus had been developed earlier in Greece, China, India, Iraq, Persia, and Japan, the use of calculus began in Europe, during the 17th century, when Newton and Leibniz built on the work of earlier mathematicians to introduce its basic principles. The Hungarian polymath John von Neumann wrote of this work,
The calculus was the first achievement of modern mathematics and it is difficult to overestimate its importance. I think it defines more unequivocally than anything else the inception of modern mathematics, and the system of mathematical analysis, which is its logical development, still constitutes the greatest technical advance in exact thinking.
Applications of differential calculus include computations involving velocity and acceleration, the slope of a curve, and optimization.: 341–453 Applications of integral calculus include computations involving area, volume, arc length, center of mass, work, and pressure.: 685–700 More advanced applications include power series and Fourier series.
Calculus is also used to gain a more precise understanding of the nature of space, time, and motion. For centuries, mathematicians and philosophers wrestled with paradoxes involving division by zero or sums of infinitely many numbers. These questions arise in the study of motion and area. The ancient Greek philosopher Zeno of Elea gave several famous examples of such paradoxes. Calculus provides tools, especially the limit and the infinite series, that resolve the paradoxes.
== Principles ==
=== Limits and infinitesimals ===
Calculus is usually developed by working with very small quantities. Historically, the first method of doing so was by infinitesimals. These are objects which can be treated like real numbers but which are, in some sense, "infinitely small". For example, an infinitesimal number could be greater than 0, but less than any number in the sequence 1, 1/2, 1/3, ... and thus less than any positive real number. From this point of view, calculus is a collection of techniques for manipulating infinitesimals. The symbols
{\displaystyle dx} and {\displaystyle dy} were taken to be infinitesimal, and the derivative {\displaystyle dy/dx}
was their ratio.
The infinitesimal approach fell out of favor in the 19th century because it was difficult to make the notion of an infinitesimal precise. In the late 19th century, infinitesimals were replaced within academia by the epsilon, delta approach to limits. Limits describe the behavior of a function at a certain input in terms of its values at nearby inputs. They capture small-scale behavior using the intrinsic structure of the real number system (as a metric space with the least-upper-bound property). In this treatment, calculus is a collection of techniques for manipulating certain limits. Infinitesimals get replaced by sequences of smaller and smaller numbers, and the infinitely small behavior of a function is found by taking the limiting behavior for these sequences. Limits were thought to provide a more rigorous foundation for calculus, and for this reason, they became the standard approach during the 20th century. However, the infinitesimal concept was revived in the 20th century with the introduction of non-standard analysis and smooth infinitesimal analysis, which provided solid foundations for the manipulation of infinitesimals.
=== Differential calculus ===
Differential calculus is the study of the definition, properties, and applications of the derivative of a function. The process of finding the derivative is called differentiation. Given a function and a point in the domain, the derivative at that point is a way of encoding the small-scale behavior of the function near that point. By finding the derivative of a function at every point in its domain, it is possible to produce a new function, called the derivative function or just the derivative of the original function. In formal terms, the derivative is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by differentiating the squaring function turns out to be the doubling function.: 32
In more explicit terms the "doubling function" may be denoted by g(x) = 2x and the "squaring function" by f(x) = x2. The "derivative" now takes the function f(x), defined by the expression "x2", as an input, that is all the information—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to output another function, the function g(x) = 2x, as it turns out.
In Lagrange's notation, the symbol for a derivative is an apostrophe-like mark called a prime. Thus, the derivative of a function called f is denoted by f′, pronounced "f prime" or "f dash". For instance, if f(x) = x2 is the squaring function, then f′(x) = 2x is its derivative (the doubling function g from above).
If the input of the function represents time, then the derivative represents change concerning time. For example, if f is a function that takes time as input and gives the position of a ball at that time as output, then the derivative of f is how the position is changing in time, that is, it is the velocity of the ball.: 18–20
If a function is linear (that is if the graph of the function is a straight line), then the function can be written as y = mx + b, where x is the independent variable, y is the dependent variable, b is the y-intercept, and:
{\displaystyle m={\frac {\text{rise}}{\text{run}}}={\frac {{\text{change in }}y}{{\text{change in }}x}}={\frac {\Delta y}{\Delta x}}.}
This gives an exact value for the slope of a straight line.: 6 If the graph of the function is not a straight line, however, then the change in y divided by the change in x varies. Derivatives give an exact meaning to the notion of change in output with respect to change in input. To be concrete, let f be a function, and fix a point a in the domain of f. Then (a, f(a)) is a point on the graph of the function. If h is a number close to zero, then a + h is a number close to a. Therefore, (a + h, f(a + h)) is close to (a, f(a)). The slope between these two points is
{\displaystyle m={\frac {f(a+h)-f(a)}{(a+h)-a}}={\frac {f(a+h)-f(a)}{h}}.}
This expression is called a difference quotient. A line through two points on a curve is called a secant line, so m is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). The secant line is only an approximation to the behavior of the function at the point a because it does not account for what happens between a and a + h. It is not possible to discover the behavior at a by setting h to zero because this would require dividing by zero, which is undefined. The derivative is defined by taking the limit as h tends to zero, meaning that it considers the behavior of f for all small values of h and extracts a consistent value for the case when h equals zero:
{\displaystyle \lim _{h\to 0}{f(a+h)-f(a) \over {h}}.}
Geometrically, the derivative is the slope of the tangent line to the graph of f at a. The tangent line is a limit of secant lines just as the derivative is a limit of difference quotients. For this reason, the derivative is sometimes called the slope of the function f.: 61–63
Here is a particular example, the derivative of the squaring function at the input 3. Let f(x) = x2 be the squaring function.
{\displaystyle {\begin{aligned}f'(3)&=\lim _{h\to 0}{(3+h)^{2}-3^{2} \over {h}}\\&=\lim _{h\to 0}{9+6h+h^{2}-9 \over {h}}\\&=\lim _{h\to 0}{6h+h^{2} \over {h}}\\&=\lim _{h\to 0}(6+h)\\&=6\end{aligned}}}
The slope of the tangent line to the squaring function at the point (3, 9) is 6, that is to say, it is going up six times as fast as it is going to the right. The limit process just described can be performed for any point in the domain of the squaring function. This defines the derivative function of the squaring function or just the derivative of the squaring function for short. A computation similar to the one above shows that the derivative of the squaring function is the doubling function.: 63
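The limit just computed can also be checked numerically: for this particular f, the difference quotient at a = 3 equals 6 + h exactly, so shrinking h drives it toward 6. A brief sketch (the helper name `difference_quotient` is illustrative):

```python
def difference_quotient(f, a, h):
    """Slope of the secant line through (a, f(a)) and (a + h, f(a + h))."""
    return (f(a + h) - f(a)) / h

f = lambda x: x ** 2
for h in [1.0, 0.1, 0.001, 1e-6]:
    # For f(x) = x^2 at a = 3, this quotient is exactly 6 + h, tending to 6.
    print(h, difference_quotient(f, 3.0, h))
```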
=== Leibniz notation ===
A common notation, introduced by Leibniz, for the derivative in the example above is
{\displaystyle {\begin{aligned}y&=x^{2}\\{\frac {dy}{dx}}&=2x.\end{aligned}}}
In an approach based on limits, the symbol dy/dx is to be interpreted not as the quotient of two numbers but as a shorthand for the limit computed above.: 74 Leibniz, however, did intend it to represent the quotient of two infinitesimally small numbers, dy being the infinitesimally small change in y caused by an infinitesimally small change dx applied to x. We can also think of d/dx as a differentiation operator, which takes a function as an input and gives another function, the derivative, as the output. For example:
{\displaystyle {\frac {d}{dx}}(x^{2})=2x.}
In this usage, the dx in the denominator is read as "with respect to x".: 79 Another example of correct notation could be:
{\displaystyle {\begin{aligned}g(t)&=t^{2}+2t+4\\{d \over dt}g(t)&=2t+2\end{aligned}}}
Even when calculus is developed using limits rather than infinitesimals, it is common to manipulate symbols like dx and dy as if they were real numbers; although it is possible to avoid such manipulations, they are sometimes notationally convenient in expressing operations such as the total derivative.
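The view of d/dx as an operator that maps functions to functions can be sketched directly as a higher-order function. This is a numerical stand-in, with a finite step h approximating the limit (the name `d_dx` is illustrative):

```python
def d_dx(f, h=1e-6):
    """Return an approximation of the derivative function of f.

    Models d/dx as an operator: function in, function out."""
    def derivative(x):
        return (f(x + h) - f(x - h)) / (2 * h)
    return derivative

# Applying the operator to g(t) = t^2 + 2t + 4 yields (approximately) 2t + 2:
g_prime = d_dx(lambda t: t ** 2 + 2 * t + 4)
print(g_prime(1.0))  # close to 2*1 + 2 = 4
```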
=== Integral calculus ===
Integral calculus is the study of the definitions, properties, and applications of two related concepts, the indefinite integral and the definite integral. The process of finding the value of an integral is called integration.: 508 The indefinite integral, also known as the antiderivative, is the inverse operation to the derivative.: 163–165 F is an indefinite integral of f when f is a derivative of F. (This use of lower- and upper-case letters for a function and its indefinite integral is common in calculus.) The definite integral inputs a function and outputs a number, which gives the algebraic sum of areas between the graph of the input and the x-axis. The technical definition of the definite integral involves the limit of a sum of areas of rectangles, called a Riemann sum.: 282
A motivating example is the distance traveled in a given time.: 153 If the speed is constant, only multiplication is needed:
{\displaystyle \mathrm {Distance} =\mathrm {Speed} \cdot \mathrm {Time} }
But if the speed changes, a more powerful method of finding the distance is necessary. One such method is to approximate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the approximate distance traveled in each interval. The basic idea is that if only a short time elapses, then the speed will stay more or less the same. However, a Riemann sum only gives an approximation of the distance traveled. We must take the limit of all such Riemann sums to find the exact distance traveled.
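The procedure above can be sketched in a few lines. Here `speed` is a hypothetical varying speed, 3t², whose exact integral over [0, 2] is 8; as the number of intervals n grows, the Riemann sum approaches that exact distance:

```python
def riemann_sum(speed, a, b, n):
    """Left-endpoint Riemann sum: split [a, b] into n intervals,
    multiply each interval's width by a speed sampled in it, and add up."""
    dt = (b - a) / n
    return sum(speed(a + i * dt) * dt for i in range(n))

speed = lambda t: 3 * t ** 2
for n in [10, 100, 10000]:
    print(n, riemann_sum(speed, 0.0, 2.0, n))  # approaches 8 as n grows
```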
When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, traveling a steady 50 mph for 3 hours results in a total distance of 150 miles. Plotting the velocity as a function of time yields a rectangle with a height equal to the velocity and a width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve.: 535 This connection between the area under a curve and the distance traveled can be extended to any irregularly shaped region exhibiting a fluctuating velocity over a given period. If f(x) represents speed as it varies over time, the distance traveled between the times represented by a and b is the area of the region between f(x) and the x-axis, between x = a and x = b.
To approximate that area, an intuitive method would be to divide up the distance between a and b into several equal segments, the length of each segment represented by the symbol Δx. For each small segment, we can choose one value of the function f(x). Call that value h. Then the area of the rectangle with base Δx and height h gives the distance (time Δx multiplied by speed h) traveled in that segment. Associated with each segment is the average value of the function above it, f(x) = h. The sum of all such rectangles gives an approximation of the area between the axis and the curve, which is an approximation of the total distance traveled. A smaller value for Δx will give more rectangles and in most cases a better approximation, but for an exact answer, we need to take a limit as Δx approaches zero.: 512–522
The symbol of integration is {\displaystyle \int }, an elongated S chosen to suggest summation.: 529 The definite integral is written as:
{\displaystyle \int _{a}^{b}f(x)\,dx}
and is read "the integral from a to b of f-of-x with respect to x." The Leibniz notation dx is intended to suggest dividing the area under the curve into an infinite number of rectangles so that their width Δx becomes the infinitesimally small dx.: 44
The indefinite integral, or antiderivative, is written:
{\displaystyle \int f(x)\,dx.}
Functions differing by only a constant have the same derivative, and it can be shown that the antiderivative of a given function is a family of functions differing only by a constant.: 326 Since the derivative of the function y = x2 + C, where C is any constant, is y′ = 2x, the antiderivative of the latter is given by:
{\displaystyle \int 2x\,dx=x^{2}+C.}
The unspecified constant C present in the indefinite integral or antiderivative is known as the constant of integration.: 135
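That every choice of C gives the same derivative can be checked numerically. In this sketch, `numerical_derivative` is an illustrative finite-difference helper, not part of the article:

```python
def numerical_derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# For every constant C, the derivative of x^2 + C at x = 3 is close to 6;
# the constant cancels in the difference f(x + h) - f(x - h).
for C in [0.0, 1.0, -5.0, 100.0]:
    F = lambda x, C=C: x ** 2 + C
    print(C, round(numerical_derivative(F, 3.0), 6))
```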
=== Fundamental theorem ===
The fundamental theorem of calculus states that differentiation and integration are inverse operations.: 290 More precisely, it relates the values of antiderivatives to definite integrals. Because it is usually easier to compute an antiderivative than to apply the definition of a definite integral, the fundamental theorem of calculus provides a practical way of computing definite integrals. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration.
The fundamental theorem of calculus states: If a function f is continuous on the interval [a, b] and if F is a function whose derivative is f on the interval (a, b), then
{\displaystyle \int _{a}^{b}f(x)\,dx=F(b)-F(a).}
Furthermore, for every x in the interval (a, b),
{\frac {d}{dx}}\int _{a}^{x}f(t)\,dt=f(x). {\displaystyle {\frac {d}{dx}}\int _{a}^{x}f(t)\,dt=f(x).}
This realization, made by both Newton and Leibniz, was key to the proliferation of analytic results after their work became known. (The extent to which Newton and Leibniz were influenced by immediate predecessors, and particularly what Leibniz may have learned from the work of Isaac Barrow, is difficult to determine because of the priority dispute between them.) The fundamental theorem provides an algebraic method of computing many definite integrals—without performing limit processes—by finding formulae for antiderivatives. It is also a prototype solution of a differential equation. Differential equations relate an unknown function to its derivatives and are ubiquitous in the sciences.: 351–352
== Applications ==
Calculus is used in every branch of the physical sciences,: 1 actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled and an optimal solution is desired. It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other. Calculus can be used in conjunction with other mathematical disciplines. For example, it can be used with linear algebra to find the "best fit" linear approximation for a set of points in a domain. Or, it can be used in probability theory to determine the expectation value of a continuous random variable given a probability density function.: 37 In analytic geometry, the study of graphs of functions, calculus is used to find high points and low points (maxima and minima), slope, concavity and inflection points. Calculus is also used to find approximate solutions to equations; in practice, it is the standard way to solve differential equations and do root finding in most applications. Examples are methods such as Newton's method, fixed point iteration, and linear approximation. For instance, spacecraft use a variation of the Euler method to approximate curved courses within zero-gravity environments.
Physics makes particular use of calculus; all concepts in classical mechanics and electromagnetism are related through calculus. The mass of an object of known density, the moment of inertia of objects, and the potential energies due to gravitational and electromagnetic forces can all be found by the use of calculus. An example of the use of calculus in mechanics is Newton's second law of motion, which states that the derivative of an object's momentum with respect to time equals the net force upon it. Alternatively, Newton's second law can be expressed by saying that the net force equals the object's mass times its acceleration, which is the time derivative of velocity and thus the second time derivative of spatial position. Starting from knowing how an object is accelerating, we use calculus to derive its path.
Maxwell's theory of electromagnetism and Einstein's theory of general relativity are also expressed in the language of differential calculus.: 52–55 Chemistry also uses calculus in determining reaction rates: 599 and in studying radioactive decay.: 814 In biology, population dynamics starts with reproduction and death rates to model population changes.: 631
Green's theorem, which gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C, is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property.
In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel to maximize flow. Calculus can be applied to understand how quickly a drug is eliminated from a body or how quickly a cancerous tumor grows.
In economics, calculus allows for the determination of maximal profit by providing a way to easily calculate both marginal cost and marginal revenue.: 387
== See also ==
Glossary of calculus
List of calculus topics
List of derivatives and integrals in alternative calculi
List of differentiation identities
Publications in calculus
Table of integrals
== References ==
== Further reading ==
== External links ==
"Calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Calculus". MathWorld.
Topics on Calculus at PlanetMath.
Calculus Made Easy (1914) by Silvanus P. Thompson Full text in PDF
Calculus on In Our Time at the BBC
Calculus.org: The Calculus page at University of California, Davis – contains resources and links to other sites
Earliest Known Uses of Some of the Words of Mathematics: Calculus & Analysis
The Role of Calculus in College Mathematics Archived 26 July 2021 at the Wayback Machine from ERICDigests.org
OpenCourseWare Calculus from the Massachusetts Institute of Technology
Infinitesimal Calculus – an article on its historical development, in Encyclopedia of Mathematics, ed. Michiel Hazewinkel.
Daniel Kleitman, MIT. "Calculus for Beginners and Artists".
Calculus training materials at imomath.com
(in English and Arabic) The Excursion of Calculus, 1772
A plot is a graphical technique for representing a data set, usually as a graph showing the relationship between two or more variables. The plot can be drawn by hand or by a computer. In the past, sometimes mechanical or electronic plotters were used. Graphs are a visual representation of the relationship between variables, which are very useful for humans who can then quickly derive an understanding which may not have come from lists of values. Given a scale or ruler, graphs can also be used to read off the value of an unknown variable plotted as a function of a known one, but this can also be done with data presented in tabular form. Graphs of functions are used in mathematics, sciences, engineering, technology, finance, and other areas.
== Overview ==
Plots play an important role in statistics and data analysis. The procedures here can broadly be split into two parts: quantitative and graphical. Quantitative techniques are a set of statistical procedures that yield numeric or tabular output. Examples of quantitative techniques include:
hypothesis testing
analysis of variance
point estimates and confidence intervals
least squares regression
These and similar techniques are all valuable and are mainstream in terms of classical analysis. There are also many statistical tools generally referred to as graphical techniques. These include:
scatter plots
spectrum plots
histograms
probability plots
residual plots
box plots, and
block plots
Graphical procedures such as plots are a short path to gaining insight into a data set in terms of testing assumptions, model selection, model validation, estimator selection, relationship identification, factor effect determination, and outlier detection. Statistical graphics give insight into aspects of the underlying structure of the data.
Graphs can also be used to solve some mathematical equations, typically by finding where two plots intersect.
== Types of plots ==
Biplot : These are a type of graph used in statistics. A biplot allows information on both samples and variables of a data matrix to be displayed graphically. Samples are displayed as points while variables are displayed either as vectors, linear axes or nonlinear trajectories. In the case of categorical variables, category level points may be used to represent the levels of a categorical variable. A generalised biplot displays information on both continuous and categorical variables.
Bland–Altman plot : In analytical chemistry and biostatistics this plot is a method of data plotting used in analysing the agreement between two different assays. It is identical to a Tukey mean-difference plot, which is what it is still known as in other fields, but was popularised in medical statistics by Bland and Altman.
Bode plots are used in control theory.
Box plot : In descriptive statistics, a boxplot, also known as a box-and-whisker diagram or plot, is a convenient way of graphically depicting groups of numerical data through their five-number summaries (the smallest observation, lower quartile (Q1), median (Q2), upper quartile (Q3), and largest observation). A boxplot may also indicate which observations, if any, might be considered outliers.
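The five-number summary behind a box plot can be computed with the standard library alone; `statistics.quantiles` with n=4 yields the three quartiles. A sketch on a small made-up data set:

```python
import statistics

data = [2, 4, 4, 5, 6, 7, 8, 9, 12, 15, 21]

# Q1, median (Q2), and Q3 via the default (exclusive) quantile method:
q1, q2, q3 = statistics.quantiles(data, n=4)

# Five-number summary: (min, Q1, median, Q3, max)
summary = (min(data), q1, q2, q3, max(data))
print(summary)
```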
Carpet plot : A two-dimensional plot that illustrates the interaction between two to three independent variables and one to three dependent variables.
Comet plot : A two- or three-dimensional animated plot in which the data points are traced on the screen.
Contour plot : A two-dimensional plot which shows the one-dimensional curves, called contour lines on which the plotted quantity q is a constant. Optionally, the plotted values can be color-coded.
Dalitz plot : This is a scatterplot often used in particle physics to represent the relative frequency of various (kinematically distinct) manners in which the products of certain (otherwise similar) three-body decays may move apart.
Drain plot : A two-dimensional plot where the data are presented in a hierarchy with multiple levels. The levels are nested in the sense that the pieces in each pie chart add up to 100%. A waterfall or waterdrop metaphor is used to link each layer to the one below, visually conveying the hierarchical structure.
Funnel plot : This is a useful graph designed to check the existence of publication bias in meta-analyses. Funnel plots, introduced by Light and Pillemer in 1994 and discussed in detail by Egger and colleagues, are useful adjuncts to meta-analyses. A funnel plot is a scatterplot of treatment effect against a measure of study size. It is used primarily as a visual aid to detecting bias or systematic heterogeneity.
Dot plot (statistics) : A dot chart or dot plot is a statistical chart consisting of group of data points plotted on a simple scale. Dot plots are used for continuous, quantitative, univariate data. Data points may be labelled if there are few of them. Dot plots are one of the simplest plots available, and are suitable for small to moderate sized data sets. They are useful for highlighting clusters and gaps, as well as outliers.
Forest plot : is a graphical display that shows the strength of the evidence in quantitative scientific studies. It was developed for use in medical research as a means of graphically representing a meta-analysis of the results of randomized controlled trials. In the last twenty years, similar meta-analytical techniques have been applied in observational studies (e.g. environmental epidemiology) and forest plots are often used in presenting the results of such studies also.
Galbraith plot : In statistics, a Galbraith plot (also known as Galbraith's radial plot or just radial plot), is one way of displaying several estimates of the same quantity that have different standard errors. It can be used to examine heterogeneity in a meta-analysis, as an alternative or supplement to a forest plot.
Heat map
Lollipop plot
Nichols plot : This is a graph used in signal processing in which the logarithm of the magnitude is plotted against the phase of a frequency response on orthogonal axes.
Normal probability plot : The normal probability plot is a graphical technique for assessing whether or not a data set is approximately normally distributed. The data are plotted against a theoretical normal distribution in such a way that the points should form an approximate straight line. Departures from this straight line indicate departures from normality. The normal probability plot is a special case of the probability plot.
Nyquist plot : Plot is used in automatic control and signal processing for assessing the stability of a system with feedback. It is represented by a graph in polar coordinates in which the gain and phase of a frequency response are plotted. The plot of these phasor quantities shows the phase as the angle and the magnitude as the distance from the origin.
Partial regression plot : In applied statistics, a partial regression plot attempts to show the effect of adding another variable to the model (given that one or more independent variables are already in the model). Partial regression plots are also referred to as added variable plots, adjusted variable plots, and individual coefficient plots.
Partial residual plot : In applied statistics, a partial residual plot is a graphical technique that attempts to show the relationship between a given independent variable and the response variable given that other independent variables are also in the model.
Probability plot : The probability plot is a graphical technique for assessing whether or not a data set follows a given distribution such as the normal or Weibull, and for visually estimating the location and scale parameters of the chosen distribution. The data are plotted against a theoretical distribution in such a way that the points should form approximately a straight line. Departures from this straight line indicate departures from the specified distribution.
Ridgeline plot: Several line plots, vertically stacked and slightly overlapping.
Q–Q plot : In statistics, a Q–Q plot (Q stands for quantile) is a graphical method for diagnosing differences between the probability distribution of a statistical population from which a random sample has been taken and a comparison distribution. An example of the kind of differences that can be tested for is non-normality of the population distribution.
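The idea behind a Q–Q plot can be sketched without any plotting library: pair up quantiles of a sample with quantiles of a comparison distribution; points near the line y = x suggest similar distributions. The two evenly spaced samples below are illustrative stand-ins for "sample" and "reference" data:

```python
import statistics

sample = [0.05 * i for i in range(21)]     # roughly uniform on [0, 1]
reference = [0.1 * i for i in range(11)]   # another roughly uniform sample

# Matching deciles of the two data sets form the Q-Q point pairs:
qs_sample = statistics.quantiles(sample, n=10)
qs_reference = statistics.quantiles(reference, n=10)
pairs = list(zip(qs_sample, qs_reference))
print(pairs)  # pairs lying near the diagonal indicate similar distributions
```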
Recurrence plot : In descriptive statistics and chaos theory, a recurrence plot (RP) is a plot showing, for a given moment in time, the times at which a phase space trajectory visits roughly the same area in the phase space. In other words, it is a graph of
{\displaystyle {\vec {x}}(i)\approx {\vec {x}}(j),\,}
showing {\displaystyle i} on a horizontal axis and {\displaystyle j} on a vertical axis, where {\displaystyle {\vec {x}}} is a phase space trajectory.
Scatterplot : A scatter graph or scatter plot is a type of display using variables for a set of data. The data is displayed as a collection of points, each having the value of one variable determining the position on the horizontal axis and the value of the other variable determining the position on the vertical axis.
Shmoo plot : In electrical engineering, a shmoo plot is a graphical display of the response of a component or system varying over a range of conditions and inputs. Often used to represent the results of the testing of complex electronic systems such as computers, ASICs or microprocessors. The plot usually shows the range of conditions in which the device under test will operate.
Spaghetti plots are a method of viewing data to visualize possible flows through systems. Flows depicted in this manner appear like noodles, hence the coining of this term. This method of statistics was first used to track routing through factories. Visualizing flow in this manner can reduce inefficiency within the flow of a system.
Stemplot : A stemplot (or stem-and-leaf plot), in statistics, is a device for presenting quantitative data in a graphical format, similar to a histogram, to assist in visualizing the shape of a distribution. They evolved from Arthur Bowley's work in the early 1900s, and are useful tools in exploratory data analysis. Unlike histograms, stemplots retain the original data to at least two significant digits, and put the data in order, thereby easing the move to order-based inference and non-parametric statistics.
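A minimal text stemplot can be generated with the standard library; this sketch splits each value into a tens stem and a units leaf (the function name `stemplot` is illustrative):

```python
from collections import defaultdict

def stemplot(values):
    """Return stem-and-leaf display lines for non-negative integers."""
    stems = defaultdict(list)
    for v in sorted(values):
        stems[v // 10].append(v % 10)   # stem = tens digit(s), leaf = units digit
    return ["%d | %s" % (stem, " ".join(str(leaf) for leaf in leaves))
            for stem, leaves in sorted(stems.items())]

for line in stemplot([12, 15, 21, 21, 24, 30, 33, 41]):
    print(line)
```

Unlike a histogram, the display retains the original digits, so the data can be read back off the plot.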
Star plot : A graphical method of displaying multivariate data. Each star represents a single observation. Typically, star plots are generated in a multi-plot format with many stars on each page and each star representing one observation.
Surface plot : In this visualization of the graph of a bivariate function, a surface is plotted to fit a set of data triplets (X, Y, Z), where Z is obtained from the function to be plotted, Z = f(X, Y). Usually, the set of X and Y values are equally spaced. Optionally, the plotted values can be color-coded.
Ternary plot : A ternary plot, ternary graph, triangle plot, simplex plot, or de Finetti diagram is a barycentric plot on three variables which sum to a constant. It graphically depicts the ratios of the three variables as positions in an equilateral triangle. It is used in petrology, mineralogy, metallurgy, and other physical sciences to show the compositions of systems composed of three species. In population genetics, it is often called a de Finetti diagram. In game theory, it is often called a simplex plot.
Vector field : Vector field plots (or quiver plots) show the direction and the strength of vectors associated with 2D or 3D points. They are typically used to show the strength of the gradient over the plane or a surface area.
Violin plot : Violin plots are a method of plotting numeric data. They are similar to box plots, except that they also show the probability density of the data at different values (in the simplest case this could be a histogram). Typically violin plots will include a marker for the median of the data and a box indicating the interquartile range, as in standard box plots. Overlaid on this box plot is a kernel density estimation. Violin plots are available as extensions to a number of software packages, including R through the vioplot library, and Stata through the vioplot add-in.
=== Plots for specific quantities ===
Arrhenius plot : This plot compares the logarithm of a reaction rate ({\displaystyle \ln(k)}, ordinate axis) plotted against inverse temperature ({\displaystyle 1/T}, abscissa). Arrhenius plots are often used to analyze the effect of temperature on the rates of chemical reactions.
Dot plot (bioinformatics) : This plot compares two biological sequences and is a graphical method that allows the identification of regions of close similarity between them. It is a kind of recurrence plot.
Lineweaver–Burk plot : This plot compares the reciprocals of reaction rate and substrate concentration. It is used to represent and determine enzyme kinetics.
=== 3D plots ===
== Examples ==
Types of graphs and their uses vary very widely. A few typical examples are:
Simple graph: Supply and demand curves, simple graphs used in economics to relate supply and demand to price. The graphs can be used together to determine the economic equilibrium (essentially, to solve an equation).
Simple graph used for reading values: the bell-shaped normal or Gaussian probability distribution, from which, for example, the probability of a man's height being in a specified range can be derived, given data for the adult male population.
Very complex graph: the psychrometric chart, relating temperature, pressure, humidity, and other quantities.
Non-rectangular coordinates: the above all use two-dimensional rectangular coordinates; an example of a graph using polar coordinates, sometimes in three dimensions, is the antenna radiation pattern chart, which represents the power radiated in all directions by an antenna of specified type.
== See also ==
Chart
Diagram
Graph of a function
Line chart
List of charting software
List of graphical methods
Plotting software
Plotter
List of plotting programs
== References ==
This article incorporates public domain material from the National Institute of Standards and Technology
== External links ==
Dataplot gallery of some useful graphical techniques at itl.nist.gov.
In mathematics, the limit of a function is a fundamental concept in calculus and analysis concerning the behavior of that function near a particular input which may or may not be in the domain of the function.
Formal definitions, first devised in the early 19th century, are given below. Informally, a function f assigns an output f(x) to every input x. We say that the function has a limit L at an input p if f(x) gets closer and closer to L as x moves closer and closer to p. More specifically, the output value can be made arbitrarily close to L if the input to f is taken sufficiently close to p. On the other hand, if some inputs very close to p are taken to outputs that stay a fixed distance apart, then we say the limit does not exist.
The notion of a limit has many applications in modern calculus. In particular, the many definitions of continuity employ the concept of limit: roughly, a function is continuous if all of its limits agree with the values of the function. The concept of limit also appears in the definition of the derivative: in the calculus of one variable, this is the limiting value of the slope of secant lines to the graph of a function.
== History ==
Although implicit in the development of calculus of the 17th and 18th centuries, the modern idea of the limit of a function goes back to Bernard Bolzano who, in 1817, introduced the basics of the epsilon-delta technique (see (ε, δ)-definition of limit below) to define continuous functions. However, his work was not known during his lifetime. Bruce Pourciau argues that Isaac Newton, in his 1687 Principia, demonstrates a more sophisticated understanding of limits than he is generally given credit for, including being the first to present an epsilon argument.
In his 1821 book Cours d'analyse, Augustin-Louis Cauchy discussed variable quantities, infinitesimals and limits, and defined continuity of
{\displaystyle y=f(x)}
by saying that an infinitesimal change in x necessarily produces an infinitesimal change in y, while Grabiner claims that he used a rigorous epsilon-delta definition in proofs. In 1861, Karl Weierstrass first introduced the epsilon-delta definition of limit in the form it is usually written today. He also introduced the notations
{\textstyle \lim }
and
{\textstyle \lim _{x\to x_{0}}}.
The modern notation of placing the arrow below the limit symbol is due to G. H. Hardy, who introduced it in his book A Course of Pure Mathematics in 1908.
== Motivation ==
Imagine a person walking on a landscape represented by the graph y = f(x). Their horizontal position is given by x, much like the position given by a map of the land or by a global positioning system. Their altitude is given by the coordinate y. Suppose they walk towards a position x = p; as they get closer and closer to this point, they will notice that their altitude approaches a specific value L. If asked about the altitude corresponding to x = p, they would reply by saying y = L.
What, then, does it mean to say, their altitude is approaching L? It means that their altitude gets nearer and nearer to L—except for a possible small error in accuracy. For example, suppose we set a particular accuracy goal for our traveler: they must get within ten meters of L. They report back that indeed, they can get within ten vertical meters of L, arguing that as long as they are within fifty horizontal meters of p, their altitude is always within ten meters of L.
The accuracy goal is then changed: can they get within one vertical meter? Yes, supposing that they are able to move within five horizontal meters of p, their altitude will always remain within one meter from the target altitude L. Summarizing, the traveler's altitude approaches L as their horizontal position approaches p: for every target accuracy goal, however small it may be, there is some neighbourhood of p such that the altitudes corresponding to all (not just some) horizontal positions in that neighbourhood, except possibly the horizontal position p itself, fulfill that accuracy goal.
The initial informal statement can now be explicated:
In fact, this explicit statement is quite close to the formal definition of the limit of a function, with values in a topological space.
More specifically, to say that
{\displaystyle \lim _{x\to p}f(x)=L,}
is to say that f(x) can be made as close to L as desired, by making x close enough, but not equal, to p.
The following definitions, known as (ε, δ)-definitions, are the generally accepted definitions for the limit of a function in various contexts.
== Functions of a single variable ==
=== (ε, δ)-definition of limit ===
Suppose
{\displaystyle f:\mathbb {R} \rightarrow \mathbb {R} }
is a function defined on the real line, and there are two real numbers p and L. One would say that the limit of f of x, as x approaches p, exists and equals L, and write,
{\displaystyle \lim _{x\to p}f(x)=L,}
or alternatively, say f(x) tends to L as x tends to p, and write,
{\displaystyle f(x)\to L{\text{ as }}x\to p,}
if the following property holds: for every real ε > 0, there exists a real δ > 0 such that for all real x, 0 < |x − p| < δ implies |f(x) − L| < ε. Symbolically,
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in \mathbb {R} )\,(0<|x-p|<\delta \implies |f(x)-L|<\varepsilon ).}
For example, we may say
{\displaystyle \lim _{x\to 2}(4x+1)=9}
because for every real ε > 0, we can take δ = ε/4, so that for all real x, if 0 < |x − 2| < δ, then |4x + 1 − 9| < ε.
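The choice δ = ε/4 above can be sanity-checked numerically. The following sketch (an illustration, not a proof) samples points x with 0 < |x − 2| < δ and confirms that |f(x) − 9| < ε holds at every sample:

```python
def f(x):
    return 4 * x + 1

def check(eps, samples=1000):
    """Sample points with 0 < |x - 2| < delta and test |f(x) - 9| < eps."""
    delta = eps / 4
    xs = [2 + delta * (i / samples) * s for i in range(1, samples) for s in (-1, 1)]
    return all(abs(f(x) - 9) < eps for x in xs)

print(all(check(e) for e in (1.0, 0.1, 0.001)))  # True
```

Since |f(x) − 9| = 4|x − 2|, any sample with |x − 2| < ε/4 lands strictly within ε of 9, which is exactly what the ε-δ argument asserts.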
A more general definition applies for functions defined on subsets of the real line. Let S be a subset of
{\displaystyle \mathbb {R} .}
Let
{\displaystyle f:S\to \mathbb {R} }
be a real-valued function. Let p be a point such that there exists some open interval (a, b) containing p with
{\displaystyle (a,p)\cup (p,b)\subset S.}
It is then said that the limit of f as x approaches p is L, if:
Or, symbolically:
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in (a,b))\,(0<|x-p|<\delta \implies |f(x)-L|<\varepsilon ).}
For example, we may say
{\displaystyle \lim _{x\to 1}{\sqrt {x+3}}=2}
because for every real ε > 0, we can take δ = ε, so that for all real x ≥ −3, if 0 < |x − 1| < δ, then |f(x) − 2| < ε. In this example, S = [−3, ∞) contains open intervals around the point 1 (for example, the interval (0, 2)).
Here, note that the value of the limit does not depend on f being defined at p, nor on the value f(p)—if it is defined. For example, let
{\displaystyle f:[0,1)\cup (1,2]\to \mathbb {R} ,f(x)={\tfrac {2x^{2}-x-1}{x-1}}.}
{\displaystyle \lim _{x\to 1}f(x)=3}
because for every ε > 0, we can take δ = ε/2, so that for all real x ≠ 1, if 0 < |x − 1| < δ, then |f(x) − 3| < ε. Note that here f(1) is undefined.
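The removable singularity can be probed numerically. Since f(x) = 2x + 1 away from x = 1, values on either side of 1 approach 3 even though f(1) itself is undefined; a minimal sketch:

```python
def f(x):
    # Undefined at x = 1 (division by zero); equals 2x + 1 elsewhere.
    return (2 * x**2 - x - 1) / (x - 1)

# Approach 1 from both sides: |f(1 +/- h) - 3| = 2h, matching delta = eps/2.
for h in (1e-1, 1e-3, 1e-6):
    assert abs(f(1 + h) - 3) < 3 * h
    assert abs(f(1 - h) - 3) < 3 * h
print("ok")
```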
In fact, a limit can exist in
{\displaystyle \{p\in \mathbb {R} \,|\,\exists (a,b)\subset \mathbb {R} :\,p\in (a,b){\text{ and }}(a,p)\cup (p,b)\subset S\},}
which equals
{\displaystyle \operatorname {int} S\cup \operatorname {iso} S^{c},}
where int S is the interior of S, and iso Sc are the isolated points of the complement of S. In our previous example where
{\displaystyle S=[0,1)\cup (1,2],}
{\displaystyle \operatorname {int} S=(0,1)\cup (1,2),}
{\displaystyle \operatorname {iso} S^{c}=\{1\}.}
We see, specifically, this definition of limit allows a limit to exist at 1, but not 0 or 2.
The letters ε and δ can be understood as "error" and "distance". In fact, Cauchy used ε as an abbreviation for "error" in some of his work, though in his definition of continuity, he used an infinitesimal
{\displaystyle \alpha }
rather than either ε or δ (see Cours d'Analyse). In these terms, the error (ε) in the measurement of the value at the limit can be made as small as desired, by reducing the distance (δ) to the limit point. As discussed below, this definition also works for functions in a more general context. The idea that δ and ε represent distances helps suggest these generalizations.
=== Existence and one-sided limits ===
Alternatively, x may approach p from above (right) or below (left), in which case the limits may be written as
{\displaystyle \lim _{x\to p^{+}}f(x)=L}
or
{\displaystyle \lim _{x\to p^{-}}f(x)=L}
respectively. If these limits exist at p and are equal there, then this can be referred to as the limit of f(x) at p. If the one-sided limits exist at p, but are unequal, then there is no limit at p (i.e., the limit at p does not exist). If either one-sided limit does not exist at p, then the limit at p also does not exist.
A formal definition is as follows. The limit of f as x approaches p from above is L if:
For every ε > 0, there exists a δ > 0 such that whenever 0 < x − p < δ, we have |f(x) − L| < ε.
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in (a,b))\,(0<x-p<\delta \implies |f(x)-L|<\varepsilon ).}
The limit of f as x approaches p from below is L if:
For every ε > 0, there exists a δ > 0 such that whenever 0 < p − x < δ, we have |f(x) − L| < ε.
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in (a,b))\,(0<p-x<\delta \implies |f(x)-L|<\varepsilon ).}
If the limit does not exist, then the oscillation of f at p is non-zero.
=== More general definition using limit points and subsets ===
Limits can also be defined by approaching from subsets of the domain.
In general: Let
{\displaystyle f:S\to \mathbb {R} }
be a real-valued function defined on some
{\displaystyle S\subseteq \mathbb {R} .}
Let p be a limit point of some
{\displaystyle T\subset S}
—that is, p is the limit of some sequence of elements of T distinct from p. Then we say the limit of f, as x approaches p from values in T, is L, written
{\displaystyle \lim _{{x\to p} \atop {x\in T}}f(x)=L}
if the following holds:
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in T)\,(0<|x-p|<\delta \implies |f(x)-L|<\varepsilon ).}
Note, T can be any subset of S, the domain of f. And the limit might depend on the selection of T. This generalization includes as special cases limits on an interval, as well as left-handed limits of real-valued functions (e.g., by taking T to be an open interval of the form (–∞, a)), and right-handed limits (e.g., by taking T to be an open interval of the form (a, ∞)). It also extends the notion of one-sided limits to the included endpoints of (half-)closed intervals, so the square root function
{\displaystyle f(x)={\sqrt {x}}}
can have limit 0 as x approaches 0 from above:
{\displaystyle \lim _{{x\to 0} \atop {x\in [0,\infty )}}{\sqrt {x}}=0}
since for every ε > 0, we may take δ = ε² such that for all x ≥ 0, if 0 < |x − 0| < δ, then |f(x) − 0| < ε.
This definition allows a limit to be defined at limit points of the domain S, if a suitable subset T which has the same limit point is chosen.
Notably, the previous two-sided definition works on
{\displaystyle \operatorname {int} S\cup \operatorname {iso} S^{c},}
which is a subset of the limit points of S.
For example, let
{\displaystyle S=[0,1)\cup (1,2].}
The previous two-sided definition would work at
{\displaystyle 1\in \operatorname {iso} S^{c}=\{1\},}
but it wouldn't work at 0 or 2, which are limit points of S.
=== Deleted versus non-deleted limits ===
The definition of limit given here does not depend on how (or whether) f is defined at p. Bartle refers to this as a deleted limit, because it excludes the value of f at p. The corresponding non-deleted limit does depend on the value of f at p, if p is in the domain of f. Let
{\displaystyle f:S\to \mathbb {R} }
be a real-valued function. The non-deleted limit of f, as x approaches p, is L if
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(|x-p|<\delta \implies |f(x)-L|<\varepsilon ).}
The definition is the same, except that the neighborhood |x − p| < δ now includes the point p, in contrast to the deleted neighborhood 0 < |x − p| < δ. This makes the definition of a non-deleted limit less general. One of the advantages of working with non-deleted limits is that they allow one to state the theorem about limits of compositions without any constraints on the functions (other than the existence of their non-deleted limits).
Bartle notes that although by "limit" some authors do mean this non-deleted limit, deleted limits are the most popular.
=== Examples ===
==== Non-existence of one-sided limit(s) ====
The function
{\displaystyle f(x)={\begin{cases}\sin {\frac {5}{x-1}}&{\text{ for }}x<1\\0&{\text{ for }}x=1\\[2pt]{\frac {1}{10x-10}}&{\text{ for }}x>1\end{cases}}}
has no limit at x0 = 1 (the left-hand limit does not exist due to the oscillatory nature of the sine function, and the right-hand limit does not exist due to the asymptotic behaviour of the reciprocal function, see picture), but has a limit at every other x-coordinate.
The function
{\displaystyle f(x)={\begin{cases}1&x{\text{ rational }}\\0&x{\text{ irrational }}\end{cases}}}
(a.k.a., the Dirichlet function) has no limit at any x-coordinate.
==== Non-equality of one-sided limits ====
The function
{\displaystyle f(x)={\begin{cases}1&{\text{ for }}x<0\\2&{\text{ for }}x\geq 0\end{cases}}}
has a limit at every non-zero x-coordinate (the limit equals 1 for negative x and equals 2 for positive x). The limit at x = 0 does not exist (the left-hand limit equals 1, whereas the right-hand limit equals 2).
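The disagreement of the two one-sided limits at 0 is easy to see numerically. The following sketch samples the step function on both sides of 0; the left-hand samples stay at 1 and the right-hand samples at 2, so no single value L can satisfy the two-sided definition:

```python
def f(x):
    # Step function: 1 for x < 0, 2 for x >= 0.
    return 1 if x < 0 else 2

left  = [f(-h) for h in (1e-1, 1e-4, 1e-8)]   # approach 0 from below
right = [f(+h) for h in (1e-1, 1e-4, 1e-8)]   # approach 0 from above
print(left, right)  # [1, 1, 1] [2, 2, 2]
```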
==== Limits at only one point ====
The functions
{\displaystyle f(x)={\begin{cases}x&x{\text{ rational }}\\0&x{\text{ irrational }}\end{cases}}}
and
{\displaystyle f(x)={\begin{cases}|x|&x{\text{ rational }}\\0&x{\text{ irrational }}\end{cases}}}
both have a limit at x = 0 and it equals 0.
==== Limits at countably many points ====
The function
{\displaystyle f(x)={\begin{cases}\sin x&x{\text{ irrational }}\\1&x{\text{ rational }}\end{cases}}}
has a limit at any x-coordinate of the form
{\displaystyle {\tfrac {\pi }{2}}+2n\pi ,}
where n is any integer.
== Limits involving infinity ==
=== Limits at infinity ===
Let
{\displaystyle f:S\to \mathbb {R} }
be a function defined on
{\displaystyle S\subseteq \mathbb {R} .}
The limit of f as x approaches infinity is L, denoted
{\displaystyle \lim _{x\to \infty }f(x)=L,}
means that:
{\displaystyle (\forall \varepsilon >0)\,(\exists c>0)\,(\forall x\in S)\,(x>c\implies |f(x)-L|<\varepsilon ).}
Similarly, the limit of f as x approaches minus infinity is L, denoted
{\displaystyle \lim _{x\to -\infty }f(x)=L,}
means that:
{\displaystyle (\forall \varepsilon >0)\,(\exists c>0)\,(\forall x\in S)\,(x<-c\implies |f(x)-L|<\varepsilon ).}
For example,
{\displaystyle \lim _{x\to \infty }\left(-{\frac {3\sin x}{x}}+4\right)=4}
because for every ε > 0, we can take c = 3/ε such that for all real x, if x > c, then |f(x) − 4| < ε.
Another example is that
{\displaystyle \lim _{x\to -\infty }e^{x}=0}
because for every ε > 0, we can take c = max{1, −ln(ε)} such that for all real x, if x < −c, then |f(x) − 0| < ε.
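The witness c = max{1, −ln(ε)} from the eˣ example can be checked directly: for any x below −c, the value eˣ falls below ε. A minimal numeric sketch:

```python
import math

# For lim_{x -> -inf} e^x = 0, the text picks c = max(1, -ln(eps)).
def c_for(eps):
    return max(1.0, -math.log(eps))

# Verify that x < -c forces e^x < eps, for several tolerances eps.
for eps in (0.5, 1e-3, 1e-9):
    c = c_for(eps)
    for x in (-c - 1e-6, -c - 1, -c - 50):
        assert math.exp(x) < eps
print("ok")
```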
=== Infinite limits ===
For a function whose values grow without bound, the function diverges and the usual limit does not exist. However, in this case one may introduce limits with infinite values.
Let
{\displaystyle f:S\to \mathbb {R} }
be a function defined on
{\displaystyle S\subseteq \mathbb {R} .}
The statement the limit of f as x approaches p is infinity, denoted
{\displaystyle \lim _{x\to p}f(x)=\infty ,}
means that:
{\displaystyle (\forall N>0)\,(\exists \delta >0)\,(\forall x\in S)\,(0<|x-p|<\delta \implies f(x)>N).}
The statement the limit of f as x approaches p is minus infinity, denoted
{\displaystyle \lim _{x\to p}f(x)=-\infty ,}
means that:
{\displaystyle (\forall N>0)\,(\exists \delta >0)\,(\forall x\in S)\,(0<|x-p|<\delta \implies f(x)<-N).}
For example,
{\displaystyle \lim _{x\to 1}{\frac {1}{(x-1)^{2}}}=\infty }
because for every N > 0, we can take
{\textstyle \delta ={\tfrac {1}{\sqrt {N}}}}
such that for all real x > 0, if 0 < x − 1 < δ, then f(x) > N.
These ideas can be used together to produce definitions for different combinations, such as
{\displaystyle \lim _{x\to \infty }f(x)=\infty ,}
or
{\displaystyle \lim _{x\to p^{+}}f(x)=-\infty .}
For example,
{\displaystyle \lim _{x\to 0^{+}}\ln x=-\infty }
because for every N > 0, we can take δ = e−N such that for all real x > 0, if 0 < x − 0 < δ, then f(x) < −N.
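The witness δ = e⁻ᴺ for the ln x example can likewise be checked: every x strictly between 0 and δ pushes ln x below −N. A small sketch:

```python
import math

# For lim_{x -> 0+} ln x = -inf, the text picks delta = e^(-N).
# Verify that 0 < x < delta forces ln x < -N, for several bounds N.
for N in (1.0, 10.0, 30.0):
    delta = math.exp(-N)
    for x in (delta / 2, delta / 10, delta * 0.999):
        assert 0 < x < delta
        assert math.log(x) < -N
print("ok")
```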
Limits involving infinity are connected with the concept of asymptotes.
These notions of a limit attempt to provide a metric space interpretation to limits at infinity. In fact, they are consistent with the topological space definition of limit if
a neighborhood of −∞ is defined to contain an interval [−∞, c) for some
{\displaystyle c\in \mathbb {R} ,}
a neighborhood of ∞ is defined to contain an interval (c, ∞] where
{\displaystyle c\in \mathbb {R} ,}
and
a neighborhood of
{\displaystyle a\in \mathbb {R} }
is defined in the usual way, using the metric on
{\displaystyle \mathbb {R} .}
In this case,
{\displaystyle {\overline {\mathbb {R} }}}
is a topological space and any function of the form
{\displaystyle f:X\to Y}
with
{\displaystyle X,Y\subseteq {\overline {\mathbb {R} }}}
is subject to the topological definition of a limit. Note that with this topological definition, it is easy to define infinite limits at finite points, which have not been defined above in the metric sense.
=== Alternative notation ===
Many authors allow for the projectively extended real line to be used as a way to include infinite values, in addition to the extended real line. With this notation, the extended real line is given as
{\displaystyle \mathbb {R} \cup \{-\infty ,+\infty \}}
and the projectively extended real line is
{\displaystyle \mathbb {R} \cup \{\infty \}}
where a neighborhood of ∞ is a set of the form
{\displaystyle \{x:|x|>c\}.}
The advantage is that one only needs three definitions for limits (left, right, and central) to cover all the cases.
As presented above, for a completely rigorous account, we would need to consider 15 separate cases for each combination of infinities (five directions: −∞, left, central, right, and +∞; three bounds: −∞, finite, or +∞). There are also noteworthy pitfalls. For example, when working with the extended real line,
{\displaystyle x^{-1}}
does not possess a central limit (which is normal):
{\displaystyle \lim _{x\to 0^{+}}{1 \over x}=+\infty ,\quad \lim _{x\to 0^{-}}{1 \over x}=-\infty .}
In contrast, when working with the projective real line, infinities (much like 0) are unsigned, so, the central limit does exist in that context:
{\displaystyle \lim _{x\to 0^{+}}{1 \over x}=\lim _{x\to 0^{-}}{1 \over x}=\lim _{x\to 0}{1 \over x}=\infty .}
In fact there are a plethora of conflicting formal systems in use.
In certain applications of numerical differentiation and integration, it is, for example, convenient to have signed zeroes.
A simple reason has to do with the converse of
{\displaystyle \lim _{x\to 0^{-}}{x^{-1}}=-\infty ,}
namely, it is convenient for
{\displaystyle \lim _{x\to -\infty }{x^{-1}}=-0}
to be considered true.
Such zeroes can be seen as an approximation to infinitesimals.
=== Limits at infinity for rational functions ===
There are three basic rules for evaluating limits at infinity for a rational function
{\displaystyle f(x)={\tfrac {p(x)}{q(x)}}}
(where p and q are polynomials):
If the degree of p is greater than the degree of q, then the limit is positive or negative infinity depending on the signs of the leading coefficients;
If the degree of p and q are equal, the limit is the leading coefficient of p divided by the leading coefficient of q;
If the degree of p is less than the degree of q, the limit is 0.
If the limit at infinity exists, it represents a horizontal asymptote at y = L. Polynomials do not have horizontal asymptotes; such asymptotes may however occur with rational functions.
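The three degree-comparison rules can be illustrated numerically by evaluating sample rational functions at a large x (the particular polynomials below are illustrative choices, not from the text):

```python
def ratio(p, q, x):
    """Evaluate p(x)/q(x), with p and q as coefficient lists, highest degree first."""
    ev = lambda c: sum(a * x**(len(c) - 1 - i) for i, a in enumerate(c))
    return ev(p) / ev(q)

big = 1e8
# deg p < deg q: (x + 2) / (3x^2 + 1) -> 0
assert abs(ratio([1, 2], [3, 0, 1], big)) < 1e-6
# deg p == deg q: (2x^2 + x) / (5x^2 - 4x + 7) -> 2/5, the leading-coefficient ratio
assert abs(ratio([2, 1, 0], [5, -4, 7], big) - 2 / 5) < 1e-6
# deg p > deg q: x^2 / x grows without bound
assert ratio([1, 0, 0], [1, 0], big) > 1e6
print("ok")
```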
== Functions of more than one variable ==
=== Ordinary limits ===
By noting that |x − p| represents a distance, the definition of a limit can be extended to functions of more than one variable. In the case of a function
{\displaystyle f:S\times T\to \mathbb {R} }
defined on
{\displaystyle S\times T\subseteq \mathbb {R} ^{2},}
we define the limit as follows: the limit of f as (x, y) approaches (p, q) is L, written
{\displaystyle \lim _{(x,y)\to (p,q)}f(x,y)=L}
if the following condition holds:
For every ε > 0, there exists a δ > 0 such that for all x in S and y in T, whenever
{\textstyle 0<{\sqrt {(x-p)^{2}+(y-q)^{2}}}<\delta ,}
we have |f(x, y) − L| < ε,
or formally:
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(\forall y\in T)\,(0<{\sqrt {(x-p)^{2}+(y-q)^{2}}}<\delta \implies |f(x,y)-L|<\varepsilon ).}
Here
{\textstyle {\sqrt {(x-p)^{2}+(y-q)^{2}}}}
is the Euclidean distance between (x, y) and (p, q). (This can in fact be replaced by any norm ||(x, y) − (p, q)||, and be extended to any number of variables.)
For example, we may say
{\displaystyle \lim _{(x,y)\to (0,0)}{\frac {x^{4}}{x^{2}+y^{2}}}=0}
because for every ε > 0, we can take
{\textstyle \delta ={\sqrt {\varepsilon }}}
such that for all real x ≠ 0 and real y ≠ 0, if
{\textstyle 0<{\sqrt {(x-0)^{2}+(y-0)^{2}}}<\delta ,}
then |f(x, y) − 0| < ε.
Similar to the case in single variable, the value of f at (p, q) does not matter in this definition of limit.
For such a multivariable limit to exist, this definition requires that the value of f approach L along every possible path approaching (p, q). In the above example, the function
{\displaystyle f(x,y)={\frac {x^{4}}{x^{2}+y^{2}}}}
satisfies this condition. This can be seen by considering the polar coordinates
{\displaystyle (x,y)=(r\cos \theta ,r\sin \theta )\to (0,0),}
which gives
{\displaystyle \lim _{r\to 0}f(r\cos \theta ,r\sin \theta )=\lim _{r\to 0}{\frac {r^{4}\cos ^{4}\theta }{r^{2}}}=\lim _{r\to 0}r^{2}\cos ^{4}\theta .}
Here θ = θ(r) is a function of r which controls the shape of the path along which f is approaching (p, q). Since cos θ is bounded between [−1, 1], by the sandwich theorem, this limit tends to 0.
In contrast, the function
{\displaystyle f(x,y)={\frac {xy}{x^{2}+y^{2}}}}
does not have a limit at (0, 0). Taking the path (x, y) = (t, 0) → (0, 0), we obtain
{\displaystyle \lim _{t\to 0}f(t,0)=\lim _{t\to 0}{\frac {0}{t^{2}}}=0,}
while taking the path (x, y) = (t, t) → (0, 0), we obtain
{\displaystyle \lim _{t\to 0}f(t,t)=\lim _{t\to 0}{\frac {t^{2}}{t^{2}+t^{2}}}={\frac {1}{2}}.}
Since the two values do not agree, f does not tend to a single value as (x, y) approaches (0, 0).
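The path dependence can be seen numerically by evaluating f along the two paths from the text, the x-axis and the diagonal. A minimal sketch:

```python
def f(x, y):
    return x * y / (x**2 + y**2)

# Along the x-axis (t, 0): f is identically 0.
along_axis     = [f(t, 0.0) for t in (1e-1, 1e-4, 1e-8)]
# Along the diagonal (t, t): f is identically 1/2.
along_diagonal = [f(t, t)   for t in (1e-1, 1e-4, 1e-8)]
print(along_axis)      # [0.0, 0.0, 0.0]
print(along_diagonal)  # [0.5, 0.5, 0.5]
```

Since the sampled values along the two paths disagree no matter how close we get to the origin, no single limit value exists there.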
=== Multiple limits ===
Although less commonly used, there is another type of limit for a multivariable function, known as the multiple limit. For a two-variable function, this is the double limit. Let
{\displaystyle f:S\times T\to \mathbb {R} }
be defined on
{\displaystyle S\times T\subseteq \mathbb {R} ^{2},}
we say the double limit of f as x approaches p and y approaches q is L, written
{\displaystyle \lim _{{x\to p} \atop {y\to q}}f(x,y)=L}
if the following condition holds:
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(\forall y\in T)\,((0<|x-p|<\delta )\land (0<|y-q|<\delta )\implies |f(x,y)-L|<\varepsilon ).}
For such a double limit to exist, this definition requires that the value of f approach L along every possible path approaching (p, q), excluding the two lines x = p and y = q. As a result, the multiple limit is a weaker notion than the ordinary limit: if the ordinary limit exists and equals L, then the multiple limit exists and also equals L. The converse is not true: the existence of the multiple limit does not imply the existence of the ordinary limit. Consider the example
{\displaystyle f(x,y)={\begin{cases}1\quad {\text{for}}\quad xy\neq 0\\0\quad {\text{for}}\quad xy=0\end{cases}}}
where
{\displaystyle \lim _{{x\to 0} \atop {y\to 0}}f(x,y)=1}
but
{\displaystyle \lim _{(x,y)\to (0,0)}f(x,y)}
does not exist.
If the domain of f is restricted to
{\displaystyle (S\setminus \{p\})\times (T\setminus \{q\}),}
then the two definitions of limits coincide.
=== Multiple limits at infinity ===
The concept of multiple limit can extend to the limit at infinity, in a way similar to that of a single variable function. For
{\displaystyle f:S\times T\to \mathbb {R} ,}
we say the double limit of f as x and y approach infinity is L, written
{\displaystyle \lim _{{x\to \infty } \atop {y\to \infty }}f(x,y)=L}
if the following condition holds:
{\displaystyle (\forall \varepsilon >0)\,(\exists c>0)\,(\forall x\in S)\,(\forall y\in T)\,((x>c)\land (y>c)\implies |f(x,y)-L|<\varepsilon ).}
We say the double limit of f as x and y approach minus infinity is L, written
{\displaystyle \lim _{{x\to -\infty } \atop {y\to -\infty }}f(x,y)=L}
if the following condition holds:
{\displaystyle (\forall \varepsilon >0)\,(\exists c>0)\,(\forall x\in S)\,(\forall y\in T)\,((x<-c)\land (y<-c)\implies |f(x,y)-L|<\varepsilon ).}
=== Pointwise limits and uniform limits ===
Let
{\displaystyle f:S\times T\to \mathbb {R} .}
Instead of taking limit as (x, y) → (p, q), we may consider taking the limit of just one variable, say, x → p, to obtain a single-variable function of y, namely
{\displaystyle g:T\to \mathbb {R} .}
In fact, this limiting process can be done in two distinct ways. The first one is called pointwise limit. We say the pointwise limit of f as x approaches p is g, denoted
{\displaystyle \lim _{x\to p}f(x,y)=g(y),}
or
{\displaystyle \lim _{x\to p}f(x,y)=g(y)\;\;{\text{pointwise}}.}
Alternatively, we may say f tends to g pointwise as x approaches p, denoted
{\displaystyle f(x,y)\to g(y)\;\;{\text{as}}\;\;x\to p,}
or
{\displaystyle f(x,y)\to g(y)\;\;{\text{pointwise}}\;\;{\text{as}}\;\;x\to p.}
This limit exists if the following holds:
{\displaystyle (\forall \varepsilon >0)\,(\forall y\in T)\,(\exists \delta >0)\,(\forall x\in S)\,(0<|x-p|<\delta \implies |f(x,y)-g(y)|<\varepsilon ).}
Here, δ = δ(ε, y) is a function of both ε and y. Each δ is chosen for a specific value of y. Hence we say the limit is pointwise in y. For example,
{\displaystyle f(x,y)={\frac {x}{\cos y}}}
has a pointwise limit of constant zero function
{\displaystyle \lim _{x\to 0}f(x,y)=0(y)\;\;{\text{pointwise}}}
because for every fixed y, the limit is clearly 0. This argument fails if y is not fixed: if y is very close to π/2, the value of the fraction may deviate from 0.
This leads to another definition of limit, namely the uniform limit. We say the uniform limit of f on T as x approaches p is g, denoted
{\displaystyle {\underset {{x\to p} \atop {y\in T}}{\mathrm {unif} \lim \;}}f(x,y)=g(y),}
or
{\displaystyle \lim _{x\to p}f(x,y)=g(y)\;\;{\text{uniformly on}}\;T.}
Alternatively, we may say f tends to g uniformly on T as x approaches p, denoted
{\displaystyle f(x,y)\rightrightarrows g(y)\;{\text{on}}\;T\;\;{\text{as}}\;\;x\to p,}
or
{\displaystyle f(x,y)\to g(y)\;\;{\text{uniformly on}}\;T\;\;{\text{as}}\;\;x\to p.}
This limit exists if the following holds:
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(\forall y\in T)\,(0<|x-p|<\delta \implies |f(x,y)-g(y)|<\varepsilon ).}
Here, δ = δ(ε) is a function of only ε but not y. In other words, δ is uniformly applicable to all y in T. Hence we say the limit is uniform in y. For example,
{\displaystyle f(x,y)=x\cos y}
has a uniform limit of constant zero function
{\displaystyle \lim _{x\to 0}f(x,y)=0(y)\;\;{\text{ uniformly on}}\;\mathbb {R} }
because for all real y, cos y is bounded between [−1, 1]. Hence no matter how y behaves, we may use the sandwich theorem to show that the limit is 0.
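The contrast between the two examples can be made concrete by estimating the supremum over y numerically (a rough grid-based sketch, not a proof): for x cos y the worst case over y is |x| and shrinks with x, while for x/cos y the worst case blows up near y = π/2.

```python
import math

# Sample grid of y values covering [-1.57, 1.57], approaching pi/2 from inside.
ys = [k * 0.01 for k in range(-157, 158)]

sup_f = lambda x: max(abs(x * math.cos(y)) for y in ys)  # f(x,y) = x*cos(y)
sup_g = lambda x: max(abs(x / math.cos(y)) for y in ys)  # g(x,y) = x/cos(y)

print(sup_f(1e-3) <= 1e-3)   # True: sup over y is |x|, a uniform bound
print(sup_g(1e-3) > 1.0)     # True: near y = pi/2 the values escape any bound
```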
=== Iterated limits ===
Let
{\displaystyle f:S\times T\to \mathbb {R} .}
We may consider taking the limit of just one variable, say, x → p, to obtain a single-variable function of y, namely
{\displaystyle g:T\to \mathbb {R} ,}
and then take limit in the other variable, namely y → q, to get a number L. Symbolically,
{\displaystyle \lim _{y\to q}\lim _{x\to p}f(x,y)=\lim _{y\to q}g(y)=L.}
This limit is known as iterated limit of the multivariable function. The order of taking limits may affect the result, i.e.,
{\displaystyle \lim _{y\to q}\lim _{x\to p}f(x,y)\neq \lim _{x\to p}\lim _{y\to q}f(x,y)}
in general.
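A classic illustration of order dependence (an example chosen here for illustration, not taken from the text) is f(x, y) = x²/(x² + y²) at (0, 0): taking x → 0 first gives 0, while taking y → 0 first gives 1. A numeric sketch:

```python
def f(x, y):
    return x**2 / (x**2 + y**2)

# Inner limit x -> 0 first (y fixed, nonzero): f -> 0, so the outer limit is 0.
inner_x = [f(1e-12, y) for y in (0.1, 0.01)]
# Inner limit y -> 0 first (x fixed, nonzero): f -> 1, so the outer limit is 1.
inner_y = [f(x, 1e-12) for x in (0.1, 0.01)]

print(all(v < 1e-6 for v in inner_x))      # True
print(all(v > 1 - 1e-6 for v in inner_y))  # True
```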
A sufficient condition of equality is given by the Moore-Osgood theorem, which requires the limit
{\displaystyle \lim _{x\to p}f(x,y)=g(y)}
to be uniform on T.
== Functions on metric spaces ==
Suppose M and N are subsets of metric spaces A and B, respectively, and f : M → N is defined between M and N, with x ∈ M, p a limit point of M, and L ∈ N. One says that the limit of f as x approaches p is L, and writes
{\displaystyle \lim _{x\to p}f(x)=L}
if the following property holds:
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in M)\,(0<d_{A}(x,p)<\delta \implies d_{B}(f(x),L)<\varepsilon ).}
Again, note that p need not be in the domain of f, nor does L need to be in the range of f, and even if f(p) is defined it need not be equal to L.
=== Euclidean metric ===
The limit in Euclidean space is a direct generalization of limits to vector-valued functions. For example, we may consider a function
{\displaystyle f:S\times T\to \mathbb {R} ^{3}}
such that
{\displaystyle f(x,y)=(f_{1}(x,y),f_{2}(x,y),f_{3}(x,y)).}
Then, under the usual Euclidean metric,
{\displaystyle \lim _{(x,y)\to (p,q)}f(x,y)=(L_{1},L_{2},L_{3})}
if the following holds:
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(\forall y\in T)\,\left(0<{\sqrt {(x-p)^{2}+(y-q)^{2}}}<\delta \implies {\sqrt {(f_{1}-L_{1})^{2}+(f_{2}-L_{2})^{2}+(f_{3}-L_{3})^{2}}}<\varepsilon \right).}
In this example, the function concerned is a finite-dimensional vector-valued function. In this case, the limit theorem for vector-valued functions states that if the limit of each component exists, then the limit of the vector-valued function equals the vector of the componentwise limits:
{\displaystyle \lim _{(x,y)\to (p,q)}{\Bigl (}f_{1}(x,y),f_{2}(x,y),f_{3}(x,y){\Bigr )}=\left(\lim _{(x,y)\to (p,q)}f_{1}(x,y),\lim _{(x,y)\to (p,q)}f_{2}(x,y),\lim _{(x,y)\to (p,q)}f_{3}(x,y)\right).}
=== Manhattan metric ===
One might also want to consider spaces other than Euclidean space. An example would be the Manhattan space. Consider
{\displaystyle f:S\to \mathbb {R} ^{2}}
such that
{\displaystyle f(x)=(f_{1}(x),f_{2}(x)).}
Then, under the Manhattan metric,
{\displaystyle \lim _{x\to p}f(x)=(L_{1},L_{2})}
if the following holds:
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(0<|x-p|<\delta \implies |f_{1}-L_{1}|+|f_{2}-L_{2}|<\varepsilon ).}
Since this is also a finite-dimensional vector-valued function, the limit theorem stated above also applies.
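A similar numeric sketch works under the Manhattan metric; the function f(x) = (x², 2x) with x → 3 is a made-up example for illustration.

```python
# Hypothetical example under the Manhattan (taxicab) metric:
# f(x) = (x**2, 2*x) with x -> 3, so (L1, L2) = (9, 6).
def f(x):
    return (x ** 2, 2 * x)

def manhattan(u, v):
    # Manhattan metric: sum of absolute coordinate differences
    return sum(abs(a - b) for a, b in zip(u, v))

L = (9.0, 6.0)
for k in range(1, 6):
    x = 3 + 10 ** -k
    print(k, manhattan(f(x), L))
```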
=== Uniform metric ===
Finally, we will discuss the limit in a function space, which is infinite-dimensional. Consider a function f(x, y) in the function space
{\displaystyle S\times T\to \mathbb {R} .}
We want to find out how, as x approaches p, f(x, y) will tend to another function g(y), which is in the function space
{\displaystyle T\to \mathbb {R} .}
The "closeness" in this function space may be measured under the uniform metric. Then, we will say the uniform limit of f on T as x approaches p is g and write
{\displaystyle {\underset {{x\to p} \atop {y\in T}}{\mathrm {unif} \lim \;}}f(x,y)=g(y),}
or
{\displaystyle \lim _{x\to p}f(x)=g(y)\;\;{\text{uniformly on}}\;T,}
if the following holds:
{\displaystyle (\forall \varepsilon >0)\,(\exists \delta >0)\,(\forall x\in S)\,(0<|x-p|<\delta \implies \sup _{y\in T}|f(x,y)-g(y)|<\varepsilon ).}
In fact, one can see that this definition is equivalent to that of the uniform limit of a multivariable function introduced in the previous section.
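As a rough numeric sketch (the function f(x, y) = xy and T = [0, 1] are illustrative choices, not from the text), one can approximate the supremum over T on a grid and watch it shrink as x → 0:

```python
# Sketch: f(x, y) = x*y on T = [0, 1]; as x -> 0, f(x, .) -> g(y) = 0 uniformly,
# since sup over y in T of |x*y| equals |x|. We approximate the sup on a grid.
def f(x, y):
    return x * y

def g(y):
    return 0.0

T = [i / 1000 for i in range(1001)]  # discretized T = [0, 1]

def sup_dist(x):
    # grid approximation of sup_{y in T} |f(x, y) - g(y)|
    return max(abs(f(x, y) - g(y)) for y in T)

for x in (0.1, 0.01, 0.001):
    print(x, sup_dist(x))
```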
== Functions on topological spaces ==
Suppose {\displaystyle X} and {\displaystyle Y} are topological spaces with {\displaystyle Y} a Hausdorff space. Let {\displaystyle p} be a limit point of {\displaystyle \Omega \subseteq X}, and {\displaystyle L\in Y}. For a function {\displaystyle f:\Omega \to Y}, it is said that the limit of {\displaystyle f} as {\displaystyle x} approaches {\displaystyle p} is {\displaystyle L}, written
{\displaystyle \lim _{x\to p}f(x)=L,}
if the following property holds:
for every open neighborhood {\displaystyle V} of {\displaystyle L}, there exists an open neighborhood {\displaystyle U} of {\displaystyle p} such that {\displaystyle f(U\cap \Omega -\{p\})\subseteq V}.
This last part of the definition can also be phrased as "there exists an open punctured neighbourhood {\displaystyle U} of {\displaystyle p} such that {\displaystyle f(U\cap \Omega )\subseteq V}".
The domain of {\displaystyle f} does not need to contain {\displaystyle p}. If it does, then the value of {\displaystyle f} at {\displaystyle p} is irrelevant to the definition of the limit. In particular, if the domain of {\displaystyle f} is {\displaystyle X\setminus \{p\}} (or all of {\displaystyle X}), then the limit of {\displaystyle f} as {\displaystyle x\to p} exists and is equal to L if, for all subsets Ω of X with limit point {\displaystyle p}, the limit of the restriction of {\displaystyle f} to Ω exists and is equal to L. Sometimes this criterion is used to establish the non-existence of the two-sided limit of a function on {\displaystyle \mathbb {R} } by showing that the one-sided limits either fail to exist or do not agree. Such a view is fundamental in the field of general topology, where limits and continuity at a point are defined in terms of special families of subsets, called filters, or generalized sequences known as nets.
Alternatively, the requirement that {\displaystyle Y} be a Hausdorff space can be relaxed to the assumption that {\displaystyle Y} be a general topological space, but then the limit of a function may not be unique. In particular, one can no longer talk about the limit of a function at a point, but rather a limit or the set of limits at a point.
A function is continuous at a point {\displaystyle p} that is both in its domain and a limit point of it if and only if {\displaystyle f(p)} is the (or, in the general case, a) limit of {\displaystyle f(x)} as {\displaystyle x} tends to {\displaystyle p}.
There is another type of limit of a function, namely the sequential limit. Let {\displaystyle f:X\to Y} be a mapping from a topological space X into a Hausdorff space Y, {\displaystyle p\in X} a limit point of X and L ∈ Y. The sequential limit of {\displaystyle f} as {\displaystyle x} tends to {\displaystyle p} is L if: for every sequence {\displaystyle (x_{n})} in {\displaystyle X\setminus \{p\}} that converges to {\displaystyle p}, the sequence {\displaystyle f(x_{n})} converges to L.
If L is the limit (in the sense above) of {\displaystyle f} as {\displaystyle x} approaches {\displaystyle p}, then it is a sequential limit as well; however, the converse need not hold in general. If in addition X is metrizable, then L is the sequential limit of {\displaystyle f} as {\displaystyle x} approaches {\displaystyle p} if and only if it is the limit (in the sense above) of {\displaystyle f} as {\displaystyle x} approaches {\displaystyle p}.
== Other characterizations ==
=== In terms of sequences ===
For functions on the real line, one way to define the limit of a function is in terms of the limit of sequences. (This definition is usually attributed to Eduard Heine.) In this setting:
{\displaystyle \lim _{x\to a}f(x)=L}
if, and only if, for all sequences xn (with, for all n, xn not equal to a) converging to a, the sequence f(xn) converges to L. It was shown by Sierpiński in 1916 that proving the equivalence of this definition and the definition above requires, and is equivalent to, a weak form of the axiom of choice. Note that defining what it means for a sequence xn to converge to a requires the epsilon–delta method.
Similarly as it was the case of Weierstrass's definition, a more general Heine definition applies to functions defined on subsets of the real line. Let f be a real-valued function with the domain Dm(f ). Let a be the limit of a sequence of elements of Dm(f ) \ {a}. Then the limit (in this sense) of f is L as x approaches a
if for every sequence xn ∈ Dm(f ) \ {a} (so that for all n, xn is not equal to a) that converges to a, the sequence f(xn) converges to L. This is the same as the definition of a sequential limit in the preceding section obtained by regarding the subset Dm(f ) of
{\displaystyle \mathbb {R} }
as a metric space with the induced metric.
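The sequential (Heine) criterion gives a convenient way to show that a limit does not exist. A standard illustration (not spelled out in the text) is f(x) = sin(1/x) at 0, where two sequences tending to 0 produce different limits of f(xn):

```python
import math

# Heine criterion sketch: if two sequences x_n -> 0 give f(x_n) with different
# limits, then lim_{x->0} f(x) does not exist. Classic example: f(x) = sin(1/x).
def f(x):
    return math.sin(1.0 / x)

# x_n = 1/(n*pi)          -> f(x_n) = sin(n*pi)          = 0
# x_n = 1/(2n*pi + pi/2)  -> f(x_n) = sin(2n*pi + pi/2)  = 1
a_vals = [f(1.0 / (n * math.pi)) for n in range(1, 6)]
b_vals = [f(1.0 / (2 * n * math.pi + math.pi / 2)) for n in range(1, 6)]
print(a_vals)  # values near 0
print(b_vals)  # values near 1
```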
=== In non-standard calculus ===
In non-standard calculus the limit of a function is defined by:
{\displaystyle \lim _{x\to a}f(x)=L}
if and only if for all {\displaystyle x\in \mathbb {R} ^{*},} {\displaystyle f^{*}(x)-L} is infinitesimal whenever x − a is infinitesimal. Here {\displaystyle \mathbb {R} ^{*}}
are the hyperreal numbers and f* is the natural extension of f to the non-standard real numbers. Keisler proved that such a hyperreal definition of limit reduces the quantifier complexity by two quantifiers. On the other hand, Hrbacek writes that for the definitions to be valid for all hyperreal numbers they must implicitly be grounded in the ε-δ method, and claims that, from the pedagogical point of view, the hope that non-standard calculus could be done without ε-δ methods cannot be realized in full.
Błaszczyk et al. detail the usefulness of microcontinuity in developing a transparent definition of uniform continuity, and characterize Hrbacek's criticism as a "dubious lament".
=== In terms of nearness ===
At the 1908 International Congress of Mathematicians, F. Riesz introduced an alternative way of defining limits and continuity using a concept called "nearness". A point x is defined to be near a set {\displaystyle A\subseteq \mathbb {R} } if for every r > 0 there is a point a ∈ A so that |x − a| < r. In this setting the
{\displaystyle \lim _{x\to a}f(x)=L}
if and only if for all {\displaystyle A\subseteq \mathbb {R} ,} L is near f(A) whenever a is near A. Here f(A) is the set {\displaystyle \{f(x)|x\in A\}.}
This definition can also be extended to metric and topological spaces.
== Relationship to continuity ==
The notion of the limit of a function is very closely related to the concept of continuity. A function f is said to be continuous at c if it is both defined at c and its value at c equals the limit of f as x approaches c:
{\displaystyle \lim _{x\to c}f(x)=f(c).}
We have here assumed that c is a limit point of the domain of f.
== Properties ==
If a function f is real-valued, then the limit of f at p is L if and only if both the right-handed limit and left-handed limit of f at p exist and are equal to L.
The function f is continuous at p if and only if the limit of f(x) as x approaches p exists and is equal to f(p). If f : M → N is a function between metric spaces M and N, then this is equivalent to the condition that f transforms every sequence in M which converges to p into a sequence in N which converges to f(p).
If N is a normed vector space, then the limit operation is linear in the following sense: if the limit of f(x) as x approaches p is L and the limit of g(x) as x approaches p is P, then the limit of f(x) + g(x) as x approaches p is L + P. If a is a scalar from the base field, then the limit of af(x) as x approaches p is aL.
If f and g are real-valued (or complex-valued) functions, then taking the limit of an operation on f(x) and g(x) (e.g., f + g, f − g, f × g, f / g, f g) under certain conditions is compatible with the operation of limits of f(x) and g(x). This fact is often called the algebraic limit theorem. The main condition needed to apply the following rules is that the limits on the right-hand sides of the equations exist (in other words, these limits are finite values including 0). Additionally, the identity for division requires that the denominator on the right-hand side is non-zero (division by 0 is not defined), and the identity for exponentiation requires that the base is positive, or zero while the exponent is positive (finite).
{\displaystyle {\begin{array}{lcl}\displaystyle \lim _{x\to p}(f(x)+g(x))&=&\displaystyle \lim _{x\to p}f(x)+\lim _{x\to p}g(x)\\\displaystyle \lim _{x\to p}(f(x)-g(x))&=&\displaystyle \lim _{x\to p}f(x)-\lim _{x\to p}g(x)\\\displaystyle \lim _{x\to p}(f(x)\cdot g(x))&=&\displaystyle \lim _{x\to p}f(x)\cdot \lim _{x\to p}g(x)\\\displaystyle \lim _{x\to p}(f(x)/g(x))&=&\displaystyle {\lim _{x\to p}f(x)/\lim _{x\to p}g(x)}\\\displaystyle \lim _{x\to p}f(x)^{g(x)}&=&\displaystyle {\lim _{x\to p}f(x)^{\lim _{x\to p}g(x)}}\end{array}}}
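These identities can be spot-checked numerically. The functions below (f(x) = x², g(x) = 3x, p = 2) are hypothetical choices for illustration:

```python
# Numeric sketch of the algebraic limit theorem with f(x) = x**2, g(x) = 3*x, p = 2:
# lim f = 4 and lim g = 6, so the sum -> 10, the product -> 24, the quotient -> 2/3.
def f(x):
    return x ** 2

def g(x):
    return 3 * x

p = 2.0
x = p + 1e-7  # a point close to p (but not equal to it)
print(f(x) + g(x))   # close to 10
print(f(x) * g(x))   # close to 24
print(f(x) / g(x))   # close to 2/3
```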
These rules are also valid for one-sided limits, including when p is ∞ or −∞. In each rule above, when one of the limits on the right is ∞ or −∞, the limit on the left may sometimes still be determined by the following rules.
{\displaystyle {\begin{array}{rcl}q+\infty &=&\infty {\text{ if }}q\neq -\infty \\[8pt]q\times \infty &=&{\begin{cases}\infty &{\text{if }}q>0\\-\infty &{\text{if }}q<0\end{cases}}\\[6pt]\displaystyle {\frac {q}{\infty }}&=&0{\text{ if }}q\neq \infty {\text{ and }}q\neq -\infty \\[6pt]\infty ^{q}&=&{\begin{cases}0&{\text{if }}q<0\\\infty &{\text{if }}q>0\end{cases}}\\[4pt]q^{\infty }&=&{\begin{cases}0&{\text{if }}0<q<1\\\infty &{\text{if }}q>1\end{cases}}\\[4pt]q^{-\infty }&=&{\begin{cases}\infty &{\text{if }}0<q<1\\0&{\text{if }}q>1\end{cases}}\end{array}}}
(see also Extended real number line).
In other cases the limit on the left may still exist, although the right-hand side, called an indeterminate form, does not allow one to determine the result. This depends on the functions f and g. These indeterminate forms are:
{\displaystyle {\begin{array}{cc}\displaystyle {\frac {0}{0}}&\displaystyle {\frac {\pm \infty }{\pm \infty }}\\[6pt]0\times \pm \infty &\infty +-\infty \\[8pt]\qquad 0^{0}\qquad &\qquad \infty ^{0}\qquad \\[8pt]1^{\pm \infty }\end{array}}}
See further L'Hôpital's rule below and Indeterminate form.
=== Limits of compositions of functions ===
In general, from knowing that
{\displaystyle \lim _{y\to b}f(y)=c}
and
{\displaystyle \lim _{x\to a}g(x)=b,}
it does not follow that
{\displaystyle \lim _{x\to a}f(g(x))=c.}
However, this "chain rule" does hold if one of the following additional conditions holds:
f(b) = c (that is, f is continuous at b), or
g does not take the value b near a (that is, there exists a δ > 0 such that if 0 < |x − a| < δ then |g(x) − b| > 0).
As an example of this phenomenon, consider the following function that violates both additional restrictions:
{\displaystyle f(x)=g(x)={\begin{cases}0&{\text{if }}x\neq 0\\1&{\text{if }}x=0\end{cases}}}
Since f has a removable discontinuity at 0,
{\displaystyle \lim _{x\to a}f(x)=0}
for all a.
Thus, the naïve chain rule would suggest that the limit of f(f(x)) is 0. However, it is the case that
{\displaystyle f(f(x))={\begin{cases}1&{\text{if }}x\neq 0\\0&{\text{if }}x=0\end{cases}}}
and so
{\displaystyle \lim _{x\to a}f(f(x))=1}
for all a.
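The counterexample can be checked directly in code (a direct transcription of the piecewise definition above):

```python
# The counterexample from the text: f(x) = 0 for x != 0, and f(0) = 1.
def f(x):
    return 0 if x != 0 else 1

# For x near (but not equal to) a, f(x) = 0, so lim_{x->a} f(x) = 0 for every a.
# Yet f(f(x)) = f(0) = 1 for every x != 0, so lim_{x->a} f(f(x)) = 1, not 0.
samples = [1e-3, -1e-3, 1e-6, 2.5]
print([f(x) for x in samples])      # all 0
print([f(f(x)) for x in samples])   # all 1
```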
=== Limits of special interest ===
==== Rational functions ====
For n a nonnegative integer and constants
{\displaystyle a_{1},a_{2},a_{3},\ldots ,a_{n}}
and
{\displaystyle b_{1},b_{2},b_{3},\ldots ,b_{n},}
{\displaystyle \lim _{x\to \infty }{\frac {a_{1}x^{n}+a_{2}x^{n-1}+a_{3}x^{n-2}+\dots +a_{n}}{b_{1}x^{n}+b_{2}x^{n-1}+b_{3}x^{n-2}+\dots +b_{n}}}={\frac {a_{1}}{b_{1}}}}
This can be proven by dividing both the numerator and denominator by xn. If the numerator is a polynomial of higher degree, the limit does not exist. If the denominator is of higher degree, the limit is 0.
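A quick numeric check of this rule with the arbitrary example (3x² + 5x + 1)/(4x² − x + 7), whose limit at infinity is a₁/b₁ = 3/4:

```python
# Leading-coefficient rule sketch: equal degrees in numerator and denominator,
# so the limit at infinity is the ratio of the leading coefficients, 3/4.
def r(x):
    return (3 * x**2 + 5 * x + 1) / (4 * x**2 - x + 7)

for x in (1e2, 1e4, 1e6):
    print(x, r(x))  # approaches 0.75
```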
==== Trigonometric functions ====
{\displaystyle {\begin{array}{lcl}\displaystyle \lim _{x\to 0}{\frac {\sin x}{x}}&=&1\\[4pt]\displaystyle \lim _{x\to 0}{\frac {1-\cos x}{x}}&=&0\end{array}}}
==== Exponential functions ====
{\displaystyle {\begin{array}{lcl}\displaystyle \lim _{x\to 0}(1+x)^{\frac {1}{x}}&=&\displaystyle \lim _{r\to \infty }\left(1+{\frac {1}{r}}\right)^{r}=e\\[4pt]\displaystyle \lim _{x\to 0}{\frac {e^{x}-1}{x}}&=&1\\[4pt]\displaystyle \lim _{x\to 0}{\frac {e^{ax}-1}{bx}}&=&\displaystyle {\frac {a}{b}}\\[4pt]\displaystyle \lim _{x\to 0}{\frac {c^{ax}-1}{bx}}&=&\displaystyle {\frac {a}{b}}\ln c\\[4pt]\displaystyle \lim _{x\to 0^{+}}x^{x}&=&1\end{array}}}
==== Logarithmic functions ====
{\displaystyle {\begin{array}{lcl}\displaystyle \lim _{x\to 0}{\frac {\ln(1+x)}{x}}&=&1\\[4pt]\displaystyle \lim _{x\to 0}{\frac {\ln(1+ax)}{bx}}&=&\displaystyle {\frac {a}{b}}\\[4pt]\displaystyle \lim _{x\to 0}{\frac {\log _{c}(1+ax)}{bx}}&=&\displaystyle {\frac {a}{b\ln c}}\end{array}}}
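A few of the special limits above can be spot-checked numerically (a rough sketch; the step size 1e-8 is an arbitrary small value):

```python
import math

# Numeric spot-checks of some special limits listed above.
h = 1e-8
e_est = (1 + h) ** (1 / h)            # (1+x)^(1/x) -> e
exp_est = (math.exp(h) - 1) / h       # (e^x - 1)/x -> 1
log_est = math.log(1 + h) / h         # ln(1+x)/x -> 1
sin_est = math.sin(h) / h             # sin(x)/x -> 1
print(e_est, exp_est, log_est, sin_est)
```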
=== L'Hôpital's rule ===
This rule uses derivatives to find limits of indeterminate forms 0/0 or ±∞/∞, and only applies to such cases. Other indeterminate forms may be manipulated into this form. Given two functions f(x) and g(x), defined over an open interval I containing the desired limit point c, then if:
{\displaystyle \lim _{x\to c}f(x)=\lim _{x\to c}g(x)=0,}
or
{\displaystyle \lim _{x\to c}f(x)=\pm \lim _{x\to c}g(x)=\pm \infty ,}
and {\displaystyle f} and {\displaystyle g} are differentiable over {\displaystyle I\setminus \{c\},} and {\displaystyle g'(x)\neq 0} for all {\displaystyle x\in I\setminus \{c\},}
and {\displaystyle \lim _{x\to c}{\tfrac {f'(x)}{g'(x)}}} exists,
then:
{\displaystyle \lim _{x\to c}{\frac {f(x)}{g(x)}}=\lim _{x\to c}{\frac {f'(x)}{g'(x)}}.}
Normally, the first condition is the most important one.
For example:
{\displaystyle \lim _{x\to 0}{\frac {\sin(2x)}{\sin(3x)}}=\lim _{x\to 0}{\frac {2\cos(2x)}{3\cos(3x)}}={\frac {2\cdot 1}{3\cdot 1}}={\frac {2}{3}}.}
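The worked example can be confirmed numerically without derivatives (a simple sketch):

```python
import math

# Numeric check of the worked example: sin(2x)/sin(3x) -> 2/3 as x -> 0.
def ratio(x):
    return math.sin(2 * x) / math.sin(3 * x)

for x in (0.1, 0.01, 0.001):
    print(x, ratio(x))  # approaches 2/3
```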
=== Summations and integrals ===
Specifying an infinite bound on a summation or integral is a common shorthand for specifying a limit.
A short way to write the limit
{\displaystyle \lim _{n\to \infty }\sum _{i=s}^{n}f(i)}
is
{\displaystyle \sum _{i=s}^{\infty }f(i).}
An important example of limits of sums such as these is the series.
A short way to write the limit
{\displaystyle \lim _{x\to \infty }\int _{a}^{x}f(t)\;dt}
is
{\displaystyle \int _{a}^{\infty }f(t)\;dt.}
A short way to write the limit
{\displaystyle \lim _{x\to -\infty }\int _{x}^{b}f(t)\;dt}
is
{\displaystyle \int _{-\infty }^{b}f(t)\;dt.}
== See also ==
Big O notation – Describes limiting behavior of a function
L'Hôpital's rule – Mathematical rule for evaluating some limits
List of limits
Limit of a sequence – Value to which tends an infinite sequence
Limit point – Cluster point in a topological spacePages displaying short descriptions of redirect targets
Limit superior and limit inferior – Bounds of a sequencePages displaying short descriptions of redirect targets
Net (mathematics) – Generalization of a sequence of points
Non-standard calculus – Modern application of infinitesimalsPages displaying short descriptions of redirect targets
Squeeze theorem – Method for finding limits in calculus
Subsequential limit – The limit of some subsequence
== Notes ==
== References ==
Apostol, Tom M. (1974). Mathematical Analysis (2 ed.). Addison–Wesley. ISBN 0-201-00288-4.
Bartle, Robert (1967). The elements of real analysis. Wiley.
Bartle, Robert G.; Sherbert, Donald R. (2000). Introduction to real analysis. Wiley.
Courant, Richard (1924). Vorlesungen über Differential- und Integralrechnung (in German). Springer.
Hardy, G. H. (1921). A course in pure mathematics. Cambridge University Press.
Hubbard, John H. (2015). Vector calculus, linear algebra, and differential forms: A unified approach (5th ed.). Matrix Editions.
Page, Warren; Hersh, Reuben; Selden, Annie; et al., eds. (2002). "Media Highlights". The College Mathematics Journal. 33 (2): 147–154. JSTOR 2687124.
Rudin, Walter (1964). Principles of mathematical analysis. McGraw-Hill.
Sutherland, W. A. (1975). Introduction to Metric and Topological Spaces. Oxford: Oxford University Press. ISBN 0-19-853161-3.
Whittaker; Watson (1904). A Course of Modern Analysis. Cambridge University Press.
== External links ==
MacTutor History of Weierstrass.
MacTutor History of Bolzano
Visual Calculus by Lawrence S. Husch, University of Tennessee (2001)
The fundamental theorem of calculus is a theorem that links the concept of differentiating a function (calculating its slopes, or rate of change at every point on its domain) with the concept of integrating a function (calculating the area under its graph, or the cumulative effect of small contributions). Roughly speaking, the two operations can be thought of as inverses of each other.
The first part of the theorem, the first fundamental theorem of calculus, states that for a continuous function f , an antiderivative or indefinite integral F can be obtained as the integral of f over an interval with a variable upper bound.
Conversely, the second part of the theorem, the second fundamental theorem of calculus, states that the integral of a function f over a fixed interval is equal to the change of any antiderivative F between the ends of the interval. This greatly simplifies the calculation of a definite integral provided an antiderivative can be found by symbolic integration, thus avoiding numerical integration.
== History ==
The fundamental theorem of calculus relates differentiation and integration, showing that these two operations are essentially inverses of one another. Before the discovery of this theorem, it was not recognized that these two operations were related. Ancient Greek mathematicians knew how to compute area via infinitesimals, an operation that we would now call integration. The origins of differentiation likewise predate the fundamental theorem of calculus by hundreds of years; for example, in the fourteenth century the notions of continuity of functions and motion were studied by the Oxford Calculators and other scholars. The historical relevance of the fundamental theorem of calculus is not the ability to calculate these operations, but the realization that the two seemingly distinct operations (calculation of geometric areas, and calculation of gradients) are actually closely related.
Calculus as a unified theory of integration and differentiation started from the conjecture and the proof of the fundamental theorem of calculus. The first published statement and proof of a rudimentary form of the fundamental theorem, strongly geometric in character, was by James Gregory (1638–1675). Isaac Barrow (1630–1677) proved a more generalized version of the theorem, while his student Isaac Newton (1642–1727) completed the development of the surrounding mathematical theory. Gottfried Leibniz (1646–1716) systematized the knowledge into a calculus for infinitesimal quantities and introduced the notation used today.
== Geometric meaning/Proof ==
The first fundamental theorem may be interpreted as follows. Given a continuous function
{\displaystyle y=f(x)}
whose graph is plotted as a curve, one defines a corresponding "area function"
{\displaystyle x\mapsto A(x)}
such that A(x) is the area beneath the curve between 0 and x. The area A(x) may not be easily computable, but it is assumed to be well defined.
The area under the curve between x and x + h could be computed by finding the area between 0 and x + h, then subtracting the area between 0 and x. In other words, the area of this "strip" would be A(x + h) − A(x).
There is another way to estimate the area of this same strip. As shown in the accompanying figure, h is multiplied by f(x) to find the area of a rectangle that is approximately the same size as this strip. So:
{\displaystyle A(x+h)-A(x)\approx f(x)\cdot h}
Dividing by h on both sides, we get:
{\displaystyle {\frac {A(x+h)-A(x)}{h}}\approx f(x)}
This estimate becomes a perfect equality when h approaches 0:
{\displaystyle f(x)=\lim _{h\to 0}{\frac {A(x+h)-A(x)}{h}}\ {\stackrel {\text{def}}{=}}\ A'(x).}
That is, the derivative of the area function A(x) exists and is equal to the original function f(x), so the area function is an antiderivative of the original function.
Thus, the derivative of the integral of a function (the area) is the original function, so that derivative and integral are inverse operations which reverse each other. This is the essence of the Fundamental Theorem.
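This argument can be sketched numerically. Using the made-up example f(t) = t² (so the exact area function would be x³/3), a Riemann-sum approximation of A gives a difference quotient close to f(x):

```python
# Numeric sketch of the first fundamental theorem: the difference quotient of
# the area function A(x) = integral of f from 0 to x approximates f(x).
def f(t):
    return t ** 2

def area(x, n=100000):
    # left Riemann sum for the integral of f over [0, x]
    dt = x / n
    return sum(f(i * dt) for i in range(n)) * dt

x, h = 1.0, 1e-3
diff_quotient = (area(x + h) - area(x)) / h
print(diff_quotient, f(x))  # both close to 1
```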
== Physical intuition ==
Intuitively, the fundamental theorem states that integration and differentiation are inverse operations which reverse each other.
The second fundamental theorem says that the sum of infinitesimal changes in a quantity (the integral of the derivative of the quantity) adds up to the net change in the quantity. To visualize this, imagine traveling in a car and wanting to know the distance traveled (the net change in position along the highway). You can see the velocity on the speedometer but cannot look out to see your location. Each second, you can find how far the car has traveled using distance = speed × time, that is, multiplying the current speed (in kilometers or miles per hour) by the time interval (1 second =
{\displaystyle {\tfrac {1}{3600}}}
hour). By summing up all these small steps, you can approximate the total distance traveled, in spite of not looking outside the car:
{\displaystyle {\text{distance traveled}}=\sum \left({\begin{array}{c}{\text{velocity at}}\\{\text{each time}}\end{array}}\right)\times \left({\begin{array}{c}{\text{time}}\\{\text{interval}}\end{array}}\right)=\sum v_{t}\times \Delta t.}
As
{\displaystyle \Delta t}
becomes infinitesimally small, the summing up corresponds to integration. Thus, the integral of the velocity function (the derivative of position) computes how far the car has traveled (the net change in position).
The first fundamental theorem says that the value of any function is the rate of change (the derivative) of its integral from a fixed starting point up to any chosen end point. Continuing the above example using a velocity as the function, you can integrate it from the starting time up to any given time to obtain a distance function whose derivative is that velocity. (To obtain your highway-marker position, you would need to add your starting position to this integral and to take into account whether your travel was in the direction of increasing or decreasing mile markers.)
== Formal statements ==
There are two parts to the theorem. The first part deals with the derivative of an antiderivative, while the second part deals with the relationship between antiderivatives and definite integrals.
=== First part ===
This part is sometimes referred to as the first fundamental theorem of calculus.
Let f be a continuous real-valued function defined on a closed interval [a, b]. Let F be the function defined, for all x in [a, b], by
{\displaystyle F(x)=\int _{a}^{x}f(t)\,dt.}
Then F is uniformly continuous on [a, b] and differentiable on the open interval (a, b), and
{\displaystyle F'(x)=f(x)}
for all x in (a, b) so F is an antiderivative of f.
=== Corollary ===
The fundamental theorem is often employed to compute the definite integral of a function {\displaystyle f} for which an antiderivative {\displaystyle F} is known. Specifically, if {\displaystyle f} is a real-valued continuous function on {\displaystyle [a,b]} and {\displaystyle F} is an antiderivative of {\displaystyle f} in {\displaystyle [a,b]}, then
{\displaystyle \int _{a}^{b}f(t)\,dt=F(b)-F(a).}
The corollary assumes continuity on the whole interval. This result is strengthened slightly in the following part of the theorem.
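A quick numeric illustration of the corollary (f(t) = t² on [0, 2] is an arbitrary example): a midpoint Riemann sum for the left-hand side agrees with F(b) − F(a):

```python
# Corollary in action: the integral of f(t) = t**2 over [a, b] equals F(b) - F(a)
# with antiderivative F(t) = t**3 / 3. A midpoint Riemann sum approximates the LHS.
def f(t):
    return t ** 2

def F(t):
    return t ** 3 / 3

a, b, n = 0.0, 2.0, 100000
dt = (b - a) / n
riemann = sum(f(a + (i + 0.5) * dt) for i in range(n)) * dt
print(riemann, F(b) - F(a))  # both close to 8/3
```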
=== Second part ===
This part is sometimes referred to as the second fundamental theorem of calculus or the Newton–Leibniz theorem.
Let {\displaystyle f} be a real-valued function on a closed interval {\displaystyle [a,b]} and {\displaystyle F} a continuous function on {\displaystyle [a,b]} which is an antiderivative of {\displaystyle f} in {\displaystyle (a,b)}:
{\displaystyle F'(x)=f(x).}
If {\displaystyle f} is Riemann integrable on {\displaystyle [a,b]} then
{\displaystyle \int _{a}^{b}f(x)\,dx=F(b)-F(a).}
The second part is somewhat stronger than the corollary because it does not assume that {\displaystyle f} is continuous.
When an antiderivative {\displaystyle F} of {\displaystyle f} exists, then there are infinitely many antiderivatives for {\displaystyle f}, obtained by adding an arbitrary constant to {\displaystyle F}. Also, by the first part of the theorem, antiderivatives of {\displaystyle f} always exist when {\displaystyle f} is continuous.
== Proof of the first part ==
For a given function f, define the function F(x) as
{\displaystyle F(x)=\int _{a}^{x}f(t)\,dt.}
For any two numbers x1 and x1 + Δx in [a, b], we have
{\displaystyle {\begin{aligned}F(x_{1}+\Delta x)-F(x_{1})&=\int _{a}^{x_{1}+\Delta x}f(t)\,dt-\int _{a}^{x_{1}}f(t)\,dt\\&=\int _{x_{1}}^{x_{1}+\Delta x}f(t)\,dt,\end{aligned}}}
the latter equality resulting from the basic properties of integrals and the additivity of areas.
According to the mean value theorem for integration, there exists a real number {\displaystyle c\in [x_{1},x_{1}+\Delta x]} such that
{\displaystyle \int _{x_{1}}^{x_{1}+\Delta x}f(t)\,dt=f(c)\cdot \Delta x.}
It follows that
{\displaystyle F(x_{1}+\Delta x)-F(x_{1})=f(c)\cdot \Delta x,}
and thus that
{\displaystyle {\frac {F(x_{1}+\Delta x)-F(x_{1})}{\Delta x}}=f(c).}
Taking the limit as {\displaystyle \Delta x\to 0,} and keeping in mind that {\displaystyle c\in [x_{1},x_{1}+\Delta x],} one gets
{\displaystyle \lim _{\Delta x\to 0}{\frac {F(x_{1}+\Delta x)-F(x_{1})}{\Delta x}}=\lim _{\Delta x\to 0}f(c),}
that is,
{\displaystyle F'(x_{1})=f(x_{1}),}
according to the definition of the derivative, the continuity of f, and the squeeze theorem.
== Proof of the corollary ==
Suppose F is an antiderivative of f, with f continuous on [a, b]. Let
{\displaystyle G(x)=\int _{a}^{x}f(t)\,dt.}
By the first part of the theorem, we know G is also an antiderivative of f. Since F′ − G′ = 0 the mean value theorem implies that F − G is a constant function, that is, there is a number c such that G(x) = F(x) + c for all x in [a, b]. Letting x = a, we have
{\displaystyle F(a)+c=G(a)=\int _{a}^{a}f(t)\,dt=0,}
which means c = −F(a). In other words, G(x) = F(x) − F(a), and so
{\displaystyle \int _{a}^{b}f(x)\,dx=G(b)=F(b)-F(a).}
== Proof of the second part ==
This is a limit proof by Riemann sums.
To begin, we recall the mean value theorem. Stated briefly, if F is continuous on the closed interval [a, b] and differentiable on the open interval (a, b), then there exists some c in (a, b) such that
{\displaystyle F'(c)(b-a)=F(b)-F(a).}
Let f be (Riemann) integrable on the interval [a, b], and let f admit an antiderivative F on (a, b) such that F is continuous on [a, b]. Begin with the quantity F(b) − F(a). Let there be numbers x0, ..., xn such that
{\displaystyle a=x_{0}<x_{1}<x_{2}<\cdots <x_{n-1}<x_{n}=b.}
It follows that
{\displaystyle F(b)-F(a)=F(x_{n})-F(x_{0}).}
Now, we add each F(xi) along with its additive inverse, so that the resulting quantity is equal:
{\displaystyle {\begin{aligned}F(b)-F(a)&=F(x_{n})+[-F(x_{n-1})+F(x_{n-1})]+\cdots +[-F(x_{1})+F(x_{1})]-F(x_{0})\\&=[F(x_{n})-F(x_{n-1})]+[F(x_{n-1})-F(x_{n-2})]+\cdots +[F(x_{2})-F(x_{1})]+[F(x_{1})-F(x_{0})].\end{aligned}}}
The above quantity can be written as the following sum:
{\displaystyle F(b)-F(a)=\sum _{i=1}^{n}[F(x_{i})-F(x_{i-1})].\qquad (1')}
The function F is differentiable on the interval (a, b) and continuous on the closed interval [a, b]; therefore, it is also differentiable on each interval (xi−1, xi) and continuous on each interval [xi−1, xi]. According to the mean value theorem (above), for each i there exists a
{\displaystyle c_{i}}
in (xi−1, xi) such that
{\displaystyle F(x_{i})-F(x_{i-1})=F'(c_{i})(x_{i}-x_{i-1}).}
Substituting the above into (1'), we get
{\displaystyle F(b)-F(a)=\sum _{i=1}^{n}[F'(c_{i})(x_{i}-x_{i-1})].}
The assumption implies
{\displaystyle F'(c_{i})=f(c_{i}).}
Also, {\displaystyle x_{i}-x_{i-1}} can be expressed as {\displaystyle \Delta x} of partition {\displaystyle i}.
We are describing the area of a rectangle, with the width times the height, and we are adding the areas together. Each rectangle, by virtue of the mean value theorem, describes an approximation of the curve section it is drawn over. Also
{\displaystyle \Delta x_{i}}
need not be the same for all values of i; in other words, the width of the rectangles can differ. What we have to do is approximate the curve with n rectangles. Now, as the size of the partitions gets smaller and n increases, resulting in more partitions to cover the space, we get closer and closer to the actual area of the curve.
By taking the limit of the expression as the norm of the partitions approaches zero, we arrive at the Riemann integral. We know that this limit exists because f was assumed to be integrable. That is, we take the limit as the largest of the partitions approaches zero in size, so that all other partitions are smaller and the number of partitions approaches infinity.
So, we take the limit on both sides of (2'). This gives us
{\displaystyle \lim _{\|\Delta x_{i}\|\to 0}F(b)-F(a)=\lim _{\|\Delta x_{i}\|\to 0}\sum _{i=1}^{n}[f(c_{i})(\Delta x_{i})].}
Neither F(b) nor F(a) is dependent on
{\displaystyle \|\Delta x_{i}\|}
, so the limit on the left side remains F(b) − F(a).
{\displaystyle F(b)-F(a)=\lim _{\|\Delta x_{i}\|\to 0}\sum _{i=1}^{n}[f(c_{i})(\Delta x_{i})].}
The expression on the right side of the equation defines the integral over f from a to b. Therefore, we obtain
{\displaystyle F(b)-F(a)=\int _{a}^{b}f(x)\,dx,}
which completes the proof.
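Although the proof above is analytic, the convergence it establishes can be illustrated numerically. The following Python sketch (an illustration only; the choice of cos on [0, 1] and the helper name `riemann_sum` are arbitrary, not from the article) checks that a midpoint Riemann sum approaches F(b) − F(a):

```python
import math

def riemann_sum(f, a, b, n):
    # Midpoint Riemann sum of f over [a, b] with n equal subintervals.
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

# f(x) = cos(x) is continuous and has antiderivative F(x) = sin(x).
f, F = math.cos, math.sin
a, b = 0.0, 1.0

approx = riemann_sum(f, a, b, 10_000)
exact = F(b) - F(a)                 # sin(1) - sin(0), per the theorem
assert abs(approx - exact) < 1e-6   # the sum converges to F(b) - F(a)
```

As the partition is refined, the sum tightens around F(b) − F(a), mirroring the limit taken in the proof.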
== Relationship between the parts ==
As discussed above, a slightly weaker version of the second part follows from the first part.
Similarly, it almost looks like the first part of the theorem follows directly from the second. That is, suppose G is an antiderivative of f. Then by the second theorem,
{\textstyle G(x)-G(a)=\int _{a}^{x}f(t)\,dt}
. Now, suppose
{\textstyle F(x)=\int _{a}^{x}f(t)\,dt=G(x)-G(a)}
. Then F has the same derivative as G, and therefore F′ = f. This argument only works, however, if we already know that f has an antiderivative, and the only way we know that all continuous functions have antiderivatives is by the first part of the Fundamental Theorem.
For example, if {\displaystyle f(x)=e^{-x^{2}}}, then f has an antiderivative, namely
{\displaystyle G(x)=\int _{0}^{x}f(t)\,dt}
and there is no simpler expression for this function. It is therefore important not to interpret the second part of the theorem as the definition of the integral. Indeed, there are many functions that are integrable but lack elementary antiderivatives, and discontinuous functions can be integrable but lack any antiderivatives at all. Conversely, many functions that have antiderivatives are not Riemann integrable (see Volterra's function).
== Examples ==
=== Computing a particular integral ===
Suppose the following is to be calculated:
{\displaystyle \int _{2}^{5}x^{2}\,dx.}
Here,
{\displaystyle f(x)=x^{2}}
and we can use
{\textstyle F(x)={\frac {1}{3}}x^{3}}
as the antiderivative. Therefore:
{\displaystyle \int _{2}^{5}x^{2}\,dx=F(5)-F(2)={\frac {5^{3}}{3}}-{\frac {2^{3}}{3}}={\frac {125}{3}}-{\frac {8}{3}}={\frac {117}{3}}=39.}
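This computation is short enough to check directly in code. A minimal Python sketch of the same calculation (the function name `F` mirrors the antiderivative used above):

```python
def F(x):
    return x**3 / 3            # antiderivative of f(x) = x**2

value = F(5) - F(2)            # fundamental theorem, part two
assert abs(value - 39) < 1e-12 # 125/3 - 8/3 = 117/3 = 39
```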
=== Using the first part ===
Suppose
{\displaystyle {\frac {d}{dx}}\int _{0}^{x}t^{3}\,dt}
is to be calculated. Using the first part of the theorem with
{\displaystyle f(t)=t^{3}}
gives
{\displaystyle {\frac {d}{dx}}\int _{0}^{x}t^{3}\,dt=f(x)=x^{3}.}
This can also be checked using the second part of the theorem. Specifically,
{\textstyle F(t)={\frac {1}{4}}t^{4}}
is an antiderivative of
{\displaystyle f(t)}
, so
{\displaystyle {\frac {d}{dx}}\int _{0}^{x}t^{3}\,dt={\frac {d}{dx}}F(x)-{\frac {d}{dx}}F(0)={\frac {d}{dx}}{\frac {x^{4}}{4}}=x^{3}.}
=== An integral where the corollary is insufficient ===
Suppose
{\displaystyle f(x)={\begin{cases}\sin \left({\frac {1}{x}}\right)-{\frac {1}{x}}\cos \left({\frac {1}{x}}\right)&x\neq 0\\0&x=0\\\end{cases}}}
Then
{\displaystyle f(x)}
is not continuous at zero. Moreover, this is not just a matter of how
{\displaystyle f}
is defined at zero, since the limit as
{\displaystyle x\to 0}
of
{\displaystyle f(x)}
does not exist. Therefore, the corollary cannot be used to compute
{\displaystyle \int _{0}^{1}f(x)\,dx.}
But consider the function
{\displaystyle F(x)={\begin{cases}x\sin \left({\frac {1}{x}}\right)&x\neq 0\\0&x=0.\\\end{cases}}}
Notice that
{\displaystyle F(x)}
is continuous on
{\displaystyle [0,1]}
(including at zero by the squeeze theorem), and
{\displaystyle F(x)}
is differentiable on
{\displaystyle (0,1)}
with
{\displaystyle F'(x)=f(x).}
Therefore, part two of the theorem applies, and
{\displaystyle \int _{0}^{1}f(x)\,dx=F(1)-F(0)=\sin(1).}
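The key claims of this example, that F′ = f away from zero and that the integral equals sin(1), can be spot-checked numerically. A small Python sketch (the sample points and step size are arbitrary choices for illustration):

```python
import math

def F(x):
    # F(x) = x*sin(1/x) for x != 0, F(0) = 0; continuous on [0, 1].
    return x * math.sin(1 / x) if x != 0 else 0.0

def f(x):
    # f(x) = sin(1/x) - (1/x)*cos(1/x); discontinuous at 0.
    if x == 0:
        return 0.0
    return math.sin(1 / x) - (1 / x) * math.cos(1 / x)

# Central differences confirm F'(x) = f(x) at points inside (0, 1).
h = 1e-6
for x in (0.1, 0.3, 0.7):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - f(x)) < 1e-4

# Part two of the theorem then gives the value of the integral:
assert abs(F(1) - F(0) - math.sin(1)) < 1e-12
```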
=== Theoretical example ===
The theorem can be used to prove that
{\displaystyle \int _{a}^{b}f(x)dx=\int _{a}^{c}f(x)dx+\int _{c}^{b}f(x)dx.}
Since
{\displaystyle {\begin{aligned}\int _{a}^{b}f(x)dx&=F(b)-F(a),\\\int _{a}^{c}f(x)dx&=F(c)-F(a),{\text{ and }}\\\int _{c}^{b}f(x)dx&=F(b)-F(c),\end{aligned}}}
the result follows from
{\displaystyle F(b)-F(a)=F(c)-F(a)+F(b)-F(c).}
== Generalizations ==
The function f does not have to be continuous over the whole interval. Part I of the theorem then says: if f is any Lebesgue integrable function on [a, b] and x0 is a number in [a, b] such that f is continuous at x0, then
{\displaystyle F(x)=\int _{a}^{x}f(t)\,dt}
is differentiable for x = x0 with F′(x0) = f(x0). We can relax the conditions on f still further and suppose that it is merely locally integrable. In that case, we can conclude that the function F is differentiable almost everywhere and F′(x) = f(x) almost everywhere. On the real line this statement is equivalent to Lebesgue's differentiation theorem. These results remain true for the Henstock–Kurzweil integral, which allows a larger class of integrable functions.
In higher dimensions Lebesgue's differentiation theorem generalizes the Fundamental theorem of calculus by stating that for almost every x, the average value of a function f over a ball of radius r centered at x tends to f(x) as r tends to 0.
Part II of the theorem is true for any Lebesgue integrable function f, which has an antiderivative F (not all integrable functions do, though). In other words, if a real function F on [a, b] admits a derivative f(x) at every point x of [a, b] and if this derivative f is Lebesgue integrable on [a, b], then
{\displaystyle F(b)-F(a)=\int _{a}^{b}f(t)\,dt.}
This result may fail for continuous functions F that admit a derivative f(x) at almost every point x, as the example of the Cantor function shows. However, if F is absolutely continuous, it admits a derivative F′(x) at almost every point x, and moreover F′ is integrable, with F(b) − F(a) equal to the integral of F′ on [a, b]. Conversely, if f is any integrable function, then F as given in the first formula will be absolutely continuous with F′ = f almost everywhere.
The conditions of this theorem may again be relaxed by considering the integrals involved as Henstock–Kurzweil integrals. Specifically, if a continuous function F(x) admits a derivative f(x) at all but countably many points, then f(x) is Henstock–Kurzweil integrable and F(b) − F(a) is equal to the integral of f on [a, b]. The difference here is that the integrability of f does not need to be assumed.
The version of Taylor's theorem that expresses the error term as an integral can be seen as a generalization of the fundamental theorem.
There is a version of the theorem for complex functions: suppose U is an open set in C and f : U → C is a function that has a holomorphic antiderivative F on U. Then for every curve γ : [a, b] → U, the curve integral can be computed as
{\displaystyle \int _{\gamma }f(z)\,dz=F(\gamma (b))-F(\gamma (a)).}
The fundamental theorem can be generalized to curve and surface integrals in higher dimensions and on manifolds. One such generalization offered by the calculus of moving surfaces is the time evolution of integrals. The most familiar extensions of the fundamental theorem of calculus in higher dimensions are the divergence theorem and the gradient theorem.
One of the most powerful generalizations in this direction is the generalized Stokes theorem (sometimes known as the fundamental theorem of multivariable calculus): Let M be an oriented piecewise smooth manifold of dimension n and let
{\displaystyle \omega }
be a smooth compactly supported (n − 1)-form on M. If ∂M denotes the boundary of M given its induced orientation, then
{\displaystyle \int _{M}d\omega =\int _{\partial M}\omega .}
Here d is the exterior derivative, which is defined using the manifold structure only.
The theorem is often used in situations where M is an embedded oriented submanifold of some bigger manifold (e.g. Rk) on which the form
{\displaystyle \omega }
is defined.
The fundamental theorem of calculus allows us to pose a definite integral as a first-order ordinary differential equation.
{\displaystyle \int _{a}^{b}f(x)\,dx}
can be posed as
{\displaystyle {\frac {dy}{dx}}=f(x),\;\;y(a)=0}
with
{\displaystyle y(b)}
as the value of the integral.
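This ODE reformulation can be carried out directly with any numerical ODE stepper. A minimal Python sketch using the midpoint (second-order Runge-Kutta) method; the function name `integrate_as_ode` and the example integrand are illustrative choices, not from the article:

```python
def integrate_as_ode(f, a, b, n=10_000):
    # Solve dy/dx = f(x), y(a) = 0 with the midpoint (RK2) method;
    # y(b) then approximates the definite integral of f over [a, b].
    h = (b - a) / n
    x, y = a, 0.0
    for _ in range(n):
        y += h * f(x + h / 2)
        x += h
    return y

# Recovers the earlier example: integral of x**2 from 2 to 5 is 39.
assert abs(integrate_as_ode(lambda x: x**2, 2, 5) - 39) < 1e-6
```

Because the right-hand side f depends only on x, the stepper reduces to a quadrature rule, which is exactly the content of the fundamental theorem.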
== See also ==
Differentiation under the integral sign
Telescoping series
Fundamental theorem of calculus for line integrals
Notation for differentiation
== Notes ==
== References ==
=== Bibliography ===
== Further reading ==
== External links ==
"Fundamental theorem of calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
James Gregory's Euclidean Proof of the Fundamental Theorem of Calculus at Convergence
Isaac Barrow's proof of the Fundamental Theorem of Calculus
Fundamental Theorem of Calculus at imomath.com
Alternative proof of the fundamental theorem of calculus
Fundamental Theorem of Calculus MIT.
Fundamental Theorem of Calculus Mathworld.
In statistics, a misleading graph, also known as a distorted graph, is a graph that misrepresents data, constituting a misuse of statistics and with the result that an incorrect conclusion may be derived from it.
Graphs may be misleading by being excessively complex or poorly constructed. Even when constructed to display the characteristics of their data accurately, graphs can be subject to different interpretations, or unintended kinds of data can seemingly and ultimately erroneously be derived.
Misleading graphs may be created intentionally to hinder the proper interpretation of data or accidentally due to unfamiliarity with graphing software, misinterpretation of data, or because data cannot be accurately conveyed. Misleading graphs are often used in false advertising. One of the first authors to write about misleading graphs was Darrell Huff, publisher of the 1954 book How to Lie with Statistics.
Data journalist John Burn-Murdoch has suggested that people are more likely to express scepticism towards data communicated within written text than data of similar quality presented as a graphic, arguing that this is partly the result of the teaching of critical thinking focusing on engaging with written works rather than diagrams, resulting in visual literacy being neglected. He has also highlighted the concentration of data scientists in employment by technology companies, which he believes can result in the hampering of the evaluation of their visualisations due to the proprietary and closed nature of much of the data they work with.
The field of data visualization describes ways to present information that avoids creating misleading graphs.
== Misleading graph methods ==
[A misleading graph] is vastly more effective, however, because it contains no adjectives or adverbs to spoil the illusion of objectivity, there's nothing anyone can pin on you.
There are numerous ways in which a misleading graph may be constructed.
=== Excessive usage ===
The use of graphs where they are not needed can lead to unnecessary confusion and misinterpretation. Generally, the more explanation a graph needs, the less the graph itself is needed. Graphs do not always convey information better than tables.
=== Biased labeling ===
The use of biased or loaded words in the graph's title, axis labels, or caption may inappropriately prime the reader.
==== Fabricated trends ====
Similarly, attempting to draw trend lines through uncorrelated data may mislead the reader into believing a trend exists where there is none. This can be both the result of intentionally attempting to mislead the reader or due to the phenomenon of illusory correlation.
=== Pie chart ===
Comparing pie charts of different sizes could be misleading as people cannot accurately read the comparative area of circles.
The usage of thin slices, which are hard to discern, may make the chart difficult to interpret.
The usage of percentages as labels on a pie chart can be misleading when the sample size is small.
Making a pie chart 3D or adding a slant will make interpretation difficult due to the distorted effect of perspective. Bar-charted pie graphs, in which the height of the slices is varied, may confuse the reader.
==== Comparing pie charts ====
Comparing data on bar charts is generally much easier. In the image below, it is very hard to tell where the blue sector is bigger than the green sector on the pie charts.
==== 3D Pie chart slice perspective ====
A perspective (3D) pie chart is used to give the chart a 3D look. Often used for aesthetic reasons, the third dimension does not improve the reading of the data; on the contrary, these plots are difficult to interpret because of the distorted effect of perspective associated with the third dimension. The use of superfluous dimensions not used to display the data of interest is discouraged for charts in general, not only for pie charts. In a 3D pie chart, the slices that are closer to the reader appear to be larger than those in the back due to the angle at which they're presented. This effect makes readers less accurate at judging the relative magnitude of each slice in 3D charts than in 2D charts.
Item C appears to be at least as large as Item A in the misleading pie chart, whereas in actuality, it is less than half as large. Item D looks a lot larger than item B, but they are the same size.
Edward Tufte, a prominent American statistician, noted why tables may be preferred to pie charts in The Visual Display of Quantitative Information:
Tables are preferable to graphics for many small data sets. A table is nearly always better than a dumb pie chart; the only thing worse than a pie chart is several of them, for then the viewer is asked to compare quantities located in spatial disarray both within and between pies – Given their low data-density and failure to order numbers along a visual dimension, pie charts should never be used.
==== Visual Distortion in Pie Charts ====
Misleading Pie Chart: This pie chart misrepresents the data by artificially inflating the smaller categories (C and D) and shrinking the larger categories (A and B). Without percentages, viewers are likely to assume that C and D are more significant than they actually are, demonstrating how visual manipulation can distort data perception.
Accurate Pie Chart: This pie chart correctly represents the proportions and maintains accurate visual scaling for each category. The inclusion of percentages ensures that viewers interpret the data correctly, avoiding common misleading data-visualization techniques.
=== Improper scaling ===
Pictograms in bar graphs should not be scaled uniformly, as this creates a perceptually misleading comparison: the reader interprets the area of the pictogram rather than only its height or width, so uniform scaling makes the difference appear squared.
In the improperly scaled pictogram bar graph, the image for B is actually 9 times as large as A.
The perceived size increases when scaling.
The effect of improper scaling of pictograms is further exemplified when the pictogram has 3 dimensions, in which case the effect is cubed.
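The squared and cubed distortions described above are simple to quantify. A minimal Python sketch (the helper name `scaled_size` is an illustrative choice):

```python
def scaled_size(scale, dimensions):
    # Apparent visual size of a pictogram scaled uniformly by `scale`
    # in each of its `dimensions` spatial dimensions.
    return scale ** dimensions

# A value 3x as large, drawn as a pictogram scaled 3x in every dimension:
assert scaled_size(3, 1) == 3    # height alone: faithful impression
assert scaled_size(3, 2) == 9    # flat pictogram: difference appears squared
assert scaled_size(3, 3) == 27   # 3D pictogram: difference appears cubed
```

This matches the example above, where the improperly scaled image for B occupies 9 times the area of A despite representing only 3 times the value.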
The graph of house sales (left) is misleading. It appears that home sales have grown eightfold in 2001 over the previous year, whereas they have actually grown twofold. Besides, the number of sales is not specified.
An improperly scaled pictogram may also suggest that the item itself has changed in size.
Assuming the pictures represent equivalent quantities, the misleading graph shows that there are more bananas because the bananas occupy the most area and are furthest to the right.
==== Logarithmic scaling ====
Logarithmic (or log) scales are a valid means of representing data. But when used without being clearly labeled as log scales, or when displayed to a reader unfamiliar with them, they can be misleading. Log scales put the data values in terms of a chosen number (the base of the log) raised to a particular power. The base is often e (2.71828...) or 10. For example, log scales may give a height of 1 for a value of 10 in the data and a height of 6 for a value of 1,000,000 (10⁶) in the data. Log scales and variants are commonly used, for instance, for the volcanic explosivity index, the Richter scale for earthquakes, the magnitude of stars, and the pH of acidic and alkaline solutions. Even in these cases, the log scale can make the data less apparent to the eye. Often the reason for the use of log scales is that the graph's author wishes to display vastly different scales on the same axis. Without log scales, comparing quantities such as 1,000 (10³) versus 1,000,000,000 (10⁹) becomes visually impractical. A graph with a log scale that is not clearly labeled as such, or a log-scale graph presented to a viewer unfamiliar with logarithmic scales, will generally make data values appear similar in size while in fact they differ by orders of magnitude. Misuse of a log scale can make vastly different values (such as 10 and 10,000) appear close together (on a base-10 log scale, they would be only 1 and 4). Or it can make small values appear to be negative due to how logarithmic scales represent numbers smaller than the base.
Misuse of log scales may also cause relationships between quantities to appear linear when those relationships are in fact exponentials or power laws that rise very rapidly towards higher values. It has been stated, although mainly in a humorous way, that "anything looks linear on a log-log plot with thick marker pen".
Both graphs show an identical exponential function of f(x) = 2^x. The graph on the left uses a linear scale, showing clearly an exponential trend. The graph on the right, however, uses a logarithmic scale, which generates a straight line. If the graph viewer were not aware of this, the graph would appear to show a linear trend.
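The straightening effect is easy to verify: on a base-10 log scale the plotted heights of 2^x are x·log₁₀(2), so successive differences are constant, which is the hallmark of a straight line. A short Python check:

```python
import math

# Heights of f(x) = 2**x as drawn on a base-10 log scale.
heights = [math.log10(2 ** x) for x in range(1, 6)]

# Consecutive differences are all equal to log10(2): a straight line.
diffs = [b - a for a, b in zip(heights, heights[1:])]
assert all(abs(d - math.log10(2)) < 1e-12 for d in diffs)
```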
=== Truncated graph ===
A truncated graph (also known as a torn graph) has a y axis that does not start at 0. These graphs can create the impression of important change where there is relatively little change.
While truncated graphs can be used to exaggerate differences or to save space, their use is often discouraged. Commercial software such as MS Excel will tend to truncate graphs by default if the values are all within a narrow range, as in this example. To show relative differences in values over time, an index chart can be used. Truncated diagrams will always distort the underlying numbers visually. Several studies found that even if people were correctly informed that the y-axis was truncated, they still overestimated the actual differences, often substantially.
These graphs display identical data; however, in the truncated bar graph on the left, the data appear to show significant differences, whereas, in the regular bar graph on the right, these differences are hardly visible.
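The visual exaggeration produced by truncation can be quantified: the ratio of drawn bar heights is taken relative to the axis minimum rather than zero. A small Python sketch (the helper name `apparent_ratio` and the sample values are illustrative assumptions):

```python
def apparent_ratio(v1, v2, y_min):
    # Ratio of bar heights as drawn when the y-axis starts at y_min
    # instead of zero.
    return (v2 - y_min) / (v1 - y_min)

v1, v2 = 100, 102                               # values differing by 2%
assert abs(v2 / v1 - 1.02) < 1e-9               # true ratio: 1.02
assert abs(apparent_ratio(v1, v2, 99.5) - 5.0) < 1e-9
# Truncating the axis at 99.5 makes one bar look 5 times taller.
```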
There are several ways to indicate y-axis breaks:
=== Axis changes ===
Changing the y-axis maximum affects how the graph appears. A higher maximum will cause the graph to appear to have less volatility, less growth, and a less steep line than a lower maximum.
Changing the ratio of a graph's dimensions will affect how the graph appears.
=== No scale ===
The scales of a graph are often used to exaggerate or minimize differences.
The lack of a starting value for the y axis makes it unclear whether the graph is truncated. Additionally, the lack of tick marks prevents the reader from determining whether the graph bars are properly scaled. Without a scale, the visual difference between the bars can be easily manipulated.
Though all three graphs share the same data, and hence the actual slope of the (x, y) data is the same, the way that the data is plotted can change the visual appearance of the angle made by the line on the graph. This is because each plot has a different scale on its vertical axis. Because the scale is not shown, these graphs can be misleading.
=== Improper intervals or units ===
The intervals and units used in a graph may be manipulated to create or mitigate change expression.
=== Omitting data ===
Graphs created with omitted data remove information from which to base a conclusion.
In the scatter plot with missing categories on the left, the growth appears to be more linear with less variation.
In financial reports, negative returns or data that do not correlate with a positive outlook may be excluded to create a more favorable visual impression.
=== 3D ===
The use of a superfluous third dimension, which does not contain information, is strongly discouraged, as it may confuse the reader.
=== Complexity ===
Graphs are designed to allow easier interpretation of statistical data. However, graphs with excessive complexity can obfuscate the data and make interpretation difficult.
=== Poor construction ===
Poorly constructed graphs can make data difficult to discern and thus interpret.
=== Extrapolation ===
Misleading graphs may be used in turn to extrapolate misleading trends.
== Measuring distortion ==
Several methods have been developed to determine whether graphs are distorted and to quantify this distortion.
=== Lie factor ===
{\displaystyle {\text{Lie factor}}={\frac {\text{size of effect shown in graphic}}{\text{size of effect shown in data}}},}
where
where
{\displaystyle {\text{size of effect}}=\left|{\frac {{\text{second value}}-{\text{first value}}}{\text{first value}}}\right|.}
A graph with a high lie factor (>1) would exaggerate change in the data it represents, while one with a small lie factor (>0, <1) would obscure change in the data. A perfectly accurate graph would exhibit a lie factor of 1.
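The definition translates directly into code. A minimal Python sketch (function names are illustrative choices):

```python
def size_of_effect(first, second):
    # |(second - first) / first|, per the definition above.
    return abs((second - first) / first)

def lie_factor(graphic_first, graphic_second, data_first, data_second):
    # Ratio of the effect shown in the graphic to the effect in the data.
    return (size_of_effect(graphic_first, graphic_second)
            / size_of_effect(data_first, data_second))

# Data doubles (100 -> 200) but the drawn bars grow sixfold (1 cm -> 6 cm):
assert lie_factor(1, 6, 100, 200) == 5.0   # lie factor > 1: exaggeration
# A faithful graphic doubles the bar as the data doubles:
assert lie_factor(1, 2, 100, 200) == 1.0   # perfectly accurate graph
```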
=== Graph discrepancy index ===
{\displaystyle {\text{graph discrepancy index}}=100\left({\frac {a}{b}}-1\right),}
where
{\displaystyle a={\text{percentage change depicted in graph}},}
{\displaystyle b={\text{percentage change in data}}.}
The graph discrepancy index, also known as the graph distortion index (GDI), was originally proposed by Paul John Steinbart in 1998. GDI is calculated as a percentage ranging from −100% to positive infinity, with zero percent indicating that the graph has been properly constructed and anything outside the ±5% margin is considered to be distorted. Research into the usage of GDI as a measure of graphics distortion has found it to be inconsistent and discontinuous, making the usage of GDI as a measurement for comparisons difficult.
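The GDI formula is likewise a one-liner. A minimal Python sketch using the ±5% convention described above:

```python
def graph_discrepancy_index(pct_change_graph, pct_change_data):
    # GDI in percent; 0 indicates a properly constructed graph.
    return 100 * (pct_change_graph / pct_change_data - 1)

# Graph depicts a 50% increase where the data only grew 25%:
assert graph_discrepancy_index(50, 25) == 100.0      # clearly distorted
# Within the conventional +/-5% margin, the graph passes as undistorted:
assert abs(graph_discrepancy_index(25.5, 25)) <= 5
```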
=== Data-ink ratio ===
{\displaystyle {\text{data-ink ratio}}={\frac {\text{“ink” used to display the data}}{\text{total “ink” used to display the graphic}}}}
The data-ink ratio should be relatively high. Otherwise, the chart may have unnecessary graphics.
=== Data density ===
{\displaystyle {\text{data density}}={\frac {\text{number of entries in data matrix}}{\text{area of data graphic}}}}
The data density should be relatively high, otherwise a table may be better suited for displaying the data.
== Usage in finance and corporate reports ==
Graphs are useful in the summary and interpretation of financial data. Graphs allow trends in large data sets to be seen while also allowing the data to be interpreted by non-specialists.
Graphs are often used in corporate annual reports as a form of impression management. In the United States, graphs do not have to be audited, as they fall under AU Section 550 Other Information in Documents Containing Audited Financial Statements.
Several published studies have looked at the usage of graphs in corporate reports for different corporations in different countries and have found frequent usage of improper design, selectivity, and measurement distortion within these reports. The presence of misleading graphs in annual reports has led to requests for standards to be set.
Research has found that while readers with poor levels of financial understanding have a greater chance of being misinformed by misleading graphs, even those with financial understanding, such as loan officers, may be misled.
== Academia ==
The perception of graphs is studied in psychophysics, cognitive psychology, and computational vision.
== See also ==
Chartjunk
Impression management
Misuse of statistics
Simpson's paradox
How to Lie with Statistics
How to Lie with Maps
== References ==
=== Books ===
== Further reading ==
== External links ==
Gallery of Data Visualization The Best and Worst of Statistical Graphics, York University
A set-valued function, also called a correspondence or set-valued relation, is a mathematical function that maps elements from one set, the domain of the function, to subsets of another set. Set-valued functions are used in a variety of mathematical fields, including optimization, control theory and game theory.
Set-valued functions are also known as multivalued functions in some references, but this article and the article Multivalued function follow the authors who make a distinction.
== Distinction from multivalued functions ==
Although other authors may distinguish them differently (or not at all), Wriggers and Panatiotopoulos (2014) distinguish multivalued functions from set-valued functions (which they called set-valued relations) by the fact that multivalued functions only take multiple values at finitely (or denumerably) many points, and otherwise behave like a function. Geometrically, this means that the graph of a multivalued function is necessarily a line of zero area that doesn't loop, while the graph of a set-valued relation may contain solid filled areas or loops.
Alternatively, a multivalued function is a set-valued function f that has a further continuity property, namely that the choice of an element in the set
{\displaystyle f(x)}
defines a corresponding element in each set
{\displaystyle f(y)}
for y close to x, and thus defines locally an ordinary function.
== Example ==
The argmax of a function is, in general, multivalued. For example,
{\displaystyle \operatorname {argmax} _{x\in \mathbb {R} }\cos(x)=\{2\pi k\mid k\in \mathbb {Z} \}}
.
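A set-valued function is naturally modeled in code as an ordinary function that returns a set. The following Python sketch (the helper name `argmax_cos` and the finite range of k are illustrative restrictions, since the full argmax set is infinite) mirrors the example above:

```python
import math

def argmax_cos(k_range):
    # Set-valued function: the maximizers of cos on the real line are
    # exactly x = 2*pi*k for integer k; restrict k to a finite range.
    return {2 * math.pi * k for k in k_range}

maximizers = argmax_cos(range(-2, 3))
assert len(maximizers) == 5                    # one element per k
# Every returned point attains the maximum value cos(x) = 1:
assert all(abs(math.cos(x) - 1) < 1e-12 for x in maximizers)
```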
== Set-valued analysis ==
Set-valued analysis is the study of sets in the spirit of mathematical analysis and general topology.
Instead of considering collections of only points, set-valued analysis considers collections of sets. If a collection of sets is endowed with a topology, or inherits an appropriate topology from an underlying topological space, then the convergence of sets can be studied.
Much of set-valued analysis arose through the study of mathematical economics and optimal control, partly as a generalization of convex analysis; the term "variational analysis" is used by authors such as R. Tyrrell Rockafellar and Roger J-B Wets, Jonathan Borwein and Adrian Lewis, and Boris Mordukhovich. In optimization theory, the convergence of approximating subdifferentials to a subdifferential is important in understanding necessary or sufficient conditions for any minimizing point.
There exist set-valued extensions of the following concepts from point-valued analysis: continuity, differentiation, integration, implicit function theorem, contraction mappings, measure theory, fixed-point theorems, optimization, and topological degree theory. In particular, equations are generalized to inclusions, while differential equations are generalized to differential inclusions.
One can distinguish multiple concepts generalizing continuity, such as the closed graph property and upper and lower hemicontinuity. There are also various generalizations of measure to multifunctions.
== Applications ==
Set-valued functions arise in optimal control theory, especially differential inclusions and related subjects such as game theory, where the Kakutani fixed-point theorem for set-valued functions has been applied to prove existence of Nash equilibria. This, among many other properties loosely associated with approximability of upper hemicontinuous multifunctions via continuous functions, explains why upper hemicontinuity is preferred over lower hemicontinuity.
Nevertheless, lower semi-continuous multifunctions usually possess continuous selections as stated in the Michael selection theorem, which provides another characterisation of paracompact spaces. Other selection theorems, like Bressan-Colombo directional continuous selection, Kuratowski and Ryll-Nardzewski measurable selection theorem, Aumann measurable selection, and Fryszkowski selection for decomposable maps are important in optimal control and the theory of differential inclusions.
== Notes ==
== References ==
== Further reading ==
K. Deimling, Multivalued Differential Equations, Walter de Gruyter, 1992
C. D. Aliprantis and K. C. Border, Infinite dimensional analysis. Hitchhiker's guide, Springer-Verlag Berlin Heidelberg, 2006
J. Andres and L. Górniewicz, Topological Fixed Point Principles for Boundary Value Problems, Kluwer Academic Publishers, 2003
J.-P. Aubin and A. Cellina, Differential Inclusions, Set-Valued Maps And Viability Theory, Grundl. der Math. Wiss. 264, Springer - Verlag, Berlin, 1984
J.-P. Aubin and H. Frankowska, Set-Valued Analysis, Birkhäuser, Basel, 1990
D. Repovš and P.V. Semenov, Continuous Selections of Multivalued Mappings, Kluwer Academic Publishers, Dordrecht 1998
E. U. Tarafdar and M. S. R. Chowdhury, Topological methods for set-valued nonlinear analysis, World Scientific, Singapore, 2008
Mitroi, F.-C.; Nikodem, K.; Wąsowicz, S. (2013). "Hermite-Hadamard inequalities for convex set-valued functions". Demonstratio Mathematica. 46 (4): 655–662. doi:10.1515/dema-2013-0483.
== See also ==
Selection theorem
Ursescu theorem
Binary relation | Wikipedia/Set-valued_function |
In mathematics, the epigraph or supergraph of a function f : X → [−∞, ∞] valued in the extended real numbers [−∞, ∞] = ℝ ∪ {±∞} is the set
{\displaystyle \operatorname {epi} f=\{(x,r)\in X\times \mathbb {R} ~:~r\geq f(x)\}}
consisting of all points in the Cartesian product X × ℝ lying on or above the function's graph.
Similarly, the strict epigraph epiS f is the set of points in X × ℝ lying strictly above its graph.
Importantly, unlike the graph of f, the epigraph always consists entirely of points in X × ℝ (this is true of the graph only when f is real-valued). If the function takes ±∞ as a value then graph f will not be a subset of its epigraph epi f. For example, if f(x₀) = ∞ then the point (x₀, f(x₀)) = (x₀, ∞) will belong to graph f but not to epi f.
These two sets are nevertheless closely related because the graph can always be reconstructed from the epigraph, and vice versa.
The study of continuous real-valued functions in real analysis has traditionally been closely associated with the study of their graphs, which are sets that provide geometric information (and intuition) about these functions. Epigraphs serve this same purpose in the fields of convex analysis and variational analysis, in which the primary focus is on convex functions valued in [−∞, ∞] instead of continuous functions valued in a vector space (such as ℝ or ℝ²). This is because, in general, for such functions geometric intuition is more readily obtained from a function's epigraph than from its graph. Similarly to how graphs are used in real analysis, the epigraph can often be used to give geometrical interpretations of a convex function's properties, to help formulate or prove hypotheses, or to aid in constructing counterexamples.
== Definition ==
The definition of the epigraph was inspired by that of the graph of a function, where the graph of f : X → Y is defined to be the set
{\displaystyle \operatorname {graph} f:=\{(x,y)\in X\times Y~:~y=f(x)\}.}
The epigraph or supergraph of a function f : X → [−∞, ∞] valued in the extended real numbers [−∞, ∞] = ℝ ∪ {±∞} is the set
{\displaystyle {\begin{alignedat}{4}\operatorname {epi} f&=\{(x,r)\in X\times \mathbb {R} ~:~r\geq f(x)\}\\&=\left[f^{-1}(-\infty )\times \mathbb {R} \right]\cup \bigcup _{x\in f^{-1}(\mathbb {R} )}(\{x\}\times [f(x),\infty ))\end{alignedat}}}
where all sets being unioned in the last line are pairwise disjoint. In the union over x ∈ f⁻¹(ℝ) that appears above on the right hand side of the last line, the set {x} × [f(x), ∞) may be interpreted as a "vertical ray" consisting of (x, f(x)) and all points in X × ℝ "directly above" it.
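The defining condition r ≥ f(x) can be tested directly in code. The sketch below (function names are illustrative, not standard) checks membership in the epigraph and strict epigraph of f(x) = x²:

```python
# Membership tests for the epigraph of f(x) = x**2:
#   epi f        = {(x, r) : r >= f(x)}   (on or above the graph)
#   strict epi f = {(x, r) : r >  f(x)}   (strictly above the graph)

def f(x):
    return x * x

def in_epi(x, r):
    """True iff (x, r) lies in epi f."""
    return r >= f(x)

def in_strict_epi(x, r):
    """True iff (x, r) lies in the strict epigraph of f."""
    return r > f(x)

print(in_epi(1.0, 2.0))         # True: 2 >= 1
print(in_epi(1.0, 1.0))         # True: graph points belong to epi f
print(in_strict_epi(1.0, 1.0))  # False: the graph is removed from the strict epigraph
```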
Similarly, the set of points on or below the graph of a function is its hypograph.
The strict epigraph is the epigraph with the graph removed:
{\displaystyle {\begin{alignedat}{4}\operatorname {epi} _{S}f&=\{(x,r)\in X\times \mathbb {R} ~:~r>f(x)\}\\&=\operatorname {epi} f\setminus \operatorname {graph} f\\&=\bigcup _{x\in X}\left(\{x\}\times (f(x),\infty )\right)\end{alignedat}}}
where all sets being unioned in the last line are pairwise disjoint, and some may be empty.
== Relationships with other sets ==
Despite the fact that f might take one (or both) of ±∞ as a value (in which case its graph would not be a subset of X × ℝ), the epigraph of f is nevertheless defined to be a subset of X × ℝ rather than of X × [−∞, ∞]. This is intentional: when X is a vector space then so is X × ℝ, but X × [−∞, ∞] is never a vector space (since the extended real number line [−∞, ∞] is not a vector space). This deficiency in X × [−∞, ∞] remains even if, instead of being a vector space, X is merely a non-empty subset of some vector space.
The epigraph being a subset of a vector space allows for tools related to real analysis and functional analysis (and other fields) to be more readily applied.
The domain (rather than the codomain) of the function is not particularly important for this definition; it can be any linear space or even an arbitrary set instead of ℝⁿ.
The strict epigraph epiS f and the graph graph f are always disjoint. The epigraph of a function f : X → [−∞, ∞] is related to its graph and strict epigraph by
{\displaystyle \,\operatorname {epi} f\,\subseteq \,\operatorname {epi} _{S}f\,\cup \,\operatorname {graph} f}
where set equality holds if and only if f is real-valued. However,
{\displaystyle \operatorname {epi} f=\left[\operatorname {epi} _{S}f\,\cup \,\operatorname {graph} f\right]\,\cap \,[X\times \mathbb {R} ]}
always holds.
== Reconstructing functions from epigraphs ==
The epigraph is empty if and only if the function is identically equal to infinity.
Just as any function can be reconstructed from its graph, so too can any extended real-valued function f on X be reconstructed from its epigraph E := epi f (even when f takes on ±∞ as a value). Given x ∈ X, the value f(x) can be reconstructed from the intersection E ∩ ({x} × ℝ) of E with the "vertical line" {x} × ℝ passing through x as follows:
case 1: E ∩ ({x} × ℝ) = ∅ if and only if f(x) = ∞,
case 2: E ∩ ({x} × ℝ) = {x} × ℝ if and only if f(x) = −∞,
case 3: otherwise, E ∩ ({x} × ℝ) is necessarily of the form {x} × [f(x), ∞), from which the value of f(x) can be obtained by taking the infimum of the interval.
The above observations can be combined to give a single formula for f(x) in terms of E := epi f. Specifically, for any x ∈ X,
{\displaystyle f(x)=\inf\{r\in \mathbb {R} ~:~(x,r)\in E\}}
where by definition inf ∅ := ∞. This same formula can also be used to reconstruct f from its strict epigraph E := epiS f.
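The reconstruction formula f(x) = inf{r : (x, r) ∈ E} can be sketched numerically: given only a membership test for E = epi f, taking the infimum over a sampled grid of r-values approximately recovers f(x), with inf ∅ := ∞. The grid bounds and step size below are illustrative assumptions, and the target function is only "hidden" inside the membership test:

```python
import math

# Reconstruct f from its epigraph E, given only a membership test for E.
# We sample r on a grid and take the minimum r with (x, r) in E,
# using the convention inf(empty set) = +infinity.

def f(x):
    return abs(x)              # the function we pretend not to know

def in_E(x, r):
    return r >= f(x)           # epigraph membership test only

def reconstruct(x, r_lo=-10.0, r_hi=10.0, step=1e-3):
    n = int(round((r_hi - r_lo) / step))
    candidates = [r_lo + k * step
                  for k in range(n + 1)
                  if in_E(x, r_lo + k * step)]
    return min(candidates) if candidates else math.inf

print(abs(reconstruct(2.0) - 2.0) < 1e-2)   # True: recovers f(2) = 2
```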
== Relationships between properties of functions and their epigraphs ==
A function is convex if and only if its epigraph is a convex set. The epigraph of a real affine function g : ℝⁿ → ℝ is a halfspace in ℝⁿ⁺¹.
A function is lower semicontinuous if and only if its epigraph is closed.
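The correspondence between convex functions and convex epigraphs can be sanity-checked numerically. The sketch below (an illustration, not a proof) samples random points of epi f for the convex function f(x) = x² and verifies that convex combinations of epigraph points stay in the epigraph:

```python
import random

# Check closure of epi f under convex combinations for f(x) = x**2.

def f(x):
    return x * x

def in_epi(point):
    x, r = point
    return r >= f(x) - 1e-12        # small tolerance for floating point

def rand_epi_point(rng):
    x = rng.uniform(-5.0, 5.0)
    return (x, f(x) + rng.uniform(0.0, 5.0))   # on or above the graph

rng = random.Random(0)
ok = True
for _ in range(1000):
    p, q = rand_epi_point(rng), rand_epi_point(rng)
    t = rng.random()
    mid = (t * p[0] + (1 - t) * q[0], t * p[1] + (1 - t) * q[1])
    ok = ok and in_epi(mid)

print(ok)  # True: every convex combination stayed in epi f
```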
== See also ==
Effective domain
Hypograph (mathematics) – Region underneath a graph
Proper convex function
== Citations ==
== References ==
Rockafellar, R. Tyrrell; Wets, Roger J.-B. (26 June 2009). Variational Analysis. Grundlehren der mathematischen Wissenschaften. Vol. 317. Berlin New York: Springer Science & Business Media. ISBN 9783642024313. OCLC 883392544.
Rockafellar, Ralph Tyrell (1996), Convex Analysis, Princeton University Press, Princeton, NJ. ISBN 0-691-01586-4. | Wikipedia/Epigraph_(mathematics) |
Differential geometry of curves is the branch of geometry that deals with smooth curves in the plane and the Euclidean space by methods of differential and integral calculus.
Many specific curves have been thoroughly investigated using the synthetic approach. Differential geometry takes another path: curves are represented in a parametrized form, and their geometric properties and various quantities associated with them, such as the curvature and the arc length, are expressed via derivatives and integrals using vector calculus. One of the most important tools used to analyze a curve is the Frenet frame, a moving frame that provides a coordinate system at each point of the curve that is "best adapted" to the curve near that point.
The theory of curves is much simpler and narrower in scope than the theory of surfaces and its higher-dimensional generalizations because a regular curve in a Euclidean space has no intrinsic geometry. Any regular curve may be parametrized by the arc length (the natural parametrization). From the point of view of a theoretical point particle on the curve that does not know anything about the ambient space, all curves would appear the same. Different space curves are only distinguished by how they bend and twist. Quantitatively, this is measured by the differential-geometric invariants called the curvature and the torsion of a curve. The fundamental theorem of curves asserts that the knowledge of these invariants completely determines the curve.
== Definitions ==
A parametric Cr-curve or a Cr-parametrization is a vector-valued function γ : I → ℝⁿ that is r-times continuously differentiable (that is, the component functions of γ are r-times continuously differentiable), where n ∈ ℕ, r ∈ ℕ ∪ {∞}, and I is a non-empty interval of real numbers. The image of the parametric curve is γ[I] ⊆ ℝⁿ. The parametric curve γ and its image γ[I] must be distinguished because a given subset of ℝⁿ can be the image of many distinct parametric curves. The parameter t in γ(t) can be thought of as representing time, and γ the trajectory of a moving point in space. When I is a closed interval [a, b], γ(a) is called the starting point and γ(b) the endpoint of γ. If the starting and the end points coincide (that is, γ(a) = γ(b)), then γ is a closed curve or a loop. To be a Cr-loop, the function γ must be r-times continuously differentiable and satisfy γ(k)(a) = γ(k)(b) for 0 ≤ k ≤ r.
The parametric curve is simple if γ|(a,b) : (a, b) → ℝⁿ is injective. It is analytic if each component function of γ is an analytic function, that is, if it is of class Cω.
The curve γ is regular of order m (where m ≤ r) if, for every t ∈ I, the set {γ′(t), γ″(t), …, γ(m)(t)} is a linearly independent subset of ℝⁿ. In particular, a parametric C1-curve γ is regular if and only if γ′(t) ≠ 0 for every t ∈ I.
== Re-parametrization and equivalence relation ==
Given the image of a parametric curve, there are several different parametrizations of the parametric curve. Differential geometry aims to describe the properties of parametric curves that are invariant under certain reparametrizations. A suitable equivalence relation on the set of all parametric curves must be defined. The differential-geometric properties of a parametric curve (such as its length, its Frenet frame, and its generalized curvature) are invariant under reparametrization and therefore properties of the equivalence class itself. The equivalence classes are called Cr-curves and are central objects studied in the differential geometry of curves.
Two parametric Cr-curves, γ1 : I1 → ℝⁿ and γ2 : I2 → ℝⁿ, are said to be equivalent if and only if there exists a bijective Cr-map φ : I1 → I2 such that, for all t ∈ I1,
{\displaystyle \varphi '(t)\neq 0\quad {\text{and}}\quad \gamma _{2}{\bigl (}\varphi (t){\bigr )}=\gamma _{1}(t).}
Re-parametrization defines an equivalence relation on the set of all parametric Cr-curves of class Cr. An equivalence class of this relation is simply called a Cr-curve.
An even finer equivalence relation of oriented parametric Cr-curves can be defined by requiring φ to satisfy φ′(t) > 0.
Equivalent parametric Cr-curves have the same image, and equivalent oriented parametric Cr-curves even traverse the image in the same direction.
== Length and natural parametrization ==
The length l of a parametric C1-curve γ : [a, b] → ℝⁿ is defined as
{\displaystyle l~{\stackrel {\text{def}}{=}}~\int _{a}^{b}\left\|\gamma '(t)\right\|\,\mathrm {d} {t}.}
The length of a parametric curve is invariant under reparametrization and is therefore a differential-geometric property of the parametric curve.
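The length integral can be approximated by the length of an inscribed polygon, which converges to the integral for C¹ curves as the subdivision is refined; the sketch below (the subdivision count is an illustrative choice) recovers the circumference 2π of the unit circle:

```python
import math

# Approximate l = integral of ||gamma'(t)|| dt by summing the chord
# lengths of a fine polygonal approximation of the curve.

def gamma(t):                            # unit circle in R^2
    return (math.cos(t), math.sin(t))

def length(gamma, a, b, n=100000):
    total = 0.0
    prev = gamma(a)
    for k in range(1, n + 1):
        cur = gamma(a + (b - a) * k / n)
        total += math.dist(prev, cur)    # Euclidean chord length
        prev = cur
    return total

l = length(gamma, 0.0, 2.0 * math.pi)
print(abs(l - 2.0 * math.pi) < 1e-6)     # True: circumference of the unit circle
```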
For each regular parametric Cr-curve γ : [a, b] → ℝⁿ, where r ≥ 1, the function
{\displaystyle \forall t\in [a,b]:\quad s(t)~{\stackrel {\text{def}}{=}}~\int _{a}^{t}\left\|\gamma '(x)\right\|\,\mathrm {d} {x}}
is defined.
Write γ̄(s) = γ(t(s)), where t(s) is the inverse function of s(t). This gives a re-parametrization γ̄ of γ that is called an arc-length parametrization, natural parametrization, or unit-speed parametrization. The parameter s(t) is called the natural parameter of γ.
This parametrization is preferred because the natural parameter s(t) traverses the image of γ at unit speed, so that
{\displaystyle \forall t\in I:\quad \left\|{\overline {\gamma }}'{\bigl (}s(t){\bigr )}\right\|=1.}
In practice, it is often very difficult to calculate the natural parametrization of a parametric curve, but it is useful for theoretical arguments.
For a given parametric curve γ, the natural parametrization is unique up to a shift of parameter.
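The natural parametrization can be carried out numerically, which also illustrates why it is rarely available in closed form: s(t) must be computed by quadrature and inverted, here by bisection. All step sizes and tolerances in the sketch below are illustrative choices; the unit-speed property is checked for an ellipse, which is not unit-speed in its standard parametrization:

```python
import math

# Numerically re-parametrize an ellipse by arc length and check
# the unit-speed property ||gamma_bar'(s)|| = 1.

def gamma(t):                               # ellipse, not unit speed
    return (2.0 * math.cos(t), math.sin(t))

def speed(t, h=1e-6):                       # ||gamma'(t)|| by central difference
    (x1, y1), (x0, y0) = gamma(t + h), gamma(t - h)
    return math.hypot(x1 - x0, y1 - y0) / (2 * h)

def s_of_t(t, n=1000):                      # s(t) by the trapezoid rule
    xs = [t * k / n for k in range(n + 1)]
    vals = [speed(x) for x in xs]
    return (t / n) * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def t_of_s(s, lo=0.0, hi=2.0 * math.pi):    # invert s(t) by bisection
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if s_of_t(mid) < s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def gamma_bar(s):                           # natural parametrization
    return gamma(t_of_s(s))

h = 1e-4                                    # check unit speed at s = 1.0
(x1, y1), (x0, y0) = gamma_bar(1.0 + h), gamma_bar(1.0 - h)
v = math.hypot(x1 - x0, y1 - y0) / (2 * h)
print(abs(v - 1.0) < 1e-3)                  # True
```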
The quantity
{\displaystyle E(\gamma )~{\stackrel {\text{def}}{=}}~{\frac {1}{2}}\int _{a}^{b}\left\|\gamma '(t)\right\|^{2}~\mathrm {d} {t}}
is sometimes called the energy or action of the curve; this name is justified because the geodesic equations are the Euler–Lagrange equations of motion for this action.
== Frenet frame ==
A Frenet frame is a moving reference frame of n orthonormal vectors ei(t) which are used to describe a curve locally at each point γ(t). It is the main tool in the differential geometric treatment of curves because it is far easier and more natural to describe local properties (e.g. curvature, torsion) in terms of a local reference system than using a global one such as Euclidean coordinates.
Given a Cn+1-curve γ in ℝⁿ which is regular of order n, the Frenet frame for the curve is the set of orthonormal vectors e1(t), …, en(t), called Frenet vectors. They are constructed from the derivatives of γ(t) using the Gram–Schmidt orthogonalization algorithm with
{\displaystyle {\begin{aligned}\mathbf {e} _{1}(t)&={\frac {{\boldsymbol {\gamma }}'(t)}{\left\|{\boldsymbol {\gamma }}'(t)\right\|}}\\[1ex]\mathbf {e} _{j}(t)&={\frac {{\overline {\mathbf {e} _{j}}}(t)}{\left\|{\overline {\mathbf {e} _{j}}}(t)\right\|}},&{\overline {\mathbf {e} _{j}}}(t)&={\boldsymbol {\gamma }}^{(j)}(t)-\sum _{i=1}^{j-1}\left\langle {\boldsymbol {\gamma }}^{(j)}(t),\,\mathbf {e} _{i}(t)\right\rangle \,\mathbf {e} _{i}(t)\end{aligned}}}
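The Gram–Schmidt construction above can be carried out concretely. The sketch below computes e1 and e2 for the helix γ(t) = (cos t, sin t, t) from its exact derivatives and checks that the result is orthonormal; the small vector helpers are written out so the block is self-contained:

```python
import math

# Gram-Schmidt construction of the first two Frenet vectors for the
# helix gamma(t) = (cos t, sin t, t), using its exact derivatives.

def dot(u, v): return sum(a * b for a, b in zip(u, v))
def norm(u): return math.sqrt(dot(u, u))
def scale(u, c): return [a * c for a in u]
def sub(u, v): return [a - b for a, b in zip(u, v)]

t = 0.7
d1 = [-math.sin(t), math.cos(t), 1.0]    # gamma'(t)
d2 = [-math.cos(t), -math.sin(t), 0.0]   # gamma''(t)

e1 = scale(d1, 1.0 / norm(d1))           # unit tangent
e2_bar = sub(d2, scale(e1, dot(d2, e1))) # reject gamma'' from the tangent
e2 = scale(e2_bar, 1.0 / norm(e2_bar))   # unit normal

orthonormal = (abs(norm(e1) - 1.0) < 1e-12 and
               abs(norm(e2) - 1.0) < 1e-12 and
               abs(dot(e1, e2)) < 1e-12)
print(orthonormal)  # True
```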
The real-valued functions χi(t) are called generalized curvatures and are defined as
{\displaystyle \chi _{i}(t)={\frac {{\bigl \langle }\mathbf {e} _{i}'(t),\mathbf {e} _{i+1}(t){\bigr \rangle }}{\left\|{\boldsymbol {\gamma }}'(t)\right\|}}}
The Frenet frame and the generalized curvatures are invariant under reparametrization and are therefore differential geometric properties of the curve. For curves in ℝ³, χ1(t) is the curvature and χ2(t) is the torsion.
=== Bertrand curve ===
A Bertrand curve is a regular curve in ℝ³ with the additional property that there is a second curve in ℝ³ such that the principal normal vectors to these two curves are identical at each corresponding point. In other words, if γ1(t) and γ2(t) are two curves in ℝ³ such that for any t the two principal normals N1(t), N2(t) are equal, then γ1 and γ2 are Bertrand curves, and γ2 is called the Bertrand mate of γ1. We can write γ2(t) = γ1(t) + r N1(t) for some constant r.
According to problem 25 in Kühnel's "Differential Geometry Curves – Surfaces – Manifolds", it is also true that two Bertrand curves that do not lie in the same two-dimensional plane are characterized by the existence of a linear relation a κ(t) + b τ(t) = 1 where κ(t) and τ(t) are the curvature and torsion of γ1(t) and a and b are real constants with a ≠ 0. Furthermore, the product of torsions of a Bertrand pair of curves is constant.
If γ1 has more than one Bertrand mate then it has infinitely many. This only occurs when γ1 is a circular helix.
== Special Frenet vectors and generalized curvatures ==
The first three Frenet vectors and generalized curvatures can be visualized in three-dimensional space. They have additional names and more semantic information attached to them.
=== Tangent vector ===
If a curve γ represents the path of a particle over time, then the instantaneous velocity of the particle at a given position P is expressed by a vector, called the tangent vector to the curve at P. Mathematically, given a parametrized C1 curve γ = γ(t), for every value t = t0 of the time parameter, the vector
{\displaystyle {\boldsymbol {\gamma }}'(t_{0})=\left.{\frac {\mathrm {d} }{\mathrm {d} t}}{\boldsymbol {\gamma }}(t)\right|_{t=t_{0}}}
is the tangent vector at the point P = γ(t0). Generally speaking, the tangent vector may be zero. The tangent vector's magnitude ‖γ′(t0)‖ is the speed at the time t0.
The first Frenet vector e1(t) is the unit tangent vector in the same direction, called simply the tangent direction, defined at each regular point of γ:
{\displaystyle \mathbf {e} _{1}(t)={\frac {{\boldsymbol {\gamma }}'(t)}{\left\|{\boldsymbol {\gamma }}'(t)\right\|}}.}
If the time parameter is replaced by the arc length, t = s, then the tangent vector has unit length and the formula simplifies:
{\displaystyle \mathbf {e} _{1}(s)={\boldsymbol {\gamma }}'(s).}
However, the interpretation in terms of the particle's velocity (with dimension of length per time) then no longer applies.
The tangent direction determines the orientation of the curve, or the forward direction, corresponding to the increasing values of the parameter. The tangent direction taken as a curve traces the spherical image of the original curve.
=== Normal vector or curvature vector ===
A curve normal vector, sometimes called the curvature vector, indicates the deviance of the curve from being a straight line.
It is defined as the vector rejection of the particle's acceleration from the tangent direction:
{\displaystyle {\overline {\mathbf {e} _{2}}}(t)={\boldsymbol {\gamma }}''(t)-{\bigl \langle }{\boldsymbol {\gamma }}''(t),\mathbf {e} _{1}(t){\bigr \rangle }\,\mathbf {e} _{1}(t),}
where the acceleration is defined as the second derivative of position with respect to time:
{\displaystyle {\boldsymbol {\gamma }}''(t_{0})=\left.{\frac {\mathrm {d} ^{2}}{\mathrm {d} t^{2}}}{\boldsymbol {\gamma }}(t)\right|_{t=t_{0}}}
Its normalized form, the unit normal vector, is the second Frenet vector e2(t) and is defined as
{\displaystyle \mathbf {e} _{2}(t)={\frac {{\overline {\mathbf {e} _{2}}}(t)}{\left\|{\overline {\mathbf {e} _{2}}}(t)\right\|}}.}
The tangent and the normal vector at point t define the osculating plane at point t.
It can be shown that ē2(t) ∝ e1′(t). Therefore,
{\displaystyle \mathbf {e} _{2}(t)={\frac {\mathbf {e} _{1}'(t)}{\left\|\mathbf {e} _{1}'(t)\right\|}}.}
=== Curvature ===
The first generalized curvature χ1(t) is called curvature and measures the deviance of γ from being a straight line relative to the osculating plane. It is defined as
{\displaystyle \kappa (t)=\chi _{1}(t)={\frac {{\bigl \langle }\mathbf {e} _{1}'(t),\mathbf {e} _{2}(t){\bigr \rangle }}{\left\|{\boldsymbol {\gamma }}'(t)\right\|}}}
and is called the curvature of γ at point t. It can be shown that
{\displaystyle \kappa (t)={\frac {\left\|\mathbf {e} _{1}'(t)\right\|}{\left\|{\boldsymbol {\gamma }}'(t)\right\|}}.}
The reciprocal of the curvature, 1/κ(t), is called the radius of curvature.
A circle with radius r has a constant curvature of κ(t) = 1/r, whereas a line has a curvature of 0.
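The formula κ(t) = ‖e1′(t)‖ / ‖γ′(t)‖ can be checked numerically for a circle of radius r, which should yield the constant value 1/r. The step sizes and tolerance below are illustrative choices:

```python
import math

# Numerical check of kappa(t) = ||e1'(t)|| / ||gamma'(t)|| for a circle
# of radius r = 3, whose curvature is the constant 1/r.

r = 3.0

def gamma(t):
    return (r * math.cos(t), r * math.sin(t))

def e1(t, h=1e-6):                          # unit tangent via central difference
    (x1, y1), (x0, y0) = gamma(t + h), gamma(t - h)
    vx, vy = (x1 - x0) / (2 * h), (y1 - y0) / (2 * h)
    s = math.hypot(vx, vy)
    return (vx / s, vy / s)

def kappa(t, h=1e-4):
    (a1, b1), (a0, b0) = e1(t + h), e1(t - h)
    de1 = math.hypot(a1 - a0, b1 - b0) / (2 * h)     # ||e1'(t)||
    (x1, y1), (x0, y0) = gamma(t + h), gamma(t - h)
    spd = math.hypot(x1 - x0, y1 - y0) / (2 * h)     # ||gamma'(t)||
    return de1 / spd

print(abs(kappa(0.5) - 1.0 / r) < 1e-4)  # True: constant curvature 1/r
```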
=== Binormal vector ===
The unit binormal vector is the third Frenet vector e3(t). It is always orthogonal to the unit tangent and normal vectors at t. It is defined as
{\displaystyle \mathbf {e} _{3}(t)={\frac {{\overline {\mathbf {e} _{3}}}(t)}{\left\|{\overline {\mathbf {e} _{3}}}(t)\right\|}},\quad {\overline {\mathbf {e} _{3}}}(t)={\boldsymbol {\gamma }}'''(t)-{\bigl \langle }{\boldsymbol {\gamma }}'''(t),\mathbf {e} _{1}(t){\bigr \rangle }\,\mathbf {e} _{1}(t)-{\bigl \langle }{\boldsymbol {\gamma }}'''(t),\mathbf {e} _{2}(t){\bigr \rangle }\,\mathbf {e} _{2}(t)}
In 3-dimensional space, the equation simplifies to
{\displaystyle \mathbf {e} _{3}(t)=\mathbf {e} _{1}(t)\times \mathbf {e} _{2}(t)}
or to
{\displaystyle \mathbf {e} _{3}(t)=-\mathbf {e} _{1}(t)\times \mathbf {e} _{2}(t).}
That either sign may occur is illustrated by the examples of a right-handed helix and a left-handed helix.
=== Torsion ===
The second generalized curvature χ2(t) is called torsion and measures the deviance of γ from being a plane curve. In other words, if the torsion is zero, the curve lies completely in the same osculating plane (there is only one osculating plane for every point t). It is defined as
{\displaystyle \tau (t)=\chi _{2}(t)={\frac {{\bigl \langle }\mathbf {e} _{2}'(t),\mathbf {e} _{3}(t){\bigr \rangle }}{\left\|{\boldsymbol {\gamma }}'(t)\right\|}}}
and is called the torsion of γ at point t.
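For curves in ℝ³ that are regular of order 3, the curvature and torsion defined above agree with the standard textbook formulas κ = ‖γ′ × γ″‖ / ‖γ′‖³ and τ = (γ′ × γ″) · γ‴ / ‖γ′ × γ″‖². The sketch below evaluates them for a helix γ(t) = (a cos t, a sin t, b t) using exact derivatives, recovering the well-known closed forms κ = a/(a² + b²) and τ = b/(a² + b²):

```python
import math

# Curvature and torsion of the helix gamma(t) = (a cos t, a sin t, b t)
# via kappa = ||g' x g''|| / ||g'||^3, tau = (g' x g'') . g''' / ||g' x g''||^2.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v): return sum(x * y for x, y in zip(u, v))
def norm(u): return math.sqrt(dot(u, u))

a, b, t = 2.0, 1.0, 0.3
g1 = (-a * math.sin(t),  a * math.cos(t), b)    # gamma'
g2 = (-a * math.cos(t), -a * math.sin(t), 0.0)  # gamma''
g3 = ( a * math.sin(t), -a * math.cos(t), 0.0)  # gamma'''

c = cross(g1, g2)
kappa = norm(c) / norm(g1) ** 3
tau = dot(c, g3) / dot(c, c)

print(abs(kappa - a / (a*a + b*b)) < 1e-12)  # True
print(abs(tau - b / (a*a + b*b)) < 1e-12)    # True
```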
=== Aberrancy ===
The third derivative may be used to define aberrancy, a metric of non-circularity of a curve.
== Main theorem of curve theory ==
Given n − 1 functions
{\displaystyle \chi _{i}\in C^{n-i}([a,b],\mathbb {R} ),\quad \chi _{i}(t)>0,\quad 1\leq i\leq n-1}
then there exists a unique (up to transformations using the Euclidean group) Cn+1-curve γ which is regular of order n and has the following properties:
{\displaystyle {\begin{aligned}\|\gamma '(t)\|&=1&t\in [a,b]\\\chi _{i}(t)&={\frac {\langle \mathbf {e} _{i}'(t),\mathbf {e} _{i+1}(t)\rangle }{\|{\boldsymbol {\gamma }}'(t)\|}}\end{aligned}}}
where the set e1(t), …, en(t) is the Frenet frame for the curve.
By additionally providing a start t0 in I, a starting point p0 in ℝⁿ and an initial positive orthonormal Frenet frame {e1, …, en − 1} with
{\displaystyle {\begin{aligned}{\boldsymbol {\gamma }}(t_{0})&=\mathbf {p} _{0}\\\mathbf {e} _{i}(t_{0})&=\mathbf {e} _{i},\quad 1\leq i\leq n-1\end{aligned}}}
the Euclidean transformations are eliminated to obtain a unique curve γ.
== Frenet–Serret formulas ==
The Frenet–Serret formulas are a set of ordinary differential equations of first order. The solution is the set of Frenet vectors describing the curve specified by the generalized curvature functions χi.
=== 2 dimensions ===
{\displaystyle {\begin{bmatrix}\mathbf {e} _{1}'(t)\\\mathbf {e} _{2}'(t)\end{bmatrix}}=\left\Vert \gamma '(t)\right\Vert {\begin{bmatrix}0&\kappa (t)\\-\kappa (t)&0\end{bmatrix}}{\begin{bmatrix}\mathbf {e} _{1}(t)\\\mathbf {e} _{2}(t)\end{bmatrix}}}
=== 3 dimensions ===
{\displaystyle {\begin{bmatrix}\mathbf {e} _{1}'(t)\\\mathbf {e} _{2}'(t)\\\mathbf {e} _{3}'(t)\end{bmatrix}}=\left\Vert \gamma '(t)\right\Vert {\begin{bmatrix}0&\kappa (t)&0\\-\kappa (t)&0&\tau (t)\\0&-\tau (t)&0\end{bmatrix}}{\begin{bmatrix}\mathbf {e} _{1}(t)\\\mathbf {e} _{2}(t)\\\mathbf {e} _{3}(t)\end{bmatrix}}}
=== n dimensions (general formula) ===
{\displaystyle {\begin{bmatrix}\mathbf {e} _{1}'(t)\\\mathbf {e} _{2}'(t)\\\vdots \\\mathbf {e} _{n-1}'(t)\\\mathbf {e} _{n}'(t)\end{bmatrix}}=\left\Vert \gamma '(t)\right\Vert {\begin{bmatrix}0&\chi _{1}(t)&\cdots &0&0\\-\chi _{1}(t)&0&\cdots &0&0\\\vdots &\vdots &\ddots &\vdots &\vdots \\0&0&\cdots &0&\chi _{n-1}(t)\\0&0&\cdots &-\chi _{n-1}(t)&0\end{bmatrix}}{\begin{bmatrix}\mathbf {e} _{1}(t)\\\mathbf {e} _{2}(t)\\\vdots \\\mathbf {e} _{n-1}(t)\\\mathbf {e} _{n}(t)\end{bmatrix}}}
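The Frenet–Serret system can be integrated numerically. The sketch below (step count and tolerance are illustrative choices) applies explicit Euler steps to the 2-dimensional system with constant κ = 1 for a unit-speed curve; the frame then simply rotates, so starting from e1(0) = (1, 0), e2(0) = (0, 1), we expect e1(π/2) ≈ (0, 1):

```python
import math

# Euler integration of the 2-dimensional Frenet-Serret system
#   e1' = ||gamma'|| * kappa * e2,   e2' = -||gamma'|| * kappa * e1
# with constant kappa = 1 and unit speed over t in [0, pi/2].

kappa, speed = 1.0, 1.0
e1, e2 = [1.0, 0.0], [0.0, 1.0]
n, T = 100000, math.pi / 2
h = T / n
for _ in range(n):
    d1 = [speed * kappa * v for v in e2]    # e1'
    d2 = [-speed * kappa * v for v in e1]   # e2'
    e1 = [u + h * v for u, v in zip(e1, d1)]
    e2 = [u + h * v for u, v in zip(e2, d2)]

# After rotating by pi/2, e1 should be close to (0, 1).
print(abs(e1[0]) < 1e-3 and abs(e1[1] - 1.0) < 1e-3)  # True
```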
== See also ==
List of curves topics
== References ==
== Further reading ==
Kreyszig, Erwin (1991). Differential Geometry. New York: Dover Publications. ISBN 0-486-66721-9. Chapter II is a classical treatment of Theory of Curves in 3-dimensions. | Wikipedia/Differential_geometry_of_curves |
In mathematics, the range of a function may refer to either the function's codomain or its image. In some cases the codomain and the image of a function are the same set; such a function is called surjective or onto. For any non-surjective function f : X → Y, the codomain Y and the image Ỹ are different; however, a new function can be defined with the original function's image as its codomain, f̃ : X → Ỹ where f̃(x) = f(x). This new function is surjective.
== Definitions ==
Given two sets X and Y, a binary relation f between X and Y is a function (from X to Y) if for every element x in X there is exactly one y in Y such that f relates x to y. The sets X and Y are called the domain and codomain of f, respectively. The image of the function f is the subset of Y consisting of only those elements y of Y such that there is at least one x in X with f(x) = y.
== Usage ==
As the term "range" can have different meanings, it is considered good practice to define it the first time it is used in a textbook or article. Older books, when they use the word "range", tend to use it to mean what is now called the codomain. More modern books, if they use the word "range" at all, generally use it to mean what is now called the image. To avoid any confusion, a number of modern books do not use the word "range" at all.
== Elaboration and example ==
Given a function f : X → Y with domain X, the range of f, sometimes denoted ran(f) or Range(f), may refer to the codomain or target set Y (i.e., the set into which all of the output of f is constrained to fall), or to f(X), the image of the domain of f under f (i.e., the subset of Y consisting of all actual outputs of f). The image of a function is always a subset of the codomain of the function.
As an example of the two different usages, consider the function {\displaystyle f(x)=x^{2}} as it is used in real analysis (that is, as a function that inputs a real number and outputs its square). In this case, its codomain is the set of real numbers {\displaystyle \mathbb {R} }, but its image is the set of non-negative real numbers {\displaystyle \mathbb {R} ^{+}}, since {\displaystyle x^{2}} is never negative if {\displaystyle x} is real. For this function, if we use "range" to mean codomain, it refers to {\displaystyle \mathbb {R} }; if we use "range" to mean image, it refers to {\displaystyle \mathbb {R} ^{+}}.
For some functions, the image and the codomain coincide; these functions are called surjective or onto. For example, consider the function {\displaystyle f(x)=2x,} which inputs a real number and outputs its double. For this function, both the codomain and the image are the set of all real numbers, so the word range is unambiguous.
Even in cases where the image and codomain of a function are different, a new function can be uniquely defined with its codomain as the image of the original function. For example, as a function from the integers to the integers, the doubling function {\displaystyle f(n)=2n} is not surjective because only the even integers are part of the image. However, a new function {\displaystyle {\tilde {f}}(n)=2n} whose domain is the integers and whose codomain is the even integers is surjective. For {\displaystyle {\tilde {f}},} the word range is unambiguous.
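The doubling example can be mirrored on a finite window of the integers; the helper below (hypothetical, for illustration) tests surjectivity by comparing the set of outputs with a declared codomain:

```python
def surjective(mapping, codomain):
    """A function (given as a dict) is surjective iff its outputs fill the codomain."""
    return set(mapping.values()) == set(codomain)

# f(n) = 2n on a finite window of the integers
f = {n: 2 * n for n in range(-5, 6)}
evens = {m for m in range(-10, 11) if m % 2 == 0}

assert not surjective(f, range(-10, 11))  # not onto all integers in the window
assert surjective(f, evens)               # onto the even integers: f-tilde is surjective
```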
== See also ==
Bijection, injection and surjection
Essential range
== Notes and references ==
== Bibliography ==
Childs, Lindsay N. (2009). Childs, Lindsay N. (ed.). A Concrete Introduction to Higher Algebra. Undergraduate Texts in Mathematics (3rd ed.). Springer. doi:10.1007/978-0-387-74725-5. ISBN 978-0-387-74527-5. OCLC 173498962.
Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (3rd ed.). Wiley. ISBN 978-0-471-43334-7. OCLC 52559229.
Hungerford, Thomas W. (1974). Algebra. Graduate Texts in Mathematics. Vol. 73. Springer. doi:10.1007/978-1-4612-6101-8. ISBN 0-387-90518-9. OCLC 703268.
Rudin, Walter (1991). Functional Analysis (2nd ed.). McGraw Hill. ISBN 0-07-054236-8. | Wikipedia/Range_of_a_function |
A graphic organizer, also known as a knowledge map, concept map, story map, cognitive organizer, advance organizer, or concept diagram, is a pedagogical tool that uses visual symbols to express knowledge and concepts through relationships between them.
The main purpose of a graphic organizer is to provide a visual aid to facilitate learning and instruction.
== History ==
Graphic organizers have a history extending to the early 1960s. David Paul Ausubel was an American psychologist who coined the phrase "advance organizers" to refer to tools which bridge "the gap between what learners already know and what they have to learn at any given moment in their educational careers." Ausubel's advance organizers originally took the form of prose to merge the familiar—what students know—with the new or unfamiliar—what they have discovered or are learning. The advance organizer intended to help learners more easily retain verbal information but was written in a higher level of language. Later the advance organizers were represented as paradigms, schematic representations, and diagrams as they took on more geometric shapes.
In 1969, Richard F. Barron introduced a tree diagram referred to as a "structured overview." The diagram introduced new vocabulary and used spatial characteristics and language written at the same level as the material being learned. In the classroom, this hierarchical organization was used by the teacher as a pre-reading strategy to show relationships among vocabulary. Its use later expanded to cover not only pre-reading strategies but also supplementary and post-reading activities. It was not until the 1980s that the term graphic organizer was used.
== Theories ==
Various theories have been put forth to undergird the assimilation of knowledge through the use of graphic organizers. According to Ausubel's Subsumption Theory, when a learner connects new information to their own preexisting ideas, they absorb new information. By relating new information to prior knowledge, learners reorganize their cognitive structures rather than build an entirely new one from scratch. Educational psychologist Richard E. Mayer reinterpreted Ausubel's subsumption theory within his own theory of assimilation encoding. In applying the use of organizers to the assimilation theory, advance organizers facilitate prior knowledge to working memory as well as its active integration of received information. However, he warned that advance organizers are not beneficial if the tools do not ask the learner to actively incorporate new information or if the preceding teaching methods and materials already are well-defined and well-structured.
Others find a basis for graphic organizers in schema theory, developed by Swiss psychologist Jean Piaget. In psychology, schema refers to a cognitive framework or concept that helps to organize and interpret information. The brain uses these patterns of thinking and behavior, held in long-term memory, to help people interpret the world around them. Piaget's theory is that a scheme is both a category of knowledge and the process of acquiring new knowledge. He believed that as people continually adapt to their environments, they take in new information and acquire additional knowledge. Culbert et al. (1998) posit that by using graphic organizers, prior knowledge is activated, and learners can add new information to their schema and thus improve comprehension of the material.
== Application ==
=== Pre-writing tool for students with learning disabilities ===
In one study of 21 students on Individualized Education Plans, graphic organizers were used during the pre-writing process to gauge student achievement on the persuasive essay in a 10th grade writing classroom. Explicit instruction on how to fill out the organizer, along with color-coded sections and sufficient class time to complete them, resulted in 89 percent of students saying in a post-assignment survey that they felt graphic organizers were helpful.
=== Metacognition tool for second-language (L2) learners ===
One yearlong study of 3rd and 5th grade California dual language classrooms found that through the use of graphic organizers, students increased higher-order thinking skills, enhanced vocabulary acquisition, and developed the academic language of science.
== Types of organizers ==
Graphic organizers take many forms:
Relational organizers
Storyboard
Fishbone - Ishikawa diagram
Cause and effect web
Chart
T-Chart
Category/classification organizers
Concept mapping
KWL tables
Mind mapping
Sequence organizers
Chain
Ladder - Story map
Stairs - Topic map
Compare contrast organizers
Dashboard
Venn diagrams
Double bubble map
Concept development organizers
Story web
Word web
Circle chart
Flow chart
Cluster diagram
Lotus diagram
Star diagram
Options and control device organizers
Mechanical control panel
Graphical user interface
== Enhancing students' skills ==
A review study concluded that using graphic organizers improves student performance in the following areas:
Retention
Students remember information better and recall it more easily when it is represented and learned both visually and verbally.
Reading comprehension
The use of graphic organizers helps improve the reading comprehension of students.
Student achievement
Students with and without learning disabilities improve achievement across content areas and grade levels.
Thinking and learning skills; critical thinking
When students develop and use a graphic organizer their higher order thinking and critical thinking skills are enhanced.
== Criticism ==
In four studies on the effects of advance organizers on learning tasks, presented in a paper Richard F. Barron delivered to the Annual Meeting of the National Reading Conference in 1980, no significant difference was found relative to control groups that did not use organizers. In the same paper, Barron did find that student-constructed postorganizers showed more benefits. Graphic postorganizers focus on learners finding the relationships of vocabulary terms by manipulating them in a diagram or schematic way after they have already learned these terms.
== See also ==
Diagram
Four square writing method
KWL table
Thinking Maps
Visualization (graphic)
== References == | Wikipedia/Graphic_organizer |
In mathematics, differential refers to several related notions derived from the early days of calculus, put on a rigorous footing, such as infinitesimal differences and the derivatives of functions.
The term is used in various branches of mathematics such as calculus, differential geometry, algebraic geometry and algebraic topology.
== Introduction ==
The term differential is used nonrigorously in calculus to refer to an infinitesimal ("infinitely small") change in some varying quantity. For example, if x is a variable, then a change in the value of x is often denoted Δx (pronounced delta x). The differential dx represents an infinitely small change in the variable x. The idea of an infinitely small or infinitely slow change is, intuitively, extremely useful, and there are a number of ways to make the notion mathematically precise.
Using calculus, it is possible to relate the infinitely small changes of various variables to each other mathematically using derivatives. If y is a function of x, then the differential dy of y is related to dx by the formula
{\displaystyle dy={\frac {dy}{dx}}\,dx,}
where {\displaystyle {\frac {dy}{dx}}\,} denotes not 'dy divided by dx' as one would intuitively read, but 'the derivative of y with respect to x'. This formula summarizes the idea that the derivative of y with respect to x is the limit of the ratio of differences Δy/Δx as Δx approaches zero:
{\displaystyle {\dfrac {\mathrm {d} y(x)}{\mathrm {d} x}}=\lim _{\Delta x\rightarrow 0}{\dfrac {\Delta y(x)}{\Delta x}}}
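Numerically, the difference quotient Δy/Δx does approach the derivative as Δx shrinks; a small sketch for y = x² at x = 3, where the derivative is 6:

```python
# Difference quotients of y = x^2 at x = 3 approach the derivative 6.
def diff_quotient(f, x, dx):
    return (f(x + dx) - f(x)) / dx

f = lambda x: x * x
for dx in (1e-1, 1e-2, 1e-3, 1e-4):
    q = diff_quotient(f, 3.0, dx)
    # for this f the quotient is exactly 6 + dx, so the error equals the step
    assert abs(q - 6.0) < 2 * dx
```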
The symbol {\displaystyle d} may appear italicised ({\displaystyle d}), slanted (d), or upright ({\displaystyle \mathrm {d} }); the upright form emphasizes that {\displaystyle \mathrm {d} } is an operator designation, like the summation operator {\displaystyle \left(\sum \right)}, the delta operator (the finite difference operator) ({\displaystyle \Delta }), or the trigonometric functions ({\displaystyle \sin ,\cos ,\tan }).
=== Basic notions ===
In calculus, the differential represents a change in the linearization of a function.
The total differential is its generalization for functions of multiple variables.
In traditional approaches to calculus, differentials (e.g. dx, dy, dt, etc.) are interpreted as infinitesimals. There are several methods of defining infinitesimals rigorously, but it is sufficient to say that an infinitesimal number is smaller in absolute value than any positive real number, just as an infinitely large number is larger than any real number.
The differential is another name for the Jacobian matrix of partial derivatives of a function from Rn to Rm (especially when this matrix is viewed as a linear map).
More generally, the differential or pushforward refers to the derivative of a map between smooth manifolds and the pushforward operations it defines. The differential is also used to define the dual concept of pullback.
Stochastic calculus provides a notion of stochastic differential and an associated calculus for stochastic processes.
The integrator in a Stieltjes integral is represented as the differential of a function. Formally, the differential appearing under the integral behaves exactly as a differential: thus, the integration by substitution and integration by parts formulae for Stieltjes integral correspond, respectively, to the chain rule and product rule for the differential.
== History and usage ==
Infinitesimal quantities played a significant role in the development of calculus. Archimedes used them, even though he did not believe that arguments involving infinitesimals were rigorous. Isaac Newton referred to them as fluxions. However, it was Gottfried Leibniz who coined the term differentials for infinitesimal quantities and introduced the notation for them which is still used today.
In Leibniz's notation, if x is a variable quantity, then dx denotes an infinitesimal change in the variable x. Thus, if y is a function of x, then the derivative of y with respect to x is often denoted dy/dx, which would otherwise be denoted (in the notation of Newton or Lagrange) ẏ or y′. The use of differentials in this form attracted much criticism, for instance in the famous pamphlet The Analyst by Bishop Berkeley. Nevertheless, the notation has remained popular because it suggests strongly the idea that the derivative of y at x is its instantaneous rate of change (the slope of the graph's tangent line), which may be obtained by taking the limit of the ratio Δy/Δx as Δx becomes arbitrarily small. Differentials are also compatible with dimensional analysis, where a differential such as dx has the same dimensions as the variable x.
Calculus evolved into a distinct branch of mathematics during the 17th century CE, although there were antecedents going back to antiquity. The presentations of, e.g., Newton and Leibniz were marked by non-rigorous definitions of terms like differential, fluent and "infinitely small". While many of the arguments in Bishop Berkeley's 1734 The Analyst are theological in nature, modern mathematicians acknowledge the validity of his argument against "the Ghosts of departed Quantities"; however, the modern approaches do not have the same technical issues. Despite the lack of rigor, immense progress was made in the 17th and 18th centuries. In the 19th century, Cauchy and others gradually developed the epsilon-delta approach to continuity, limits and derivatives, giving a solid conceptual foundation for calculus.
In the 20th century, several new concepts in, e.g., multivariable calculus, differential geometry, seemed to encapsulate the intent of the old terms, especially differential; both differential and infinitesimal are used with new, more rigorous, meanings.
Differentials are also used in the notation for integrals because an integral can be regarded as an infinite sum of infinitesimal quantities: the area under a graph is obtained by subdividing the graph into infinitely thin strips and summing their areas. In an expression such as
{\displaystyle \int f(x)\,dx,}
the integral sign (which is a modified long s) denotes the infinite sum, f(x) denotes the "height" of a thin strip, and the differential dx denotes its infinitely thin width.
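This reading of the notation can be made concrete with a Riemann sum, which literally adds up areas f(x)·dx of thin strips; the midpoint variant below is one standard choice:

```python
import math

# Riemann sum: summing areas f(x) * dx of thin strips approximates the integral.
def riemann(f, a, b, n):
    dx = (b - a) / n
    # midpoint rule: evaluate each strip's height at its center
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

approx = riemann(math.sin, 0.0, math.pi, 10_000)
assert abs(approx - 2.0) < 1e-6   # the integral of sin x over [0, pi] is 2
```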
== Approaches ==
There are several approaches for making the notion of differentials mathematically precise.
Differentials as linear maps. This approach underlies the definition of the derivative and the exterior derivative in differential geometry.
Differentials as nilpotent elements of commutative rings. This approach is popular in algebraic geometry.
Differentials in smooth models of set theory. This approach is known as synthetic differential geometry or smooth infinitesimal analysis and is closely related to the algebraic geometric approach, except that ideas from topos theory are used to hide the mechanisms by which nilpotent infinitesimals are introduced.
Differentials as infinitesimals in hyperreal number systems, which are extensions of the real numbers that contain invertible infinitesimals and infinitely large numbers. This is the approach of nonstandard analysis pioneered by Abraham Robinson.
These approaches are very different from each other, but they have in common the idea of being quantitative, i.e., saying not just that a differential is infinitely small, but how small it is.
=== Differentials as linear maps ===
There is a simple way to make precise sense of differentials, first used on the real line, by regarding them as linear maps. It can be used on {\displaystyle \mathbb {R} }, {\displaystyle \mathbb {R} ^{n}}, a Hilbert space, a Banach space, or more generally, a topological vector space. The case of the real line is the easiest to explain. This type of differential is also known as a covariant vector or cotangent vector, depending on context.
==== Differentials as linear maps on R ====
Suppose {\displaystyle f(x)} is a real-valued function on {\displaystyle \mathbb {R} }. We can reinterpret the variable {\displaystyle x} in {\displaystyle f(x)} as being a function rather than a number, namely the identity map on the real line, which takes a real number {\displaystyle p} to itself: {\displaystyle x(p)=p}. Then {\displaystyle f(x)} is the composite of {\displaystyle f} with {\displaystyle x}, whose value at {\displaystyle p} is {\displaystyle f(x(p))=f(p)}. The differential {\displaystyle \operatorname {d} f} (which of course depends on {\displaystyle f}) is then a function whose value at {\displaystyle p} (usually denoted {\displaystyle df_{p}}) is not a number, but a linear map from {\displaystyle \mathbb {R} } to {\displaystyle \mathbb {R} }. Since a linear map from {\displaystyle \mathbb {R} } to {\displaystyle \mathbb {R} } is given by a {\displaystyle 1\times 1} matrix, it is essentially the same thing as a number, but the change in the point of view allows us to think of {\displaystyle df_{p}} as an infinitesimal and compare it with the standard infinitesimal {\displaystyle dx_{p}}, which is again just the identity map from {\displaystyle \mathbb {R} } to {\displaystyle \mathbb {R} } (a {\displaystyle 1\times 1} matrix with entry {\displaystyle 1}). The identity map has the property that if {\displaystyle \varepsilon } is very small, then {\displaystyle dx_{p}(\varepsilon )} is very small, which enables us to regard it as infinitesimal. The differential {\displaystyle df_{p}} has the same property, because it is just a multiple of {\displaystyle dx_{p}}, and this multiple is the derivative {\displaystyle f'(p)} by definition. We therefore obtain that {\displaystyle df_{p}=f'(p)\,dx_{p}}, and hence {\displaystyle df=f'\,dx}. Thus we recover the idea that {\displaystyle f'} is the ratio of the differentials {\displaystyle df} and {\displaystyle dx}.
This would just be a trick were it not for the fact that:
it captures the idea of the derivative of {\displaystyle f} at {\displaystyle p} as the best linear approximation to {\displaystyle f} at {\displaystyle p};
it has many generalizations.
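A quick numerical sketch of {\displaystyle df_{p}=f'(p)\,dx_{p}}: for f(x) = x³ at p = 2, the linear map h ↦ f′(p)·h approximates the actual change f(p + h) − f(p) with an error of order h²:

```python
# f(x) = x^3 at p = 2: the differential df_p is the linear map h -> f'(p) * h.
f = lambda x: x ** 3
fprime = lambda x: 3 * x ** 2

p = 2.0
df_p = lambda h: fprime(p) * h        # a linear map R -> R (a 1x1 matrix)

for h in (1e-1, 1e-2, 1e-3):
    actual = f(p + h) - f(p)          # actual change of f near p
    # error = 3*p*h^2 + h^3 = 6h^2 + h^3, i.e. second order in h
    assert abs(actual - df_p(h)) < 7 * h ** 2
```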
==== Differentials as linear maps on Rn ====
If {\displaystyle f} is a function from {\displaystyle \mathbb {R} ^{n}} to {\displaystyle \mathbb {R} }, then we say that {\displaystyle f} is differentiable at {\displaystyle p\in \mathbb {R} ^{n}} if there is a linear map {\displaystyle df_{p}} from {\displaystyle \mathbb {R} ^{n}} to {\displaystyle \mathbb {R} } such that for any {\displaystyle \varepsilon >0}, there is a neighbourhood {\displaystyle N} of {\displaystyle p} such that for {\displaystyle x\in N},
{\displaystyle \left|f(x)-f(p)-df_{p}(x-p)\right|<\varepsilon \left|x-p\right|.}
We can now use the same trick as in the one-dimensional case and think of the expression {\displaystyle f(x_{1},x_{2},\ldots ,x_{n})} as the composite of {\displaystyle f} with the standard coordinates {\displaystyle x_{1},x_{2},\ldots ,x_{n}} on {\displaystyle \mathbb {R} ^{n}} (so that {\displaystyle x_{j}(p)} is the {\displaystyle j}-th component of {\displaystyle p\in \mathbb {R} ^{n}}). Then the differentials {\displaystyle \left(dx_{1}\right)_{p},\left(dx_{2}\right)_{p},\ldots ,\left(dx_{n}\right)_{p}} at a point {\displaystyle p} form a basis for the vector space of linear maps from {\displaystyle \mathbb {R} ^{n}} to {\displaystyle \mathbb {R} } and therefore, if {\displaystyle f} is differentiable at {\displaystyle p}, we can write {\displaystyle \operatorname {d} f_{p}} as a linear combination of these basis elements:
{\displaystyle df_{p}=\sum _{j=1}^{n}D_{j}f(p)\,(dx_{j})_{p}.}
The coefficients {\displaystyle D_{j}f(p)} are (by definition) the partial derivatives of {\displaystyle f} at {\displaystyle p} with respect to {\displaystyle x_{1},x_{2},\ldots ,x_{n}}. Hence, if {\displaystyle f} is differentiable on all of {\displaystyle \mathbb {R} ^{n}}, we can write, more concisely:
{\displaystyle \operatorname {d} f={\frac {\partial f}{\partial x_{1}}}\,dx_{1}+{\frac {\partial f}{\partial x_{2}}}\,dx_{2}+\cdots +{\frac {\partial f}{\partial x_{n}}}\,dx_{n}.}
In the one-dimensional case this becomes {\displaystyle df={\frac {df}{dx}}dx} as before.
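The multivariable formula can be checked numerically as well; here for the (illustrative) function f(x, y) = x² + 3xy at p = (1, 2), whose partial derivatives are (2x + 3y, 3x) = (8, 3):

```python
# Multivariable differential: df_p(v) = sum_j D_j f(p) * v_j.
def f(x, y):
    return x * x + 3 * x * y

p = (1.0, 2.0)
grad_p = (2 * p[0] + 3 * p[1], 3 * p[0])       # partial derivatives at p: (8, 3)

def df_p(v):
    return grad_p[0] * v[0] + grad_p[1] * v[1]  # linear in v

for t in (1e-2, 1e-3):
    v = (t, -2 * t)
    actual = f(p[0] + v[0], p[1] + v[1]) - f(*p)
    # for this quadratic f the error is exactly -5 t^2, second order in t
    assert abs(actual - df_p(v)) < 10 * t ** 2
```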
This idea generalizes straightforwardly to functions from {\displaystyle \mathbb {R} ^{n}} to {\displaystyle \mathbb {R} ^{m}}. Furthermore, it has the decisive advantage over other definitions of the derivative that it is invariant under changes of coordinates. This means that the same idea can be used to define the differential of smooth maps between smooth manifolds.
Aside: Note that the existence of all the partial derivatives of {\displaystyle f(x)} at {\displaystyle x} is a necessary condition for the existence of a differential at {\displaystyle x}. However it is not a sufficient condition. For counterexamples, see Gateaux derivative.
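One standard counterexample of this kind (not necessarily the one the linked article uses) is f(x, y) = x²y/(x⁴ + y²) with f(0, 0) = 0: both partial derivatives at the origin exist and vanish, yet along the parabola y = x² the function is constantly 1/2, so it is not even continuous there, let alone differentiable:

```python
# Partials exist at the origin, but f is not differentiable (not even
# continuous) there.
def f(x, y):
    if x == 0 and y == 0:
        return 0.0
    return x * x * y / (x ** 4 + y * y)

# Along the axes f is identically 0, so both partials at the origin are 0:
assert all(f(h, 0) == 0.0 for h in (1e-1, 1e-3))
assert all(f(0, h) == 0.0 for h in (1e-1, 1e-3))

# Along y = x^2, f is constantly 1/2 arbitrarily close to the origin:
assert all(abs(f(t, t * t) - 0.5) < 1e-12 for t in (1e-1, 1e-3))
```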
==== Differentials as linear maps on a vector space ====
The same procedure works on a vector space with enough additional structure to reasonably talk about continuity. The most concrete case is a Hilbert space, also known as a complete inner product space, where the inner product and its associated norm define a suitable concept of distance. The same procedure works for a Banach space, also known as a complete normed vector space. However, for a more general topological vector space, some of the details are more abstract because there is no concept of distance.
In the important case of finite dimension, any inner product space is a Hilbert space, any normed vector space is a Banach space, and any topological vector space is complete. As a result, one can define a coordinate system from an arbitrary basis and use the same technique as for {\displaystyle \mathbb {R} ^{n}}.
=== Differentials as germs of functions ===
This approach works on any differentiable manifold. If
U and V are open sets containing p,
{\displaystyle f\colon U\to \mathbb {R} } is continuous, and
{\displaystyle g\colon V\to \mathbb {R} } is continuous,
then f is equivalent to g at p, denoted {\displaystyle f\sim _{p}g}, if and only if there is an open {\displaystyle W\subseteq U\cap V} containing p such that {\displaystyle f(x)=g(x)} for every x in W. The germ of f at p, denoted {\displaystyle [f]_{p}}, is the set of all real continuous functions equivalent to f at p; if f is smooth at p then {\displaystyle [f]_{p}} is a smooth germ.
If
{\displaystyle U_{1}}, {\displaystyle U_{2}}, {\displaystyle V_{1}} and {\displaystyle V_{2}} are open sets containing p,
{\displaystyle f_{1}\colon U_{1}\to \mathbb {R} }, {\displaystyle f_{2}\colon U_{2}\to \mathbb {R} }, {\displaystyle g_{1}\colon V_{1}\to \mathbb {R} } and {\displaystyle g_{2}\colon V_{2}\to \mathbb {R} } are smooth functions,
{\displaystyle f_{1}\sim _{p}g_{1}} and {\displaystyle f_{2}\sim _{p}g_{2}}, and
r is a real number,
then
{\displaystyle r*f_{1}\sim _{p}r*g_{1}}
{\displaystyle f_{1}+f_{2}\colon U_{1}\cap U_{2}\to \mathbb {R} \sim _{p}g_{1}+g_{2}\colon V_{1}\cap V_{2}\to \mathbb {R} }
{\displaystyle f_{1}*f_{2}\colon U_{1}\cap U_{2}\to \mathbb {R} \sim _{p}g_{1}*g_{2}\colon V_{1}\cap V_{2}\to \mathbb {R} }
This shows that the germs at p form an algebra.
Define {\displaystyle {\mathcal {I}}_{p}} to be the set of all smooth germs vanishing at p and {\displaystyle {\mathcal {I}}_{p}^{2}} to be the product of ideals {\displaystyle {\mathcal {I}}_{p}{\mathcal {I}}_{p}}. Then a differential at p (a cotangent vector at p) is an element of {\displaystyle {\mathcal {I}}_{p}/{\mathcal {I}}_{p}^{2}}. The differential of a smooth function f at p, denoted {\displaystyle \mathrm {d} f_{p}}, is {\displaystyle [f-f(p)]_{p}/{\mathcal {I}}_{p}^{2}}.
A similar approach is to define differential equivalence of first order in terms of derivatives in an arbitrary coordinate patch.
Then the differential of f at p is the set of all functions differentially equivalent to
f
−
f
(
p
)
{\displaystyle f-f(p)}
at p.
=== Algebraic geometry ===
In algebraic geometry, differentials and other infinitesimal notions are handled in a very explicit way by accepting that the coordinate ring or structure sheaf of a space may contain nilpotent elements. The simplest example is the ring of dual numbers R[ε], where ε² = 0.
This can be motivated by the algebro-geometric point of view on the derivative of a function f from R to R at a point p. For this, note first that f − f(p) belongs to the ideal Ip of functions on R which vanish at p. If the derivative of f vanishes at p, then f − f(p) belongs to the square Ip² of this ideal. Hence the derivative of f at p may be captured by the equivalence class [f − f(p)] in the quotient space Ip/Ip², and the 1-jet of f (which encodes its value and its first derivative) is the equivalence class of f in the space of all functions modulo Ip². Algebraic geometers regard this equivalence class as the restriction of f to a thickened version of the point p whose coordinate ring is not R (which is the quotient space of functions on R modulo Ip) but R[ε], which is the quotient space of functions on R modulo Ip². Such a thickened point is a simple example of a scheme.
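The dual-number idea is directly executable: representing a + b·ε with ε² = 0 as a pair (a, b) turns the ε-coefficient into an automatic first derivative, since (a + b·ε)(c + d·ε) = ac + (ad + bc)ε. A minimal sketch (class and method names are illustrative, not from the article):

```python
# Dual numbers a + b*eps with eps^2 = 0: the eps-coefficient carries derivatives.
class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b          # value and eps-coefficient
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def derivative(f, p):
    """Evaluate f at p + eps and read off the eps-coefficient."""
    return f(Dual(p, 1.0)).b

g = lambda x: x * x * x + 2 * x        # g(x) = x^3 + 2x, so g'(x) = 3x^2 + 2
assert derivative(g, 2.0) == 14.0
```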
==== Algebraic geometry notions ====
Differentials are also important in algebraic geometry, and there are several important notions.
Abelian differentials usually mean differential one-forms on an algebraic curve or Riemann surface.
Quadratic differentials (which behave like "squares" of abelian differentials) are also important in the theory of Riemann surfaces.
Kähler differentials provide a general notion of differential in algebraic geometry.
=== Synthetic differential geometry ===
Another approach to infinitesimals is the method of synthetic differential geometry or smooth infinitesimal analysis. This is closely related to the algebraic-geometric approach, except that the infinitesimals are more implicit and intuitive. The main idea of this approach is to replace the category of sets with another category of smoothly varying sets which is a topos. In this category, one can define the real numbers, smooth functions, and so on, but the real numbers automatically contain nilpotent infinitesimals, so these do not need to be introduced by hand as in the algebraic geometric approach. However the logic in this new category is not identical to the familiar logic of the category of sets: in particular, the law of the excluded middle does not hold. This means that set-theoretic mathematical arguments only extend to smooth infinitesimal analysis if they are constructive (e.g., do not use proof by contradiction). Constructivists regard this disadvantage as a positive thing, since it forces one to find constructive arguments wherever they are available.
=== Nonstandard analysis ===
The final approach to infinitesimals again involves extending the real numbers, but in a less drastic way. In the nonstandard analysis approach there are no nilpotent infinitesimals, only invertible ones, which may be viewed as the reciprocals of infinitely large numbers. Such extensions of the real numbers may be constructed explicitly using equivalence classes of sequences of real numbers, so that, for example, the sequence (1, 1/2, 1/3, ..., 1/n, ...) represents an infinitesimal. The first-order logic of this new set of hyperreal numbers is the same as the logic for the usual real numbers, but the completeness axiom (which involves second-order logic) does not hold. Nevertheless, this suffices to develop an elementary and quite intuitive approach to calculus using infinitesimals, see transfer principle.
== Differential geometry ==
The notion of a differential motivates several concepts in differential geometry (and differential topology).
The differential (Pushforward) of a map between manifolds.
Differential forms provide a framework which accommodates multiplication and differentiation of differentials.
The exterior derivative is a notion of differentiation of differential forms which generalizes the differential of a function (which is a differential 1-form).
Pullback is, in particular, a geometric name for the chain rule for composing a map between manifolds with a differential form on the target manifold.
Covariant derivatives or differentials provide a general notion for differentiating of vector fields and tensor fields on a manifold, or, more generally, sections of a vector bundle: see Connection (vector bundle). This ultimately leads to the general concept of a connection.
== Other meanings ==
The term differential has also been adopted in homological algebra and algebraic topology, because of the role the exterior derivative plays in de Rham cohomology: in a cochain complex
{\displaystyle (C_{\bullet },d_{\bullet }),}
the maps (or coboundary operators) di are often called differentials. Dually, the boundary operators in a chain complex are sometimes called codifferentials.
The properties of the differential also motivate the algebraic notions of a derivation and a differential algebra.
== See also ==
Differential equation
Differential form
Differential of a function
== Notes ==
== Citations ==
== References ==
Apostol, Tom M. (1967), Calculus (2nd ed.), Wiley, ISBN 978-0-471-00005-1.
Bell, John L. (1998), Invitation to Smooth Infinitesimal Analysis (PDF).
Boyer, Carl B. (1991), "Archimedes of Syracuse", A History of Mathematics (2nd ed.), John Wiley & Sons, Inc., ISBN 978-0-471-54397-8.
Darling, R. W. R. (1994), Differential forms and connections, Cambridge, UK: Cambridge University Press, Bibcode:1994dfc..book.....D, ISBN 978-0-521-46800-8.
Eisenbud, David; Harris, Joe (1998), The Geometry of Schemes, Springer-Verlag, ISBN 978-0-387-98637-1
Keisler, H. Jerome (1986), Elementary Calculus: An Infinitesimal Approach (2nd ed.).
Kock, Anders (2006), Synthetic Differential Geometry (PDF) (2nd ed.), Cambridge University Press.
Lawvere, F.W. (1968), Outline of synthetic differential geometry (PDF) (published 1998).
Moerdijk, I.; Reyes, Gonzalo E. (1991), Models for Smooth Infinitesimal Analysis, Springer-Verlag, ISBN 978-1-441-93095-8.
Robinson, Abraham (1996), Non-standard analysis, Princeton University Press, ISBN 978-0-691-04490-3.
Weisstein, Eric W. "Differentials". MathWorld. | Wikipedia/Differential_(mathematics) |
In mathematics, a real-valued function is called convex if the line segment between any two distinct points on the graph of the function lies above or on the graph between the two points. Equivalently, a function is convex if its epigraph (the set of points on or above the graph of the function) is a convex set.
In simple terms, a convex function's graph is shaped like a cup {\displaystyle \cup } (or a straight line like a linear function), while a concave function's graph is shaped like a cap {\displaystyle \cap }.
A twice-differentiable function of a single variable is convex if and only if its second derivative is nonnegative on its entire domain. Well-known examples of convex functions of a single variable include a linear function {\displaystyle f(x)=cx} (where {\displaystyle c} is a real number), a quadratic function {\displaystyle cx^{2}} ({\displaystyle c} a nonnegative real number) and an exponential function {\displaystyle ce^{x}} ({\displaystyle c} a nonnegative real number).
Convex functions play an important role in many areas of mathematics. They are especially important in the study of optimization problems where they are distinguished by a number of convenient properties. For instance, a strictly convex function on an open set has no more than one minimum. Even in infinite-dimensional spaces, under suitable additional hypotheses, convex functions continue to satisfy such properties and as a result, they are the most well-understood functionals in the calculus of variations. In probability theory, a convex function applied to the expected value of a random variable is always bounded above by the expected value of the convex function of the random variable. This result, known as Jensen's inequality, can be used to deduce inequalities such as the arithmetic–geometric mean inequality and Hölder's inequality.
== Definition ==
Let {\displaystyle X} be a convex subset of a real vector space and let {\displaystyle f:X\to \mathbb {R} } be a function. Then {\displaystyle f} is called convex if and only if any of the following equivalent conditions hold:
For all {\displaystyle 0\leq t\leq 1} and all {\displaystyle x_{1},x_{2}\in X}:
{\displaystyle f\left(tx_{1}+(1-t)x_{2}\right)\leq tf\left(x_{1}\right)+(1-t)f\left(x_{2}\right)}
The right hand side represents the straight line between {\displaystyle \left(x_{1},f\left(x_{1}\right)\right)} and {\displaystyle \left(x_{2},f\left(x_{2}\right)\right)} in the graph of {\displaystyle f} as a function of {\displaystyle t;} increasing {\displaystyle t} from {\displaystyle 0} to {\displaystyle 1} or decreasing {\displaystyle t} from {\displaystyle 1} to {\displaystyle 0} sweeps this line. Similarly, the argument of the function {\displaystyle f} in the left hand side represents the straight line between {\displaystyle x_{1}} and {\displaystyle x_{2}} in {\displaystyle X} or the {\displaystyle x}-axis of the graph of {\displaystyle f.} So, this condition requires that the straight line between any pair of points on the curve of {\displaystyle f} be above or just meeting the graph.
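As a minimal numeric sketch (not a proof), the chord inequality above can be spot-checked for the convex function f(x) = x², sampling points and values of t:

```python
# Spot-check of the convexity inequality
# f(t*x1 + (1-t)*x2) <= t*f(x1) + (1-t)*f(x2) for f(x) = x**2.

def f(x):
    return x * x

def chord_gap(f, x1, x2, t):
    """Right-hand side minus left-hand side of the convexity inequality."""
    lhs = f(t * x1 + (1 - t) * x2)
    rhs = t * f(x1) + (1 - t) * f(x2)
    return rhs - lhs

# The gap is nonnegative for every sampled pair of points and every t in [0, 1].
points = [-3.0, -1.0, 0.0, 2.0, 5.0]
ts = [i / 10 for i in range(11)]
assert all(chord_gap(f, x1, x2, t) >= -1e-12
           for x1 in points for x2 in points for t in ts)
```

A sampled check like this can only falsify convexity, never establish it; the article's second-derivative test below gives an actual proof for f(x) = x².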
For all {\displaystyle 0<t<1} and all {\displaystyle x_{1},x_{2}\in X} such that {\displaystyle x_{1}\neq x_{2}}:
{\displaystyle f\left(tx_{1}+(1-t)x_{2}\right)\leq tf\left(x_{1}\right)+(1-t)f\left(x_{2}\right)}
This second condition differs from the first only in that it excludes the intersection points (for example, {\displaystyle \left(x_{1},f\left(x_{1}\right)\right)} and {\displaystyle \left(x_{2},f\left(x_{2}\right)\right)}) between the straight line through a pair of points on the curve of {\displaystyle f} (the straight line is the right hand side of this condition) and the curve of {\displaystyle f;} the first condition includes the intersection points, as it becomes {\displaystyle f\left(x_{1}\right)\leq f\left(x_{1}\right)} or {\displaystyle f\left(x_{2}\right)\leq f\left(x_{2}\right)} at {\displaystyle t=0} or {\displaystyle 1,} or when {\displaystyle x_{1}=x_{2}.} In fact, the intersection points need not be considered in a condition for convexity using {\displaystyle f\left(tx_{1}+(1-t)x_{2}\right)\leq tf\left(x_{1}\right)+(1-t)f\left(x_{2}\right)} because {\displaystyle f\left(x_{1}\right)\leq f\left(x_{1}\right)} and {\displaystyle f\left(x_{2}\right)\leq f\left(x_{2}\right)} are always true (so they add nothing to the condition).
The second statement characterizing convex functions valued in the real line {\displaystyle \mathbb {R} } is also the statement used to define convex functions valued in the extended real number line {\displaystyle [-\infty ,\infty ]=\mathbb {R} \cup \{\pm \infty \},} where such a function {\displaystyle f} is allowed to take {\displaystyle \pm \infty } as a value. The first statement is not used because it permits {\displaystyle t} to take {\displaystyle 0} or {\displaystyle 1} as a value, in which case, if {\displaystyle f\left(x_{1}\right)=\pm \infty } or {\displaystyle f\left(x_{2}\right)=\pm \infty ,} respectively, then {\displaystyle tf\left(x_{1}\right)+(1-t)f\left(x_{2}\right)} would be undefined (because the multiplications {\displaystyle 0\cdot \infty } and {\displaystyle 0\cdot (-\infty )} are undefined). The sum {\displaystyle -\infty +\infty } is also undefined, so a convex extended real-valued function is typically only allowed to take exactly one of {\displaystyle -\infty } and {\displaystyle +\infty } as a value.
The second statement can also be modified to get the definition of strict convexity, which is obtained by replacing {\displaystyle \,\leq \,} with the strict inequality {\displaystyle \,<.} Explicitly, the map {\displaystyle f} is called strictly convex if and only if for all real {\displaystyle 0<t<1} and all {\displaystyle x_{1},x_{2}\in X} such that {\displaystyle x_{1}\neq x_{2}}:
{\displaystyle f\left(tx_{1}+(1-t)x_{2}\right)<tf\left(x_{1}\right)+(1-t)f\left(x_{2}\right)}
A strictly convex function {\displaystyle f} is one for which the straight line between any pair of points on the curve of {\displaystyle f} lies above the curve, except at the endpoints where the line meets the curve. An example of a function which is convex but not strictly convex is {\displaystyle f(x,y)=x^{2}+y}. This function is not strictly convex because along any segment on which the {\displaystyle x} coordinate is constant, the function is affine, so the chord lies on the graph rather than strictly above it.
The function {\displaystyle f} is said to be concave (resp. strictly concave) if {\displaystyle -f} ({\displaystyle f} multiplied by −1) is convex (resp. strictly convex).
== Alternative naming ==
The term convex is often referred to as convex down or concave upward, and the term concave is often referred to as concave down or convex upward. If the term "convex" is used without an "up" or "down" keyword, then it refers strictly to a cup-shaped graph {\displaystyle \cup }. As an example, Jensen's inequality refers to an inequality involving a convex (or convex-down) function.
== Properties ==
Many properties of convex functions have the same simple formulation for functions of many variables as for functions of one variable. See below the properties for the case of many variables, as some of them are not listed for functions of one variable.
=== Functions of one variable ===
Suppose {\displaystyle f} is a function of one real variable defined on an interval, and let {\displaystyle R(x_{1},x_{2})={\frac {f(x_{2})-f(x_{1})}{x_{2}-x_{1}}}} (note that {\displaystyle R(x_{1},x_{2})} is the slope of the purple line in the first drawing; the function {\displaystyle R} is symmetric in {\displaystyle (x_{1},x_{2}),} meaning that {\displaystyle R} does not change when {\displaystyle x_{1}} and {\displaystyle x_{2}} are exchanged). Then {\displaystyle f} is convex if and only if {\displaystyle R(x_{1},x_{2})} is monotonically non-decreasing in {\displaystyle x_{1},} for every fixed {\displaystyle x_{2}} (or vice versa). This characterization of convexity is quite useful to prove the following results.
A convex function {\displaystyle f} of one real variable defined on some open interval {\displaystyle C} is continuous on {\displaystyle C}. Moreover, {\displaystyle f} admits left and right derivatives, and these are monotonically non-decreasing. In addition, the left derivative is left-continuous and the right derivative is right-continuous. As a consequence, {\displaystyle f} is differentiable at all but at most countably many points; the set on which {\displaystyle f} is not differentiable can, however, still be dense. If {\displaystyle C} is closed, then {\displaystyle f} may fail to be continuous at the endpoints of {\displaystyle C} (an example is shown in the examples section).
A differentiable function of one variable is convex on an interval if and only if its derivative is monotonically non-decreasing on that interval. If a function is differentiable and convex then it is also continuously differentiable.
A differentiable function of one variable is convex on an interval if and only if its graph lies above all of its tangents: {\displaystyle f(x)\geq f(y)+f'(y)(x-y)} for all {\displaystyle x} and {\displaystyle y} in the interval.
A twice differentiable function of one variable is convex on an interval if and only if its second derivative is non-negative there; this gives a practical test for convexity. Visually, a twice differentiable convex function "curves up", without any bends the other way (inflection points). If its second derivative is positive at all points then the function is strictly convex, but the converse does not hold. For example, the second derivative of {\displaystyle f(x)=x^{4}} is {\displaystyle f''(x)=12x^{2}}, which is zero for {\displaystyle x=0,} but {\displaystyle x^{4}} is strictly convex.
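A small numeric sketch of this point: f(x) = x⁴ satisfies the strict chord inequality at sampled distinct points, even though f″(0) = 0, illustrating that f″ > 0 everywhere is sufficient but not necessary for strict convexity:

```python
# f(x) = x**4 is strictly convex although f''(0) = 0: the strict gap
# t*f(x1) + (1-t)*f(x2) - f(t*x1 + (1-t)*x2) is positive for x1 != x2.

def f(x):
    return x ** 4

def strict_gap(x1, x2, t):
    return t * f(x1) + (1 - t) * f(x2) - f(t * x1 + (1 - t) * x2)

pairs = [(-1.0, 0.0), (0.0, 2.0), (-0.5, 0.5), (1.0, 3.0)]
for x1, x2 in pairs:
    for t in (0.25, 0.5, 0.75):
        assert strict_gap(x1, x2, t) > 0  # strictly positive for distinct points
```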
This property and the property above in terms of "...its derivative is monotonically non-decreasing..." are not equivalent: if {\displaystyle f''} is non-negative on an interval {\displaystyle X} then {\displaystyle f'} is monotonically non-decreasing on {\displaystyle X}, but the converse is not true; for example, {\displaystyle f'} can be monotonically non-decreasing on {\displaystyle X} while its derivative {\displaystyle f''} is not defined at some points of {\displaystyle X}.
If {\displaystyle f} is a convex function of one real variable and {\displaystyle f(0)\leq 0}, then {\displaystyle f} is superadditive on the positive reals, that is, {\displaystyle f(a+b)\geq f(a)+f(b)} for positive real numbers {\displaystyle a} and {\displaystyle b}.
A function {\displaystyle f} is midpoint convex on an interval {\displaystyle C} if for all {\displaystyle x_{1},x_{2}\in C}
{\displaystyle f\!\left({\frac {x_{1}+x_{2}}{2}}\right)\leq {\frac {f(x_{1})+f(x_{2})}{2}}.}
This condition is only slightly weaker than convexity. For example, a real-valued Lebesgue measurable function that is midpoint-convex is convex: this is a theorem of Sierpiński. In particular, a continuous function that is midpoint convex is convex.
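A quick sketch of the midpoint condition for a familiar continuous (hence, by the above, convex) function, f(x) = eˣ:

```python
# Spot-check of midpoint convexity f((x1+x2)/2) <= (f(x1)+f(x2))/2
# for the continuous convex function f(x) = exp(x).

import math

def f(x):
    return math.exp(x)

def midpoint_gap(x1, x2):
    return (f(x1) + f(x2)) / 2 - f((x1 + x2) / 2)

samples = [-2.0, -0.5, 0.0, 1.0, 3.0]
assert all(midpoint_gap(a, b) >= 0 for a in samples for b in samples)
```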
=== Functions of several variables ===
A function that is marginally convex in each individual variable is not necessarily (jointly) convex. For example, the function {\displaystyle f(x,y)=xy} is marginally linear, and thus marginally convex, in each variable, but not (jointly) convex.
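The failure of joint convexity for f(x, y) = xy can be seen concretely: along the segment between (1, −1) and (−1, 1), the chord lies below the graph, violating the convexity inequality.

```python
# f(x, y) = x*y is linear (hence convex) in each variable separately,
# but the chord between (1, -1) and (-1, 1) lies *below* the graph.

def f(p):
    x, y = p
    return x * y

p1, p2 = (1.0, -1.0), (-1.0, 1.0)
t = 0.5
mid = (t * p1[0] + (1 - t) * p2[0], t * p1[1] + (1 - t) * p2[1])  # (0, 0)
lhs = f(mid)                       # f(0, 0) = 0
rhs = t * f(p1) + (1 - t) * f(p2)  # average of -1 and -1 = -1
assert lhs > rhs  # the inequality lhs <= rhs fails, so f is not jointly convex
```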
A function {\displaystyle f:X\to [-\infty ,\infty ]} valued in the extended real numbers {\displaystyle [-\infty ,\infty ]=\mathbb {R} \cup \{\pm \infty \}} is convex if and only if its epigraph {\displaystyle \{(x,r)\in X\times \mathbb {R} ~:~r\geq f(x)\}} is a convex set.
A differentiable function {\displaystyle f} defined on a convex domain is convex if and only if {\displaystyle f(x)\geq f(y)+\nabla f(y)^{T}\cdot (x-y)} holds for all {\displaystyle x,y} in the domain.
A twice differentiable function of several variables is convex on a convex set if and only if its Hessian matrix of second partial derivatives is positive semidefinite on the interior of the convex set.
For a convex function {\displaystyle f,} the sublevel sets {\displaystyle \{x:f(x)<a\}} and {\displaystyle \{x:f(x)\leq a\}} with {\displaystyle a\in \mathbb {R} } are convex sets. A function that satisfies this property is called a quasiconvex function and may fail to be a convex function. Consequently, the set of global minimisers of a convex function {\displaystyle f} is a convex set: {\displaystyle {\operatorname {argmin} }\,f} is convex.
Any local minimum of a convex function is also a global minimum. A strictly convex function will have at most one global minimum.
Jensen's inequality applies to every convex function {\displaystyle f}. If {\displaystyle X} is a random variable taking values in the domain of {\displaystyle f,} then {\displaystyle \operatorname {E} (f(X))\geq f(\operatorname {E} (X)),} where {\displaystyle \operatorname {E} } denotes the mathematical expectation. Indeed, convex functions are exactly those that satisfy the hypothesis of Jensen's inequality.
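Jensen's inequality can be sketched numerically for f(x) = x² and a small, hypothetical discrete distribution (the values and probabilities below are arbitrary illustrations):

```python
# Jensen's inequality E[f(X)] >= f(E[X]) for the convex function f(x) = x**2
# and a hypothetical discrete random variable X.

def f(x):
    return x * x

values = [-1.0, 0.0, 2.0, 3.0]   # sample values of X (illustrative)
probs = [0.1, 0.4, 0.3, 0.2]     # their probabilities (sum to 1)

mean = sum(p * v for p, v in zip(probs, values))          # E[X]
mean_of_f = sum(p * f(v) for p, v in zip(probs, values))  # E[f(X)]
assert mean_of_f >= f(mean)
```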
A first-order homogeneous function of two positive variables {\displaystyle x} and {\displaystyle y} (that is, a function satisfying {\displaystyle f(ax,ay)=af(x,y)} for all positive real {\displaystyle a,x,y>0}) that is convex in one variable must be convex in the other variable.
== Operations that preserve convexity ==
{\displaystyle -f} is concave if and only if {\displaystyle f} is convex.
If {\displaystyle r} is any real number then {\displaystyle r+f} is convex if and only if {\displaystyle f} is convex.
Nonnegative weighted sums: if {\displaystyle w_{1},\ldots ,w_{n}\geq 0} and {\displaystyle f_{1},\ldots ,f_{n}} are all convex, then so is {\displaystyle w_{1}f_{1}+\cdots +w_{n}f_{n}.} In particular, the sum of two convex functions is convex. This property extends to infinite sums, integrals and expected values as well (provided that they exist).
Elementwise maximum: let {\displaystyle \{f_{i}\}_{i\in I}} be a collection of convex functions. Then {\displaystyle g(x)=\sup \nolimits _{i\in I}f_{i}(x)} is convex. The domain of {\displaystyle g(x)} is the collection of points where the expression is finite. Important special cases:
If {\displaystyle f_{1},\ldots ,f_{n}} are convex functions then so is {\displaystyle g(x)=\max \left\{f_{1}(x),\ldots ,f_{n}(x)\right\}.}
Danskin's theorem: If {\displaystyle f(x,y)} is convex in {\displaystyle x} then {\displaystyle g(x)=\sup \nolimits _{y\in C}f(x,y)} is convex in {\displaystyle x} even if {\displaystyle C} is not a convex set.
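The elementwise-maximum rule can be sketched numerically: the pointwise maximum of one quadratic and two affine functions (all convex) again satisfies the chord inequality at sampled points.

```python
# The pointwise maximum of convex functions is convex: spot-check the
# chord inequality for g(x) = max(x**2, 1 - x, 2*x - 3).

def g(x):
    return max(x * x, 1 - x, 2 * x - 3)

def gap(x1, x2, t):
    return t * g(x1) + (1 - t) * g(x2) - g(t * x1 + (1 - t) * x2)

xs = [-4.0, -1.0, 0.0, 0.5, 2.0, 6.0]
assert all(gap(a, b, t) >= -1e-12
           for a in xs for b in xs for t in (0.2, 0.5, 0.8))
```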
Composition:
If {\displaystyle f} and {\displaystyle g} are convex functions and {\displaystyle g} is non-decreasing over a univariate domain, then {\displaystyle h(x)=g(f(x))} is convex. For example, if {\displaystyle f} is convex, then so is {\displaystyle e^{f(x)}} because {\displaystyle e^{x}} is convex and monotonically increasing.
If {\displaystyle f} is concave and {\displaystyle g} is convex and non-increasing over a univariate domain, then {\displaystyle h(x)=g(f(x))} is convex.
Convexity is invariant under affine maps: that is, if {\displaystyle f} is convex with domain {\displaystyle D_{f}\subseteq \mathbf {R} ^{m}}, then so is {\displaystyle g(x)=f(Ax+b)}, where {\displaystyle A\in \mathbf {R} ^{m\times n},b\in \mathbf {R} ^{m}} with domain {\displaystyle D_{g}\subseteq \mathbf {R} ^{n}.}
Minimization: If {\displaystyle f(x,y)} is convex in {\displaystyle (x,y)} then {\displaystyle g(x)=\inf \nolimits _{y\in C}f(x,y)} is convex in {\displaystyle x,} provided that {\displaystyle C} is a convex set and that {\displaystyle g(x)\neq -\infty .}
If {\displaystyle f} is convex, then its perspective {\displaystyle g(x,t)=tf\left({\tfrac {x}{t}}\right)} with domain {\displaystyle \left\{(x,t):{\tfrac {x}{t}}\in \operatorname {Dom} (f),t>0\right\}} is convex.
Let {\displaystyle X} be a vector space. {\displaystyle f:X\to \mathbf {R} } is convex and satisfies {\displaystyle f(0)\leq 0} if and only if {\displaystyle f(ax+by)\leq af(x)+bf(y)} for any {\displaystyle x,y\in X} and any non-negative real numbers {\displaystyle a,b} that satisfy {\displaystyle a+b\leq 1.}
== Strongly convex functions ==
The concept of strong convexity extends and parametrizes the notion of strict convexity. Intuitively, a strongly convex function is a function that grows at least as fast as a quadratic function. A strongly convex function is also strictly convex, but not vice versa. If a one-dimensional function {\displaystyle f} is twice continuously differentiable and the domain is the real line, then we can characterize it as follows:
{\displaystyle f} is convex if and only if {\displaystyle f''(x)\geq 0} for all {\displaystyle x.}
{\displaystyle f} is strictly convex if {\displaystyle f''(x)>0} for all {\displaystyle x} (note: this is sufficient, but not necessary).
{\displaystyle f} is strongly convex if and only if {\displaystyle f''(x)\geq m>0} for all {\displaystyle x.}
For example, let {\displaystyle f} be strictly convex, and suppose there is a sequence of points {\displaystyle (x_{n})} such that {\displaystyle f''(x_{n})={\tfrac {1}{n}}}. Even though {\displaystyle f''(x_{n})>0}, the function is not strongly convex because {\displaystyle f''(x)} will become arbitrarily small.
More generally, a differentiable function {\displaystyle f} is called strongly convex with parameter {\displaystyle m>0} if the following inequality holds for all points {\displaystyle x,y} in its domain:
{\displaystyle (\nabla f(x)-\nabla f(y))^{T}(x-y)\geq m\|x-y\|_{2}^{2}}
or, more generally,
{\displaystyle \langle \nabla f(x)-\nabla f(y),x-y\rangle \geq m\|x-y\|^{2}}
where {\displaystyle \langle \cdot ,\cdot \rangle } is any inner product, and {\displaystyle \|\cdot \|} is the corresponding norm. Some authors refer to functions satisfying this inequality as elliptic functions.
An equivalent condition is the following:
{\displaystyle f(y)\geq f(x)+\nabla f(x)^{T}(y-x)+{\frac {m}{2}}\|y-x\|_{2}^{2}}
It is not necessary for a function to be differentiable in order to be strongly convex. A third definition for a strongly convex function, with parameter {\displaystyle m,} is that, for all {\displaystyle x,y} in the domain and {\displaystyle t\in [0,1],}
{\displaystyle f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)-{\frac {1}{2}}mt(1-t)\|x-y\|_{2}^{2}}
Notice that this definition approaches the definition for strict convexity as {\displaystyle m\to 0,} and is identical to the definition of a convex function when {\displaystyle m=0.} Despite this, functions exist that are strictly convex but are not strongly convex for any {\displaystyle m>0} (see example below).
If the function {\displaystyle f} is twice continuously differentiable, then it is strongly convex with parameter {\displaystyle m} if and only if {\displaystyle \nabla ^{2}f(x)\succeq mI} for all {\displaystyle x} in the domain, where {\displaystyle I} is the identity and {\displaystyle \nabla ^{2}f} is the Hessian matrix, and the inequality {\displaystyle \succeq } means that {\displaystyle \nabla ^{2}f(x)-mI} is positive semi-definite. This is equivalent to requiring that the minimum eigenvalue of {\displaystyle \nabla ^{2}f(x)} be at least {\displaystyle m} for all {\displaystyle x.} If the domain is just the real line, then {\displaystyle \nabla ^{2}f(x)} is just the second derivative {\displaystyle f''(x),} so the condition becomes {\displaystyle f''(x)\geq m}. If {\displaystyle m=0} then this means the Hessian is positive semidefinite (or if the domain is the real line, it means that {\displaystyle f''(x)\geq 0}), which implies the function is convex, and perhaps strictly convex, but not strongly convex.
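The one-dimensional criterion f″(x) ≥ m > 0 can be sketched numerically. Here f(x) = x² + eˣ (for which f″(x) = 2 + eˣ ≥ 2, so f is strongly convex with m = 2), with f″ approximated by a central finite difference:

```python
# Check the strong-convexity criterion f''(x) >= m > 0 on a sample grid
# for f(x) = x**2 + exp(x), using a central-difference approximation of f''.

import math

def f(x):
    return x * x + math.exp(x)

def second_derivative(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

# On this grid f''(x) = 2 + exp(x) >= 2; allow a small numerical tolerance.
grid = [i / 4 for i in range(-20, 21)]  # points in [-5, 5]
assert all(second_derivative(f, x) >= 1.9 for x in grid)
```

A grid check like this is only a sketch: strong convexity requires the bound at every point of the domain, not just on a sample.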
Assuming still that the function is twice continuously differentiable, one can show that the lower bound on {\displaystyle \nabla ^{2}f(x)} implies that it is strongly convex. Using Taylor's theorem there exists {\displaystyle z\in \{tx+(1-t)y:t\in [0,1]\}} such that
{\displaystyle f(y)=f(x)+\nabla f(x)^{T}(y-x)+{\frac {1}{2}}(y-x)^{T}\nabla ^{2}f(z)(y-x)}
Then {\displaystyle (y-x)^{T}\nabla ^{2}f(z)(y-x)\geq m(y-x)^{T}(y-x)} by the assumption about the eigenvalues, and hence we recover the second strong convexity equation above.
A function {\displaystyle f} is strongly convex with parameter m if and only if the function {\displaystyle x\mapsto f(x)-{\frac {m}{2}}\|x\|^{2}} is convex.
A twice continuously differentiable function {\displaystyle f} on a compact domain {\displaystyle X} that satisfies {\displaystyle f''(x)>0} for all {\displaystyle x\in X} is strongly convex. The proof of this statement follows from the extreme value theorem, which states that a continuous function on a compact set has a maximum and minimum.
Strongly convex functions are in general easier to work with than convex or strictly convex functions, since they are a smaller class. Like strictly convex functions, strongly convex functions have unique minima on compact sets.
=== Properties of strongly-convex functions ===
If f is a strongly-convex function with parameter m, then:
For every real number r, the level set {x | f(x) ≤ r} is compact.
The function f has a unique global minimum on {\displaystyle \mathbb {R} ^{n}}.
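The unique global minimum can be sketched numerically: plain gradient descent on a strongly convex function reaches the same minimizer from any starting point. Here f(x) = (x − 3)² + 1 (strongly convex with m = 2, minimizer x* = 3) is a hypothetical example chosen for illustration:

```python
# Gradient descent on the strongly convex f(x) = (x - 3)**2 + 1
# converges to the unique global minimizer x* = 3 from any start.

def grad(x):
    return 2 * (x - 3)  # derivative of (x - 3)**2 + 1

def gradient_descent(x0, step=0.1, iters=200):
    x = x0
    for _ in range(iters):
        x -= step * grad(x)
    return x

# Different starting points all reach the same minimizer.
for x0 in (-10.0, 0.0, 25.0):
    assert abs(gradient_descent(x0) - 3.0) < 1e-6
```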
== Uniformly convex functions ==
A uniformly convex function, with modulus {\displaystyle \phi }, is a function {\displaystyle f} that, for all {\displaystyle x,y} in the domain and {\displaystyle t\in [0,1],} satisfies
{\displaystyle f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)-t(1-t)\phi (\|x-y\|)}
where {\displaystyle \phi } is a function that is non-negative and vanishes only at 0. This is a generalization of the concept of strongly convex function; by taking {\displaystyle \phi (\alpha )={\tfrac {m}{2}}\alpha ^{2}} we recover the definition of strong convexity. It is worth noting that some authors require the modulus {\displaystyle \phi } to be an increasing function, but this condition is not required by all authors.
== Examples ==
=== Functions of one variable ===
The function {\displaystyle f(x)=x^{2}} has {\displaystyle f''(x)=2>0}, so f is a convex function. It is also strongly convex (and hence strictly convex too), with strong convexity constant 2.
The function {\displaystyle f(x)=x^{4}} has {\displaystyle f''(x)=12x^{2}\geq 0}, so f is a convex function. It is strictly convex, even though the second derivative is not strictly positive at all points. It is not strongly convex.
The absolute value function {\displaystyle f(x)=|x|} is convex (as reflected in the triangle inequality), even though it does not have a derivative at the point {\displaystyle x=0.} It is not strictly convex.
The function {\displaystyle f(x)=|x|^{p}} for {\displaystyle p\geq 1} is convex.
The exponential function {\displaystyle f(x)=e^{x}} is convex. It is also strictly convex, since {\displaystyle f''(x)=e^{x}>0}, but it is not strongly convex since the second derivative can be arbitrarily close to zero. More generally, the function {\displaystyle g(x)=e^{f(x)}} is logarithmically convex if {\displaystyle f} is a convex function. The term "superconvex" is sometimes used instead.
The function {\displaystyle f} with domain [0,1] defined by {\displaystyle f(0)=f(1)=1,f(x)=0} for {\displaystyle 0<x<1} is convex; it is continuous on the open interval {\displaystyle (0,1),} but not continuous at 0 and 1.
The function {\displaystyle x^{3}} has second derivative {\displaystyle 6x}; thus it is convex on the set where {\displaystyle x\geq 0} and concave on the set where {\displaystyle x\leq 0.}
Examples of functions that are monotonically increasing but not convex include {\displaystyle f(x)={\sqrt {x}}} and {\displaystyle g(x)=\log x}. Examples of functions that are convex but not monotonically increasing include {\displaystyle h(x)=x^{2}} and {\displaystyle k(x)=-x}.
The function {\displaystyle f(x)={\tfrac {1}{x}}} has {\displaystyle f''(x)={\tfrac {2}{x^{3}}}}, which is greater than 0 if {\displaystyle x>0}, so {\displaystyle f(x)} is convex on the interval {\displaystyle (0,\infty )}. It is concave on the interval {\displaystyle (-\infty ,0)}.
The function {\displaystyle f(x)={\tfrac {1}{x^{2}}}} with {\displaystyle f(0)=\infty }, is convex on the interval {\displaystyle (0,\infty )} and convex on the interval {\displaystyle (-\infty ,0)}, but not convex on the interval {\displaystyle (-\infty ,\infty )}, because of the singularity at {\displaystyle x=0.}
=== Functions of n variables ===
LogSumExp function, also called softmax function, is a convex function.
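Convexity of LogSumExp can be sketched numerically by checking the chord inequality at a few sample pairs, using the standard max-subtraction trick for numerical stability:

```python
# Spot-check of convexity for LogSumExp f(x) = log(exp(x1) + ... + exp(xn)).

import math

def logsumexp(xs):
    m = max(xs)  # subtract the max for numerical stability
    return m + math.log(sum(math.exp(x - m) for x in xs))

def gap(p, q, t):
    mid = [t * a + (1 - t) * b for a, b in zip(p, q)]
    return t * logsumexp(p) + (1 - t) * logsumexp(q) - logsumexp(mid)

pairs = [([0.0, 1.0, 2.0], [-1.0, 3.0, 0.5]),
         ([10.0, -10.0], [-10.0, 10.0]),
         ([1.0, 1.0, 1.0], [2.0, 0.0, -2.0])]
assert all(gap(p, q, t) >= -1e-12 for p, q in pairs for t in (0.25, 0.5, 0.75))
```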
The function {\displaystyle -\log \det(X)} on the domain of positive-definite matrices is convex.
Every real-valued linear transformation is convex but not strictly convex, since if {\displaystyle f} is linear, then {\displaystyle f(a+b)=f(a)+f(b)}. This statement also holds if we replace "convex" by "concave".
Every real-valued affine function, that is, each function of the form {\displaystyle f(x)=a^{T}x+b,} is simultaneously convex and concave.
Every norm is a convex function, by the triangle inequality and positive homogeneity.
The spectral radius of a nonnegative matrix is a convex function of its diagonal elements.
== See also ==
== Notes ==
== References ==
Bertsekas, Dimitri (2003). Convex Analysis and Optimization. Athena Scientific.
Borwein, Jonathan, and Lewis, Adrian. (2000). Convex Analysis and Nonlinear Optimization. Springer.
Donoghue, William F. (1969). Distributions and Fourier Transforms. Academic Press.
Hiriart-Urruty, Jean-Baptiste, and Lemaréchal, Claude. (2004). Fundamentals of Convex analysis. Berlin: Springer.
Krasnosel'skii M.A., Rutickii Ya.B. (1961). Convex Functions and Orlicz Spaces. Groningen: P.Noordhoff Ltd.
Lauritzen, Niels (2013). Undergraduate Convexity. World Scientific Publishing.
Luenberger, David (1984). Linear and Nonlinear Programming. Addison-Wesley.
Luenberger, David (1969). Optimization by Vector Space Methods. Wiley & Sons.
Rockafellar, R. T. (1970). Convex analysis. Princeton: Princeton University Press.
Thomson, Brian (1994). Symmetric Properties of Real Functions. CRC Press.
Zălinescu, C. (2002). Convex analysis in general vector spaces. River Edge, NJ: World Scientific Publishing Co., Inc. pp. xx+367. ISBN 981-238-067-1. MR 1921556.
== External links ==
"Convex function (of a real variable)", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"Convex function (of a complex variable)", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Convex_function |
In mathematics, a differential operator is an operator defined as a function of the differentiation operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation that accepts a function and returns another function (in the style of a higher-order function in computer science).
This article considers mainly linear differential operators, which are the most common type. However, non-linear differential operators also exist, such as the Schwarzian derivative.
== Definition ==
Given a nonnegative integer m, an order-{\displaystyle m} linear differential operator is a map {\displaystyle P} from a function space {\displaystyle {\mathcal {F}}_{1}} on {\displaystyle \mathbb {R} ^{n}} to another function space {\displaystyle {\mathcal {F}}_{2}} that can be written as:
{\displaystyle P=\sum _{|\alpha |\leq m}a_{\alpha }(x)D^{\alpha }\ ,}
where {\displaystyle \alpha =(\alpha _{1},\alpha _{2},\cdots ,\alpha _{n})} is a multi-index of non-negative integers, {\displaystyle |\alpha |=\alpha _{1}+\alpha _{2}+\cdots +\alpha _{n}}, and for each {\displaystyle \alpha }, {\displaystyle a_{\alpha }(x)} is a function on some open domain in n-dimensional space. The operator {\displaystyle D^{\alpha }} is interpreted as
{\displaystyle D^{\alpha }={\frac {\partial ^{|\alpha |}}{\partial x_{1}^{\alpha _{1}}\partial x_{2}^{\alpha _{2}}\cdots \partial x_{n}^{\alpha _{n}}}}}
Thus for a function f \in \mathcal{F}_1:
Pf = \sum_{|\alpha| \leq m} a_\alpha(x) \frac{\partial^{|\alpha|} f}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2} \cdots \partial x_n^{\alpha_n}}
The notation D^\alpha is justified (i.e., independent of the order of differentiation) because of the symmetry of second derivatives.
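The action of such an operator can be checked symbolically. Below is a minimal sketch in Python using the sympy library (not part of the article); the order-2 operator P = 3 + x·∂/∂x + ∂²/∂y² and the test function are chosen arbitrarily for illustration.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(x) * sp.exp(y)

# A hypothetical order-2 operator with a variable coefficient:
# P = 3 + x * d/dx + d^2/dy^2
def P(u):
    return 3*u + x*sp.diff(u, x) + sp.diff(u, y, 2)

result = sp.simplify(P(f))
```

Each term of the sum a_alpha(x) D^alpha f is computed by `sp.diff` with the corresponding multi-index.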
The polynomial p obtained by replacing the partials \partial/\partial x_i by variables \xi_i in P is called the total symbol of P; i.e., the total symbol of P above is:
p(x, \xi) = \sum_{|\alpha| \leq m} a_\alpha(x) \xi^\alpha
where \xi^\alpha = \xi_1^{\alpha_1} \cdots \xi_n^{\alpha_n}.
The highest homogeneous component of the symbol, namely
\sigma(x, \xi) = \sum_{|\alpha| = m} a_\alpha(x) \xi^\alpha
is called the principal symbol of P. While the total symbol is not intrinsically defined, the principal symbol is intrinsically defined (i.e., it is a function on the cotangent bundle).
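The substitution defining the symbol is mechanical, as the following hedged sketch shows (Python with sympy; the operator x₁∂²/∂x₁² + ∂²/∂x₂² + ∂/∂x₁ is a made-up example). Its total symbol keeps every term; the principal symbol keeps only the terms with |α| = m.

```python
import sympy as sp

x1, xi1, xi2 = sp.symbols('x1 xi1 xi2')

# Coefficients a_alpha, keyed by multi-index (alpha_1, alpha_2), of the
# illustrative operator P = x1*d^2/dx1^2 + d^2/dx2^2 + d/dx1
coeffs = {(2, 0): x1, (0, 2): 1, (1, 0): 1}

# Total symbol: replace each d/dx_i by xi_i
total_symbol = sum(a * xi1**al[0] * xi2**al[1] for al, a in coeffs.items())

# Principal symbol: keep only the top-order terms |alpha| = m
m = max(sum(al) for al in coeffs)
principal_symbol = sum(a * xi1**al[0] * xi2**al[1]
                       for al, a in coeffs.items() if sum(al) == m)
```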
More generally, let E and F be vector bundles over a manifold X. Then the linear operator
P : C^\infty(E) \to C^\infty(F)
is a differential operator of order k if, in local coordinates on X, we have
Pu(x) = \sum_{|\alpha| = k} P^\alpha(x) \frac{\partial^\alpha u}{\partial x^\alpha} + \text{lower-order terms}
where, for each multi-index \alpha, P^\alpha(x) : E \to F is a bundle map, symmetric on the indices \alpha.
The kth order coefficients of P transform as a symmetric tensor
\sigma_P : S^k(T^*X) \otimes E \to F
whose domain is the tensor product of the kth symmetric power of the cotangent bundle of X with E, and whose codomain is F. This symmetric tensor is known as the principal symbol (or just the symbol) of P.
The coordinate system x^i permits a local trivialization of the cotangent bundle by the coordinate differentials dx^i, which determine fiber coordinates \xi_i. In terms of a basis of frames e_\mu, f_\nu of E and F, respectively, the differential operator P decomposes into components
(Pu)_\nu = \sum_\mu P_{\nu\mu} u_\mu
on each section u of E. Here P_{\nu\mu} is the scalar differential operator defined by
P_{\nu\mu} = \sum_\alpha P_{\nu\mu}^\alpha \frac{\partial}{\partial x^\alpha}.
With this trivialization, the principal symbol can now be written
(\sigma_P(\xi) u)_\nu = \sum_{|\alpha| = k} \sum_\mu P_{\nu\mu}^\alpha(x) \xi_\alpha u_\mu.
In the cotangent space over a fixed point x of X, the symbol \sigma_P defines a homogeneous polynomial of degree k in T_x^* X with values in \operatorname{Hom}(E_x, F_x).
== Fourier interpretation ==
A differential operator P and its symbol appear naturally in connection with the Fourier transform as follows. Let ƒ be a Schwartz function. Then by the inverse Fourier transform,
Pf(x) = \frac{1}{(2\pi)^{d/2}} \int_{\mathbf{R}^d} e^{i x \cdot \xi} \, p(x, i\xi) \, \hat{f}(\xi) \, d\xi.
This exhibits P as a Fourier multiplier. A more general class of functions p(x,ξ) which satisfy at most polynomial growth conditions in ξ under which this integral is well-behaved comprises the pseudo-differential operators.
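The multiplier picture can be demonstrated numerically with the discrete Fourier transform. The following is a rough sketch using Python with numpy (an illustrative choice, not from the article): to apply P = d/dx to a periodic function, multiply its DFT by the symbol iξ and invert.

```python
import numpy as np

# Apply P = d/dx to f(x) = sin(x) on [0, 2*pi) via its Fourier multiplier i*xi
n = 256
x = np.linspace(0, 2*np.pi, n, endpoint=False)
f = np.sin(x)

xi = np.fft.fftfreq(n, d=2*np.pi/n) * 2*np.pi   # integer wavenumbers
Pf = np.fft.ifft(1j * xi * np.fft.fft(f)).real  # should approximate cos(x)

err = np.max(np.abs(Pf - np.cos(x)))
```

For smooth periodic data the agreement is spectrally accurate, which is why this construction underlies spectral methods as well as the theory of pseudo-differential operators.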
== Examples ==
The differential operator P is elliptic if its symbol is invertible; that is, for each nonzero \theta \in T^*X the bundle map \sigma_P(\theta, \dots, \theta) is invertible. On a compact manifold, it follows from elliptic theory that P is a Fredholm operator: it has finite-dimensional kernel and cokernel.
In the study of hyperbolic and parabolic partial differential equations, zeros of the principal symbol correspond to the characteristics of the partial differential equation.
In applications to the physical sciences, operators such as the Laplace operator play a major role in setting up and solving partial differential equations.
In differential topology, the exterior derivative and Lie derivative operators have intrinsic meaning.
In abstract algebra, the concept of a derivation allows for generalizations of differential operators, which do not require the use of calculus. Frequently such generalizations are employed in algebraic geometry and commutative algebra. See also Jet (mathematics).
In the development of holomorphic functions of a complex variable z = x + i y, sometimes a complex function is considered to be a function of two real variables x and y. Use is made of the Wirtinger derivatives, which are partial differential operators:
\frac{\partial}{\partial z} = \frac{1}{2}\left(\frac{\partial}{\partial x} - i\frac{\partial}{\partial y}\right), \quad \frac{\partial}{\partial \bar{z}} = \frac{1}{2}\left(\frac{\partial}{\partial x} + i\frac{\partial}{\partial y}\right).
This approach is also used to study functions of several complex variables and functions of a motor variable.
The differential operator del, also called nabla, is an important vector differential operator. It appears frequently in physics in places like the differential form of Maxwell's equations. In three-dimensional Cartesian coordinates, del is defined as
\nabla = \mathbf{\hat{x}} \frac{\partial}{\partial x} + \mathbf{\hat{y}} \frac{\partial}{\partial y} + \mathbf{\hat{z}} \frac{\partial}{\partial z}.
Del defines the gradient, and is used to calculate the curl, divergence, and Laplacian of various objects.
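The gradient and the divergence-of-gradient (the Laplacian) can be computed componentwise, as in this hedged sketch using Python with sympy (an illustrative tool choice; the scalar field is arbitrary).

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 * y + sp.sin(z)

# Gradient: the components are the partial derivatives of f
grad_f = [sp.diff(f, v) for v in (x, y, z)]

# Divergence of the gradient gives the Laplacian: div(grad f) = nabla^2 f
laplacian = sum(sp.diff(g, v) for g, v in zip(grad_f, (x, y, z)))
```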
A chiral differential operator is a further variant, arising in the theory of vertex algebras; see the literature on chiral differential operators for details.
== History ==
The conceptual step of writing a differential operator as something free-standing is attributed to Louis François Antoine Arbogast in 1800.
== Notations ==
The most common differential operator is the action of taking the derivative. Common notations for taking the first derivative with respect to a variable x include:
\frac{d}{dx}, \quad D, \quad D_x, \quad \text{and} \quad \partial_x.
When taking higher, nth order derivatives, the operator may be written:
\frac{d^n}{dx^n}, \quad D^n, \quad D_x^n, \quad \text{or} \quad \partial_x^n.
The derivative of a function f of an argument x is sometimes given as either of the following:
[f(x)]' \qquad f'(x).
The D notation's use and creation is credited to Oliver Heaviside, who considered differential operators of the form
\sum_{k=0}^{n} c_k D^k
in his study of differential equations.
One of the most frequently seen differential operators is the Laplacian operator, defined by
\Delta = \nabla^2 = \sum_{k=1}^{n} \frac{\partial^2}{\partial x_k^2}.
Another differential operator is the Θ operator, or theta operator, defined by
\Theta = z \frac{d}{dz}.
This is sometimes also called the homogeneity operator, because its eigenfunctions are the monomials in z:
\Theta(z^k) = k z^k, \quad k = 0, 1, 2, \dots
In n variables the homogeneity operator is given by
\Theta = \sum_{k=1}^{n} x_k \frac{\partial}{\partial x_k}.
As in one variable, the eigenspaces of Θ are the spaces of homogeneous functions. (Euler's homogeneous function theorem)
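Both the one-variable eigenvalue property and the several-variable homogeneity reading can be verified directly; here is a minimal sketch using Python with sympy (an assumed tool, not part of the article).

```python
import sympy as sp

z = sp.symbols('z')
Theta = lambda u: z * sp.diff(u, z)

# Monomials are eigenfunctions: Theta(z^k) = k * z^k
eigs = [sp.simplify(Theta(z**k) / z**k) for k in range(1, 5)]

# In several variables, Theta reads off the degree of a homogeneous function
x, y = sp.symbols('x y')
Theta_n = lambda u: x * sp.diff(u, x) + y * sp.diff(u, y)
degree = sp.simplify(Theta_n(x**2 * y) / (x**2 * y))   # x^2*y is degree 3
```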
In writing, following common mathematical convention, the argument of a differential operator is usually placed on the right side of the operator itself. Sometimes an alternative notation is used: the result of applying the operator to the function on its left, the result of applying it to the function on its right, and the difference obtained when applying the differential operator to the functions on both sides are denoted by arrows as follows:
f \overleftarrow{\partial_x} g = g \cdot \partial_x f
f \overrightarrow{\partial_x} g = f \cdot \partial_x g
f \overleftrightarrow{\partial_x} g = f \cdot \partial_x g - g \cdot \partial_x f.
Such a bidirectional-arrow notation is frequently used for describing the probability current of quantum mechanics.
== Adjoint of an operator ==
Given a linear differential operator T,
Tu = \sum_{k=0}^{n} a_k(x) D^k u
the adjoint of this operator is defined as the operator T^* such that
\langle Tu, v \rangle = \langle u, T^* v \rangle
where the notation \langle \cdot, \cdot \rangle is used for the scalar product or inner product. This definition therefore depends on the definition of the scalar product (or inner product).
=== Formal adjoint in one variable ===
In the functional space of square-integrable functions on a real interval (a, b), the scalar product is defined by
\langle f, g \rangle = \int_a^b \overline{f(x)} \, g(x) \, dx,
where the line over f(x) denotes the complex conjugate of f(x). If one moreover adds the condition that f or g vanishes as x \to a and x \to b, one can also define the adjoint of T by
T^* u = \sum_{k=0}^{n} (-1)^k D^k \left[ \overline{a_k(x)} \, u \right].
This formula does not explicitly depend on the definition of the scalar product. It is therefore sometimes chosen as a definition of the adjoint operator. When T^* is defined according to this formula, it is called the formal adjoint of T.
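The defining identity ⟨Tu, v⟩ = ⟨u, T*v⟩ can be checked for a first-order operator with real coefficient, using functions that vanish at both endpoints so the boundary terms of integration by parts drop out. This is a hedged sketch in Python with sympy; the coefficient a₁(x) = x + 2 and the test functions are arbitrary choices.

```python
import sympy as sp

x = sp.symbols('x', real=True)
u = sp.sin(sp.pi * x)        # vanishes at x = 0 and x = 1
v = x * (1 - x)              # vanishes at x = 0 and x = 1

a1 = x + 2                   # real coefficient a_1(x), chosen for illustration
T  = lambda w: a1 * sp.diff(w, x)      # T w = a_1(x) D w
Ts = lambda w: -sp.diff(a1 * w, x)     # formal adjoint: (-1)^1 D[a_1 w]

lhs = sp.integrate(T(u) * v, (x, 0, 1))    # <T u, v>
rhs = sp.integrate(u * Ts(v), (x, 0, 1))   # <u, T* v>
```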
A (formally) self-adjoint operator is an operator equal to its own (formal) adjoint.
=== Several variables ===
If Ω is a domain in Rn, and P a differential operator on Ω, then the adjoint of P is defined in L2(Ω) by duality in the analogous manner:
\langle f, P^* g \rangle_{L^2(\Omega)} = \langle Pf, g \rangle_{L^2(\Omega)}
for all smooth L2 functions f, g. Since smooth functions are dense in L2, this defines the adjoint on a dense subset of L2: P* is a densely defined operator.
=== Example ===
The Sturm–Liouville operator is a well-known example of a formal self-adjoint operator. This second-order linear differential operator L can be written in the form
Lu = -(pu')' + qu = -(pu'' + p'u') + qu = -pu'' - p'u' + qu = (-p) D^2 u + (-p') Du + (q) u.
This property can be proven using the formal adjoint definition above.
This operator is central to Sturm–Liouville theory where the eigenfunctions (analogues to eigenvectors) of this operator are considered.
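The chain of rewritings above is just the product rule applied to -(pu')', which can be confirmed symbolically. A minimal sketch using Python with sympy (an illustrative tool choice), with p, q, u left as abstract functions:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)
p = sp.Function('p')(x)
q = sp.Function('q')(x)

# Sturm-Liouville form: L u = -(p u')' + q u
L_form = -sp.diff(p * sp.diff(u, x), x) + q * u

# Expanded form: (-p) u'' + (-p') u' + q u
expanded = -p * sp.diff(u, x, 2) - sp.diff(p, x) * sp.diff(u, x) + q * u
```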
== Properties ==
Differentiation is linear, i.e.
D(f + g) = (Df) + (Dg),
D(af) = a(Df),
where f and g are functions, and a is a constant.
Any polynomial in D with function coefficients is also a differential operator. We may also compose differential operators by the rule
(D_1 \circ D_2)(f) = D_1(D_2(f)).
Some care is then required: first, any function coefficients in the operator D_2 must be differentiable as many times as the application of D_1 requires. To get a ring of such operators, we must assume derivatives of all orders of the coefficients used. Second, this ring will not be commutative: an operator gD is not the same in general as Dg. For example, we have the relation, basic in quantum mechanics:
Dx - xD = 1.
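The relation is an operator identity: applied to any function f, it is just the product rule. A short check in Python with sympy (an assumed tool, not part of the article):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

# (Dx - xD) f = d/dx (x f) - x f' ; the product rule gives exactly f
commutator = sp.diff(x * f, x) - x * sp.diff(f, x)
```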
The subring of operators that are polynomials in D with constant coefficients is, by contrast, commutative. It can be characterised another way: it consists of the translation-invariant operators.
The differential operators also obey the shift theorem.
== Ring of polynomial differential operators ==
=== Ring of univariate polynomial differential operators ===
If R is a ring, let R\langle D, X \rangle be the non-commutative polynomial ring over R in the variables D and X, and I the two-sided ideal generated by DX − XD − 1. Then the ring of univariate polynomial differential operators over R is the quotient ring R\langle D, X \rangle / I. This is a non-commutative simple ring. Every element can be written in a unique way as an R-linear combination of monomials of the form X^a D^b \bmod I. It supports an analogue of Euclidean division of polynomials.
Differential modules over R[X] (for the standard derivation) can be identified with modules over R\langle D, X \rangle / I.
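The normal form X^a D^b can be computed mechanically by repeatedly rewriting with DX = XD + 1; a closed form is D^b X^c = Σ_k C(b,k) · c!/(c−k)! · X^{c−k} D^{b−k}. The following is a rough pure-Python sketch (the representation as a coefficient dictionary is an implementation choice, not from the article).

```python
from math import comb, perm

# Elements of R<D,X>/I in normal form: {(a, b): coeff} stands for
# the sum of coeff * X^a * D^b over the keys.
def mul(p, q):
    out = {}
    for (a, b), c1 in p.items():
        for (c, d), c2 in q.items():
            # D^b X^c = sum_k C(b,k) * c!/(c-k)! * X^(c-k) D^(b-k),
            # a consequence of repeatedly applying DX = XD + 1
            for k in range(min(b, c) + 1):
                key = (a + c - k, b + d - k)
                out[key] = out.get(key, 0) + c1 * c2 * comb(b, k) * perm(c, k)
    return {mono: v for mono, v in out.items() if v != 0}

D = {(0, 1): 1}
X = {(1, 0): 1}
lhs = mul(D, X)   # D*X  ->  X*D + 1
rhs = mul(X, D)   # X*D is already in normal form
```

Multiplying out `mul(D, X)` recovers the defining relation: it differs from `mul(X, D)` exactly by the constant monomial 1.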
=== Ring of multivariate polynomial differential operators ===
If R is a ring, let R\langle D_1, \ldots, D_n, X_1, \ldots, X_n \rangle be the non-commutative polynomial ring over R in the variables D_1, \ldots, D_n, X_1, \ldots, X_n, and I the two-sided ideal generated by the elements
(D_i X_j - X_j D_i) - \delta_{i,j}, \quad D_i D_j - D_j D_i, \quad X_i X_j - X_j X_i
for all 1 \leq i, j \leq n, where \delta is the Kronecker delta. Then the ring of multivariate polynomial differential operators over R is the quotient ring R\langle D_1, \ldots, D_n, X_1, \ldots, X_n \rangle / I.
This is a non-commutative simple ring.
Every element can be written in a unique way as an R-linear combination of monomials of the form X_1^{a_1} \ldots X_n^{a_n} D_1^{b_1} \ldots D_n^{b_n}.
== Coordinate-independent description ==
In differential geometry and algebraic geometry it is often convenient to have a coordinate-independent description of differential operators between two vector bundles. Let E and F be two vector bundles over a differentiable manifold M. An R-linear mapping of sections P : Γ(E) → Γ(F) is said to be a kth-order linear differential operator if it factors through the jet bundle Jk(E).
In other words, there exists a linear mapping of vector bundles
i_P : J^k(E) \to F
such that P = i_P \circ j^k, where j^k : \Gamma(E) \to \Gamma(J^k(E)) is the prolongation that associates to any section of E its k-jet.
This just means that for a given section s of E, the value of P(s) at a point x ∈ M is fully determined by the kth-order infinitesimal behavior of s in x. In particular this implies that P(s)(x) is determined by the germ of s in x, which is expressed by saying that differential operators are local. A foundational result is the Peetre theorem showing that the converse is also true: any (linear) local operator is differential.
=== Relation to commutative algebra ===
An equivalent, but purely algebraic description of linear differential operators is as follows: an R-linear map P is a kth-order linear differential operator, if for any k + 1 smooth functions f_0, \ldots, f_k \in C^\infty(M) we have
[f_k, [f_{k-1}, [\cdots [f_0, P] \cdots]]] = 0.
Here the bracket [f, P] : \Gamma(E) \to \Gamma(F) is defined as the commutator
[f, P](s) = P(f \cdot s) - f \cdot P(s).
This characterization of linear differential operators shows that they are particular mappings between modules over a commutative algebra, allowing the concept to be seen as a part of commutative algebra.
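The mechanism behind the characterization is that each bracket with a function lowers the order of the operator by one, so k + 1 nested brackets annihilate an order-k operator. A hedged sketch in Python with sympy (the operator d²/dx² and the functions f₀, f₁, f₂ are arbitrary illustrative choices):

```python
import sympy as sp

x = sp.symbols('x')
P = lambda s: sp.diff(s, x, 2)          # a 2nd-order operator, so k = 2

def bracket(f, Op):
    """[f, Op](s) = Op(f*s) - f*Op(s); lowers the order of Op by one."""
    return lambda s: Op(f * s) - f * Op(s)

f0, f1, f2 = x, sp.sin(x), sp.exp(x)    # any k + 1 = 3 smooth functions
nested = bracket(f2, bracket(f1, bracket(f0, P)))

s = x**3 * sp.cos(x)                    # an arbitrary test function
value = sp.simplify(nested(s))          # should vanish identically
```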
== Variants ==
=== A differential operator of infinite order ===
A differential operator of infinite order is (roughly) a differential operator whose total symbol is a power series instead of a polynomial.
=== Bidifferential operator ===
A differential operator acting on two functions, written D(g, f), is called a bidifferential operator. The notion appears, for instance, in an associative algebra structure on a deformation quantization of a Poisson algebra.
=== Microdifferential operator ===
A microdifferential operator is a type of operator on an open subset of a cotangent bundle, as opposed to an open subset of a manifold. It is obtained by extending the notion of a differential operator to the cotangent bundle.
== See also ==
== Notes ==
== References ==
Freed, Daniel S. (1987), Geometry of Dirac operators, p. 8, CiteSeerX 10.1.1.186.8445
Hörmander, L. (1983), The analysis of linear partial differential operators I, Grundl. Math. Wissenschaft., vol. 256, Springer, doi:10.1007/978-3-642-96750-4, ISBN 3-540-12104-8, MR 0717035.
Schapira, Pierre (1985). Microdifferential Systems in the Complex Domain. Grundlehren der mathematischen Wissenschaften. Vol. 269. Springer. doi:10.1007/978-3-642-61665-5. ISBN 978-3-642-64904-2.
Wells, R.O. (1973), Differential analysis on complex manifolds, Springer-Verlag, ISBN 0-387-90419-0.
== Further reading ==
Fedosov, Boris; Schulze, Bert-Wolfgang; Tarkhanov, Nikolai (2002). "Analytic index formulas for elliptic corner operators". Annales de l'Institut Fourier. 52 (3): 899–982. doi:10.5802/aif.1906. ISSN 1777-5310.
https://mathoverflow.net/questions/451110/reference-request-inverse-of-differential-operators
== External links ==
Media related to Differential operators at Wikimedia Commons
"Differential operator", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Differential_operator |
Graph drawing is an area of mathematics and computer science combining methods from geometric graph theory and information visualization to derive two-dimensional depictions of graphs arising from applications such as social network analysis, cartography, linguistics, and bioinformatics.
A drawing of a graph or network diagram is a pictorial representation of the vertices and edges of a graph. This drawing should not be confused with the graph itself: very different layouts can correspond to the same graph. In the abstract, all that matters is which pairs of vertices are connected by edges. In the concrete, however, the arrangement of these vertices and edges within a drawing affects its understandability, usability, fabrication cost, and aesthetics. The problem gets worse if the graph changes over time by adding and deleting edges (dynamic graph drawing) and the goal is to preserve the user's mental map.
== Graphical conventions ==
Graphs are frequently drawn as node–link diagrams in which the vertices are represented as disks, boxes, or textual labels and the edges are represented as line segments, polylines, or curves in the Euclidean plane. Node–link diagrams can be traced back to the 14th–16th century works of Pseudo-Lull, which were published under the name of Ramon Llull, a 13th-century polymath. Pseudo-Lull drew diagrams of this type for complete graphs in order to analyze all pairwise combinations among sets of metaphysical concepts.
In the case of directed graphs, arrowheads form a commonly used graphical convention to show their orientation; however, user studies have shown that other conventions such as tapering provide this information more effectively. Upward planar drawing uses the convention that every edge is oriented from a lower vertex to a higher vertex, making arrowheads unnecessary.
Alternative conventions to node–link diagrams include adjacency representations such as circle packings, in which vertices are represented by disjoint regions in the plane and edges are represented by adjacencies between regions; intersection representations in which vertices are represented by non-disjoint geometric objects and edges are represented by their intersections; visibility representations in which vertices are represented by regions in the plane and edges are represented by regions that have an unobstructed line of sight to each other; confluent drawings, in which edges are represented as smooth curves within mathematical train tracks; fabrics, in which nodes are represented as horizontal lines and edges as vertical lines; and visualizations of the adjacency matrix of the graph.
== Quality measures ==
Many different quality measures have been defined for graph drawings, in an attempt to find objective means of evaluating their aesthetics and usability. In addition to guiding the choice between different layout methods for the same graph, some layout methods attempt to directly optimize these measures.
The crossing number of a drawing is the number of pairs of edges that cross each other. If the graph is planar, then it is often convenient to draw it without any edge intersections; that is, in this case, a graph drawing represents a graph embedding. However, nonplanar graphs frequently arise in applications, so graph drawing algorithms must generally allow for edge crossings.
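Whether a crossing-free drawing exists at all is a planarity test, which standard graph libraries provide. A brief sketch using Python with networkx (an illustrative tool, not mentioned in the article): K₄ is planar, while K₅ is the smallest non-planar complete graph.

```python
import networkx as nx

# check_planarity returns (is_planar, certificate): a planar embedding
# if planar, a Kuratowski subgraph counterexample otherwise.
planar_k4, _ = nx.check_planarity(nx.complete_graph(4))
planar_k5, _ = nx.check_planarity(nx.complete_graph(5))
```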
The area of a drawing is the size of its smallest bounding box, relative to the closest distance between any two vertices. Drawings with smaller area are generally preferable to those with larger area, because they allow the features of the drawing to be shown at greater size and therefore more legibly. The aspect ratio of the bounding box may also be important.
Symmetry display is the problem of finding symmetry groups within a given graph, and finding a drawing that displays as much of the symmetry as possible. Some layout methods automatically lead to symmetric drawings; alternatively, some drawing methods start by finding symmetries in the input graph and using them to construct a drawing.
It is important that edges have shapes that are as simple as possible, to make it easier for the eye to follow them. In polyline drawings, the complexity of an edge may be measured by its number of bends, and many methods aim to provide drawings with few total bends or few bends per edge. Similarly for spline curves the complexity of an edge may be measured by the number of control points on the edge.
Several commonly used quality measures concern lengths of edges: it is generally desirable to minimize the total length of the edges as well as the maximum length of any edge. Additionally, it may be preferable for the lengths of edges to be uniform rather than highly varied.
Angular resolution is a measure of the sharpest angles in a graph drawing. If a graph has vertices with high degree then it necessarily will have small angular resolution, but the angular resolution can be bounded below by a function of the degree.
The slope number of a graph is the minimum number of distinct edge slopes needed in a drawing with straight line segment edges (allowing crossings). Cubic graphs have slope number at most four, but graphs of degree five may have unbounded slope number; it remains open whether the slope number of degree-4 graphs is bounded.
== Layout methods ==
There are many different graph layout strategies:
In force-based layout systems, the graph drawing software modifies an initial vertex placement by continuously moving the vertices according to a system of forces based on physical metaphors related to systems of springs or molecular mechanics. Typically, these systems combine attractive forces between adjacent vertices with repulsive forces between all pairs of vertices, in order to seek a layout in which edge lengths are small while vertices are well-separated. These systems may perform gradient descent based minimization of an energy function, or they may translate the forces directly into velocities or accelerations for the moving vertices.
Spectral layout methods use as coordinates the eigenvectors of a matrix such as the Laplacian derived from the adjacency matrix of the graph.
Orthogonal layout methods, which allow the edges of the graph to run horizontally or vertically, parallel to the coordinate axes of the layout. These methods were originally designed for VLSI and PCB layout problems but they have also been adapted for graph drawing. They typically involve a multiphase approach in which an input graph is planarized by replacing crossing points by vertices, a topological embedding of the planarized graph is found, edge orientations are chosen to minimize bends, vertices are placed consistently with these orientations, and finally a layout compaction stage reduces the area of the drawing.
Tree layout algorithms show a rooted tree-like formation, suitable for trees. Often, in a technique called "balloon layout", the children of each node in the tree are drawn on a circle surrounding the node, with the radii of these circles diminishing at lower levels in the tree so that these circles do not overlap.
Layered graph drawing methods (often called Sugiyama-style drawing) are best suited for directed acyclic graphs or graphs that are nearly acyclic, such as the graphs of dependencies between modules or functions in a software system. In these methods, the nodes of the graph are arranged into horizontal layers using methods such as the Coffman–Graham algorithm, in such a way that most edges go downwards from one layer to the next; after this step, the nodes within each layer are arranged in order to minimize crossings.
Arc diagrams, a layout style dating back to the 1960s, place vertices on a line; edges may be drawn as semicircles above or below the line, or as smooth curves linked together from multiple semicircles.
Circular layout methods place the vertices of the graph on a circle, choosing carefully the ordering of the vertices around the circle to reduce crossings and place adjacent vertices close to each other. Edges may be drawn either as chords of the circle or as arcs inside or outside of the circle. In some cases, multiple circles may be used.
Dominance drawing places vertices in such a way that one vertex is upwards, rightwards, or both of another if and only if it is reachable from the other vertex. In this way, the layout style makes the reachability relation of the graph visually apparent.
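Several of the layout strategies above are available off the shelf; the following hedged sketch uses Python with networkx (one possible library, not endorsed by the article) to compute force-directed, spectral, and circular coordinates for the same graph.

```python
import networkx as nx

G = nx.petersen_graph()

# Each layout maps every vertex to a point in the plane
spring = nx.spring_layout(G, seed=42)   # force-based (spring) layout
spectral = nx.spectral_layout(G)        # eigenvectors of the graph Laplacian
circular = nx.circular_layout(G)        # vertices placed on a circle
```

Comparing the three dictionaries on one graph makes the trade-offs discussed above concrete: same abstract graph, very different drawings.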
== Application-specific graph drawings ==
Graphs and graph drawings arising in other areas of application include
Sociograms, drawings of a social network, as often offered by social network analysis software
Hasse diagrams, a type of graph drawing specialized to partial orders
Dessin d'enfants, a type of graph drawing used in algebraic geometry
State diagrams, graphical representations of finite-state machines
Computer network diagrams, depictions of the nodes and connections in a computer network
Flowcharts and drakon-charts, drawings in which the nodes represent the steps of an algorithm and the edges represent control flow between steps.
Project network, graphical depiction of the chronological order in which activities of a project are to be completed.
Data-flow diagrams, drawings in which the nodes represent the components of an information system and the edges represent the movement of information from one component to another.
Bioinformatics including phylogenetic trees, protein–protein interaction networks, and metabolic pathways.
In addition, the placement and routing steps of electronic design automation (EDA) are similar in many ways to graph drawing, as is the problem of greedy embedding in distributed computing, and the graph drawing literature includes several results borrowed from the EDA literature. However, these problems also differ in several important ways: for instance, in EDA, area minimization and signal length are more important than aesthetics, and the routing problem in EDA may have more than two terminals per net while the analogous problem in graph drawing generally only involves pairs of vertices for each edge.
== Software ==
Software, systems, and providers of systems for drawing graphs include:
BioFabric open-source software for visualizing large networks by drawing nodes as horizontal lines.
Cytoscape, open-source software for visualizing molecular interaction networks
Gephi, open-source network analysis and visualization software
graph-tool, a free/libre Python library for analysis of graphs
Graphviz, an open-source graph drawing system from AT&T Corporation
Linkurious, a commercial network analysis and visualization software for graph databases
Mathematica, a general-purpose computation tool that includes 2D and 3D graph visualization and graph analysis tools.
Microsoft Automatic Graph Layout, open-source .NET library (formerly called GLEE) for laying out graphs
NetworkX is a Python library for studying graphs and networks.
Tulip, an open-source data visualization tool
yEd, a graph editor with graph layout functionality
PGF/TikZ 3.0 with the graphdrawing package (requires LuaTeX).
LaNet-vi, an open-source large network visualization software
== See also ==
International Symposium on Graph Drawing
List of Unified Modeling Language tools
== References ==
=== Footnotes ===
=== General references ===
=== Specialized subtopics ===
== Further reading ==
== External links ==
GraphX library for .NET Archived 2018-01-26 at the Wayback Machine: open-source WPF library for graph calculation and visualization. Supports many layout and edge routing algorithms.
Graph drawing e-print archive: including information on papers from all Graph Drawing symposia. | Wikipedia/Graph_drawing |
In mathematics, a partial function f from a set X to a set Y is a function from a subset S of X (possibly the whole X itself) to Y. The subset S, that is, the domain of f viewed as a function, is called the domain of definition or natural domain of f. If S equals X, that is, if f is defined on every element in X, then f is said to be a total function.
In other words, a partial function is a binary relation over two sets that associates to every element of the first set at most one element of the second set; it is thus a univalent relation. This generalizes the concept of a (total) function by not requiring every element of the first set to be associated to an element of the second set.
A partial function is often used when its exact domain of definition is not known, or is difficult to specify. However, even when the exact domain of definition is known, partial functions are often used for simplicity or brevity. This is the case in calculus, where, for example, the quotient of two functions is a partial function whose domain of definition cannot contain the zeros of the denominator; in this context, a partial function is generally simply called a function.
In computability theory, a general recursive function is a partial function from the integers to the integers; no algorithm can exist for deciding whether an arbitrary such function is in fact total.
When arrow notation is used for functions, a partial function f from X to Y is sometimes written as
f : X \rightharpoonup Y, \quad f : X \nrightarrow Y, \quad \text{or} \quad f : X \hookrightarrow Y.
However, there is no general convention, and the latter notation is more commonly used for inclusion maps or embeddings.
Specifically, for a partial function f : X ⇀ Y and any x ∈ X, one has either f(x) = y ∈ Y (a single element of Y), or f(x) is undefined.
For example, if f is the square root function restricted to the integers, f : ℤ → ℕ, defined by f(n) = m if, and only if, m² = n, with m ∈ ℕ and n ∈ ℤ, then f(n) is only defined if n is a perfect square (that is, 0, 1, 4, 9, 16, …). So f(25) = 5, but f(26) is undefined.
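As an illustration (not from the article), this partial function can be sketched in Python, with None standing in for "undefined":

```python
import math

def isqrt_partial(n):
    """Partial square root on the integers: defined only when n is a
    perfect square; undefined (represented here by None) otherwise."""
    if n < 0:
        return None
    m = math.isqrt(n)          # floor of the real square root
    return m if m * m == n else None

print(isqrt_partial(25))  # 5
print(isqrt_partial(26))  # None
```

Returning a sentinel such as None is only one convention; raising an exception is another (see the "Bottom element" section below).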
== Basic concepts ==
A partial function arises from the consideration of maps between two sets X and Y that may not be defined on the entire set X. A common example is the square root operation on the real numbers ℝ: because negative real numbers do not have real square roots, the operation can be viewed as a partial function from ℝ to ℝ.
The domain of definition of a partial function is the subset S of X on which the partial function is defined; in this case, the partial function may also be viewed as a function from S to Y. In the example of the square root operation, the set S consists of the nonnegative real numbers [0, +∞).
The notion of partial function is particularly convenient when the exact domain of definition is unknown or even unknowable. For a computer-science example of the latter, see Halting problem.
In case the domain of definition S is equal to the whole set X, the partial function is said to be total. Thus, total partial functions from X to Y coincide with functions from X to Y.
Many properties of functions can be extended to partial functions in an appropriate sense. A partial function is said to be injective, surjective, or bijective when the function given by the restriction of the partial function to its domain of definition is injective, surjective, or bijective, respectively.
Because a function is trivially surjective when restricted to its image, the term partial bijection denotes a partial function which is injective.
An injective partial function may be inverted to an injective partial function, and a partial function which is both injective and surjective has an injective function as inverse. Furthermore, a function which is injective may be inverted to a bijective partial function.
The notion of transformation can be generalized to partial functions as well. A partial transformation is a function f : A ⇀ B, where both A and B are subsets of some set X.
== Function spaces ==
For convenience, denote the set of all partial functions f : X ⇀ Y from a set X to a set Y by [X ⇀ Y]. This set is the union of the sets of functions defined on the subsets of X with the same codomain Y:

[X ⇀ Y] = ⋃_{D ⊆ X} [D → Y],

the latter also written as ⋃_{D ⊆ X} Y^D. In the finite case, its cardinality is

|[X ⇀ Y]| = (|Y| + 1)^{|X|},

because any partial function can be extended to a (total) function by sending every undefined argument to a fixed value c not contained in Y, so that the codomain becomes Y ∪ {c}; this operation is injective (unique and invertible by restriction).
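The counting formula above can be checked by brute force on small sets. The following sketch is illustrative (the helper name `partial_functions` is the sketch's own, not standard): it enumerates every partial function by choosing, for each element of the domain, either an image in Y or "undefined".

```python
from itertools import product

def partial_functions(X, Y):
    """Enumerate all partial functions from X to Y as dicts, by choosing,
    for each x in X, either an element of Y or a sentinel 'undefined'."""
    UNDEF = object()
    X = list(X)
    for values in product(list(Y) + [UNDEF], repeat=len(X)):
        yield {x: v for x, v in zip(X, values) if v is not UNDEF}

X, Y = {1, 2, 3}, {'a', 'b'}
count = sum(1 for _ in partial_functions(X, Y))
print(count)   # (|Y| + 1) ** |X| = 3 ** 3 = 27
```

Each choice of "an element of Y or undefined" is exactly the extension-by-c argument in the text: a partial function to Y is the same thing as a total function to Y ∪ {c}.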
== Discussion and examples ==
The first diagram at the top of the article represents a partial function that is not a function, since the element 1 in the left-hand set is not associated with anything in the right-hand set; the second diagram represents a function, since every element of the left-hand set is associated with exactly one element of the right-hand set.
=== Natural logarithm ===
Consider the natural logarithm function mapping the real numbers to themselves. The logarithm of a non-positive real number is not a real number, so the natural logarithm function does not associate any real number in the codomain with any non-positive real number in the domain. Therefore, the natural logarithm function is not a function when viewed as a function from the reals to themselves, but it is a partial function. If the domain is restricted to only include the positive reals (that is, if the natural logarithm function is viewed as a function from the positive reals to the reals), then the natural logarithm is a function.
=== Subtraction of natural numbers ===
Subtraction of natural numbers (in which ℕ is the non-negative integers) is a partial function:

f : ℕ × ℕ ⇀ ℕ
f(x, y) = x − y.

It is defined only when x ≥ y.
=== Bottom element ===
In denotational semantics a partial function is considered as returning the bottom element when it is undefined.
In computer science a partial function corresponds to a subroutine that raises an exception or loops forever. The IEEE floating point standard defines a not-a-number value which is returned when a floating point operation is undefined and exceptions are suppressed, e.g. when the square root of a negative number is requested.
In a programming language where function parameters are statically typed, a function may be defined as a partial function because the language's type system cannot express the exact domain of the function, so the programmer instead gives it the smallest domain which is expressible as a type and contains the domain of definition of the function.
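Both behaviours mentioned above can be observed in Python; a small illustrative sketch (not from the article):

```python
import math

# As a subroutine, an undefined partial operation can surface as an exception:
try:
    math.sqrt(-1.0)
    result = "defined"
except ValueError:       # CPython raises "math domain error" here
    result = None

# With IEEE floating point and exceptions suppressed, the result is NaN instead;
# NaN compares unequal even to itself:
nan = float("nan")
print(result)        # None
print(nan == nan)    # False
```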
=== In category theory ===
In category theory, when considering the operation of morphism composition in concrete categories, the composition operation ∘ : hom(C) × hom(C) → hom(C) is a total function if and only if ob(C) has one element. The reason for this is that two morphisms f : X → Y and g : U → V can only be composed as g ∘ f if Y = U, that is, the codomain of f must equal the domain of g.
The category of sets and partial functions is equivalent to but not isomorphic with the category of pointed sets and point-preserving maps. One textbook notes that "This formal completion of sets and partial maps by adding “improper,” “infinite” elements was reinvented many times, in particular, in topology (one-point compactification) and in theoretical computer science."
The category of sets and partial bijections is equivalent to its dual. It is the prototypical inverse category.
=== In abstract algebra ===
Partial algebra generalizes the notion of universal algebra to partial operations. An example would be a field, in which the multiplicative inversion is the only proper partial operation (because division by zero is not defined).
The set of all partial functions (partial transformations) on a given base set X forms a regular semigroup called the semigroup of all partial transformations (or the partial transformation semigroup on X), typically denoted by 𝒫𝒯_X. The set of all partial bijections on X forms the symmetric inverse semigroup.
=== Charts and atlases for manifolds and fiber bundles ===
Charts in the atlases which specify the structure of manifolds and fiber bundles are partial functions. In the case of manifolds, the domain is the point set of the manifold. In the case of fiber bundles, the domain is the space of the fiber bundle. In these applications, the most important construction is the transition map, which is the composite of one chart with the inverse of another. The initial classification of manifolds and fiber bundles is largely expressed in terms of constraints on these transition maps.
The reason for the use of partial functions instead of functions is to permit general global topologies to be represented by stitching together local patches to describe the global structure. The "patches" are the domains where the charts are defined.
== See also ==
Analytic continuation – Extension of the domain of an analytic function (mathematics)
Multivalued function – Generalized mathematical function
Densely defined operator – Function that is defined almost everywhere (mathematics)
== References ==
Martin Davis (1958), Computability and Unsolvability, McGraw–Hill Book Company, Inc, New York. Republished by Dover in 1982. ISBN 0-486-61471-9.
Stephen Kleene (1952), Introduction to Meta-Mathematics, North-Holland Publishing Company, Amsterdam, Netherlands, 10th printing with corrections added on 7th printing (1974). ISBN 0-7204-2103-9.
Harold S. Stone (1972), Introduction to Computer Organization and Data Structures, McGraw–Hill Book Company, New York.
=== Notes === | Wikipedia/Partial_function |
In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y. The set X is called the domain of the function and the set Y is called the codomain of the function.
Functions were originally the idealization of how a varying quantity depends on another quantity. For example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century, and, until the 19th century, the functions that were considered were differentiable (that is, they had a high degree of regularity). The concept of a function was formalized at the end of the 19th century in terms of set theory, and this greatly increased the possible applications of the concept.
A function is often denoted by a letter such as f, g or h. The value of a function f at an element x of its domain (that is, the element of the codomain that is associated with x) is denoted by f(x); for example, the value of f at x = 4 is denoted by f(4). Commonly, a specific function is defined by means of an expression depending on x, such as f(x) = x² + 1; in this case, some computation, called function evaluation, may be needed for deducing the value of the function at a particular value; for example, if f(x) = x² + 1, then f(4) = 4² + 1 = 17.
Given its domain and its codomain, a function is uniquely represented by the set of all pairs (x, f (x)), called the graph of the function, a popular means of illustrating the function. When the domain and the codomain are sets of real numbers, each such pair may be thought of as the Cartesian coordinates of a point in the plane.
Functions are widely used in science, engineering, and in most fields of mathematics. It has been said that functions are "the central objects of investigation" in most fields of mathematics.
The concept of a function has evolved significantly over centuries, from its informal origins in ancient mathematics to its formalization in the 19th century. See History of the function concept for details.
== Definition ==
A function f from a set X to a set Y is an assignment of one element of Y to each element of X. The set X is called the domain of the function and the set Y is called the codomain of the function.
If the element y in Y is assigned to x in X by the function f, one says that f maps x to y, and this is commonly written y = f(x).
In this notation, x is the argument or variable of the function.
A specific element x of X is a value of the variable, and the corresponding element of Y is the value of the function at x, or the image of x under the function. The image of a function, sometimes called its range, is the set of the images of all elements in the domain.
A function f, its domain X, and its codomain Y are often specified by the notation f : X → Y. One may write x ↦ y instead of y = f(x), where the symbol ↦ (read 'maps to') is used to specify where a particular element x in the domain is mapped to by f. This allows the definition of a function without naming it. For example, the square function is the function x ↦ x².
The domain and codomain are not always explicitly given when a function is defined. In particular, it is common that one might only know, without some (possibly difficult) computation, that the domain of a specific function is contained in a larger set. For example, if f : ℝ → ℝ is a real function, the determination of the domain of the function x ↦ 1/f(x) requires knowing the zeros of f. This is one of the reasons for which, in mathematical analysis, "a function from X to Y" may refer to a function having a proper subset of X as a domain. For example, a "function from the reals to the reals" may refer to a real-valued function of a real variable whose domain is a proper subset of the real numbers, typically a subset that contains a non-empty open interval. Such a function is then called a partial function.
A function f on a set S means a function from the domain S, without specifying a codomain. However, some authors use it as shorthand for saying that the function is f : S → S.
=== Formal definition ===
The above definition of a function is essentially that of the founders of calculus, Leibniz, Newton and Euler. However, it cannot be formalized, since there is no mathematical definition of an "assignment". It is only at the end of the 19th century that the first formal definition of a function could be provided, in terms of set theory. This set-theoretic definition is based on the fact that a function establishes a relation between the elements of the domain and some (possibly all) elements of the codomain. Mathematically, a binary relation between two sets X and Y is a subset of the set of all ordered pairs (x, y) such that x ∈ X and y ∈ Y. The set of all these pairs is called the Cartesian product of X and Y and denoted X × Y. Thus, the above definition may be formalized as follows.
A function with domain X and codomain Y is a binary relation R between X and Y that satisfies the two following conditions:

For every x in X there exists y in Y such that (x, y) ∈ R.
If (x, y) ∈ R and (x, z) ∈ R, then y = z.
This definition may be rewritten more formally, without referring explicitly to the concept of a relation, but using more notation (including set-builder notation):

A function is formed by three sets, the domain X, the codomain Y, and the graph R, that satisfy the three following conditions:

R ⊆ {(x, y) ∣ x ∈ X, y ∈ Y}
∀x ∈ X, ∃y ∈ Y, (x, y) ∈ R
(x, y) ∈ R ∧ (x, z) ∈ R ⟹ y = z
=== Partial functions ===
Partial functions are defined similarly to ordinary functions, with the "total" condition removed. That is, a partial function from X to Y is a binary relation R between X and Y such that, for every x ∈ X, there is at most one y in Y such that (x, y) ∈ R. Using functional notation, this means that, given x ∈ X, either f(x) is in Y, or it is undefined.
The set of the elements of X such that f(x) is defined and belongs to Y is called the domain of definition of the function. A partial function from X to Y is thus an ordinary function that has as its domain a subset of X, called the domain of definition of the function. If the domain of definition equals X, one often says that the partial function is a total function.
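Dropping the totality condition gives the corresponding mechanical check for partial functions; again an illustrative sketch with a hypothetical helper name:

```python
def is_partial_function(R, X, Y):
    """A partial function drops totality: each x has AT MOST one image."""
    in_product = all(x in X and y in Y for (x, y) in R)
    at_most_one = all(y == z
                      for (x, y) in R
                      for (x2, z) in R
                      if x == x2)
    return in_product and at_most_one

X, Y = {1, 2, 3}, {'a', 'b'}
R = {(1, 'a')}
print(is_partial_function(R, X, Y))                      # True: 2 and 3 undefined
print({x for (x, y) in R})                               # domain of definition: {1}
print(is_partial_function({(1, 'a'), (1, 'b')}, X, Y))   # False: 1 has two images
```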
In several areas of mathematics, the term "function" refers to partial functions rather than to ordinary (total) functions. This is typically the case when functions may be specified in a way that makes it difficult, or even impossible, to determine their domain.
In calculus, a real-valued function of a real variable, or real function, is a partial function from the set ℝ of the real numbers to itself. Given a real function f : x ↦ f(x), its multiplicative inverse x ↦ 1/f(x) is also a real function. The determination of the domain of definition of the multiplicative inverse of a (partial) function amounts to computing the zeros of the function, the values where the function is defined but its multiplicative inverse is not.
Similarly, a function of a complex variable is generally a partial function whose domain of definition is a subset of the complex numbers ℂ. The difficulty of determining the domain of definition of a complex function is illustrated by the multiplicative inverse of the Riemann zeta function: the determination of the domain of definition of the function z ↦ 1/ζ(z) is more or less equivalent to the proof or disproof of one of the major open problems in mathematics, the Riemann hypothesis.
In computability theory, a general recursive function is a partial function from the integers to the integers whose values can be computed by an algorithm (roughly speaking). The domain of definition of such a function is the set of inputs for which the algorithm does not run forever. A fundamental theorem of computability theory is that there cannot exist an algorithm that takes an arbitrary general recursive function as input and tests whether 0 belongs to its domain of definition (see Halting problem).
=== Multivariate functions ===
A multivariate function, multivariable function, or function of several variables is a function that depends on several arguments. Such functions are commonly encountered. For example, the position of a car on a road is a function of the time travelled and its average speed.
Formally, a function of n variables is a function whose domain is a set of n-tuples. For example, multiplication of integers is a function of two variables, or bivariate function, whose domain is the set of all ordered pairs (2-tuples) of integers, and whose codomain is the set of integers. The same is true for every binary operation. The graph of a bivariate function over a two-dimensional real domain may be interpreted as defining a parametric surface, as used in, e.g., bivariate interpolation.
Commonly, an n-tuple is denoted enclosed between parentheses, such as in (1, 2, …, n). When using functional notation, one usually omits the parentheses surrounding tuples, writing f(x₁, …, xₙ) instead of f((x₁, …, xₙ)).
Given n sets X₁, …, Xₙ, the set of all n-tuples (x₁, …, xₙ) such that x₁ ∈ X₁, …, xₙ ∈ Xₙ is called the Cartesian product of X₁, …, Xₙ, and denoted X₁ × ⋯ × Xₙ.
Therefore, a multivariate function is a function that has a Cartesian product, or a proper subset of a Cartesian product, as its domain: f : U → Y, where the domain U has the form U ⊆ X₁ × ⋯ × Xₙ.
If all the Xᵢ are equal to the set ℝ of the real numbers or to the set ℂ of the complex numbers, one talks respectively of a function of several real variables or of a function of several complex variables.
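A small illustrative sketch (the names are this sketch's own): the Cartesian product of finite sets via itertools, and integer multiplication as a bivariate function sampled on a finite grid of pairs.

```python
from itertools import product

# A finite sample of the Cartesian product Z × Z: all pairs (x, y)
# with x, y in {-2, ..., 2}.
grid = list(product(range(-2, 3), repeat=2))
print(len(grid))   # 5 * 5 = 25 pairs

# Multiplication as a bivariate function: its domain is a set of 2-tuples.
mul = {(x, y): x * y for (x, y) in grid}
print(mul[(-2, 2)])   # -4
```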
== Notation ==
There are various standard ways for denoting functions. The most commonly used notation is functional notation, which is the first notation described below.
=== Functional notation ===
The functional notation requires that a name be given to the function, which, in the case of an unspecified function, is often the letter f. Then, the application of the function to an argument is denoted by its name followed by its argument (or, in the case of a multivariate function, its arguments) enclosed between parentheses, such as in f(x), sin(3), or f(x² + 1).
The argument between the parentheses may be a variable, often x, that represents an arbitrary element of the domain of the function, a specific element of the domain (3 in the above example), or an expression that can be evaluated to an element of the domain (x² + 1 in the above example). The use of an unspecified variable between parentheses is useful for defining a function explicitly, such as in "let f(x) = sin(x² + 1)".
When the symbol denoting the function consists of several characters and no ambiguity may arise, the parentheses of functional notation might be omitted. For example, it is common to write sin x instead of sin(x).
Functional notation was first used by Leonhard Euler in 1734. Some widely used functions are represented by a symbol consisting of several letters (usually two or three, generally an abbreviation of their name). In this case, a roman type is customarily used instead, such as "sin" for the sine function, in contrast to italic font for single-letter symbols.
The functional notation is often used colloquially for referring to a function and simultaneously naming its argument, such as in "let f(x) be a function". This is an abuse of notation that is useful for a simpler formulation.
=== Arrow notation ===
Arrow notation defines the rule of a function inline, without requiring a name to be given to the function. It uses the ↦ arrow symbol, pronounced "maps to". For example, x ↦ x + 1 is the function which takes a real number as input and outputs that number plus 1. Again, a domain and codomain of ℝ are implied.
The domain and codomain can also be explicitly stated, for example:

sqr : ℤ → ℤ, x ↦ x².
This defines a function sqr from the integers to the integers that returns the square of its input.
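As an illustration, the sqr example translates directly into code: the same rule written as a named Python function and as an anonymous (arrow-style) one.

```python
# sqr : Z → Z, x ↦ x², as a named function...
def sqr(x: int) -> int:
    return x * x

# ...and the same rule as an anonymous function, Python's analogue
# of the nameless arrow notation x ↦ x².
sqr_anon = lambda x: x * x

print(sqr(-3))       # 9
print(sqr_anon(4))   # 16
```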
As a common application of the arrow notation, suppose f : X × X → Y; (x, t) ↦ f(x, t) is a function in two variables, and we want to refer to a partially applied function X → Y produced by fixing the second argument to the value t₀ without introducing a new function name. The map in question could be denoted x ↦ f(x, t₀) using the arrow notation. The expression x ↦ f(x, t₀) (read: "the map taking x to f of x comma t nought") represents this new function with just one argument, whereas the expression f(x₀, t₀) refers to the value of the function f at the point (x₀, t₀).
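Partial application as described here corresponds to functools.partial in Python; an illustrative sketch with a made-up two-variable f:

```python
from functools import partial

def f(x, t):
    # a hypothetical function of two variables, for illustration only
    return x + 10 * t

# Fixing the second argument to t0 = 2 gives the map x ↦ f(x, t0):
t0 = 2
g = partial(f, t=t0)       # equivalently: g = lambda x: f(x, t0)

print(g(1))   # f(1, 2) = 21
print(g(5))   # f(5, 2) = 25
```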
=== Index notation ===
Index notation may be used instead of functional notation. That is, instead of writing f(x), one writes f_x. This is typically the case for functions whose domain is the set of the natural numbers. Such a function is called a sequence, and, in this case, the element f_n is called the nth element of the sequence.
The index notation can also be used for distinguishing some variables, called parameters, from the "true variables". In fact, parameters are specific variables that are considered as being fixed during the study of a problem. For example, the map x ↦ f(x, t) (see above) would be denoted f_t using index notation, if we define the collection of maps f_t by the formula f_t(x) = f(x, t) for all x, t ∈ X.
=== Dot notation ===
In the notation x ↦ f(x), the symbol x does not represent any value; it is simply a placeholder, meaning that, if x is replaced by any value on the left of the arrow, it should be replaced by the same value on the right of the arrow. Therefore, x may be replaced by any symbol, often an interpunct " ⋅ ". This may be useful for distinguishing the function f(⋅) from its value f(x) at x.
For example, a(⋅)² may stand for the function x ↦ ax², and ∫_a^(⋅) f(u) du may stand for a function defined by an integral with variable upper bound: x ↦ ∫_a^x f(u) du.
=== Specialized notations ===
There are other, specialized notations for functions in sub-disciplines of mathematics. For example, in linear algebra and functional analysis, linear forms and the vectors they act upon are denoted using a dual pair to show the underlying duality. This is similar to the use of bra–ket notation in quantum mechanics. In logic and the theory of computation, the function notation of lambda calculus is used to explicitly express the basic notions of function abstraction and application. In category theory and homological algebra, networks of functions are described in terms of how they and their compositions commute with each other using commutative diagrams that extend and generalize the arrow notation for functions described above.
=== Functions of more than one variable ===
In some cases the argument of a function may be an ordered pair of elements taken from some set or sets. For example, a function f can be defined as mapping any pair of real numbers (x, y) to the sum of their squares, x² + y². Such a function is commonly written as f(x, y) = x² + y² and referred to as "a function of two variables". Likewise one can have a function of three or more variables, with notations such as f(w, x, y), f(w, x, y, z).
== Other terms ==
A function may also be called a map or a mapping, but some authors make a distinction between the term "map" and "function". For example, the term "map" is often reserved for a "function" with some sort of special structure (e.g. maps of manifolds). In particular map may be used in place of homomorphism for the sake of succinctness (e.g., linear map or map from G to H instead of group homomorphism from G to H). Some authors reserve the word mapping for the case where the structure of the codomain belongs explicitly to the definition of the function.
Some authors, such as Serge Lang, use "function" only to refer to maps for which the codomain is a subset of the real or complex numbers, and use the term mapping for more general functions.
In the theory of dynamical systems, a map denotes an evolution function used to create discrete dynamical systems. See also Poincaré map.
Whichever definition of map is used, related terms like domain, codomain, injective, continuous have the same meaning as for a function.
== Specifying a function ==
Given a function f, by definition, to each element x of the domain of f there is associated a unique element, the value f(x) of f at x. There are several ways to specify or describe how x is related to f(x), both explicitly and implicitly. Sometimes, a theorem or an axiom asserts the existence of a function having some properties, without describing it more precisely. Often, the specification or description is referred to as the definition of the function f.
=== By listing function values ===
On a finite set a function may be defined by listing the elements of the codomain that are associated to the elements of the domain. For example, if A = {1, 2, 3}, then one can define a function f : A → ℝ by f(1) = 2, f(2) = 3, f(3) = 4.
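On a finite domain, such a listing is naturally a dictionary; an illustrative sketch:

```python
# The function f : {1, 2, 3} → R defined by listing its values,
# f(1) = 2, f(2) = 3, f(3) = 4, as in the text:
f = {1: 2, 2: 3, 3: 4}

print(f[2])                      # 3
print(sorted(f))                 # the domain: [1, 2, 3]
print(sorted(set(f.values())))   # the image: [2, 3, 4]
```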
=== By a formula ===
Functions are often defined by an expression that describes a combination of arithmetic operations and previously defined functions; such a formula allows computing the value of the function from the value of any element of the domain.
For example, the function f of the previous section can be defined by the formula f(n) = n + 1, for n ∈ {1, 2, 3}.
When a function is defined this way, the determination of its domain is sometimes difficult. If the formula that defines the function contains divisions, the values of the variable for which a denominator is zero must be excluded from the domain; thus, for a complicated function, the determination of the domain passes through the computation of the zeros of auxiliary functions. Similarly, if square roots occur in the definition of a function from ℝ to ℝ, the domain is included in the set of the values of the variable for which the arguments of the square roots are nonnegative.
For example, f(x) = √(1 + x²) defines a function f : ℝ → ℝ whose domain is ℝ, because 1 + x² is always positive if x is a real number. On the other hand, f(x) = √(1 − x²) defines a function from the reals to the reals whose domain is reduced to the interval [−1, 1]. (In old texts, such a domain was called the domain of definition of the function.)
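The restricted domain shows up concretely when evaluating such a formula; a small illustrative sketch, where evaluation outside [−1, 1] raises an error:

```python
import math

def f(x):
    """f(x) = sqrt(1 - x^2); math.sqrt raises ValueError when 1 - x^2 < 0,
    i.e. outside the domain [-1, 1]."""
    return math.sqrt(1 - x * x)

print(f(0.0))     # 1.0
try:
    f(2.0)        # 1 - 4 = -3 < 0: outside the domain
except ValueError:
    print("undefined")
```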
Functions can be classified by the nature of formulas that define them:
A quadratic function is a function that may be written f(x) = ax² + bx + c, where a, b, c are constants.
More generally, a polynomial function is a function that can be defined by a formula involving only additions, subtractions, multiplications, and exponentiation to nonnegative integer powers. For example, f(x) = x³ − 3x − 1 and f(x) = (x − 1)(x³ + 1) + 2x² − 1 are polynomial functions of x.
A rational function is the same, with divisions also allowed, such as {\displaystyle f(x)={\frac {x-1}{x+1}},} and {\displaystyle f(x)={\frac {1}{x+1}}+{\frac {3}{x}}-{\frac {2}{x-1}}.}
An algebraic function is the same, with nth roots and roots of polynomials also allowed.
An elementary function is the same, with logarithms and exponential functions allowed.
=== Inverse and implicit functions ===
A function {\displaystyle f:X\to Y,} with domain X and codomain Y, is bijective, if for every y in Y, there is one and only one element x in X such that y = f(x). In this case, the inverse function of f is the function {\displaystyle f^{-1}:Y\to X} that maps {\displaystyle y\in Y} to the element {\displaystyle x\in X} such that y = f(x). For example, the natural logarithm is a bijective function from the positive real numbers to the real numbers. It thus has an inverse, called the exponential function, that maps the real numbers onto the positive numbers.
If a function {\displaystyle f:X\to Y} is not bijective, it may occur that one can select subsets {\displaystyle E\subseteq X} and {\displaystyle F\subseteq Y} such that the restriction of f to E is a bijection from E to F, which thus has an inverse. The inverse trigonometric functions are defined this way. For example, the cosine function induces, by restriction, a bijection from the interval [0, π] onto the interval [−1, 1], and its inverse function, called arccosine, maps [−1, 1] onto [0, π]. The other inverse trigonometric functions are defined similarly.
More generally, given a binary relation R between two sets X and Y, let E be a subset of X such that, for every {\displaystyle x\in E,} there is some {\displaystyle y\in Y} such that x R y. If one has a criterion allowing selecting such a y for every {\displaystyle x\in E,} this defines a function {\displaystyle f:E\to Y,} called an implicit function, because it is implicitly defined by the relation R.
For example, the equation of the unit circle {\displaystyle x^{2}+y^{2}=1} defines a relation on real numbers. If −1 < x < 1 there are two possible values of y, one positive and one negative. For x = ±1, these two values both become equal to 0. Otherwise, there is no possible value of y. This means that the equation defines two implicit functions with domain [−1, 1] and respective codomains [0, +∞) and (−∞, 0].
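The two implicit branches can be sketched directly in Python (the names `y_upper` and `y_lower` are illustrative, not standard): each branch is a genuine function on [−1, 1], and they agree only at x = ±1.

```python
import math

def y_upper(x):
    # positive implicit branch of x^2 + y^2 = 1, codomain [0, +inf)
    if abs(x) > 1:
        raise ValueError("no real y exists for |x| > 1")
    return math.sqrt(1 - x * x)

def y_lower(x):
    # negative implicit branch, codomain (-inf, 0]
    return -y_upper(x)

assert y_upper(0.0) == 1.0 and y_lower(0.0) == -1.0
assert y_upper(1.0) == y_lower(1.0) == 0.0   # the branches meet at x = ±1
```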
In this example, the equation can be solved for y, giving {\displaystyle y=\pm {\sqrt {1-x^{2}}},} but, in more complicated examples, this is impossible. For example, the relation {\displaystyle y^{5}+y+x=0} defines y as an implicit function of x, called the Bring radical, which has {\displaystyle \mathbb {R} } as domain and range. The Bring radical cannot be expressed in terms of the four arithmetic operations and nth roots.
The implicit function theorem provides mild differentiability conditions for existence and uniqueness of an implicit function in the neighborhood of a point.
=== Using differential calculus ===
Many functions can be defined as the antiderivative of another function. This is the case of the natural logarithm, which is the antiderivative of 1/x that is 0 for x = 1. Another common example is the error function.
More generally, many functions, including most special functions, can be defined as solutions of differential equations. The simplest example is probably the exponential function, which can be defined as the unique function that is equal to its derivative and takes the value 1 for x = 0.
Power series can be used to define functions on the domain in which they converge. For example, the exponential function is given by
{\textstyle e^{x}=\sum _{n=0}^{\infty }{x^{n} \over n!}}. However, as the coefficients of a series are quite arbitrary, a function that is the sum of a convergent series is generally defined otherwise, and the sequence of the coefficients is the result of some computation based on another definition. Then, the power series can be used to enlarge the domain of the function. Typically, if a function of a real variable is the sum of its Taylor series in some interval, this power series allows immediately enlarging the domain to a subset of the complex numbers, the disc of convergence of the series. Then analytic continuation allows further enlarging the domain to include almost the whole complex plane. This process is the method that is generally used for defining the logarithm, the exponential and the trigonometric functions of a complex number.
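The power-series definition lends itself to a direct numerical check. A minimal Python sketch (the helper name `exp_series` and the cutoff of 30 terms are illustrative choices) sums the first terms of the series and compares against the library exponential:

```python
import math

def exp_series(x, terms=30):
    # partial sum of e^x = sum_{n>=0} x^n / n!
    total, term = 0.0, 1.0   # term holds x^n / n!, starting at n = 0
    for n in range(terms):
        total += term
        term *= x / (n + 1)  # advance x^n/n! -> x^(n+1)/(n+1)!
    return total

assert abs(exp_series(1.0) - math.exp(1.0)) < 1e-9
```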
=== By recurrence ===
Functions whose domain is the nonnegative integers, known as sequences, are sometimes defined by recurrence relations.
The factorial function on the nonnegative integers ({\displaystyle n\mapsto n!}) is a basic example, as it can be defined by the recurrence relation {\displaystyle n!=n(n-1)!\quad {\text{for}}\quad n>0,} and the initial condition {\displaystyle 0!=1.}
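The recurrence and its initial condition translate almost verbatim into code, as in this Python sketch:

```python
def factorial(n):
    # n! = n * (n-1)! for n > 0, with the initial condition 0! = 1
    if n == 0:
        return 1
    return n * factorial(n - 1)

assert factorial(0) == 1
assert factorial(5) == 120
```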
== Representing a function ==
A graph is commonly used to give an intuitive picture of a function. As an example of how a graph helps to understand a function, it is easy to see from its graph whether a function is increasing or decreasing. Some functions may also be represented by bar charts.
=== Graphs and plots ===
Given a function {\displaystyle f:X\to Y,} its graph is, formally, the set {\displaystyle G=\{(x,f(x))\mid x\in X\}.}
In the frequent case where X and Y are subsets of the real numbers (or may be identified with such subsets, e.g. intervals), an element {\displaystyle (x,y)\in G} may be identified with a point having coordinates x, y in a 2-dimensional coordinate system, e.g. the Cartesian plane. Parts of this may create a plot that represents (parts of) the function. The use of plots is so ubiquitous that they too are called the graph of the function. Graphic representations of functions are also possible in other coordinate systems. For example, the graph of the square function {\displaystyle x\mapsto x^{2},} consisting of all points with coordinates {\displaystyle (x,x^{2})} for {\displaystyle x\in \mathbb {R} ,} yields, when depicted in Cartesian coordinates, the well-known parabola. If the same quadratic function {\displaystyle x\mapsto x^{2},} with the same formal graph, consisting of pairs of numbers, is plotted instead in polar coordinates {\displaystyle (r,\theta )=(x,x^{2}),} the plot obtained is Fermat's spiral.
=== Tables ===
A function can be represented as a table of values. If the domain of a function is finite, then the function can be completely specified in this way. For example, the multiplication function {\displaystyle f:\{1,\ldots ,5\}^{2}\to \mathbb {R} } defined as {\displaystyle f(x,y)=xy} can be represented by the familiar multiplication table.
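A finite-domain function really is nothing more than its table, as the following Python sketch shows (the names `table` and `f` are illustrative):

```python
# f : {1,...,5}^2 -> R, f(x, y) = xy, tabulated in full: a finite domain
# means the table specifies the function completely
table = {(x, y): x * y for x in range(1, 6) for y in range(1, 6)}

def f(x, y):
    return table[(x, y)]   # pure lookup, no formula needed

assert f(3, 4) == 12
assert len(table) == 25    # |{1,...,5}^2| = 25 entries
```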
On the other hand, if a function's domain is continuous, a table can give the values of the function at specific values of the domain. If an intermediate value is needed, interpolation can be used to estimate the value of the function. For example, a portion of a table for the sine function might be given as follows, with values rounded to 6 decimal places:
Before the advent of handheld calculators and personal computers, such tables were often compiled and published for functions such as logarithms and trigonometric functions.
=== Bar chart ===
A bar chart can represent a function whose domain is a finite set, the natural numbers, or the integers. In this case, an element x of the domain is represented by an interval of the x-axis, and the corresponding value of the function, f(x), is represented by a rectangle whose base is the interval corresponding to x and whose height is f(x) (possibly negative, in which case the bar extends below the x-axis).
== General properties ==
This section describes general properties of functions that are independent of specific properties of the domain and the codomain.
=== Standard functions ===
There are a number of standard functions that occur frequently:
For every set X, there is a unique function, called the empty function, or empty map, from the empty set to X. The graph of an empty function is the empty set. The existence of empty functions is needed both for the coherency of the theory and for avoiding exceptions concerning the empty set in many statements. Under the usual set-theoretic definition of a function as an ordered triplet (or equivalent ones), there is exactly one empty function for each set; thus the empty function {\displaystyle \varnothing \to X} is not equal to {\displaystyle \varnothing \to Y} if and only if {\displaystyle X\neq Y}, although their graphs are both the empty set.
For every set X and every singleton set {s}, there is a unique function from X to {s}, which maps every element of X to s. This is a surjection (see below) unless X is the empty set.
Given a function {\displaystyle f:X\to Y,} the canonical surjection of f onto its image {\displaystyle f(X)=\{f(x)\mid x\in X\}} is the function from X to f(X) that maps x to f(x).
For every subset A of a set X, the inclusion map of A into X is the injective (see below) function that maps every element of A to itself.
The identity function on a set X, often denoted by idX, is the inclusion of X into itself.
=== Function composition ===
Given two functions {\displaystyle f:X\to Y} and {\displaystyle g:Y\to Z} such that the domain of g is the codomain of f, their composition is the function {\displaystyle g\circ f:X\rightarrow Z} defined by {\displaystyle (g\circ f)(x)=g(f(x)).} That is, the value of {\displaystyle g\circ f}
is obtained by first applying f to x to obtain y = f(x) and then applying g to the result y to obtain g(y) = g(f(x)). In this notation, the function that is applied first is always written on the right.
The composition {\displaystyle g\circ f} is an operation on functions that is defined only if the codomain of the first applied function is the domain of the second one. Even when both {\displaystyle g\circ f} and {\displaystyle f\circ g} satisfy these conditions, the composition is not necessarily commutative, that is, the functions {\displaystyle g\circ f} and {\displaystyle f\circ g} need not be equal, but may deliver different values for the same argument. For example, let f(x) = x² and g(x) = x + 1; then {\displaystyle g(f(x))=x^{2}+1} and {\displaystyle f(g(x))=(x+1)^{2}} agree just for {\displaystyle x=0.}
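This non-commutativity is easy to witness in code. In the Python sketch below (the helper `compose` is an illustrative name), f is applied first in `compose(g, f)`, matching the right-to-left convention:

```python
def compose(g, f):
    # (g ∘ f)(x) = g(f(x)); f is applied first
    return lambda x: g(f(x))

f = lambda x: x ** 2   # f(x) = x^2
g = lambda x: x + 1    # g(x) = x + 1

gf = compose(g, f)     # gf(x) = x^2 + 1
fg = compose(f, g)     # fg(x) = (x + 1)^2

assert gf(2) == 5 and fg(2) == 9   # composition is not commutative
assert gf(0) == fg(0) == 1         # the two agree only at x = 0
```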
The function composition is associative in the sense that, if one of {\displaystyle (h\circ g)\circ f} and {\displaystyle h\circ (g\circ f)} is defined, then the other is also defined, and they are equal, that is, {\displaystyle (h\circ g)\circ f=h\circ (g\circ f).} Therefore, it is usual to just write {\displaystyle h\circ g\circ f.}
The identity functions {\displaystyle \operatorname {id} _{X}} and {\displaystyle \operatorname {id} _{Y}} are respectively a right identity and a left identity for functions from X to Y. That is, if f is a function with domain X and codomain Y, one has {\displaystyle f\circ \operatorname {id} _{X}=\operatorname {id} _{Y}\circ f=f.}
=== Image and preimage ===
Let {\displaystyle f:X\to Y.} The image under f of an element x of the domain X is f(x). If A is any subset of X, then the image of A under f, denoted f(A), is the subset of the codomain Y consisting of all images of elements of A, that is, {\displaystyle f(A)=\{f(x)\mid x\in A\}.}
The image of f is the image of the whole domain, that is, f(X). It is also called the range of f, although the term range may also refer to the codomain.
On the other hand, the inverse image or preimage under f of an element y of the codomain Y is the set of all elements of the domain X whose images under f equal y. In symbols, the preimage of y is denoted by {\displaystyle f^{-1}(y)} and is given by the equation {\displaystyle f^{-1}(y)=\{x\in X\mid f(x)=y\}.} Likewise, the preimage of a subset B of the codomain Y is the set of the preimages of the elements of B, that is, it is the subset of the domain X consisting of all elements of X whose images belong to B. It is denoted by {\displaystyle f^{-1}(B)} and is given by the equation {\displaystyle f^{-1}(B)=\{x\in X\mid f(x)\in B\}.}
For example, the preimage of {\displaystyle \{4,9\}} under the square function is the set {\displaystyle \{-3,-2,2,3\}}.
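The defining equation of the preimage can be computed directly when the search is limited to a finite domain, as in this Python sketch (the helper `preimage` and the integer window are illustrative choices):

```python
def preimage(f, B, domain):
    # f^{-1}(B) = {x in domain | f(x) in B}
    return {x for x in domain if f(x) in B}

square = lambda x: x * x

# the preimage of {4, 9} under the square function, searched over a
# finite window of integers standing in for Z
assert preimage(square, {4, 9}, range(-10, 11)) == {-3, -2, 2, 3}
```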
By definition of a function, the image of an element x of the domain is always a single element of the codomain. However, the preimage {\displaystyle f^{-1}(y)} of an element y of the codomain may be empty or contain any number of elements. For example, if f is the function from the integers to themselves that maps every integer to 0, then {\displaystyle f^{-1}(0)=\mathbb {Z} }.
If {\displaystyle f:X\to Y} is a function, A and B are subsets of X, and C and D are subsets of Y, then one has the following properties:
{\displaystyle A\subseteq B\Longrightarrow f(A)\subseteq f(B)}
{\displaystyle C\subseteq D\Longrightarrow f^{-1}(C)\subseteq f^{-1}(D)}
{\displaystyle A\subseteq f^{-1}(f(A))}
{\displaystyle C\supseteq f(f^{-1}(C))}
{\displaystyle f(f^{-1}(f(A)))=f(A)}
{\displaystyle f^{-1}(f(f^{-1}(C)))=f^{-1}(C)}
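These properties can be checked mechanically on small finite sets. The following Python sketch (the helper names `image` and `preimage` are illustrative, not library functions) verifies two of them for a non-injective function, where the inclusions are strict:

```python
def image(f, A):
    # f(A) = {f(x) | x in A}
    return {f(x) for x in A}

def preimage(f, B, domain):
    # f^{-1}(B) = {x in domain | f(x) in B}
    return {x for x in domain if f(x) in B}

X = {-2, -1, 0, 1, 2}
f = lambda x: x * x    # the square function, not injective on X

A = {1, 2}
assert A <= preimage(f, image(f, A), X)      # A ⊆ f^{-1}(f(A)), strict here
C = {1, 4, 7}
assert image(f, preimage(f, C, X)) <= C      # f(f^{-1}(C)) ⊆ C, strict here
```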
The preimage by f of an element y of the codomain is sometimes called, in some contexts, the fiber of y under f.
If a function f has an inverse (see below), this inverse is denoted {\displaystyle f^{-1}.} In this case {\displaystyle f^{-1}(C)} may denote either the image by {\displaystyle f^{-1}} or the preimage by f of C. This is not a problem, as these sets are equal. The notation {\displaystyle f(A)} and {\displaystyle f^{-1}(C)} may be ambiguous in the case of sets that contain some subsets as elements, such as {\displaystyle \{x,\{x\}\}.} In this case, some care may be needed, for example, by using square brackets {\displaystyle f[A],f^{-1}[C]} for images and preimages of subsets and ordinary parentheses for images and preimages of elements.
=== Injective, surjective and bijective functions ===
Let {\displaystyle f:X\to Y} be a function.
The function f is injective (or one-to-one, or is an injection) if f(a) ≠ f(b) for every two different elements a and b of X. Equivalently, f is injective if and only if, for every {\displaystyle y\in Y,} the preimage {\displaystyle f^{-1}(y)} contains at most one element. An empty function is always injective. If X is not the empty set, then f is injective if and only if there exists a function {\displaystyle g:Y\to X} such that {\displaystyle g\circ f=\operatorname {id} _{X},} that is, if f has a left inverse. Proof: If f is injective, for defining g, one chooses an element {\displaystyle x_{0}} in X (which exists as X is supposed to be nonempty), and one defines g by {\displaystyle g(y)=x} if {\displaystyle y=f(x)} and {\displaystyle g(y)=x_{0}} if {\displaystyle y\not \in f(X).} Conversely, if {\displaystyle g\circ f=\operatorname {id} _{X},} and {\displaystyle y=f(x),} then {\displaystyle x=g(y),} and thus {\displaystyle f^{-1}(y)=\{x\}.}
The function f is surjective (or onto, or is a surjection) if its range {\displaystyle f(X)} equals its codomain {\displaystyle Y}, that is, if, for each element {\displaystyle y} of the codomain, there exists some element {\displaystyle x} of the domain such that {\displaystyle f(x)=y} (in other words, the preimage {\displaystyle f^{-1}(y)} of every {\displaystyle y\in Y} is nonempty). If, as usual in modern mathematics, the axiom of choice is assumed, then f is surjective if and only if there exists a function {\displaystyle g:Y\to X} such that {\displaystyle f\circ g=\operatorname {id} _{Y},} that is, if f has a right inverse. The axiom of choice is needed, because, if f is surjective, one defines g by {\displaystyle g(y)=x,} where {\displaystyle x} is an arbitrarily chosen element of {\displaystyle f^{-1}(y).}
The function f is bijective (or is a bijection or a one-to-one correspondence) if it is both injective and surjective. That is, f is bijective if, for every {\displaystyle y\in Y,} the preimage {\displaystyle f^{-1}(y)} contains exactly one element. The function f is bijective if and only if it admits an inverse function, that is, a function {\displaystyle g:Y\to X} such that {\displaystyle g\circ f=\operatorname {id} _{X}} and {\displaystyle f\circ g=\operatorname {id} _{Y}.} (Contrary to the case of surjections, this does not require the axiom of choice; the proof is straightforward.)
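For functions between finite sets, all three properties are decidable by direct enumeration. A minimal Python sketch (the predicate names are illustrative):

```python
def is_injective(f, X):
    # no two distinct elements of X share an image
    values = [f(x) for x in X]
    return len(values) == len(set(values))

def is_surjective(f, X, Y):
    # the image f(X) equals the codomain Y
    return {f(x) for x in X} == set(Y)

def is_bijective(f, X, Y):
    return is_injective(f, X) and is_surjective(f, X, Y)

square = lambda x: x * x
assert is_bijective(square, {0, 1, 2}, {0, 1, 4})  # bijection onto {0, 1, 4}
assert not is_injective(square, {-1, 0, 1})        # (-1)^2 = 1^2
```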
Every function {\displaystyle f:X\to Y} may be factorized as the composition {\displaystyle i\circ s} of a surjection followed by an injection, where s is the canonical surjection of X onto f(X) and i is the canonical injection of f(X) into Y. This is the canonical factorization of f.
"One-to-one" and "onto" are terms that were more common in the older English language literature; "injective", "surjective", and "bijective" were originally coined as French words in the second quarter of the 20th century by the Bourbaki group and imported into English. As a word of caution, "a one-to-one function" is one that is injective, while a "one-to-one correspondence" refers to a bijective function. Also, the statement "f maps X onto Y" differs from "f maps X into Y", in that the former implies that f is surjective, while the latter makes no assertion about the nature of f. In a complicated reasoning, the one letter difference can easily be missed. Due to the confusing nature of this older terminology, these terms have declined in popularity relative to the Bourbakian terms, which also have the advantage of being more symmetrical.
=== Restriction and extension ===
If {\displaystyle f:X\to Y} is a function and S is a subset of X, then the restriction of {\displaystyle f} to S, denoted {\displaystyle f|_{S}}, is the function from S to Y defined by {\displaystyle f|_{S}(x)=f(x)} for all x in S. Restrictions can be used to define partial inverse functions: if there is a subset S of the domain of a function {\displaystyle f} such that {\displaystyle f|_{S}} is injective, then the canonical surjection of {\displaystyle f|_{S}} onto its image {\displaystyle f|_{S}(S)=f(S)} is a bijection, and thus has an inverse function from {\displaystyle f(S)} to S. One application is the definition of inverse trigonometric functions. For example, the cosine function is injective when restricted to the interval [0, π]. The image of this restriction is the interval [−1, 1], and thus the restriction has an inverse function from [−1, 1] to [0, π], which is called arccosine and is denoted arccos.
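The partial-inverse behaviour of arccosine is observable numerically: Python's `math.acos` inverts the restriction of cosine to [0, π], so the round trip acos(cos(t)) recovers t only on that interval.

```python
import math

# cos is not injective on R, but its restriction to [0, pi] is;
# math.acos inverts that restriction, mapping [-1, 1] onto [0, pi].
for t in (0.0, 1.0, math.pi / 2, math.pi):
    assert abs(math.acos(math.cos(t)) - t) < 1e-9   # round trip on [0, pi]

# outside [0, pi] the round trip returns the representative in [0, pi]:
assert abs(math.acos(math.cos(-1.0)) - 1.0) < 1e-9
```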
Function restriction may also be used for "gluing" functions together. Let {\textstyle X=\bigcup _{i\in I}U_{i}} be the decomposition of X as a union of subsets, and suppose that a function {\displaystyle f_{i}:U_{i}\to Y} is defined on each {\displaystyle U_{i}} such that for each pair {\displaystyle i,j} of indices, the restrictions of {\displaystyle f_{i}} and {\displaystyle f_{j}} to {\displaystyle U_{i}\cap U_{j}} are equal. Then this defines a unique function {\displaystyle f:X\to Y} such that {\displaystyle f|_{U_{i}}=f_{i}} for all i. This is the way that functions on manifolds are defined.
An extension of a function f is a function g such that f is a restriction of g. A typical use of this concept is the process of analytic continuation, that allows extending functions whose domain is a small part of the complex plane to functions whose domain is almost the whole complex plane.
Here is another classical example of a function extension that is encountered when studying homographies of the real line. A homography is a function {\displaystyle h(x)={\frac {ax+b}{cx+d}}} such that ad − bc ≠ 0. Its domain is the set of all real numbers different from {\displaystyle -d/c,} and its image is the set of all real numbers different from {\displaystyle a/c.} If one extends the real line to the projectively extended real line by including ∞, one may extend h to a bijection from the extended real line to itself by setting {\displaystyle h(\infty )=a/c} and {\displaystyle h(-d/c)=\infty }.
== In calculus ==
The idea of function, starting in the 17th century, was fundamental to the new infinitesimal calculus. At that time, only real-valued functions of a real variable were considered, and all functions were assumed to be smooth. But the definition was soon extended to functions of several variables and to functions of a complex variable. In the second half of the 19th century, the mathematically rigorous definition of a function was introduced, and functions with arbitrary domains and codomains were defined.
Functions are now used throughout all areas of mathematics. In introductory calculus, when the word function is used without qualification, it means a real-valued function of a single real variable. The more general definition of a function is usually introduced to second or third year college students with STEM majors, and in their senior year they are introduced to calculus in a larger, more rigorous setting in courses such as real analysis and complex analysis.
=== Real function ===
A real function is a real-valued function of a real variable, that is, a function whose codomain is the field of real numbers and whose domain is a set of real numbers that contains an interval. In this section, these functions are simply called functions.
The functions that are most commonly considered in mathematics and its applications have some regularity, that is, they are continuous, differentiable, and even analytic. This regularity ensures that these functions can be visualized by their graphs. In this section, all functions are differentiable in some interval.
Functions enjoy pointwise operations, that is, if f and g are functions, their sum, difference and product are functions defined by {\displaystyle {\begin{aligned}(f+g)(x)&=f(x)+g(x)\\(f-g)(x)&=f(x)-g(x)\\(f\cdot g)(x)&=f(x)\cdot g(x).\end{aligned}}}
The domains of the resulting functions are the intersection of the domains of f and g. The quotient of two functions is defined similarly by {\displaystyle {\frac {f}{g}}(x)={\frac {f(x)}{g(x)}},} but the domain of the resulting function is obtained by removing the zeros of g from the intersection of the domains of f and g.
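Pointwise operations can be sketched generically in Python (the helper name `pointwise` is an illustrative choice, not a standard library function):

```python
import operator

def pointwise(op, f, g):
    # (f op g)(x) = op(f(x), g(x)); x must lie in both domains
    return lambda x: op(f(x), g(x))

f = lambda x: x + 1
g = lambda x: 2 * x

s = pointwise(operator.add, f, g)       # (f + g)(x) = 3x + 1
p = pointwise(operator.mul, f, g)       # (f · g)(x) = 2x^2 + 2x
q = pointwise(operator.truediv, f, g)   # (f / g)(x), undefined where g(x) = 0

assert s(2) == 7
assert p(2) == 12
```

Evaluating `q(0)` raises `ZeroDivisionError`, mirroring the removal of the zeros of g from the quotient's domain.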
The polynomial functions are defined by polynomials, and their domain is the whole set of real numbers. They include constant functions, linear functions and quadratic functions. Rational functions are quotients of two polynomial functions, and their domain is the real numbers with a finite number of them removed to avoid division by zero. The simplest rational function is the function {\displaystyle x\mapsto {\frac {1}{x}},} whose graph is a hyperbola, and whose domain is the whole real line except for 0.
The derivative of a real differentiable function is a real function. An antiderivative of a continuous real function is a real function that has the original function as a derivative. For example, the function {\textstyle x\mapsto {\frac {1}{x}}} is continuous, and even differentiable, on the positive real numbers. Thus one antiderivative, which takes the value zero for x = 1, is a differentiable function called the natural logarithm.
A real function f is monotonic in an interval if the sign of {\displaystyle {\frac {f(x)-f(y)}{x-y}}} does not depend on the choice of x and y in the interval. If the function is differentiable in the interval, it is monotonic if the sign of the derivative is constant in the interval. If a real function f is monotonic in an interval I, it has an inverse function, which is a real function with domain f(I) and image I. This is how inverse trigonometric functions are defined in terms of trigonometric functions, where the trigonometric functions are monotonic. Another example: the natural logarithm is monotonic on the positive real numbers, and its image is the whole real line; therefore it has an inverse function that is a bijection between the real numbers and the positive real numbers. This inverse is the exponential function.
Many other real functions are defined either by the implicit function theorem (the inverse function is a particular instance) or as solutions of differential equations. For example, the sine and the cosine functions are the solutions of the linear differential equation {\displaystyle y''+y=0} such that {\displaystyle \sin 0=0,\quad \cos 0=1,\quad {\frac {\partial \sin x}{\partial x}}(0)=1,\quad {\frac {\partial \cos x}{\partial x}}(0)=0.}
=== Vector-valued function ===
When the elements of the codomain of a function are vectors, the function is said to be a vector-valued function. These functions are particularly useful in applications, for example modeling physical properties. For example, the function that associates to each point of a fluid its velocity vector is a vector-valued function.
Some vector-valued functions are defined on a subset of {\displaystyle \mathbb {R} ^{n}} or other spaces that share geometric or topological properties of {\displaystyle \mathbb {R} ^{n}}, such as manifolds. These vector-valued functions are given the name vector fields.
== Function space ==
In mathematical analysis, and more specifically in functional analysis, a function space is a set of scalar-valued or vector-valued functions, which share a specific property and form a topological vector space. For example, the real smooth functions with a compact support (that is, they are zero outside some compact set) form a function space that is at the basis of the theory of distributions.
Function spaces play a fundamental role in advanced mathematical analysis, by allowing the use of their algebraic and topological properties for studying properties of functions. For example, all theorems of existence and uniqueness of solutions of ordinary or partial differential equations result from the study of function spaces.
== Multi-valued functions ==
Several methods for specifying functions of real or complex variables start from a local definition of the function at a point or on a neighbourhood of a point, and then extend the function by continuity to a much larger domain. Frequently, for a starting point {\displaystyle x_{0},} there are several possible starting values for the function.
For example, in defining the square root as the inverse function of the square function, for any positive real number {\displaystyle x_{0},} there are two choices for the value of the square root, one of which is positive and denoted {\displaystyle {\sqrt {x_{0}}},} and another which is negative and denoted {\displaystyle -{\sqrt {x_{0}}}.}
These choices define two continuous functions, both having the nonnegative real numbers as a domain, and having either the nonnegative or the nonpositive real numbers as images. When looking at the graphs of these functions, one can see that, together, they form a single smooth curve. It is therefore often useful to consider these two square root functions as a single function that has two values for positive x, one value for 0 and no value for negative x.
In the preceding example, one choice, the positive square root, is more natural than the other. This is not the case in general. For example, consider the implicit function that maps y to a root x of {\displaystyle x^{3}-3x-y=0} (see the figure on the right). For y = 0 one may choose either {\displaystyle 0,{\sqrt {3}},{\text{ or }}-{\sqrt {3}}} for x. By the implicit function theorem, each choice defines a function; for the first one, the (maximal) domain is the interval [−2, 2] and the image is [−1, 1]; for the second one, the domain is [−2, ∞) and the image is [1, ∞); for the last one, the domain is (−∞, 2] and the image is (−∞, −1]. As the three graphs together form a smooth curve, and there is no reason for preferring one choice, these three functions are often considered as a single multi-valued function of y that has three values for −2 < y < 2, and only one value for y ≤ −2 and y ≥ 2.
The usefulness of the concept of multi-valued functions is clearer when considering complex functions, typically analytic functions. The domain to which a complex function may be extended by analytic continuation generally consists of almost the whole complex plane. However, when extending the domain through two different paths, one often gets different values. For example, when extending the domain of the square root function, along a path of complex numbers with positive imaginary parts, one gets i for the square root of −1; while, when extending through complex numbers with negative imaginary parts, one gets −i. There are generally two ways of solving the problem. One may define a function that is not continuous along some curve, called a branch cut. Such a function is called the principal value of the function. The other way is to consider that one has a multi-valued function, which is analytic everywhere except for isolated singularities, but whose value may "jump" if one follows a closed loop around a singularity. This jump is called the monodromy.
== In the foundations of mathematics ==
The definition of a function that is given in this article requires the concept of set, since the domain and the codomain of a function must be a set. This is not a problem in usual mathematics, as it is generally not difficult to consider only functions whose domain and codomain are sets, which are well defined, even if the domain is not explicitly defined. However, it is sometimes useful to consider more general functions.
For example, the singleton set may be considered as a function {\displaystyle x\mapsto \{x\}.}
Its domain would include all sets, and therefore would not be a set. In usual mathematics, one avoids this kind of problem by specifying a domain, which means that one has many singleton functions. However, when establishing foundations of mathematics, one may have to use functions whose domain, codomain or both are not specified, and some authors, often logicians, give precise definitions for these weakly specified functions.
These generalized functions may be critical in the development of a formalization of the foundations of mathematics. For example, Von Neumann–Bernays–Gödel set theory is an extension of set theory in which the collection of all sets is a class. This theory includes the replacement axiom, which may be stated as: if X is a set and F is a function, then F[X] is a set.
In alternative formulations of the foundations of mathematics using type theory rather than set theory, functions are taken as primitive notions rather than defined from other kinds of object. They are the inhabitants of function types, and may be constructed using expressions in the lambda calculus.
== In computer science ==
In computer programming, a function is, in general, a subroutine which implements the abstract concept of function. That is, it is a program unit that produces an output for each input. Functional programming is the programming paradigm consisting of building programs by using only subroutines that behave like mathematical functions, meaning that they have no side effects and depend only on their arguments: they are referentially transparent. For example, if_then_else is a function that takes three (nullary) functions as arguments, and, depending on the value of the first argument (true or false), returns the value of either the second or the third argument. An important advantage of functional programming is that it makes program proofs easier, being based on a well-founded theory, the lambda calculus (see below). However, side effects are generally necessary for practical programs, ones that perform input/output. There is a class of purely functional languages, such as Haskell, which encapsulate the possibility of side effects in the type of a function. Others, such as the ML family, simply allow side effects.
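The if_then_else idea can be sketched in Python by passing the branches as nullary functions (thunks), so that only the selected branch is ever evaluated; the name if_then_else here is illustrative, not a standard library function.

```python
# Illustrative sketch: if_then_else receives its branches as nullary
# functions (thunks), so only the selected branch is ever evaluated and
# the whole expression stays referentially transparent.

def if_then_else(cond, then_branch, else_branch):
    """Evaluate the condition thunk, then exactly one branch thunk."""
    return then_branch() if cond() else else_branch()

result = if_then_else(lambda: 2 > 1, lambda: "yes", lambda: "no")
# result == "yes"; the "no" thunk is never called
```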
In many programming languages, every subroutine is called a function, even when there is no output but only side effects, and when the functionality consists simply of modifying some data in the computer memory.
Outside the context of programming languages, "function" has the usual mathematical meaning in computer science. In this area, a property of major interest is the computability of a function. To give a precise meaning to this concept, and to the related concept of algorithm, several models of computation have been introduced, the oldest being general recursive functions, lambda calculus, and Turing machines. The fundamental theorem of computability theory is that these three models of computation define the same set of computable functions, and that all the other models of computation that have ever been proposed define the same set of computable functions or a smaller one. The Church–Turing thesis is the claim that every philosophically acceptable definition of a computable function also defines the same functions.
General recursive functions are partial functions from integers to integers that can be defined from constant functions, successor, and projection functions via the operators composition, primitive recursion, and minimization.
Although defined only for functions from integers to integers, they can model any computable function as a consequence of the following properties:
a computation is the manipulation of finite sequences of symbols (digits of numbers, formulas, etc.),
every sequence of symbols may be coded as a sequence of bits,
a bit sequence can be interpreted as the binary representation of an integer.
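As a minimal sketch of the scheme above, addition can be built from the successor function via primitive recursion; the helper primitive_recursion below is illustrative, not part of any formal system.

```python
# Illustrative sketch of primitive recursion: h(x, 0) = base(x) and
# h(x, y + 1) = step(x, y, h(x, y)).  Addition arises from successor alone.

def succ(n):
    return n + 1

def primitive_recursion(base, step):
    def h(x, y):
        acc = base(x)
        for i in range(y):
            acc = step(x, i, acc)
        return acc
    return h

# add(x, 0) = x;  add(x, y + 1) = succ(add(x, y))
add = primitive_recursion(lambda x: x, lambda x, i, acc: succ(acc))
# add(3, 4) == 7
```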
Lambda calculus is a theory that defines computable functions without using set theory, and is the theoretical background of functional programming. It consists of terms that are either variables, function definitions (𝜆-terms), or applications of functions to terms. Terms are manipulated by interpreting the axioms of the theory (α-equivalence, β-reduction, and η-conversion) as rewriting rules, which can be used for computation.
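As a rough illustration (using Python lambdas as a stand-in for 𝜆-terms, with Python function application playing the role of β-reduction), Church numerals show how arithmetic can be encoded in pure functions.

```python
# Illustrative Church numerals: the numeral n is the function mapping f to
# its n-fold composition; decoding applies the numeral to integer successor.

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by applying it to integer successor."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
# to_int(plus(two)(three)) == 5
```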
In its original form, lambda calculus does not include the concepts of domain and codomain of a function. Roughly speaking, they have been introduced in the theory under the name of type in typed lambda calculus. Most kinds of typed lambda calculi can define fewer functions than untyped lambda calculus.
== See also ==
=== Subpages ===
=== Generalizations ===
=== Related topics ===
== Notes ==
== References ==
== Sources ==
== Further reading ==
== External links ==
The Wolfram Functions – website giving formulae and visualizations of many mathematical functions
NIST Digital Library of Mathematical Functions
In mathematics, and in particular measure theory, a measurable function is a function between the underlying sets of two measurable spaces that preserves the structure of the spaces: the preimage of any measurable set is measurable. This is in direct analogy to the definition that a continuous function between topological spaces preserves the topological structure: the preimage of any open set is open. In real analysis, measurable functions are used in the definition of the Lebesgue integral. In probability theory, a measurable function on a probability space is known as a random variable.
== Formal definition ==
Let (X, Σ) and (Y, T) be measurable spaces, meaning that X and Y are sets equipped with respective σ-algebras Σ and T.
A function f : X → Y is said to be measurable if for every E ∈ T the preimage of E under f is in Σ; that is, for all E ∈ T,

f⁻¹(E) := {x ∈ X : f(x) ∈ E} ∈ Σ.
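The preimage condition can be checked directly on small finite examples; the following sketch (all sets and names illustrative) tests measurability of functions between two finite measurable spaces.

```python
# Illustrative finite example: measurability means every preimage of a
# T-measurable set is Sigma-measurable.

def preimage(f, X, E):
    """The set f^{-1}(E) = {x in X : f(x) in E}."""
    return frozenset(x for x in X if f[x] in E)

X = frozenset({1, 2, 3, 4})
Sigma = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), X}   # sigma-algebra on X
Y = frozenset({"a", "b"})
T = {frozenset(), frozenset({"a"}), frozenset({"b"}), Y}         # sigma-algebra on Y

def is_measurable(f):
    return all(preimage(f, X, E) in Sigma for E in T)

f = {1: "a", 2: "a", 3: "b", 4: "b"}   # constant on each atom of Sigma
g = {1: "a", 2: "b", 3: "a", 4: "b"}   # splits the atom {1, 2}
# is_measurable(f) is True; is_measurable(g) is False
```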
That is, σ(f) ⊆ Σ, where σ(f) is the σ-algebra generated by f. If f : X → Y is a measurable function, one writes f : (X, Σ) → (Y, T) to emphasize the dependency on the σ-algebras Σ and T.
== Term usage variations ==
The choice of σ-algebras in the definition above is sometimes implicit and left up to the context. For example, for ℝ, ℂ, or other topological spaces, the Borel algebra (generated by all the open sets) is a common choice. Some authors define measurable functions as exclusively real-valued ones with respect to the Borel algebra.
If the values of the function lie in an infinite-dimensional vector space, other non-equivalent definitions of measurability, such as weak measurability and Bochner measurability, exist.
== Notable classes of measurable functions ==
Random variables are by definition measurable functions defined on probability spaces.
If (X, Σ) and (Y, T) are Borel spaces, a measurable function f : (X, Σ) → (Y, T) is also called a Borel function. Continuous functions are Borel functions, but not all Borel functions are continuous. However, a measurable function is nearly a continuous function; see Luzin's theorem. If a Borel function happens to be a section of a map π : Y → X, it is called a Borel section.
A Lebesgue measurable function is a measurable function f : (ℝ, L) → (ℂ, B_ℂ), where L is the σ-algebra of Lebesgue measurable sets, and B_ℂ is the Borel algebra on the complex numbers ℂ.
Lebesgue measurable functions are of interest in mathematical analysis because they can be integrated. In the case f : X → ℝ, f is Lebesgue measurable if and only if

{f > α} = {x ∈ X : f(x) > α}

is measurable for all α ∈ ℝ. This is also equivalent to any of {f ≥ α}, {f < α}, {f ≤ α} being measurable for all α, or to the preimage of any open set being measurable. Continuous functions, monotone functions, step functions, semicontinuous functions, Riemann-integrable functions, and functions of bounded variation are all Lebesgue measurable. A function f : X → ℂ is measurable if and only if its real and imaginary parts are measurable.
== Properties of measurable functions ==
The sum and product of two complex-valued measurable functions are measurable. So is the quotient, so long as there is no division by zero.
If f : (X, Σ₁) → (Y, Σ₂) and g : (Y, Σ₂) → (Z, Σ₃) are measurable functions, then so is their composition g ∘ f : (X, Σ₁) → (Z, Σ₃).
If f : (X, Σ₁) → (Y, Σ₂) and g : (Y, Σ₃) → (Z, Σ₄) are measurable functions, their composition g ∘ f : X → Z need not be (Σ₁, Σ₄)-measurable unless Σ₃ ⊆ Σ₂. Indeed, two Lebesgue-measurable functions may be constructed in such a way as to make their composition non-Lebesgue-measurable.
The (pointwise) supremum, infimum, limit superior, and limit inferior of a sequence (viz., countably many) of real-valued measurable functions are all measurable as well.
The pointwise limit of a sequence of measurable functions fₙ : X → Y is measurable, where Y is a metric space (endowed with the Borel algebra). This is not true in general if Y is non-metrizable. The corresponding statement for continuous functions requires stronger conditions than pointwise convergence, such as uniform convergence.
== Non-measurable functions ==
Real-valued functions encountered in applications tend to be measurable; however, it is not difficult to prove the existence of non-measurable functions. Such proofs rely on the axiom of choice in an essential way, in the sense that Zermelo–Fraenkel set theory without the axiom of choice does not prove the existence of such functions.
In any measure space (X, Σ) with a non-measurable set A ⊂ X, A ∉ Σ, one can construct a non-measurable indicator function:

1_A : (X, Σ) → ℝ,  1_A(x) = 1 if x ∈ A, and 0 otherwise,

where ℝ is equipped with the usual Borel algebra. This is a non-measurable function since the preimage of the measurable set {1} is the non-measurable A.
As another example, any non-constant function f : X → ℝ is non-measurable with respect to the trivial σ-algebra Σ = {∅, X}, since the preimage of any point in the range is some proper, nonempty subset of X, which is not an element of the trivial Σ.
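The trivial-σ-algebra example can be checked concretely; the helper preimage_of_point below is purely illustrative.

```python
# Illustrative check: a non-constant function into the reals cannot be
# measurable for the trivial sigma-algebra {∅, X}, because some
# point-preimage is a proper nonempty subset of X.

def preimage_of_point(f, X, value):
    return frozenset(x for x in X if f[x] == value)

X = frozenset({1, 2, 3})
trivial = {frozenset(), X}            # the trivial sigma-algebra

f = {1: 0.0, 2: 1.0, 3: 1.0}          # non-constant
A = preimage_of_point(f, X, 0.0)      # {1}: proper and nonempty
# A not in trivial, so f is not measurable w.r.t. the trivial sigma-algebra
```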
== See also ==
Bochner measurable function
Bochner space – Type of topological space
Lp space – Function spaces generalizing finite-dimensional p norm spaces – vector spaces of measurable functions: the L^p spaces
Measure-preserving dynamical system – Subject of study in ergodic theory
Vector measure
Weakly measurable function
== Notes ==
== External links ==
Measurable function at Encyclopedia of Mathematics
Borel function at Encyclopedia of Mathematics
In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices. It collects the various partial derivatives of a single function with respect to many variables, and/or of a multivariate function with respect to a single variable, into vectors and matrices that can be treated as single entities. This greatly simplifies operations such as finding the maximum or minimum of a multivariate function and solving systems of differential equations. The notation used here is commonly used in statistics and engineering, while the tensor index notation is preferred in physics.
Two competing notational conventions split the field of matrix calculus into two separate groups. The two groups can be distinguished by whether they write the derivative of a scalar with respect to a vector as a column vector or a row vector. Both of these conventions are possible even when the common assumption is made that vectors should be treated as column vectors when combined with matrices (rather than row vectors). A single convention can be somewhat standard throughout a single field that commonly uses matrix calculus (e.g. econometrics, statistics, estimation theory and machine learning). However, even within a given field different authors can be found using competing conventions. Authors of both groups often write as though their specific conventions were standard. Serious mistakes can result when combining results from different authors without carefully verifying that compatible notations have been used. Definitions of these two conventions and comparisons between them are collected in the layout conventions section.
== Scope ==
Matrix calculus refers to a number of different notations that use matrices and vectors to collect the derivative of each component of the dependent variable with respect to each component of the independent variable. In general, the independent variable can be a scalar, a vector, or a matrix while the dependent variable can be any of these as well. Each different situation will lead to a different set of rules, or a separate calculus, using the broader sense of the term. Matrix notation serves as a convenient way to collect the many derivatives in an organized way.
As a first example, consider the gradient from vector calculus. For a scalar function of three independent variables, f(x_1, x_2, x_3), the gradient is given by the vector equation

∇f = (∂f/∂x_1) x̂_1 + (∂f/∂x_2) x̂_2 + (∂f/∂x_3) x̂_3,

where x̂_i represents a unit vector in the x_i direction for 1 ≤ i ≤ 3. This type of generalized derivative can be seen as the derivative of a scalar, f, with respect to a vector, x, and its result can be easily collected in vector form.
∇f = (∂f/∂x)^T = [∂f/∂x_1  ∂f/∂x_2  ∂f/∂x_3]^T.
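As an illustrative numerical check (not part of the article's formal development), the gradient of a scalar function of three variables can be approximated by central differences and compared against the analytic partial derivatives.

```python
# Illustrative sketch: approximate the gradient of a scalar function by
# central differences and compare with the analytic partials.
import numpy as np

def numerical_gradient(f, x, h=1e-6):
    """Central-difference approximation of the partials of f at x."""
    grad = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grad

f = lambda x: x[0]**2 + 3*x[1] + x[2]*x[1]   # scalar function of three variables
x0 = np.array([1.0, 2.0, 3.0])
grad = numerical_gradient(f, x0)             # analytic: [2*x1, 3 + x3, x2] = [2, 6, 2]
```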
More complicated examples include the derivative of a scalar function with respect to a matrix, known as the gradient matrix, which collects the derivative with respect to each matrix element in the corresponding position in the resulting matrix. In that case the scalar must be a function of each of the independent variables in the matrix. As another example, if we have an n-vector of dependent variables, or functions, of m independent variables we might consider the derivative of the dependent vector with respect to the independent vector. The result could be collected in an m×n matrix consisting of all of the possible derivative combinations.
There are a total of nine possibilities using scalars, vectors, and matrices. Notice that as we consider higher numbers of components in each of the independent and dependent variables we can be left with a very large number of possibilities. The six kinds of derivatives that can be most neatly organized in matrix form are collected in the following table.
Here, we have used the term "matrix" in its most general sense, recognizing that vectors are simply matrices with one column (and scalars are simply vectors with one row). Moreover, we have used bold letters to indicate vectors and bold capital letters for matrices. This notation is used throughout.
Notice that we could also talk about the derivative of a vector with respect to a matrix, or any of the other unfilled cells in our table. However, these derivatives are most naturally organized in a tensor of rank higher than 2, so that they do not fit neatly into a matrix. In the following three sections we will define each one of these derivatives and relate them to other branches of mathematics. See the layout conventions section for a more detailed table.
=== Relation to other derivatives ===
The matrix derivative is a convenient notation for keeping track of partial derivatives for doing calculations. The Fréchet derivative is the standard way in the setting of functional analysis to take derivatives with respect to vectors. In the case that a matrix function of a matrix is Fréchet differentiable, the two derivatives will agree up to translation of notations. As is the case in general for partial derivatives, some formulae may extend under weaker analytic conditions than the existence of the derivative as approximating linear mapping.
=== Usages ===
Matrix calculus is used for deriving optimal stochastic estimators, often involving the use of Lagrange multipliers. This includes the derivation of:
Kalman filter
Wiener filter
Expectation-maximization algorithm for Gaussian mixture
Gradient descent
== Notation ==
The vector and matrix derivatives presented in the sections to follow take full advantage of matrix notation, using a single variable to represent a large number of variables. In what follows we will distinguish scalars, vectors and matrices by their typeface. We will let M(n,m) denote the space of real n×m matrices with n rows and m columns. Such matrices will be denoted using bold capital letters: A, X, Y, etc. An element of M(n,1), that is, a column vector, is denoted with a boldface lowercase letter: a, x, y, etc. An element of M(1,1) is a scalar, denoted with lowercase italic typeface: a, t, x, etc. XT denotes matrix transpose, tr(X) is the trace, and det(X) or |X| is the determinant. All functions are assumed to be of differentiability class C1 unless otherwise noted. Generally letters from the first half of the alphabet (a, b, c, ...) will be used to denote constants, and from the second half (t, x, y, ...) to denote variables.
NOTE: As mentioned above, there are competing notations for laying out systems of partial derivatives in vectors and matrices, and no standard appears to be emerging yet. The next two introductory sections use the numerator layout convention simply for the purposes of convenience, to avoid overly complicating the discussion. The section after them discusses layout conventions in more detail. It is important to realize the following:
Despite the use of the terms "numerator layout" and "denominator layout", there are actually more than two possible notational choices involved. The reason is that the choice of numerator vs. denominator (or in some situations, numerator vs. mixed) can be made independently for scalar-by-vector, vector-by-scalar, vector-by-vector, and scalar-by-matrix derivatives, and a number of authors mix and match their layout choices in various ways.
The choice of numerator layout in the introductory sections below does not imply that this is the "correct" or "superior" choice. There are advantages and disadvantages to the various layout types. Serious mistakes can result from carelessly combining formulas written in different layouts, and converting from one layout to another requires care to avoid errors. As a result, when working with existing formulas the best policy is probably to identify whichever layout is used and maintain consistency with it, rather than attempting to use the same layout in all situations.
=== Alternatives ===
The tensor index notation with its Einstein summation convention is very similar to the matrix calculus, except one writes only a single component at a time. It has the advantage that one can easily manipulate arbitrarily high rank tensors, whereas tensors of rank higher than two are quite unwieldy with matrix notation. All of the work here can be done in this notation without use of the single-variable matrix notation. However, many problems in estimation theory and other areas of applied mathematics would result in too many indices to properly keep track of, pointing in favor of matrix calculus in those areas. Also, Einstein notation can be very useful in proving the identities presented here (see section on differentiation) as an alternative to typical element notation, which can become cumbersome when the explicit sums are carried around. Note that a matrix can be considered a tensor of rank two.
== Derivatives with vectors ==
Because vectors are matrices with only one column, the simplest matrix derivatives are vector derivatives.
The notations developed here can accommodate the usual operations of vector calculus by identifying the space M(n,1) of n-vectors with the Euclidean space Rn, and the scalar M(1,1) is identified with R. The corresponding concept from vector calculus is indicated at the end of each subsection.
NOTE: The discussion in this section assumes the numerator layout convention for pedagogical purposes. Some authors use different conventions. The section on layout conventions discusses this issue in greater detail. The identities given further down are presented in forms that can be used in conjunction with all common layout conventions.
=== Vector-by-scalar ===
The derivative of a vector y = [y_1 y_2 ⋯ y_m]^T by a scalar x is written (in numerator layout notation) as the column vector

dy/dx = [dy_1/dx  dy_2/dx  ⋯  dy_m/dx]^T.

In vector calculus the derivative of a vector y with respect to a scalar x is known as the tangent vector of the vector y, ∂y/∂x. Notice here that y : R^1 → R^m.
Example. Simple examples of this include the velocity vector in Euclidean space, which is the tangent vector of the position vector (considered as a function of time). Also, the acceleration is the tangent vector of the velocity.
=== Scalar-by-vector ===
The derivative of a scalar y by a vector x = [x_1 x_2 ⋯ x_n]^T is written (in numerator layout notation) as the row vector

∂y/∂x = [∂y/∂x_1  ∂y/∂x_2  ⋯  ∂y/∂x_n].
In vector calculus, the gradient of a scalar field f : Rn → R (whose independent coordinates are the components of x) is the transpose of the derivative of a scalar by a vector.
∇f = [∂f/∂x_1  ⋯  ∂f/∂x_n]^T = (∂f/∂x)^T
For example, in physics, the electric field is the negative vector gradient of the electric potential.
The directional derivative of a scalar function f(x) of the space vector x in the direction of the unit vector u (represented in this case as a column vector) is defined using the gradient as follows.
∇_u f(x) = ∇f(x) ⋅ u
Using the notation just defined for the derivative of a scalar with respect to a vector we can re-write the directional derivative as
∇_u f = (∂f/∂x) u.
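A small numerical sketch of this identity (all names illustrative), comparing (∂f/∂x) u against a finite-difference quotient along the direction u:

```python
# Illustrative numerical check of the directional-derivative identity
# grad(f) . u for f(x) = x1^2 * x2.
import numpy as np

f = lambda x: x[0]**2 * x[1]
grad_f = lambda x: np.array([2*x[0]*x[1], x[0]**2])   # row of partials ∂f/∂x

x0 = np.array([1.0, 2.0])
u = np.array([3.0, 4.0]) / 5.0        # a unit vector

directional = grad_f(x0) @ u          # (∂f/∂x) u
h = 1e-6
finite_diff = (f(x0 + h*u) - f(x0 - h*u)) / (2*h)   # should agree closely
```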
This type of notation is convenient when proving product rules and chain rules, since they come out looking similar to the familiar rules for the scalar derivative.
=== Vector-by-vector ===
Each of the previous two cases can be considered as an application of the derivative of a vector with respect to a vector, using a vector of size one appropriately. Similarly we will find that the derivatives involving matrices will reduce to derivatives involving vectors in a corresponding way.
The derivative of a vector function (a vector whose components are functions) y = [y_1 y_2 ⋯ y_m]^T, with respect to an input vector x = [x_1 x_2 ⋯ x_n]^T, is written (in numerator layout notation) as
∂y/∂x =
[ ∂y_1/∂x_1  ∂y_1/∂x_2  ⋯  ∂y_1/∂x_n
  ∂y_2/∂x_1  ∂y_2/∂x_2  ⋯  ∂y_2/∂x_n
  ⋮          ⋮          ⋱  ⋮
  ∂y_m/∂x_1  ∂y_m/∂x_2  ⋯  ∂y_m/∂x_n ].
In vector calculus, the derivative of a vector function y with respect to a vector x whose components represent a space is known as the pushforward (or differential), or the Jacobian matrix.
The pushforward along a vector function f with respect to vector v in Rn is given by
df(v) = (∂f/∂v) dv.
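The Jacobian and the pushforward formula above can be illustrated numerically; the helper jacobian below (numerator layout) is a sketch, not a library routine.

```python
# Illustrative numerator-layout Jacobian via central differences, checked
# against the analytic partials, plus the pushforward df = (∂f/∂v) dv.
import numpy as np

def jacobian(f, x, h=1e-6):
    """Numerator layout: row i holds the partials of y_i w.r.t. each x_j."""
    y0 = f(x)
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

f = lambda x: np.array([x[0]*x[1], x[0] + x[1]**2])
x0 = np.array([2.0, 3.0])
J = jacobian(f, x0)              # analytic: [[x2, x1], [1, 2*x2]] = [[3, 2], [1, 6]]

dv = np.array([0.01, -0.02])
df = J @ dv                      # first-order change in f along dv
```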
== Derivatives with matrices ==
There are two types of derivatives with matrices that can be organized into a matrix of the same size. These are the derivative of a matrix by a scalar and the derivative of a scalar by a matrix. These can be useful in minimization problems found in many areas of applied mathematics and have adopted the names tangent matrix and gradient matrix respectively after their analogs for vectors.
Note: The discussion in this section assumes the numerator layout convention for pedagogical purposes. Some authors use different conventions. The section on layout conventions discusses this issue in greater detail. The identities given further down are presented in forms that can be used in conjunction with all common layout conventions.
=== Matrix-by-scalar ===
The derivative of a matrix function Y by a scalar x is known as the tangent matrix and is given (in numerator layout notation) by
∂Y/∂x =
[ ∂y_11/∂x  ∂y_12/∂x  ⋯  ∂y_1n/∂x
  ∂y_21/∂x  ∂y_22/∂x  ⋯  ∂y_2n/∂x
  ⋮         ⋮         ⋱  ⋮
  ∂y_m1/∂x  ∂y_m2/∂x  ⋯  ∂y_mn/∂x ].
=== Scalar-by-matrix ===
The derivative of a scalar function y, with respect to a p×q matrix X of independent variables, is given (in numerator layout notation) by
∂y/∂X =
[ ∂y/∂x_11  ∂y/∂x_21  ⋯  ∂y/∂x_p1
  ∂y/∂x_12  ∂y/∂x_22  ⋯  ∂y/∂x_p2
  ⋮         ⋮         ⋱  ⋮
  ∂y/∂x_1q  ∂y/∂x_2q  ⋯  ∂y/∂x_pq ].
Important examples of scalar functions of matrices include the trace of a matrix and the determinant.
In analogy with vector calculus, this derivative is often written as follows.

∇_X y(X) = ∂y(X)/∂X
Also in analogy with vector calculus, the directional derivative of a scalar f(X) of a matrix X in the direction of matrix Y is given by

∇_Y f = tr((∂f/∂X) Y).
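The trace formula can be sanity-checked numerically for f(X) = tr(AX), whose numerator-layout derivative ∂f/∂X equals A (a standard matrix-calculus identity); the setup below is illustrative.

```python
# Illustrative check of the directional derivative tr((∂f/∂X) Y) for
# f(X) = tr(A X), using the numerator-layout identity ∂ tr(A X)/∂X = A.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))

f = lambda M: np.trace(A @ M)        # scalar function of a matrix
dfdX = A                             # numerator-layout derivative of tr(A X)

directional = np.trace(dfdX @ Y)     # the directional derivative along Y
h = 1e-6
finite_diff = (f(X + h*Y) - f(X - h*Y)) / (2*h)   # should agree closely
```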
It is the gradient matrix, in particular, that finds many uses in minimization problems in estimation theory, particularly in the derivation of the Kalman filter algorithm, which is of great importance in the field.
=== Other matrix derivatives ===
The three types of derivatives that have not been considered are those involving vectors-by-matrices, matrices-by-vectors, and matrices-by-matrices. These are not as widely considered and a notation is not widely agreed upon.
== Layout conventions ==
This section discusses the similarities and differences between notational conventions that are used in the various fields that take advantage of matrix calculus. Although there are largely two consistent conventions, some authors find it convenient to mix the two conventions in forms that are discussed below. After this section, equations will be listed in both competing forms separately.
The fundamental issue is that the derivative of a vector with respect to a vector, i.e. ∂y/∂x, is often written in two competing ways. If the numerator y is of size m and the denominator x of size n, then the result can be laid out as either an m×n matrix or n×m matrix, i.e. the m elements of y laid out in rows and the n elements of x laid out in columns, or vice versa. This leads to the following possibilities:
Numerator layout, i.e. lay out according to y and x^T (i.e. contrarily to x). This is sometimes known as the Jacobian formulation. This corresponds to the m×n layout in the previous example, which means that the row number of ∂y/∂x equals the size of the numerator y and the column number equals the size of x^T.
Denominator layout, i.e. lay out according to yT and x (i.e. contrarily to y). This is sometimes known as the Hessian formulation. Some authors term this layout the gradient, in distinction to the Jacobian (numerator layout), which is its transpose. (However, gradient more commonly means the derivative {\displaystyle {\frac {\partial y}{\partial \mathbf {x} }},} regardless of layout.) This corresponds to the n×m layout in the previous example: the number of rows of {\displaystyle {\frac {\partial \mathbf {y} }{\partial \mathbf {x} }}} equals the size of x (the denominator).
A third possibility sometimes seen is to insist on writing the derivative as {\displaystyle {\frac {\partial \mathbf {y} }{\partial \mathbf {x} '}},} (i.e. the derivative is taken with respect to the transpose of x) and follow the numerator layout. This makes it possible to claim that the matrix is laid out according to both numerator and denominator. In practice this produces the same results as the numerator layout.
When handling the gradient {\displaystyle {\frac {\partial y}{\partial \mathbf {x} }}} and the opposite case {\displaystyle {\frac {\partial \mathbf {y} }{\partial x}},} we have the same issues. To be consistent, we should do one of the following:
If we choose numerator layout for {\displaystyle {\frac {\partial \mathbf {y} }{\partial \mathbf {x} }},} we should lay out the gradient {\displaystyle {\frac {\partial y}{\partial \mathbf {x} }}} as a row vector, and {\displaystyle {\frac {\partial \mathbf {y} }{\partial x}}} as a column vector.
If we choose denominator layout for {\displaystyle {\frac {\partial \mathbf {y} }{\partial \mathbf {x} }},} we should lay out the gradient {\displaystyle {\frac {\partial y}{\partial \mathbf {x} }}} as a column vector, and {\displaystyle {\frac {\partial \mathbf {y} }{\partial x}}} as a row vector.
In the third possibility above, we write {\displaystyle {\frac {\partial y}{\partial \mathbf {x} '}}} and {\displaystyle {\frac {\partial \mathbf {y} }{\partial x}},} and use numerator layout.
Not all math textbooks and papers are consistent in this respect throughout. That is, sometimes different conventions are used in different contexts within the same book or paper. For example, some choose denominator layout for gradients (laying them out as column vectors), but numerator layout for the vector-by-vector derivative {\displaystyle {\frac {\partial \mathbf {y} }{\partial \mathbf {x} }}.}
Similarly, when it comes to scalar-by-matrix derivatives {\displaystyle {\frac {\partial y}{\partial \mathbf {X} }}} and matrix-by-scalar derivatives {\displaystyle {\frac {\partial \mathbf {Y} }{\partial x}},} then consistent numerator layout lays out according to Y and XT, while consistent denominator layout lays out according to YT and X. In practice, however, following a denominator layout for {\displaystyle {\frac {\partial \mathbf {Y} }{\partial x}},} and laying the result out according to YT, is rarely seen because it makes for ugly formulas that do not correspond to the scalar formulas. As a result, the following layouts can often be found:
Consistent numerator layout, which lays out {\displaystyle {\frac {\partial \mathbf {Y} }{\partial x}}} according to Y and {\displaystyle {\frac {\partial y}{\partial \mathbf {X} }}} according to XT.
Mixed layout, which lays out {\displaystyle {\frac {\partial \mathbf {Y} }{\partial x}}} according to Y and {\displaystyle {\frac {\partial y}{\partial \mathbf {X} }}} according to X.
Use the notation {\displaystyle {\frac {\partial y}{\partial \mathbf {X} '}},} with results the same as consistent numerator layout.
In the following formulas, we handle the five possible combinations {\displaystyle {\frac {\partial y}{\partial \mathbf {x} }},{\frac {\partial \mathbf {y} }{\partial x}},{\frac {\partial \mathbf {y} }{\partial \mathbf {x} }},{\frac {\partial y}{\partial \mathbf {X} }}} and {\displaystyle {\frac {\partial \mathbf {Y} }{\partial x}}}
separately. We also handle cases of scalar-by-scalar derivatives that involve an intermediate vector or matrix. (This can arise, for example, if a multi-dimensional parametric curve is defined in terms of a scalar variable, and then a derivative of a scalar function of the curve is taken with respect to the scalar that parameterizes the curve.) For each of the various combinations, we give numerator-layout and denominator-layout results, except in the cases above where denominator layout rarely occurs. In cases involving matrices where it makes sense, we give numerator-layout and mixed-layout results. As noted above, cases where vector and matrix denominators are written in transpose notation are equivalent to numerator layout with the denominators written without the transpose.
Keep in mind that various authors use different combinations of numerator and denominator layouts for different types of derivatives, and there is no guarantee that an author will consistently use either numerator or denominator layout for all types. Match up the formulas below with those quoted in the source to determine the layout used for that particular type of derivative, but be careful not to assume that derivatives of other types necessarily follow the same kind of layout.
When taking derivatives with an aggregate (vector or matrix) denominator in order to find a maximum or minimum of the aggregate, it should be kept in mind that using numerator layout will produce results that are transposed with respect to the aggregate. For example, in attempting to find the maximum likelihood estimate of a multivariate normal distribution using matrix calculus, if the domain is a k×1 column vector, then the result using the numerator layout will be in the form of a 1×k row vector. Thus, either the results should be transposed at the end or the denominator layout (or mixed layout) should be used.
The results of operations will be transposed when switching between numerator-layout and denominator-layout notation.
=== Numerator-layout notation ===
Using numerator-layout notation, we have:
{\displaystyle {\begin{aligned}{\frac {\partial y}{\partial \mathbf {x} }}&={\begin{bmatrix}{\frac {\partial y}{\partial x_{1}}}&{\frac {\partial y}{\partial x_{2}}}&\cdots &{\frac {\partial y}{\partial x_{n}}}\end{bmatrix}}.\\{\frac {\partial \mathbf {y} }{\partial x}}&={\begin{bmatrix}{\frac {\partial y_{1}}{\partial x}}\\{\frac {\partial y_{2}}{\partial x}}\\\vdots \\{\frac {\partial y_{m}}{\partial x}}\\\end{bmatrix}}.\\{\frac {\partial \mathbf {y} }{\partial \mathbf {x} }}&={\begin{bmatrix}{\frac {\partial y_{1}}{\partial x_{1}}}&{\frac {\partial y_{1}}{\partial x_{2}}}&\cdots &{\frac {\partial y_{1}}{\partial x_{n}}}\\{\frac {\partial y_{2}}{\partial x_{1}}}&{\frac {\partial y_{2}}{\partial x_{2}}}&\cdots &{\frac {\partial y_{2}}{\partial x_{n}}}\\\vdots &\vdots &\ddots &\vdots \\{\frac {\partial y_{m}}{\partial x_{1}}}&{\frac {\partial y_{m}}{\partial x_{2}}}&\cdots &{\frac {\partial y_{m}}{\partial x_{n}}}\\\end{bmatrix}}.\\{\frac {\partial y}{\partial \mathbf {X} }}&={\begin{bmatrix}{\frac {\partial y}{\partial x_{11}}}&{\frac {\partial y}{\partial x_{21}}}&\cdots &{\frac {\partial y}{\partial x_{p1}}}\\{\frac {\partial y}{\partial x_{12}}}&{\frac {\partial y}{\partial x_{22}}}&\cdots &{\frac {\partial y}{\partial x_{p2}}}\\\vdots &\vdots &\ddots &\vdots \\{\frac {\partial y}{\partial x_{1q}}}&{\frac {\partial y}{\partial x_{2q}}}&\cdots &{\frac {\partial y}{\partial x_{pq}}}\\\end{bmatrix}}.\end{aligned}}}
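The numerator-layout convention can be checked numerically. The following sketch (my own illustration, not from the article; the map `f` is made up) builds a finite-difference Jacobian in numerator layout, so differentiating an m-vector by an n-vector yields an m×n matrix:

```python
import numpy as np

# Sketch (not from the article): a finite-difference Jacobian in numerator
# layout. The example map f: R^3 -> R^2 is made up for illustration.
def f(x):
    return np.array([x[0] * x[1], np.sin(x[2])])

def jacobian_numerator_layout(f, x, h=1e-6):
    """Entry (i, j) is d f_i / d x_j, so f: R^n -> R^m gives an m x n matrix."""
    m, n = f(x).size, x.size
    J = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

x0 = np.array([1.0, 2.0, 0.5])
J = jacobian_numerator_layout(f, x0)
print(J.shape)  # (2, 3): rows follow the numerator y, columns follow x
```

The analytic Jacobian of this example is [[x1, x0, 0], [0, 0, cos x2]], which the finite differences reproduce to high accuracy.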
The following definitions are only provided in numerator-layout notation:
{\displaystyle {\begin{aligned}{\frac {\partial \mathbf {Y} }{\partial x}}&={\begin{bmatrix}{\frac {\partial y_{11}}{\partial x}}&{\frac {\partial y_{12}}{\partial x}}&\cdots &{\frac {\partial y_{1n}}{\partial x}}\\{\frac {\partial y_{21}}{\partial x}}&{\frac {\partial y_{22}}{\partial x}}&\cdots &{\frac {\partial y_{2n}}{\partial x}}\\\vdots &\vdots &\ddots &\vdots \\{\frac {\partial y_{m1}}{\partial x}}&{\frac {\partial y_{m2}}{\partial x}}&\cdots &{\frac {\partial y_{mn}}{\partial x}}\\\end{bmatrix}}.\\d\mathbf {X} &={\begin{bmatrix}dx_{11}&dx_{12}&\cdots &dx_{1n}\\dx_{21}&dx_{22}&\cdots &dx_{2n}\\\vdots &\vdots &\ddots &\vdots \\dx_{m1}&dx_{m2}&\cdots &dx_{mn}\\\end{bmatrix}}.\end{aligned}}}
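In numerator layout the matrix-by-scalar derivative keeps the shape of Y, with each entry differentiated separately. A quick numerical sketch (illustration only; `Y` is a made-up example function):

```python
import numpy as np

# Sketch (illustration only): dY/dx in numerator layout has the shape of Y,
# each entry differentiated with respect to the scalar x.
def Y(x):
    return np.array([[x**2, np.sin(x)], [np.exp(x), 3.0 * x]])

def matrix_by_scalar(Y, x, h=1e-6):
    # Central differences applied elementwise to the matrix-valued function.
    return (Y(x + h) - Y(x - h)) / (2 * h)

x0 = 0.7
dY = matrix_by_scalar(Y, x0)
expected = np.array([[2 * x0, np.cos(x0)], [np.exp(x0), 3.0]])  # analytic dY/dx
print(np.max(np.abs(dY - expected)) < 1e-5)  # True
```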
=== Denominator-layout notation ===
Using denominator-layout notation, we have:
{\displaystyle {\begin{aligned}{\frac {\partial y}{\partial \mathbf {x} }}&={\begin{bmatrix}{\frac {\partial y}{\partial x_{1}}}\\{\frac {\partial y}{\partial x_{2}}}\\\vdots \\{\frac {\partial y}{\partial x_{n}}}\\\end{bmatrix}}.\\{\frac {\partial \mathbf {y} }{\partial x}}&={\begin{bmatrix}{\frac {\partial y_{1}}{\partial x}}&{\frac {\partial y_{2}}{\partial x}}&\cdots &{\frac {\partial y_{m}}{\partial x}}\end{bmatrix}}.\\{\frac {\partial \mathbf {y} }{\partial \mathbf {x} }}&={\begin{bmatrix}{\frac {\partial y_{1}}{\partial x_{1}}}&{\frac {\partial y_{2}}{\partial x_{1}}}&\cdots &{\frac {\partial y_{m}}{\partial x_{1}}}\\{\frac {\partial y_{1}}{\partial x_{2}}}&{\frac {\partial y_{2}}{\partial x_{2}}}&\cdots &{\frac {\partial y_{m}}{\partial x_{2}}}\\\vdots &\vdots &\ddots &\vdots \\{\frac {\partial y_{1}}{\partial x_{n}}}&{\frac {\partial y_{2}}{\partial x_{n}}}&\cdots &{\frac {\partial y_{m}}{\partial x_{n}}}\\\end{bmatrix}}.\\{\frac {\partial y}{\partial \mathbf {X} }}&={\begin{bmatrix}{\frac {\partial y}{\partial x_{11}}}&{\frac {\partial y}{\partial x_{12}}}&\cdots &{\frac {\partial y}{\partial x_{1q}}}\\{\frac {\partial y}{\partial x_{21}}}&{\frac {\partial y}{\partial x_{22}}}&\cdots &{\frac {\partial y}{\partial x_{2q}}}\\\vdots &\vdots &\ddots &\vdots \\{\frac {\partial y}{\partial x_{p1}}}&{\frac {\partial y}{\partial x_{p2}}}&\cdots &{\frac {\partial y}{\partial x_{pq}}}\\\end{bmatrix}}.\end{aligned}}}
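Comparing the two conventions side by side shows they differ only by a transpose. A small numerical sketch (illustration only; the map `f` is made up):

```python
import numpy as np

# Sketch (illustration only): the denominator layout of dy/dx is the transpose
# of the numerator layout. The example map f: R^2 -> R^3 is made up.
def f(x):
    return np.array([x[0] + x[1], x[0] * x[1], x[1] ** 2])

def jac(f, x, h=1e-6, layout="numerator"):
    # Build the numerator-layout (m x n) Jacobian, transpose for denominator.
    J = np.zeros((f(x).size, x.size))
    for j in range(x.size):
        e = np.zeros(x.size)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J if layout == "numerator" else J.T

x0 = np.array([1.5, -0.5])
Jn = jac(f, x0, layout="numerator")    # 3 x 2
Jd = jac(f, x0, layout="denominator")  # 2 x 3
print(np.allclose(Jn, Jd.T))  # True: the two layouts differ by a transpose
```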
== Identities ==
As noted above, in general, the results of operations will be transposed when switching between numerator-layout and denominator-layout notation.
To help make sense of all the identities below, keep in mind the most important rules: the chain rule, product rule and sum rule. The sum rule applies universally, and the product rule applies in most of the cases below, provided that the order of matrix products is maintained, since matrix products are not commutative. The chain rule applies in some of the cases, but unfortunately does not apply in matrix-by-scalar derivatives or scalar-by-matrix derivatives (in the latter case, mostly involving the trace operator applied to matrices). In the latter case, the product rule can't quite be applied directly, either, but the equivalent can be done with a bit more work using the differential identities.
The following identities adopt the following conventions:
the scalars a, b, c, d, and e are constant with respect to, and the scalars u and v are functions of, one of x, x, or X;
the vectors a, b, c, d, and e are constant with respect to, and the vectors u and v are functions of, one of x, x, or X;
the matrices A, B, C, D, and E are constant with respect to, and the matrices U and V are functions of, one of x, x, or X.
=== Vector-by-vector identities ===
This is presented first because all of the operations that apply to vector-by-vector differentiation apply directly to vector-by-scalar or scalar-by-vector differentiation simply by reducing the appropriate vector in the numerator or denominator to a scalar.
=== Scalar-by-vector identities ===
The fundamental identities are placed above the thick black line.
=== Vector-by-scalar identities ===
NOTE: The formulas involving the vector-by-vector derivatives {\displaystyle {\frac {\partial \mathbf {g} (\mathbf {u} )}{\partial \mathbf {u} }}} and {\displaystyle {\frac {\partial \mathbf {f} (\mathbf {g} )}{\partial \mathbf {g} }}} (whose outputs are matrices) assume the matrices are laid out consistent with the vector layout, i.e. numerator-layout matrix when numerator-layout vector and vice versa; otherwise, transpose the vector-by-vector derivatives.
=== Scalar-by-matrix identities ===
Note that exact equivalents of the scalar product rule and chain rule do not exist when applied to matrix-valued functions of matrices. However, the product rule of this sort does apply to the differential form (see below), and this is the way to derive many of the identities below involving the trace function, combined with the fact that the trace function allows transposing and cyclic permutation, i.e.:
{\displaystyle {\begin{aligned}\operatorname {tr} (\mathbf {A} )&=\operatorname {tr} \left(\mathbf {A^{\top }} \right)\\\operatorname {tr} (\mathbf {ABCD} )&=\operatorname {tr} (\mathbf {BCDA} )=\operatorname {tr} (\mathbf {CDAB} )=\operatorname {tr} (\mathbf {DABC} )\end{aligned}}}
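The transpose and cyclic-permutation properties of the trace are easy to spot-check numerically (a sketch with random matrices, not part of the original text):

```python
import numpy as np

# Numerical spot-check of tr(A) = tr(A^T) and the cyclic property of the trace.
rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))

print(np.isclose(np.trace(A), np.trace(A.T)))                        # transpose
print(np.isclose(np.trace(A @ B @ C @ D), np.trace(B @ C @ D @ A)))  # cyclic
print(np.isclose(np.trace(A @ B @ C @ D), np.trace(C @ D @ A @ B)))
print(np.isclose(np.trace(A @ B @ C @ D), np.trace(D @ A @ B @ C)))
```

All four checks print `True`; note that the trace is invariant only under cyclic permutations, not arbitrary reorderings.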
For example, to compute {\displaystyle {\frac {\partial \operatorname {tr} (\mathbf {AXBX^{\top }C} )}{\partial \mathbf {X} }}:}
{\displaystyle {\begin{aligned}d\operatorname {tr} (\mathbf {AXBX^{\top }C} )&=d\operatorname {tr} \left(\mathbf {CAXBX^{\top }} \right)=\operatorname {tr} \left(d\left(\mathbf {CAXBX^{\top }} \right)\right)\\[1ex]&=\operatorname {tr} \left(\mathbf {CAX} d\left(\mathbf {BX^{\top }} \right)+d\left(\mathbf {CAX} \right)\mathbf {BX^{\top }} \right)\\[1ex]&=\operatorname {tr} \left(\mathbf {CAX} d\left(\mathbf {BX^{\top }} \right)\right)+\operatorname {tr} \left(d(\mathbf {CAX} )\mathbf {BX^{\top }} \right)\\[1ex]&=\operatorname {tr} \left(\mathbf {CAXB} d\left(\mathbf {X^{\top }} \right)\right)+\operatorname {tr} \left(\mathbf {CA} (d\mathbf {X} )\mathbf {BX^{\top }} \right)\\[1ex]&=\operatorname {tr} \left(\mathbf {CAXB} (d\mathbf {X} )^{\top }\right)+\operatorname {tr} \left(\mathbf {CA} (d\mathbf {X} )\mathbf {BX^{\top }} \right)\\[1ex]&=\operatorname {tr} \left(\left(\mathbf {CAXB} (d\mathbf {X} )^{\top }\right)^{\top }\right)+\operatorname {tr} \left(\mathbf {CA} (d\mathbf {X} )\mathbf {BX^{\top }} \right)\\[1ex]&=\operatorname {tr} \left((d\mathbf {X} )\mathbf {B^{\top }X^{\top }A^{\top }C^{\top }} \right)+\operatorname {tr} \left(\mathbf {CA} (d\mathbf {X} )\mathbf {BX^{\top }} \right)\\[1ex]&=\operatorname {tr} \left(\mathbf {B^{\top }X^{\top }A^{\top }C^{\top }} (d\mathbf {X} )\right)+\operatorname {tr} \left(\mathbf {BX^{\top }} \mathbf {CA} (d\mathbf {X} )\right)\\[1ex]&=\operatorname {tr} \left(\left(\mathbf {B^{\top }X^{\top }A^{\top }C^{\top }} +\mathbf {BX^{\top }} \mathbf {CA} \right)d\mathbf {X} \right)\\[1ex]&=\operatorname {tr} \left(\left(\mathbf {CAXB} +\mathbf {A^{\top }C^{\top }XB^{\top }} \right)^{\top }d\mathbf {X} \right)\end{aligned}}}
Therefore,
{\displaystyle {\frac {\partial \operatorname {tr} \left(\mathbf {AXBX^{\top }C} \right)}{\partial \mathbf {X} }}=\mathbf {B^{\top }X^{\top }A^{\top }C^{\top }} +\mathbf {BX^{\top }CA} .}
(numerator layout)
{\displaystyle {\frac {\partial \operatorname {tr} \left(\mathbf {AXBX^{\top }C} \right)}{\partial \mathbf {X} }}=\mathbf {CAXB} +\mathbf {A^{\top }C^{\top }XB^{\top }} .}
(denominator layout)
(For the last step, see the Conversion from differential to derivative form section.)
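The denominator-layout result can be verified against finite differences; the gradient then has the same shape as X. A sketch (illustration only, with randomly chosen conformable matrices):

```python
import numpy as np

# Sketch (illustration only): check the denominator-layout result
# d tr(A X B X^T C)/dX = C A X B + A^T C^T X B^T against finite differences.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
X = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 5))
C = rng.standard_normal((3, 4))

def scalar(M):
    return np.trace(A @ M @ B @ M.T @ C)

def fd_grad(fun, M, h=1e-6):
    # Finite-difference gradient with the shape of M (denominator layout).
    G = np.zeros_like(M)
    for i in range(M.shape[0]):
        for j in range(M.shape[1]):
            E = np.zeros_like(M)
            E[i, j] = h
            G[i, j] = (fun(M + E) - fun(M - E)) / (2 * h)
    return G

analytic = C @ A @ X @ B + A.T @ C.T @ X @ B.T  # denominator-layout formula
numeric = fd_grad(scalar, X)
print(np.max(np.abs(analytic - numeric)) < 1e-4)  # True
```

Since the scalar is quadratic in X, central differences are exact up to rounding, so the agreement is very tight.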
=== Matrix-by-scalar identities ===
=== Scalar-by-scalar identities ===
==== With vectors involved ====
==== With matrices involved ====
=== Identities in differential form ===
It is often easier to work in differential form and then convert back to normal derivatives. This only works well using the numerator layout. In these rules, a is a scalar.
In the last row, {\displaystyle \delta _{ij}} is the Kronecker delta and {\displaystyle (\mathbf {P} _{k})_{ij}=(\mathbf {Q} )_{ik}(\mathbf {Q} ^{-1})_{kj}} is the set of orthogonal projection operators that project onto the k-th eigenvector of X.
Q is the matrix of eigenvectors of {\displaystyle \mathbf {X} =\mathbf {Q} {\boldsymbol {\Lambda }}\mathbf {Q} ^{-1}}, and {\displaystyle ({\boldsymbol {\Lambda }})_{ii}=\lambda _{i}} are the eigenvalues.
The matrix function {\displaystyle f(\mathbf {X} )} is defined in terms of the scalar function {\displaystyle f(x)} for diagonalizable matrices by {\textstyle f(\mathbf {X} )=\sum _{i}f(\lambda _{i})\mathbf {P} _{i}} where {\textstyle \mathbf {X} =\sum _{i}\lambda _{i}\mathbf {P} _{i}} with {\displaystyle \mathbf {P} _{i}\mathbf {P} _{j}=\delta _{ij}\mathbf {P} _{i}}.
To convert to normal derivative form, first convert it to one of the following canonical forms, and then use these identities:
== Applications ==
Matrix differential calculus is used in statistics and econometrics, particularly for the statistical analysis of multivariate distributions, especially the multivariate normal distribution and other elliptical distributions.
It is used in regression analysis to compute, for example, the ordinary least squares regression formula for the case of multiple explanatory variables.
It is also used in random matrices, statistical moments, local sensitivity and statistical diagnostics.
== See also ==
Derivative (generalizations)
Product integral
Ricci calculus
Tensor derivative
== Notes ==
== References ==
== Further reading ==
== External links ==
=== Software ===
MatrixCalculus.org, a website for evaluating matrix calculus expressions symbolically
NCAlgebra, an open-source Mathematica package that has some matrix calculus functionality
SymPy supports symbolic matrix derivatives in its matrix expression module, as well as symbolic tensor derivatives in its array expression module.
Tensorgrad, an open-source python package for matrix calculus. Supports general symbolic tensor derivatives using Penrose graphical notation.
=== Information ===
Matrix Reference Manual, Mike Brookes, Imperial College London.
Matrix Differentiation (and some other stuff), Randal J. Barnes, Department of Civil Engineering, University of Minnesota.
Notes on Matrix Calculus, Paul L. Fackler, North Carolina State University.
Matrix Differential Calculus Archived 2012-09-16 at the Wayback Machine (slide presentation), Zhang Le, University of Edinburgh.
Introduction to Vector and Matrix Differentiation (notes on matrix differentiation, in the context of Econometrics), Heino Bohn Nielsen.
A note on differentiating matrices (notes on matrix differentiation), Pawel Koval, from Munich Personal RePEc Archive.
Vector/Matrix Calculus More notes on matrix differentiation.
Matrix Identities (notes on matrix differentiation), Sam Roweis.
Tensor Cookbook Matrix Calculus using Tensor Diagrams.
Molecular graphics is the discipline and philosophy of studying molecules and their properties through graphical representation. IUPAC limits the definition to representations on a "graphical display device". Ever since Dalton's atoms and Kekulé's benzene, there has been a rich history of hand-drawn atoms and molecules, and these representations have had an important influence on modern molecular graphics.
Colour molecular graphics are often used on chemistry journal covers artistically.
== History ==
Prior to the use of computer graphics in representing molecular structure, Robert Corey and Linus Pauling developed a system for representing atoms or groups of atoms with hardwood models, built on a scale of 1 inch = 1 angstrom and connected by a clamping device to maintain the molecular configuration. These early models also established the CPK coloring scheme that is still used today to differentiate the different types of atoms in molecular models (e.g. carbon = black, oxygen = red, nitrogen = blue, etc.). This early model was improved upon in 1966 by W. L. Koltun, and the resulting models are now known as Corey-Pauling-Koltun (CPK) models.
The earliest efforts to produce models of molecular structure were made by Project MAC, using wire-frame models displayed on a cathode ray tube in the mid 1960s. In 1965, Carroll Johnson distributed the Oak Ridge thermal ellipsoid plot (ORTEP) that visualized molecules as a ball-and-stick model with lines representing the bonds between atoms and ellipsoids to represent the probability of thermal motion. Thermal ellipsoid plots quickly became the de facto standard used in the display of X-ray crystallography data, and are still in wide use today. The first practical use of molecular graphics was a simple display of the protein myoglobin using a wireframe representation in 1966 by Cyrus Levinthal and Robert Langridge working at Project MAC.
Among the milestones in high-performance molecular graphics was the work of Nelson Max in "realistic" rendering of macromolecules using reflecting spheres.
Initially much of the technology concentrated on high-performance 3D graphics. During the 1970s, methods for displaying 3D graphics using cathode ray tubes were developed using continuous tone computer graphics in combination with electro-optic shutter viewing devices. The first devices used an active shutter 3D system, generating different perspective views for the left and right channel to provide the illusion of three-dimensional viewing. Stereoscopic viewing glasses were designed using lead lanthanum zirconate titanate (PLZT) ceramics as electronically-controlled shutter elements. Active 3D glasses require batteries and work in concert with the display to actively change the presentation by the lenses to the wearer's eyes. Many modern 3D glasses use a passive, polarized 3D system that enables the wearer to visualize 3D effects based on their own perception. Passive 3D glasses are more common today since they are less expensive.
The requirements of macromolecular crystallography also drove molecular graphics because the traditional techniques of physical model-building could not scale. The first two protein structures solved by molecular graphics without the aid of the Richards' Box were built with Stan Swanson's program FIT on the Vector General graphics display in the laboratory of Edgar Meyer at Texas A&M University: First Marge Legg in Al Cotton's lab at A&M solved a second, higher-resolution structure of staph. nuclease (1975) and then Jim Hogle solved the structure of monoclinic lysozyme in 1976. A full year passed before other graphics systems were used to replace the Richards' Box for modelling into density in 3-D. Alwyn Jones' FRODO program (and later "O") were developed to overlay the molecular electron density determined from X-ray crystallography and the hypothetical molecular structure.
=== Timeline ===
== Types ==
=== Ball-and-stick models ===
In the ball-and-stick model, atoms are drawn as small spheres connected by rods representing the chemical bonds between them.
=== Space-filling models ===
In the space-filling model, atoms are drawn as solid spheres to suggest the space they occupy, in proportion to their van der Waals radii. Atoms that share a bond overlap with each other.
=== Surfaces ===
In some models, the surface of the molecule is approximated and shaded to represent a physical property of the molecule, such as electronic charge density.
=== Ribbon diagrams ===
Ribbon diagrams are schematic representations of protein structure and are one of the most common methods of protein depiction used today. The ribbon shows the overall path and organization of the protein backbone in 3D, and serves as a visual framework on which to hang details of the full atomic structure, such as the balls for the oxygen atoms bound to the active site of myoglobin in the adjacent image. Ribbon diagrams are generated by interpolating a smooth curve through the polypeptide backbone. α-helices are shown as coiled ribbons or thick tubes, β-strands as arrows, and non-repetitive coils or loops as lines or thin tubes. The direction of the polypeptide chain is shown locally by the arrows, and may be indicated overall by a colour ramp along the length of the ribbon.
== See also ==
Molecular model – Physical model for representing molecules
Ball-and-stick model – Representation of a molecule's bonds and 3D structure
Space-filling model – Type of 3D molecular model
Molecular modelling – Discovering chemical properties by physical simulations
Molecular geometry – Study of the 3D shapes of molecules
Molecule editor – Computer program used to create and modify simulated representations of molecular structures
Software
Molecular graphics software
Molecular mechanics modeling software
Molecular design software – CAD software for molecular-level engineering, modelling, and analysisPages displaying wikidata descriptions as a fallback
Structural formula – Graphic representation of a molecular structure
== References ==
== External links ==
Luminary Series Interview with Robert Langridge Interview by Russ Altman and historical slides.
History of Visualization of Biological Macromolecules by Eric Martz and Eric Francoeur.
In calculus, the inverse function rule is a formula that expresses the derivative of the inverse of a bijective and differentiable function f in terms of the derivative of f. More precisely, if the inverse of {\displaystyle f} is denoted as {\displaystyle f^{-1}}, where {\displaystyle f^{-1}(y)=x} if and only if {\displaystyle f(x)=y}, then the inverse function rule is, in Lagrange's notation,

{\displaystyle \left[f^{-1}\right]'(y)={\frac {1}{f'\left(f^{-1}(y)\right)}}}.
This formula holds in general whenever {\displaystyle f} is continuous and injective on an interval I, with {\displaystyle f} being differentiable at {\displaystyle f^{-1}(y)} ({\displaystyle \in I}) and where {\displaystyle f'(f^{-1}(y))\neq 0}. The same formula is also equivalent to the expression

{\displaystyle {\mathcal {D}}\left[f^{-1}\right]={\frac {1}{({\mathcal {D}}f)\circ \left(f^{-1}\right)}},}

where {\displaystyle {\mathcal {D}}} denotes the unary derivative operator (on the space of functions) and {\displaystyle \circ } denotes function composition.
Geometrically, a function and its inverse have graphs that are reflections of each other in the line {\displaystyle y=x}. This reflection operation turns the gradient of any line into its reciprocal.
Assuming that {\displaystyle f} has an inverse in a neighbourhood of {\displaystyle x} and that its derivative at that point is non-zero, its inverse is guaranteed to be differentiable at {\displaystyle x} and have a derivative given by the above formula.
The inverse function rule may also be expressed in Leibniz's notation. As that notation suggests,

{\displaystyle {\frac {dx}{dy}}\,\cdot \,{\frac {dy}{dx}}=1.}
This relation is obtained by differentiating the equation {\displaystyle f^{-1}(y)=x} with respect to x and applying the chain rule, yielding

{\displaystyle {\frac {dx}{dy}}\,\cdot \,{\frac {dy}{dx}}={\frac {dx}{dx}},}

and noting that the derivative of x with respect to x is 1.
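The Leibniz relation can be checked numerically. A sketch (illustration only; the strictly increasing function `f(x) = x**3 + x` is made up, and its inverse is computed by bisection):

```python
# Sketch (illustration only): numerical check of (dx/dy) * (dy/dx) = 1
# for the made-up strictly increasing function f(x) = x**3 + x.
def f(x):
    return x**3 + x

def f_inv(y, lo=-10.0, hi=10.0):
    # Invert f by bisection (valid because f is strictly increasing).
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x0 = 1.2
y0 = f(x0)
h = 1e-6
dy_dx = (f(x0 + h) - f(x0 - h)) / (2 * h)          # derivative of f at x0
dx_dy = (f_inv(y0 + h) - f_inv(y0 - h)) / (2 * h)  # derivative of f^{-1} at y0
print(abs(dy_dx * dx_dy - 1.0) < 1e-4)  # True
```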
== Derivation ==
Let {\displaystyle f} be an invertible (bijective) function, let {\displaystyle x} be in the domain of {\displaystyle f}, and let {\displaystyle y=f(x).} Let {\displaystyle g=f^{-1}.} So, {\displaystyle f(g(y))=y.} Differentiating this equation with respect to {\displaystyle y}, and using the chain rule, one gets

{\displaystyle f'(g(y))\cdot g'(y)=1.}

That is,

{\displaystyle g'(y)={\frac {1}{f'(g(y))}}}

or

{\displaystyle (f^{-1})^{\prime }(y)={\frac {1}{f^{\prime }(f^{-1}(y))}}.}
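The derived rule can be confirmed symbolically. A sketch (illustration only, assumes SymPy; the example `f(x) = x**3` on x > 0 is made up):

```python
import sympy as sp

# Sketch (illustration only): symbolic check of g'(y) = 1 / f'(g(y))
# for f(x) = x**3 on x > 0, whose inverse is g(y) = y**(1/3).
x, y = sp.symbols('x y', positive=True)
f = x**3
g = y**sp.Rational(1, 3)

lhs = sp.diff(g, y)                 # (f^{-1})'(y)
rhs = 1 / sp.diff(f, x).subs(x, g)  # 1 / f'(f^{-1}(y))
print(sp.simplify(lhs - rhs))  # 0
```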
== Examples ==
{\displaystyle y=x^{2}} (for positive x) has inverse {\displaystyle x={\sqrt {y}}}.

{\displaystyle {\frac {dy}{dx}}=2x\qquad ;\qquad {\frac {dx}{dy}}={\frac {1}{2{\sqrt {y}}}}={\frac {1}{2x}}}

{\displaystyle {\frac {dy}{dx}}\,\cdot \,{\frac {dx}{dy}}=2x\cdot {\frac {1}{2x}}=1.}

At {\displaystyle x=0}, however, there is a problem: the graph of the square root function becomes vertical, corresponding to a horizontal tangent for the square function.
{\displaystyle y=e^{x}} (for real x) has inverse {\displaystyle x=\ln {y}} (for positive {\displaystyle y}).

{\displaystyle {\frac {dy}{dx}}=e^{x}\qquad ;\qquad {\frac {dx}{dy}}={\frac {1}{y}}=e^{-x}}

{\displaystyle {\frac {dy}{dx}}\,\cdot \,{\frac {dx}{dy}}=e^{x}\cdot e^{-x}=1.}
== Additional properties ==
Integrating this relationship gives

{\displaystyle {f^{-1}}(x)=\int {\frac {1}{f'({f^{-1}}(x))}}\,{dx}+C.}

This is only useful if the integral exists. In particular we need {\displaystyle f'(x)} to be non-zero across the range of integration.
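For a concrete instance, take f(t) = e^t, so that the integrand 1/f'(f⁻¹(x)) reduces to 1/x and integrating recovers f⁻¹(x) = ln x up to the constant. A sketch (illustration only, assumes SymPy):

```python
import sympy as sp

# Sketch (illustration only): for f(t) = exp(t), the integrand 1/f'(f^{-1}(x))
# is 1/x, and integrating it recovers f^{-1}(x) = log(x) up to a constant.
x, t = sp.symbols('x t', positive=True)
fprime = sp.exp(t)                         # f(t) = e^t, so f'(t) = e^t
integrand = 1 / fprime.subs(t, sp.log(x))  # 1 / f'(f^{-1}(x)) = 1/x
recovered = sp.integrate(integrand, x)     # log(x), up to the constant C
print(recovered)
```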
It follows that a function that has a continuous derivative has an inverse in a neighbourhood of every point where the derivative is non-zero. This need not be true if the derivative is not continuous.
Another useful property is the following:

{\displaystyle \int f^{-1}(x)\,{dx}=xf^{-1}(x)-F(f^{-1}(x))+C,}

where {\displaystyle F} denotes an antiderivative of {\displaystyle f}.
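This identity is easy to verify on an example. A sketch (illustration only, assumes SymPy): with f(u) = e^u we have f⁻¹(x) = ln x and F(u) = e^u, so both sides equal x ln x − x:

```python
import sympy as sp

# Sketch (illustration only): check the integral-of-inverse identity with
# f(u) = exp(u), so f^{-1}(x) = log(x) and F(u) = exp(u) is an antiderivative.
x = sp.symbols('x', positive=True)
f_inv = sp.log(x)

lhs = sp.integrate(f_inv, x)     # x*log(x) - x
rhs = x * f_inv - sp.exp(f_inv)  # x f^{-1}(x) - F(f^{-1}(x))
print(sp.simplify(lhs - rhs))  # 0
```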
The inverse of the derivative of f(x) is also of interest, as it is used in showing the convexity of the Legendre transform.
Let {\displaystyle z=f'(x)}; then, assuming {\displaystyle f''(x)\neq 0}, we have:

{\displaystyle {\frac {d(f')^{-1}(z)}{dz}}={\frac {1}{f''(x)}}}
This can be shown using the previous notation {\displaystyle y=f(x)}. Then we have:

{\displaystyle f'(x)={\frac {dy}{dx}}={\frac {dy}{dz}}{\frac {dz}{dx}}={\frac {dy}{dz}}f''(x)\Rightarrow {\frac {dy}{dz}}={\frac {f'(x)}{f''(x)}}}
Therefore:
{\displaystyle {\frac {d(f')^{-1}(z)}{dz}}={\frac {dx}{dz}}={\frac {dy}{dz}}{\frac {dx}{dy}}={\frac {f'(x)}{f''(x)}}{\frac {1}{f'(x)}}={\frac {1}{f''(x)}}}
By induction, we can generalize this result for any integer {\displaystyle n\geq 1}, with {\displaystyle z=f^{(n)}(x)}, the nth derivative of f(x), and {\displaystyle y=f^{(n-1)}(x)}, assuming {\displaystyle f^{(i)}(x)\neq 0{\text{ for }}0<i\leq n+1}:
{\displaystyle {\frac {d(f^{(n)})^{-1}(z)}{dz}}={\frac {1}{f^{(n+1)}(x)}}}
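To make the n = 1 case concrete, take f(x) = x⁴ (an arbitrary illustrative choice), so f'(x) = 4x³, f''(x) = 12x², and (f')⁻¹(z) = (z/4)^(1/3) for x > 0; the derivative of (f')⁻¹ at z = f'(x) should equal 1/f''(x). A small numerical sketch:

```python
# For f(x) = x**4 (x > 0): f'(x) = 4x**3, f''(x) = 12x**2, and the inverse of
# f' is (f')^{-1}(z) = (z/4)**(1/3).
def fprime_inv(z):
    return (z / 4.0) ** (1.0 / 3.0)

x0 = 2.0
z0 = 4.0 * x0 ** 3                  # z0 = f'(x0) = 32

h = 1e-6
lhs = (fprime_inv(z0 + h) - fprime_inv(z0 - h)) / (2 * h)   # d(f')^{-1}/dz at z0
rhs = 1.0 / (12.0 * x0 ** 2)        # 1 / f''(x0) = 1/48
print(lhs, rhs)                     # both close to 1/48
```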
== Higher derivatives ==
The chain rule given above is obtained by differentiating the identity
{\displaystyle f^{-1}(f(x))=x}
with respect to x. One can continue the same process for higher derivatives. Differentiating the identity twice with respect to x, one obtains
{\displaystyle {\frac {d^{2}y}{dx^{2}}}\,\cdot \,{\frac {dx}{dy}}+{\frac {d}{dx}}\left({\frac {dx}{dy}}\right)\,\cdot \,\left({\frac {dy}{dx}}\right)=0,}
which simplifies further by the chain rule to
{\displaystyle {\frac {d^{2}y}{dx^{2}}}\,\cdot \,{\frac {dx}{dy}}+{\frac {d^{2}x}{dy^{2}}}\,\cdot \,\left({\frac {dy}{dx}}\right)^{2}=0.}
Replacing the first derivative, using the identity obtained earlier, we get
{\displaystyle {\frac {d^{2}y}{dx^{2}}}=-{\frac {d^{2}x}{dy^{2}}}\,\cdot \,\left({\frac {dy}{dx}}\right)^{3}.}
Similarly for the third derivative:
{\displaystyle {\frac {d^{3}y}{dx^{3}}}=-{\frac {d^{3}x}{dy^{3}}}\,\cdot \,\left({\frac {dy}{dx}}\right)^{4}-3{\frac {d^{2}x}{dy^{2}}}\,\cdot \,{\frac {d^{2}y}{dx^{2}}}\,\cdot \,\left({\frac {dy}{dx}}\right)^{2}}
or using the formula for the second derivative,
{\displaystyle {\frac {d^{3}y}{dx^{3}}}=-{\frac {d^{3}x}{dy^{3}}}\,\cdot \,\left({\frac {dy}{dx}}\right)^{4}+3\left({\frac {d^{2}x}{dy^{2}}}\right)^{2}\,\cdot \,\left({\frac {dy}{dx}}\right)^{5}}
These formulas are generalized by Faà di Bruno's formula.
These formulas can also be written using Lagrange's notation. If f and g are inverses, then
{\displaystyle g''(x)={\frac {-f''(g(x))}{[f'(g(x))]^{3}}}}
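For instance, with f = exp and g = ln (so that g''(x) = −1/x²), the Lagrange form can be checked directly; a brief sketch, with the evaluation point chosen arbitrarily:

```python
# Check g''(x) = -f''(g(x)) / [f'(g(x))]**3 for f = exp, g = ln.
import math

x0 = 2.5
g_x0 = math.log(x0)
formula = -math.exp(g_x0) / math.exp(g_x0) ** 3   # f' and f'' are both exp
direct = -1.0 / x0 ** 2                           # known second derivative of ln
print(formula, direct)                            # equal: both -1/x0**2
```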
== Example ==
{\displaystyle y=e^{x}} has the inverse {\displaystyle x=\ln y}. Using the formula for the second derivative of the inverse function,
{\displaystyle {\frac {dy}{dx}}={\frac {d^{2}y}{dx^{2}}}=e^{x}=y;\qquad \left({\frac {dy}{dx}}\right)^{3}=y^{3};}
so that
{\displaystyle {\frac {d^{2}x}{dy^{2}}}\,\cdot \,y^{3}+y=0;\qquad {\frac {d^{2}x}{dy^{2}}}=-{\frac {1}{y^{2}}},}
which agrees with the direct calculation.
== See also ==
Calculus – Branch of mathematics
Chain rule – For derivatives of composed functions
Differentiation of trigonometric functions – Mathematical process of finding the derivative of a trigonometric function
Differentiation rules – Rules for computing derivatives of functions
Implicit function theorem – On converting relations to functions of several real variables
Integration of inverse functions – Mathematical theorem, used in calculus
Inverse function – Mathematical concept
Inverse function theorem – Theorem in mathematics
Table of derivatives – Rules for computing derivatives of functions
Vector calculus identities – Mathematical identities
== References ==
Marsden, Jerrold E.; Weinstein, Alan (1981). "Chapter 8: Inverse Functions and the Chain Rule". Calculus unlimited (PDF). Menlo Park, Calif.: Benjamin/Cummings Pub. Co. ISBN 0-8053-6932-5. | Wikipedia/Inverse_functions_and_differentiation |
In mathematics, differential calculus is a subfield of calculus that studies the rates at which quantities change. It is one of the two traditional divisions of calculus, the other being integral calculus—the study of the area beneath a curve.
The primary objects of study in differential calculus are the derivative of a function, related notions such as the differential, and their applications. The derivative of a function at a chosen input value describes the rate of change of the function near that input value. The process of finding a derivative is called differentiation. Geometrically, the derivative at a point is the slope of the tangent line to the graph of the function at that point, provided that the derivative exists and is defined at that point. For a real-valued function of a single real variable, the derivative of a function at a point generally determines the best linear approximation to the function at that point.
Differential calculus and integral calculus are connected by the fundamental theorem of calculus. This states that differentiation is the reverse process to integration.
Differentiation has applications in nearly all quantitative disciplines. In physics, the derivative of the displacement of a moving body with respect to time is the velocity of the body, and the derivative of the velocity with respect to time is acceleration. The derivative of the momentum of a body with respect to time equals the force applied to the body; rearranging this derivative statement leads to the famous F = ma equation associated with Newton's second law of motion. The reaction rate of a chemical reaction is a derivative. In operations research, derivatives determine the most efficient ways to transport materials and design factories.
Derivatives are frequently used to find the maxima and minima of a function. Equations involving derivatives are called differential equations and are fundamental in describing natural phenomena. Derivatives and their generalizations appear in many fields of mathematics, such as complex analysis, functional analysis, differential geometry, measure theory, and abstract algebra.
== Derivative ==
The derivative of {\displaystyle f(x)} at the point {\displaystyle x=a} is the slope of the tangent to {\displaystyle (a,f(a))}. In order to gain an intuition for this, one must first be familiar with finding the slope of a linear equation, written in the form {\displaystyle y=mx+b}. The slope of an equation is its steepness. It can be found by picking any two points and dividing the change in {\displaystyle y} by the change in {\displaystyle x}, meaning that {\displaystyle {\text{slope}}={\frac {{\text{change in }}y}{{\text{change in }}x}}}. For example, the graph of {\displaystyle y=-2x+13} has a slope of {\displaystyle -2}, as shown in the diagram below:
{\displaystyle {\frac {{\text{change in }}y}{{\text{change in }}x}}={\frac {-6}{+3}}=-2}
For brevity, {\displaystyle {\frac {{\text{change in }}y}{{\text{change in }}x}}} is often written as {\displaystyle {\frac {\Delta y}{\Delta x}}}, with {\displaystyle \Delta } being the Greek letter delta, meaning 'change in'. The slope of a linear equation is constant, meaning that the steepness is the same everywhere. However, many graphs, such as {\displaystyle y=x^{2}}, vary in their steepness. This means that you can no longer pick any two arbitrary points and compute the slope. Instead, the slope of the graph can be computed by considering the tangent line, a line that 'just touches' a particular point. The slope of a curve at a particular point is equal to the slope of the tangent to that point. For example, {\displaystyle y=x^{2}} has a slope of {\displaystyle 4} at {\displaystyle x=2} because the slope of the tangent line to that point is equal to {\displaystyle 4}:
The derivative of a function is then simply the slope of this tangent line. Even though the tangent line only touches a single point at the point of tangency, it can be approximated by a line that goes through two points. This is known as a secant line. If the two points that the secant line goes through are close together, then the secant line closely resembles the tangent line, and, as a result, its slope is also very similar:
The advantage of using a secant line is that its slope can be calculated directly. Consider the two points on the graph
{\displaystyle (x,f(x))} and {\displaystyle (x+\Delta x,f(x+\Delta x))}, where {\displaystyle \Delta x} is a small number. As before, the slope of the line passing through these two points can be calculated with the formula {\displaystyle {\text{slope}}={\frac {\Delta y}{\Delta x}}}. This gives
{\displaystyle {\text{slope}}={\frac {f(x+\Delta x)-f(x)}{\Delta x}}}
As {\displaystyle \Delta x} gets closer and closer to {\displaystyle 0}, the slope of the secant line gets closer and closer to the slope of the tangent line. This is formally written as
{\displaystyle \lim _{\Delta x\to 0}{\frac {f(x+\Delta x)-f(x)}{\Delta x}}}
The above expression means 'as {\displaystyle \Delta x} gets closer and closer to 0, the slope of the secant line gets closer and closer to a certain value'. The value that is being approached is the derivative of {\displaystyle f(x)}; this can be written as {\displaystyle f'(x)}. If {\displaystyle y=f(x)}, the derivative can also be written as {\displaystyle {\frac {dy}{dx}}}, with {\displaystyle d} representing an infinitesimal change. For example, {\displaystyle dx} represents an infinitesimal change in x. In summary, if {\displaystyle y=f(x)}, then the derivative of {\displaystyle f(x)} is
{\displaystyle {\frac {dy}{dx}}=f'(x)=\lim _{\Delta x\to 0}{\frac {f(x+\Delta x)-f(x)}{\Delta x}}}
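The limit can be watched numerically. A minimal sketch (the function and evaluation point are arbitrary choices) shows the difference quotient of f(x) = x² at x = 3 approaching the derivative value 6:

```python
# The difference quotient (f(x + dx) - f(x)) / dx approaches f'(x) as dx -> 0.
def f(x):
    return x ** 2

x0 = 3.0
for dx in (1.0, 0.1, 0.001, 1e-6):
    print(dx, (f(x0 + dx) - f(x0)) / dx)   # tends to 6.0 = 2 * x0
```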
provided such a limit exists. We have thus succeeded in properly defining the derivative of a function, meaning that the 'slope of the tangent line' now has a precise mathematical meaning. Differentiating a function using the above definition is known as differentiation from first principles. Here is a proof, using differentiation from first principles, that the derivative of
{\displaystyle y=x^{2}} is {\displaystyle 2x}:
{\displaystyle {\begin{aligned}{\frac {dy}{dx}}&=\lim _{\Delta x\to 0}{\frac {f(x+\Delta x)-f(x)}{\Delta x}}\\&=\lim _{\Delta x\to 0}{\frac {(x+\Delta x)^{2}-x^{2}}{\Delta x}}\\&=\lim _{\Delta x\to 0}{\frac {x^{2}+2x\Delta x+(\Delta x)^{2}-x^{2}}{\Delta x}}\\&=\lim _{\Delta x\to 0}{\frac {2x\Delta x+(\Delta x)^{2}}{\Delta x}}\\&=\lim _{\Delta x\to 0}(2x+\Delta x)\\\end{aligned}}}
As {\displaystyle \Delta x} approaches {\displaystyle 0}, {\displaystyle 2x+\Delta x} approaches {\displaystyle 2x}. Therefore, {\displaystyle {\frac {dy}{dx}}=2x}.
. This proof can be generalised to show that
{\displaystyle {\frac {d(ax^{n})}{dx}}=anx^{n-1}}
if {\displaystyle a} and {\displaystyle n} are constants. This is known as the power rule. For example, {\displaystyle {\frac {d}{dx}}(5x^{4})=5(4)x^{3}=20x^{3}}. However, many other functions cannot be differentiated as easily as polynomial functions, meaning that sometimes further techniques are needed to find the derivative of a function. These techniques include the chain rule, product rule, and quotient rule. Other functions cannot be differentiated at all, giving rise to the concept of differentiability.
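The power rule can be sanity-checked against a numerical derivative. A minimal sketch for the 5x⁴ example (the evaluation point is an arbitrary choice):

```python
# Compare the power rule d(5x^4)/dx = 20x^3 with a central difference quotient.
def g(x):
    return 5 * x ** 4

x0, h = 1.3, 1e-6
numeric = (g(x0 + h) - g(x0 - h)) / (2 * h)
exact = 20 * x0 ** 3            # from the power rule: 5*4*x^3
print(numeric, exact)           # nearly equal
```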
A closely related concept to the derivative of a function is its differential. When x and y are real variables, the derivative of f at x is the slope of the tangent line to the graph of f at x. Because the source and target of f are one-dimensional, the derivative of f is a real number. If x and y are vectors, then the best linear approximation to the graph of f depends on how f changes in several directions at once. Taking the best linear approximation in a single direction determines a partial derivative, which is usually denoted ∂y/∂x. The linearization of f in all directions at once is called the total derivative.
== History of differentiation ==
The concept of a derivative in the sense of a tangent line is a very old one, familiar to ancient Greek mathematicians such as Euclid (c. 300 BC), Archimedes (c. 287–212 BC), and Apollonius of Perga (c. 262–190 BC). Archimedes also made use of indivisibles, although these were primarily used to study areas and volumes rather than derivatives and tangents (see The Method of Mechanical Theorems).
The use of infinitesimals to compute rates of change was developed significantly by Bhāskara II (1114–1185); indeed, it has been argued that many of the key notions of differential calculus can be found in his work, such as "Rolle's theorem".
The mathematician Sharaf al-Dīn al-Tūsī (1135–1213), in his Treatise on Equations, established conditions for some cubic equations to have solutions, by finding the maxima of appropriate cubic polynomials. He obtained, for example, that the maximum (for positive x) of the cubic ax² − x³ occurs when x = 2a/3, and concluded therefrom that the equation ax² = x³ + c has exactly one positive solution when c = 4a³/27, and two positive solutions whenever 0 < c < 4a³/27. The historian of science Roshdi Rashed has argued that al-Tūsī must have used the derivative of the cubic to obtain this result. Rashed's conclusion has been contested by other scholars, however, who argue that he could have obtained the result by other methods which do not require the derivative of the function to be known.
The modern development of calculus is usually credited to Isaac Newton (1643–1727) and Gottfried Wilhelm Leibniz (1646–1716), who provided independent and unified approaches to differentiation and derivatives. The key insight, however, that earned them this credit, was the fundamental theorem of calculus relating differentiation and integration: this rendered obsolete most previous methods for computing areas and volumes. For their ideas on derivatives, both Newton and Leibniz built on significant earlier work by mathematicians such as Pierre de Fermat (1607-1665), Isaac Barrow (1630–1677), René Descartes (1596–1650), Christiaan Huygens (1629–1695), Blaise Pascal (1623–1662) and John Wallis (1616–1703). Regarding Fermat's influence, Newton once wrote in a letter that "I had the hint of this method [of fluxions] from Fermat's way of drawing tangents, and by applying it to abstract equations, directly and invertedly, I made it general." Isaac Barrow is generally given credit for the early development of the derivative. Nevertheless, Newton and Leibniz remain key figures in the history of differentiation, not least because Newton was the first to apply differentiation to theoretical physics, while Leibniz systematically developed much of the notation still used today.
Since the 17th century many mathematicians have contributed to the theory of differentiation. In the 19th century, calculus was put on a much more rigorous footing by mathematicians such as Augustin Louis Cauchy (1789–1857), Bernhard Riemann (1826–1866), and Karl Weierstrass (1815–1897). It was also during this period that differentiation was generalized to Euclidean space and the complex plane.
The 20th century brought two major steps towards our present understanding and practice of differentiation: Lebesgue integration, besides extending integral calculus to many more functions, clarified the relation between differentiation and integration via the notion of absolute continuity. Later, the theory of distributions (after Laurent Schwartz) extended differentiation to generalized functions (e.g., the Dirac delta function previously introduced in quantum mechanics) and became fundamental to modern applied analysis, especially through the use of weak solutions to partial differential equations.
== Applications of derivatives ==
=== Optimization ===
If f is a differentiable function on ℝ (or an open interval) and x is a local maximum or a local minimum of f, then the derivative of f at x is zero. Points where f'(x) = 0 are called critical points or stationary points (and the value of f at x is called a critical value). If f is not assumed to be everywhere differentiable, then points at which it fails to be differentiable are also designated critical points.
If f is twice differentiable, then conversely, a critical point x of f can be analysed by considering the second derivative of f at x:
if it is positive, x is a local minimum;
if it is negative, x is a local maximum;
if it is zero, then x could be a local minimum, a local maximum, or neither. (For example, f(x) = x³ has a critical point at x = 0, but it has neither a maximum nor a minimum there, whereas f(x) = ±x⁴ has a critical point at x = 0 and a minimum and a maximum, respectively, there.)
This is called the second derivative test. An alternative approach, called the first derivative test, involves considering the sign of f' on each side of the critical point.
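A minimal sketch of the second derivative test for the sample function f(x) = x³ − 3x, whose critical points are x = ±1 (the function is an arbitrary illustration):

```python
# Second derivative test for f(x) = x**3 - 3x: f'(x) = 3x**2 - 3 vanishes at
# x = ±1, and the sign of f''(x) = 6x decides the type of each critical point.
def second_derivative(x):
    return 6 * x

def classify(x):
    s = second_derivative(x)
    if s > 0:
        return "local minimum"
    if s < 0:
        return "local maximum"
    return "inconclusive"

print(classify(1.0))    # local minimum
print(classify(-1.0))   # local maximum
```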
Taking derivatives and solving for critical points is therefore often a simple way to find local minima or maxima, which can be useful in optimization. By the extreme value theorem, a continuous function on a closed interval must attain its minimum and maximum values at least once. If the function is differentiable, the minima and maxima can only occur at critical points or endpoints.
This also has applications in graph sketching: once the local minima and maxima of a differentiable function have been found, a rough plot of the graph can be obtained from the observation that it will be either increasing or decreasing between critical points.
In higher dimensions, a critical point of a scalar valued function is a point at which the gradient is zero. The second derivative test can still be used to analyse critical points by considering the eigenvalues of the Hessian matrix of second partial derivatives of the function at the critical point. If all of the eigenvalues are positive, then the point is a local minimum; if all are negative, it is a local maximum. If there are some positive and some negative eigenvalues, then the critical point is called a "saddle point", and if none of these cases hold (i.e., some of the eigenvalues are zero) then the test is considered to be inconclusive.
==== Calculus of variations ====
One example of an optimization problem is: Find the shortest curve between two points on a surface, assuming that the curve must also lie on the surface. If the surface is a plane, then the shortest curve is a line. But if the surface is, for example, egg-shaped, then the shortest path is not immediately clear. These paths are called geodesics, and one of the most fundamental problems in the calculus of variations is finding geodesics. Another example is: Find the smallest area surface filling in a closed curve in space. This surface is called a minimal surface and it, too, can be found using the calculus of variations.
=== Physics ===
Calculus is of vital importance in physics: many physical processes are described by equations involving derivatives, called differential equations. Physics is particularly concerned with the way quantities change and develop over time, and the concept of the "time derivative" — the rate of change over time — is essential for the precise definition of several important concepts. In particular, the time derivatives of an object's position are significant in Newtonian physics:
velocity is the derivative (with respect to time) of an object's displacement (distance from the original position)
acceleration is the derivative (with respect to time) of an object's velocity, that is, the second derivative (with respect to time) of an object's position.
For example, if an object's position on a line is given by
{\displaystyle x(t)=-16t^{2}+16t+32,}
then the object's velocity is
{\displaystyle {\dot {x}}(t)=x'(t)=-32t+16,}
and the object's acceleration is
{\displaystyle {\ddot {x}}(t)=x''(t)=-32,}
which is constant.
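The worked example translates directly into code; a small sketch evaluating position, velocity, and acceleration at t = 1/2, the moment when the velocity vanishes:

```python
# x(t) = -16t^2 + 16t + 32, with velocity x'(t) and acceleration x''(t).
def position(t):
    return -16 * t ** 2 + 16 * t + 32

def velocity(t):
    return -32 * t + 16        # derivative of position

def acceleration(t):
    return -32.0               # derivative of velocity, constant

t0 = 0.5
print(position(t0), velocity(t0), acceleration(t0))   # 36.0 0.0 -32.0
```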
=== Differential equations ===
A differential equation is a relation between a collection of functions and their derivatives. An ordinary differential equation is a differential equation that relates functions of one variable to their derivatives with respect to that variable. A partial differential equation is a differential equation that relates functions of more than one variable to their partial derivatives. Differential equations arise naturally in the physical sciences, in mathematical modelling, and within mathematics itself. For example, Newton's second law, which describes the relationship between acceleration and force, can be stated as the ordinary differential equation
{\displaystyle F(t)=m{\frac {d^{2}x}{dt^{2}}}.}
The heat equation in one space variable, which describes how heat diffuses through a straight rod, is the partial differential equation
{\displaystyle {\frac {\partial u}{\partial t}}=\alpha {\frac {\partial ^{2}u}{\partial x^{2}}}.}
Here u(x,t) is the temperature of the rod at position x and time t and α is a constant that depends on how fast heat diffuses through the rod.
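One standard way to approximate this equation is an explicit finite-difference scheme, replacing ∂²u/∂x² by (u[i+1] − 2u[i] + u[i−1])/Δx². The grid size, time step, and initial data below are arbitrary illustrative choices (with αΔt/Δx² ≤ 1/2 for stability):

```python
# Explicit finite-difference updates for u_t = alpha * u_xx on a rod with
# fixed (zero) temperature at both ends.
alpha, dx, dt = 1.0, 0.1, 0.004
r = alpha * dt / dx ** 2            # must stay <= 0.5 for stability (here 0.4)

u = [0.0] * 5 + [1.0] + [0.0] * 5   # initial heat spike at the midpoint
for _ in range(10):
    interior = [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                for i in range(1, len(u) - 1)]
    u = [u[0]] + interior + [u[-1]]

print(round(u[5], 4), round(u[4], 4))   # the spike decays and spreads symmetrically
```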
=== Mean value theorem ===
The mean value theorem gives a relationship between values of the derivative and values of the original function. If f(x) is a real-valued function and a and b are numbers with a < b, then the mean value theorem says that under mild hypotheses, the slope between the two points (a, f(a)) and (b, f(b)) is equal to the slope of the tangent line to f at some point c between a and b. In other words,
{\displaystyle f'(c)={\frac {f(b)-f(a)}{b-a}}.}
In practice, what the mean value theorem does is control a function in terms of its derivative. For instance, suppose that f has derivative equal to zero at each point. This means that its tangent line is horizontal at every point, so the function should also be horizontal. The mean value theorem proves that this must be true: The slope between any two points on the graph of f must equal the slope of one of the tangent lines of f. All of those slopes are zero, so any line from one point on the graph to another point will also have slope zero. But that says that the function does not move up or down, so it must be a horizontal line. More complicated conditions on the derivative lead to less precise but still highly useful information about the original function.
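For a concrete instance (f(x) = x² on [1, 3], an arbitrary choice), the point c guaranteed by the theorem can be located numerically; a brief sketch using bisection on f'(x) minus the secant slope:

```python
# Mean value theorem for f(x) = x**2 on [1, 3]: the secant slope is
# (f(3) - f(1)) / (3 - 1) = 4, and f'(c) = 2c equals 4 at c = 2.
def f(x):
    return x ** 2

a, b = 1.0, 3.0
secant = (f(b) - f(a)) / (b - a)

lo, hi = a, b                       # bisection on f'(x) - secant = 2x - 4
for _ in range(60):
    mid = (lo + hi) / 2
    if 2 * mid < secant:
        lo = mid
    else:
        hi = mid

print(secant, lo)                   # 4.0 and c close to 2.0
```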
=== Taylor polynomials and Taylor series ===
The derivative gives the best possible linear approximation of a function at a given point, but this can be very different from the original function. One way of improving the approximation is to take a quadratic approximation. That is to say, the linearization of a real-valued function f(x) at the point x₀ is a linear polynomial a + b(x − x₀), and it may be possible to get a better approximation by considering a quadratic polynomial a + b(x − x₀) + c(x − x₀)². Still better might be a cubic polynomial a + b(x − x₀) + c(x − x₀)² + d(x − x₀)³, and this idea can be extended to arbitrarily high degree polynomials. For each one of these polynomials, there should be a best possible choice of coefficients a, b, c, and d that makes the approximation as good as possible.
In the neighbourhood of x₀, for a the best possible choice is always f(x₀), and for b the best possible choice is always f'(x₀). For c, d, and higher-degree coefficients, these coefficients are determined by higher derivatives of f: c should always be f''(x₀)/2, and d should always be f'''(x₀)/3!. Using these coefficients gives the Taylor polynomial of f. The Taylor polynomial of degree d is the polynomial of degree d which best approximates f, and its coefficients can be found by a generalization of the above formulas. Taylor's theorem gives a precise bound on how good the approximation is. If f is a polynomial of degree less than or equal to d, then the Taylor polynomial of degree d equals f.
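These coefficients can be sketched concretely. Assuming f = exp about x₀ = 0 (every derivative at 0 is 1), the degree-d Taylor polynomial is the sum of xᵏ/k! for k up to d:

```python
# Taylor polynomials of e^x about 0: the k-th coefficient is f^(k)(0)/k! = 1/k!.
import math

def taylor_exp(x, degree):
    return sum(x ** k / math.factorial(k) for k in range(degree + 1))

for d in (1, 2, 4, 8):
    print(d, taylor_exp(1.0, d))   # approaches e = 2.71828...
```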
The limit of the Taylor polynomials is an infinite series called the Taylor series. The Taylor series is frequently a very good approximation to the original function. Functions which are equal to their Taylor series are called analytic functions. It is impossible for functions with discontinuities or sharp corners to be analytic; moreover, there exist smooth functions which are also not analytic.
=== Implicit function theorem ===
Some natural geometric shapes, such as circles, cannot be drawn as the graph of a function. For instance, if f(x, y) = x² + y² − 1, then the circle is the set of all pairs (x, y) such that f(x, y) = 0. This set is called the zero set of f, and is not the same as the graph of f, which is a paraboloid. The implicit function theorem converts relations such as f(x, y) = 0 into functions. It states that if f is continuously differentiable, then around most points, the zero set of f looks like graphs of functions pasted together. The points where this is not true are determined by a condition on the derivative of f. The circle, for instance, can be pasted together from the graphs of the two functions ±√(1 − x²). In a neighborhood of every point on the circle except (−1, 0) and (1, 0), one of these two functions has a graph that looks like the circle. (These two functions also happen to meet (−1, 0) and (1, 0), but this is not guaranteed by the implicit function theorem.)
The implicit function theorem is closely related to the inverse function theorem, which states when a function looks like graphs of invertible functions pasted together.
== See also ==
Differential (calculus)
Numerical differentiation
Techniques for differentiation
List of calculus topics
Notation for differentiation
Mathematics portal
== Notes ==
== References ==
=== Citations ===
=== Works cited ===
Berggren, J. L. (1990). "Innovation and Tradition in Sharaf al-Din al-Tusi's Muadalat". Journal of the American Oriental Society. 110 (2): 304–309. doi:10.2307/604533. JSTOR 604533.
=== Other sources ===
J. Edwards (1892). Differential Calculus. London: MacMillan and Co. p. 1.
Boman, Eugene, and Robert Rogers. Differential Calculus: From Practice to Theory. 2022, personal.psu.edu/ecb5/DiffCalc.pdf [1] Archived 2022-12-20 at the Wayback Machine. | Wikipedia/Differential_calculus |
In graph theory, a pseudoforest is an undirected graph in which every connected component has at most one cycle. That is, it is a system of vertices and edges connecting pairs of vertices, such that no two cycles of consecutive edges share any vertex with each other, nor can any two cycles be connected to each other by a path of consecutive edges. A pseudotree is a connected pseudoforest.
The names are justified by analogy to the more commonly studied trees and forests. (A tree is a connected graph with no cycles; a forest is a disjoint union of trees.) Gabow and Tarjan attribute the study of pseudoforests to Dantzig's 1963 book on linear programming, in which pseudoforests arise in the solution of certain network flow problems. Pseudoforests also form graph-theoretic models of functions and occur in several algorithmic problems. Pseudoforests are sparse graphs – their number of edges is linearly bounded in terms of their number of vertices (in fact, they have at most as many edges as they have vertices) – and their matroid structure allows several other families of sparse graphs to be decomposed as unions of forests and pseudoforests. The name "pseudoforest" comes from Picard & Queyranne (1982).
== Definitions and structure ==
We define an undirected graph to be a set of vertices and edges such that each edge has two vertices (which may coincide) as endpoints. That is, we allow multiple edges (edges with the same pair of endpoints) and loops (edges whose two endpoints are the same vertex). A subgraph of a graph is the graph formed by any subsets of its vertices and edges such that each edge in the edge subset has both endpoints in the vertex subset.
A connected component of an undirected graph is the subgraph consisting of the vertices and edges that can be reached by following edges from a single given starting vertex. A graph is connected if every vertex or edge is reachable from every other vertex or edge. A cycle in an undirected graph is a connected subgraph in which each vertex is incident to exactly two edges, or is a loop.
A pseudoforest is an undirected graph in which each connected component contains at most one cycle. Equivalently, it is an undirected graph in which each connected component has no more edges than vertices. The components that have no cycles are just trees, while the components that have a single cycle within them are called 1-trees or unicyclic graphs. That is, a 1-tree is a connected graph containing exactly one cycle. A pseudoforest with a single connected component (usually called a pseudotree, although some authors define a pseudotree to be a 1-tree) is either a tree or a 1-tree; in general a pseudoforest may have multiple connected components as long as all of them are trees or 1-trees.
If one removes from a 1-tree one of the edges in its cycle, the result is a tree. Reversing this process, if one augments a tree by connecting any two of its vertices by a new edge, the result is a 1-tree; the path in the tree connecting the two endpoints of the added edge, together with the added edge itself, form the 1-tree's unique cycle. If one augments a 1-tree by adding an edge that connects one of its vertices to a newly added vertex, the result is again a 1-tree, with one more vertex; an alternative method for constructing 1-trees is to start with a single cycle and then repeat this augmentation operation any number of times. The edges of any 1-tree can be partitioned in a unique way into two subgraphs, one of which is a cycle and the other of which is a forest, such that each tree of the forest contains exactly one vertex of the cycle.
Certain more specific types of pseudoforests have also been studied.
A 1-forest, sometimes called a maximal pseudoforest, is a pseudoforest to which no more edges can be added without causing some component of the graph to contain multiple cycles. If a pseudoforest contains a tree as one of its components, it cannot be a 1-forest, for one can add either an edge connecting two vertices within that tree, forming a single cycle, or an edge connecting that tree to some other component. Thus, the 1-forests are exactly the pseudoforests in which every component is a 1-tree.
The spanning pseudoforests of an undirected graph G are the pseudoforest subgraphs of G that have all the vertices of G. Such a pseudoforest need not have any edges, since for example the subgraph that has all the vertices of G and no edges is a pseudoforest (whose components are trees consisting of a single vertex).
The maximal pseudoforests of G are the pseudoforest subgraphs of G that are not contained within any larger pseudoforest of G. A maximal pseudoforest of G is always a spanning pseudoforest, but not conversely. If G has no connected components that are trees, then its maximal pseudoforests are 1-forests, but if G does have a tree component, its maximal pseudoforests are not 1-forests. Stated precisely, in any graph G its maximal pseudoforests consist of every tree component of G, together with one or more disjoint 1-trees covering the remaining vertices of G.
== Directed pseudoforests ==
Versions of these definitions are also used for directed graphs. Like an undirected graph, a directed graph consists of vertices and edges, but each edge is directed from one of its endpoints to the other endpoint. A directed pseudoforest is a directed graph in which each vertex has at most one outgoing edge; that is, it has outdegree at most one. A directed 1-forest – most commonly called a functional graph (see below), sometimes maximal directed pseudoforest – is a directed graph in which each vertex has outdegree exactly one. If D is a directed pseudoforest, the undirected graph formed by removing the direction from each edge of D is an undirected pseudoforest.
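The outdegree condition is easy to state in code. As a minimal sketch (function names are illustrative, not from any cited source), a directed graph given as a list of (source, target) edges is a directed pseudoforest exactly when no source vertex repeats:

```python
def is_directed_pseudoforest(edges):
    """Each vertex may have at most one outgoing edge (outdegree <= 1)."""
    sources = [u for u, _ in edges]
    return len(sources) == len(set(sources))

def is_functional_graph(n, edges):
    """A directed 1-forest: every vertex 0..n-1 has outdegree exactly one."""
    return is_directed_pseudoforest(edges) and {u for u, _ in edges} == set(range(n))
```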
== Number of edges ==
Every pseudoforest on a set of n vertices has at most n edges, and every maximal pseudoforest on a set of n vertices has exactly n edges. Conversely, if a graph G has the property that, for every subset S of its vertices, the number of edges in the induced subgraph of S is at most the number of vertices in S, then G is a pseudoforest. 1-trees can be defined as connected graphs with equally many vertices and edges.
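The edge-counting characterization above can be checked directly: a graph is a pseudoforest exactly when every connected component has at most as many edges as vertices. A minimal sketch using union–find (the function name is illustrative, not from the source):

```python
from collections import Counter

def is_pseudoforest(n, edges):
    """Return True if the undirected graph on vertices 0..n-1 is a pseudoforest.

    Uses the edge-counting characterization: a graph is a pseudoforest
    exactly when every connected component has at most as many edges as
    vertices (i.e., at most one cycle per component).
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)  # merge the two endpoints' components

    verts = Counter(find(v) for v in range(n))        # vertices per component
    edge_count = Counter(find(u) for u, _ in edges)   # edges per component
    return all(edge_count[r] <= verts[r] for r in verts)
```

A 1-tree (triangle with a pendant edge) passes the test, while a butterfly (two triangles sharing a vertex, six edges on five vertices) fails.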
Moving from individual graphs to graph families, if a family of graphs has the property that every subgraph of a graph in the family is also in the family, and every graph in the family has at most as many edges as vertices, then the family contains only pseudoforests. For instance, every subgraph of a thrackle (a graph drawn so that every pair of edges has one point of intersection) is also a thrackle, so Conway's conjecture that every thrackle has at most as many edges as vertices can be restated as saying that every thrackle is a pseudoforest. A more precise characterization is that, if the conjecture is true, then the thrackles are exactly the pseudoforests with no four-vertex cycle and at most one odd cycle.
Streinu and Theran generalize the sparsity conditions defining pseudoforests: they define a graph as being (k,l)-sparse if every nonempty subgraph with n vertices has at most kn − l edges, and (k,l)-tight if it is (k,l)-sparse and has exactly kn − l edges. Thus, the pseudoforests are the (1,0)-sparse graphs, and the maximal pseudoforests are the (1,0)-tight graphs. Several other important families of graphs may be defined from other values of k and l,
and when l ≤ k the (k,l)-sparse graphs may be characterized as the graphs formed as the edge-disjoint union of l forests and k − l pseudoforests.
Almost every sufficiently sparse random graph is a pseudoforest. That is, if c is a constant with 0 < c < 1/2, and Pc(n) is the probability that choosing uniformly at random among the n-vertex graphs with cn edges results in a pseudoforest, then Pc(n) tends to one in the limit for large n. However, for c > 1/2, almost every random graph with cn edges has a large component that is not unicyclic.
== Enumeration ==
A graph is simple if it has no self-loops and no multiple edges with the same endpoints. The number of simple 1-trees with n labelled vertices is
{\displaystyle n\sum _{k=1}^{n}{\frac {(-1)^{k-1}}{k}}\sum _{n_{1}+\cdots +n_{k}=n}{\frac {n!}{n_{1}!\cdots n_{k}!}}{\binom {{\binom {n_{1}}{2}}+\cdots +{\binom {n_{k}}{2}}}{n}}.}
The values for n up to 300 can be found in sequence OEIS: A057500 of the On-Line Encyclopedia of Integer Sequences.
The number of maximal directed pseudoforests on n vertices, allowing self-loops, is n^n, because for each vertex there are n possible endpoints for the outgoing edge. André Joyal used this fact to provide a bijective proof of Cayley's formula, that the number of undirected trees on n nodes is n^(n − 2), by finding a bijection between maximal directed pseudoforests and undirected trees with two distinguished nodes. If self-loops are not allowed, the number of maximal directed pseudoforests is instead (n − 1)^n.
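Both counts can be verified by brute force for small n, since a maximal directed pseudoforest is exactly a choice of one outgoing edge per vertex (a quick sketch, not from the source):

```python
from itertools import product

def count_functional_graphs(n, allow_self_loops=True):
    """Count maximal directed pseudoforests (functional graphs) on n vertices
    by enumerating every choice of one outgoing edge per vertex."""
    total = 0
    for targets in product(range(n), repeat=n):  # targets[v] = head of v's edge
        if allow_self_loops or all(targets[v] != v for v in range(n)):
            total += 1
    return total
```

For n = 3 this gives 3^3 = 27 with self-loops and 2^3 = 8 without.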
== Graphs of functions ==
Directed pseudoforests and endofunctions are in some sense mathematically equivalent. Any function ƒ from a set X to itself (that is, an endomorphism of X) can be interpreted as defining a directed pseudoforest which has an edge from x to y whenever ƒ(x) = y. The resulting directed pseudoforest is maximal, and may include self-loops whenever some value x has ƒ(x) = x. Alternatively, omitting the self-loops produces a non-maximal pseudoforest. In the other direction, any maximal directed pseudoforest determines a function ƒ such that ƒ(x) is the target of the edge that goes out from x, and any non-maximal directed pseudoforest can be made maximal by adding self-loops and then converted into a function in the same way. For this reason, maximal directed pseudoforests are sometimes called functional graphs. Viewing a function as a functional graph provides a convenient language for describing properties that are not as easily described from the function-theoretic point of view; this technique is especially applicable to problems involving iterated functions, which correspond to paths in functional graphs.
Cycle detection, the problem of following a path in a functional graph to find a cycle in it, has applications in cryptography and computational number theory, as part of Pollard's rho algorithm for integer factorization and as a method for finding collisions in cryptographic hash functions. In these applications, ƒ is expected to behave randomly; Flajolet and Odlyzko study the graph-theoretic properties of the functional graphs arising from randomly chosen mappings. In particular, a form of the birthday paradox implies that, in a random functional graph with n vertices, the path starting from a randomly selected vertex will typically loop back on itself to form a cycle within O(√n) steps. Konyagin et al. have made analytical and computational progress on the statistics of these random functional graphs.
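Cycle detection in a functional graph can be sketched with Floyd's tortoise-and-hare method, which uses only constant memory; the sample functions in the test are arbitrary stand-ins, not the maps used in the cited applications:

```python
def floyd_cycle(f, x0):
    """Return (mu, lam) for the path x0, f(x0), f(f(x0)), ... in a
    functional graph: mu is the index of the first vertex on the cycle
    (the length of the "tail" of the rho shape) and lam is the cycle length."""
    # phase 1: advance pointers at speeds 1 and 2 until they meet on the cycle
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    # phase 2: restart the tortoise; the next meeting is the cycle entry
    mu, tortoise = 0, x0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    # phase 3: walk once around the cycle to measure its length
    lam, hare = 1, f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return mu, lam
```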
Martin, Odlyzko, and Wolfram investigate pseudoforests that model the dynamics of cellular automata. These functional graphs, which they call state transition diagrams, have one vertex for each possible configuration that the ensemble of cells of the automaton can be in, and an edge connecting each configuration to the configuration that follows it according to the automaton's rule. One can infer properties of the automaton from the structure of these diagrams, such as the number of components, length of limiting cycles, depth of the trees connecting non-limiting states to these cycles, or symmetries of the diagram. For instance, any vertex with no incoming edge corresponds to a Garden of Eden pattern and a vertex with a self-loop corresponds to a still life pattern.
Another early application of functional graphs is in the trains used to study Steiner triple systems. The train of a triple system is a functional graph having a vertex for each possible triple of symbols; each triple pqr is mapped by ƒ to stu, where pqs, prt, and qru are the triples that belong to the triple system and contain the pairs pq, pr, and qr respectively. Trains have been shown to be a powerful invariant of triple systems although somewhat cumbersome to compute.
== Bicircular matroid ==
A matroid is a mathematical structure in which certain sets of elements are defined to be independent, in such a way that the independent sets satisfy properties modeled after the properties of linear independence in a vector space. One of the standard examples of a matroid is the graphic matroid in which the independent sets are the sets of edges in forests of a graph; the matroid structure of forests is important in algorithms for computing the minimum spanning tree of the graph. Analogously, we may define matroids from pseudoforests.
For any graph G = (V,E), we may define a matroid on the edges of G, in which a set of edges is independent if and only if it forms a pseudoforest; this matroid is known as the bicircular matroid (or bicycle matroid) of G. The smallest dependent sets for this matroid are the minimal connected subgraphs of G that have more than one cycle, and these subgraphs are sometimes called bicycles. There are three possible types of bicycle: a theta graph has two vertices that are connected by three internally disjoint paths, a figure 8 graph consists of two cycles sharing a single vertex, and a handcuff graph is formed by two disjoint cycles connected by a path.
A graph is a pseudoforest if and only if it does not contain a bicycle as a subgraph.
== Forbidden minors ==
Forming a minor of a pseudoforest by contracting some of its edges and deleting others produces another pseudoforest. Therefore, the family of pseudoforests is closed under minors, and the Robertson–Seymour theorem implies that pseudoforests can be characterized in terms of a finite set of forbidden minors, analogously to Wagner's theorem characterizing the planar graphs as the graphs having neither the complete graph K5 nor the complete bipartite graph K3,3 as minors.
As discussed above, any non-pseudoforest graph contains as a subgraph a handcuff, figure 8, or theta graph; any handcuff or figure 8 graph may be contracted to form a butterfly graph (five-vertex figure 8), and any theta graph may be contracted to form a diamond graph (four-vertex theta graph), so any non-pseudoforest contains either a butterfly or a diamond as a minor, and these are the only minor-minimal non-pseudoforest graphs. Thus, a graph is a pseudoforest if and only if it does not have the butterfly or the diamond as a minor. If one forbids only the diamond but not the butterfly, the resulting larger graph family consists of the cactus graphs and disjoint unions of multiple cactus graphs.
More simply, if multigraphs with self-loops are considered, there is only one forbidden minor, a vertex with two loops.
== Algorithms ==
An early algorithmic use of pseudoforests involves the network simplex algorithm and its application to generalized flow problems modeling the conversion between commodities of different types. In these problems, one is given as input a flow network in which the vertices model each commodity and the edges model allowable conversions between one commodity and another. Each edge is marked with a capacity (how much of a commodity can be converted per unit time), a flow multiplier (the conversion rate between commodities), and a cost (how much loss or, if negative, profit is incurred per unit of conversion). The task is to determine how much of each commodity to convert via each edge of the flow network, in order to minimize cost or maximize profit, while obeying the capacity constraints and not allowing commodities of any type to accumulate unused. This type of problem can be formulated as a linear program, and solved using the simplex algorithm. The intermediate solutions arising from this algorithm, as well as the eventual optimal solution, have a special structure: each edge in the input network is either unused or used to its full capacity, except for a subset of the edges, forming a spanning pseudoforest of the input network, for which the flow amounts may lie between zero and the full capacity. In this application, unicyclic graphs are also sometimes called augmented trees and maximal pseudoforests are also sometimes called augmented forests.
The minimum spanning pseudoforest problem involves finding a spanning pseudoforest of minimum weight in a larger edge-weighted graph G.
Due to the matroid structure of pseudoforests, minimum-weight maximal pseudoforests may be found by greedy algorithms similar to those for the minimum spanning tree problem. However, Gabow and Tarjan found a more efficient linear-time approach in this case.
The pseudoarboricity of a graph G is defined by analogy to the arboricity as the minimum number of pseudoforests into which its edges can be partitioned; equivalently, it is the minimum k such that G is (k,0)-sparse, or the minimum k such that the edges of G can be oriented to form a directed graph with outdegree at most k. Due to the matroid structure of pseudoforests, the pseudoarboricity may be computed in polynomial time.
A random bipartite graph with n vertices on each side of its bipartition, and with cn edges chosen independently at random from each of the n^2 possible pairs of vertices, is a pseudoforest with high probability whenever c is a constant strictly less than one. This fact plays a key role in the analysis of cuckoo hashing, a data structure for looking up key-value pairs by looking in one of two hash tables at locations determined from the key: one can form a graph, the "cuckoo graph", whose vertices correspond to hash table locations and whose edges link the two locations at which one of the keys might be found, and the cuckoo hashing algorithm succeeds in finding locations for all of its keys if and only if the cuckoo graph is a pseudoforest.
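The connection to cuckoo hashing can be illustrated with a toy insertion routine: a key evicts the occupant of its slot, the evicted key moves to its slot in the other table, and so on; insertion fails only when the chain of evictions never terminates, which happens exactly when the relevant component of the cuckoo graph carries more keys (edges) than slots (vertices). The hash functions, table size, and eviction limit below are arbitrary stand-ins for illustration:

```python
def cuckoo_insert(tables, h1, h2, key, max_kicks=50):
    """Toy cuckoo-hashing insertion into two tables.

    Each key may live at h1(key) in tables[0] or h2(key) in tables[1];
    an occupied slot evicts its occupant to that key's other table.
    Failure corresponds to a non-pseudoforest component of the cuckoo graph.
    """
    pos, t = h1(key), 0
    for _ in range(max_kicks):
        if tables[t][pos] is None:
            tables[t][pos] = key
            return True
        tables[t][pos], key = key, tables[t][pos]  # evict the occupant
        t = 1 - t                                  # evicted key goes to its other table
        pos = (h1, h2)[t](key)
    return False  # a real implementation would rehash at this point
```

In the test below, three keys share both hash slots, giving three edges on two vertices: not a pseudoforest, so the third insertion fails.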
Pseudoforests also play a key role in parallel algorithms for graph coloring and related problems.
== Notes ==
== References ==
== External links ==
Weisstein, Eric W., "Unicyclic Graph", MathWorld
Computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to the study of three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing.
== Overview ==
Computer graphics studies manipulation of visual and geometric information using computational techniques. It focuses on the mathematical and computational foundations of image generation and processing rather than purely aesthetic issues. Computer graphics is often differentiated from the field of visualization, although the two fields have many similarities.
Connected studies include:
Applied mathematics
Computational geometry
Computational topology
Computer vision
Image processing
Information visualization
Scientific visualization
Applications of computer graphics include:
Print design
Digital art
Special effects
Video games
Visual effects
== History ==
There are several international conferences and journals where the most significant results in computer graphics are published. Among them are the SIGGRAPH and Eurographics conferences and the Association for Computing Machinery (ACM) Transactions on Graphics journal. The joint Eurographics and ACM SIGGRAPH symposium series features the major venues for the more specialized sub-fields: Symposium on Geometry Processing, Symposium on Rendering, Symposium on Computer Animation, and High Performance Graphics.
As in the rest of computer science, conference publications in computer graphics are generally more significant than journal publications (and consequently have lower acceptance rates).
== Subfields ==
A broad classification of major subfields in computer graphics might be:
Geometry: ways to represent and process surfaces
Animation: ways to represent and manipulate motion
Rendering: algorithms to reproduce light transport
Imaging: image acquisition or image editing
=== Geometry ===
The subfield of geometry studies the representation of three-dimensional objects in a discrete digital setting. Because the appearance of an object depends largely on its exterior, boundary representations are most commonly used. Two dimensional surfaces are a good representation for most objects, though they may be non-manifold. Since surfaces are not finite, discrete digital approximations are used. Polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation, although point-based representations have become more popular recently (see for instance the Symposium on Point-Based Graphics). These representations are Lagrangian, meaning the spatial locations of the samples are independent. Recently, Eulerian surface descriptions (i.e., where spatial samples are fixed) such as level sets have been developed into a useful representation for deforming surfaces which undergo many topological changes (with fluids being the most notable example).
Geometry subfields include:
Implicit surface modeling – an older subfield which examines the use of algebraic surfaces, constructive solid geometry, etc., for surface representation.
Digital geometry processing – surface reconstruction, simplification, fairing, mesh repair, parameterization, remeshing, mesh generation, surface compression, and surface editing all fall under this heading.
Discrete differential geometry – a nascent field which defines geometric quantities for the discrete surfaces used in computer graphics.
Point-based graphics – a recent field which focuses on points as the fundamental representation of surfaces.
Subdivision surfaces
Out-of-core mesh processing – another recent field which focuses on mesh datasets that do not fit in main memory.
=== Animation ===
The subfield of animation studies descriptions for surfaces (and other phenomena) that move or deform over time. Historically, most work in this field has focused on parametric and data-driven models, but recently physical simulation has become more popular as computers have become more powerful computationally.
Animation subfields include:
Performance capture
Character animation
Physical simulation (e.g. cloth modeling, animation of fluid dynamics, etc.)
=== Rendering ===
Rendering generates images from a model. Rendering may simulate light transport to create realistic images or it may create images that have a particular artistic style in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light passes from one place to another) and scattering (how surfaces interact with light).
Rendering subfields include:
Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of light transport.
Scattering: Models of scattering (how light interacts with the surface at a given point) and shading (how material properties vary across the surface) are used to describe the appearance of a surface. In graphics these problems are often studied within the context of rendering since they can substantially affect the design of rendering algorithms. Descriptions of scattering are usually given in terms of a bidirectional scattering distribution function (BSDF). The latter issue addresses how different types of scattering are distributed across the surface (i.e., which scattering function applies where). Descriptions of this kind are typically expressed with a program called a shader. (There is some confusion since the word "shader" is sometimes used for programs that describe local geometric variation.)
Non-photorealistic rendering
Physically based rendering – concerned with generating images according to the laws of geometric optics
Real-time rendering – focuses on rendering for interactive applications, typically using specialized hardware like GPUs
Relighting – recent area concerned with quickly re-rendering scenes
== Notable researchers ==
== Applications for their use ==
Bitmap Design / Image Editing
Adobe Photoshop
Corel Photo-Paint
GIMP
Krita
Vector drawing
Adobe Illustrator
CorelDRAW
Inkscape
Affinity Designer
Sketch
Architecture
VariCAD
FreeCAD
AutoCAD
QCAD
LibreCAD
DataCAD
Corel Designer
Video editing
Adobe Premiere Pro
Sony Vegas
Final Cut
DaVinci Resolve
Cinelerra
VirtualDub
Sculpting, Animation, and 3D Modeling
Blender 3D
Wings 3D
ZBrush
Sculptris
SolidWorks
Rhino3D
SketchUp
3ds Max
Cinema 4D
Maya
Houdini
Digital composition
Nuke
Blackmagic Fusion
Adobe After Effects
Natron
Rendering
V-Ray
RedShift
RenderMan
Octane Render
Mantra
Lumion (Architectural visualization)
Other applications examples
ACIS - geometric core
Autodesk Softimage
POV-Ray
Scribus
Silo
Hexagon
Lightwave
== See also ==
== References ==
== Further reading ==
Foley et al. Computer Graphics: Principles and Practice.
Shirley. Fundamentals of Computer Graphics.
Watt. 3D Computer Graphics.
== External links ==
A Critical History of Computer Graphics and Animation
History of Computer Graphics series of articles
=== Industry ===
Industrial labs doing "blue sky" graphics research include:
Adobe Advanced Technology Labs
MERL
Microsoft Research – Graphics
Nvidia Research
Major film studios notable for graphics research include:
ILM
PDI/Dreamworks Animation
Pixar
The theory of functions of several complex variables is the branch of mathematics dealing with functions defined on the complex coordinate space
{\displaystyle \mathbb {C} ^{n}}
, that is, n-tuples of complex numbers. The field dealing with the properties of these functions is called several complex variables (and analytic space), which the Mathematics Subject Classification has as a top-level heading.
As in complex analysis of functions of one variable, which is the case n = 1, the functions studied are holomorphic or complex analytic so that, locally, they are power series in the variables zi. Equivalently, they are locally uniform limits of polynomials; or locally square-integrable solutions to the n-dimensional Cauchy–Riemann equations. For one complex variable, every domain ({\displaystyle D\subset \mathbb {C} }) is the domain of holomorphy of some function; in other words, every domain has a function for which it is the domain of holomorphy. For several complex variables this is not the case: there exist domains ({\displaystyle D\subset \mathbb {C} ^{n},\ n\geq 2}
) that are not the domain of holomorphy of any function; the domain of holomorphy is thus one of the central themes of this field. Patching the local data of meromorphic functions, i.e. the problem of creating a global meromorphic function from zeros and poles, is called the Cousin problem. Also, the interesting phenomena that occur in several complex variables are fundamentally important to the study of compact complex manifolds and complex projective varieties (
{\displaystyle \mathbb {CP} ^{n}}
) and has a different flavour to complex analytic geometry in {\displaystyle \mathbb {C} ^{n}} or on Stein manifolds; the study of these is much closer to the study of algebraic varieties, that is, to algebraic geometry, than to complex analytic geometry.
== Historical perspective ==
Many examples of such functions were familiar in nineteenth-century mathematics: abelian functions, theta functions, and some hypergeometric series, and also, as an example of an inverse problem, the Jacobi inversion problem. Naturally, any function of one variable that depends on some complex parameter is also a candidate. The theory, however, for many years didn't become a full-fledged field in mathematical analysis, since its characteristic phenomena weren't uncovered. The Weierstrass preparation theorem would now be classed as commutative algebra; it did justify the local picture, ramification, that addresses the generalization of the branch points of Riemann surface theory.
With work of Friedrich Hartogs, Pierre Cousin, E. E. Levi, and of Kiyoshi Oka in the 1930s, a general theory began to emerge; others working in the area at the time were Heinrich Behnke, Peter Thullen, Karl Stein, Wilhelm Wirtinger and Francesco Severi. Hartogs proved some basic results, such as the fact that every isolated singularity is removable for every analytic function
{\displaystyle f:\mathbb {C} ^{n}\to \mathbb {C} }
whenever n > 1. Naturally the analogues of contour integrals will be harder to handle; when n = 2 an integral surrounding a point should be over a three-dimensional manifold (since we are in four real dimensions), while iterating contour (line) integrals over two separate complex variables should come to a double integral over a two-dimensional surface. This means that the residue calculus will have to take a very different character.
After 1945 important work in France, in the seminar of Henri Cartan, and Germany with Hans Grauert and Reinhold Remmert, quickly changed the picture of the theory. A number of issues were clarified, in particular that of analytic continuation. Here a major difference is evident from the one-variable theory; while for every open connected set D in
{\displaystyle \mathbb {C} }
we can find a function that will nowhere continue analytically over the boundary, that cannot be said for n > 1. In fact the D of that kind are rather special in nature (especially in complex coordinate spaces
{\displaystyle \mathbb {C} ^{n}}
and Stein manifolds, satisfying a condition called pseudoconvexity). The natural domains of definition of functions, continued to the limit, are called Stein manifolds and their nature was to make sheaf cohomology groups vanish, on the other hand, the Grauert–Riemenschneider vanishing theorem is known as a similar result for compact complex manifolds, and the Grauert–Riemenschneider conjecture is a special case of the conjecture of Narasimhan. In fact it was the need to put (in particular) the work of Oka on a clearer basis that led quickly to the consistent use of sheaves for the formulation of the theory (with major repercussions for algebraic geometry, in particular from Grauert's work).
From this point onwards there was a foundational theory, which could be applied to analytic geometry, automorphic forms of several variables, and partial differential equations. The deformation theory of complex structures and complex manifolds was described in general terms by Kunihiko Kodaira and D. C. Spencer. The celebrated paper GAGA of Serre pinned down the crossover point from géometrie analytique to géometrie algébrique.
C. L. Siegel was heard to complain that the new theory of functions of several complex variables had few functions in it, meaning that the special function side of the theory was subordinated to sheaves. The interest for number theory, certainly, is in specific generalizations of modular forms. The classical candidates are the Hilbert modular forms and Siegel modular forms. These days these are associated to algebraic groups (respectively the Weil restriction from a totally real number field of GL(2), and the symplectic group), for which it happens that automorphic representations can be derived from analytic functions. In a sense this doesn't contradict Siegel; the modern theory has its own, different directions.
Subsequent developments included the hyperfunction theory, and the edge-of-the-wedge theorem, both of which had some inspiration from quantum field theory. There are a number of other fields, such as Banach algebra theory, that draw on several complex variables.
== The complex coordinate space ==
The complex coordinate space {\displaystyle \mathbb {C} ^{n}} is the Cartesian product of n copies of {\displaystyle \mathbb {C} }, and when {\displaystyle \mathbb {C} ^{n}} is a domain of holomorphy, {\displaystyle \mathbb {C} ^{n}} can be regarded as a Stein manifold, and more generally as a Stein space. {\displaystyle \mathbb {C} ^{n}} is also considered to be a complex projective variety, a Kähler manifold, etc. It is also an n-dimensional vector space over the complex numbers, which gives its dimension 2n over {\displaystyle \mathbb {R} }. Hence, as a set and as a topological space, {\displaystyle \mathbb {C} ^{n}} may be identified with the real coordinate space {\displaystyle \mathbb {R} ^{2n}} and its topological dimension is thus 2n.
In coordinate-free language, any vector space over complex numbers may be thought of as a real vector space of twice as many dimensions, where a complex structure is specified by a linear operator J (such that J 2 = −I) which defines multiplication by the imaginary unit i.
Any such space, as a real space, is oriented. On the complex plane thought of as a Cartesian plane, multiplication by a complex number w = u + iv may be represented by the real matrix
{\displaystyle {\begin{pmatrix}u&-v\\v&u\end{pmatrix}},}
with determinant
{\displaystyle u^{2}+v^{2}=|w|^{2}.}
Likewise, if one expresses any finite-dimensional complex linear operator as a real matrix (which will be composed from 2 × 2 blocks of the aforementioned form), then its determinant equals the square of the absolute value of the corresponding complex determinant. It is a non-negative number, which implies that the (real) orientation of the space is never reversed by a complex operator. The same applies to Jacobians of holomorphic functions from {\displaystyle \mathbb {C} ^{n}} to {\displaystyle \mathbb {C} ^{n}}.
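The determinant claim is easy to check numerically: the real 2 × 2 matrix representing multiplication by w has determinant |w|², which is non-negative. A quick sketch using numpy (the helper name is illustrative):

```python
import numpy as np

def real_matrix(w):
    """The real 2x2 matrix representing multiplication by the complex number w."""
    u, v = w.real, w.imag
    return np.array([[u, -v], [v, u]])

w = 3 - 4j
M = real_matrix(w)
det = np.linalg.det(M)  # u^2 + v^2 = |w|^2 = 25, always non-negative
```

Since the representation respects multiplication, the matrix of a product is the product of the matrices, so determinants (and hence orientations) compose as expected.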
== Holomorphic functions ==
=== Definition ===
A function f defined on a domain
{\displaystyle D\subset \mathbb {C} ^{n}} and with values in {\displaystyle \mathbb {C} } is said to be holomorphic at a point {\displaystyle z\in D} if it is complex-differentiable at this point, in the sense that there exists a complex linear map {\displaystyle L:\mathbb {C} ^{n}\to \mathbb {C} } such that
{\displaystyle f(z+h)=f(z)+L(h)+o(\lVert h\rVert )}
The function f is said to be holomorphic if it is holomorphic at all points of its domain of definition D.
If f is holomorphic, then all the partial maps:
{\displaystyle z\mapsto f(z_{1},\dots ,z_{i-1},z,z_{i+1},\dots ,z_{n})}
are holomorphic as functions of one complex variable: we say that f is holomorphic in each variable separately. Conversely, if f is holomorphic in each variable separately, then f is in fact holomorphic: this is known as Hartogs's theorem, or as Osgood's lemma under the additional hypothesis that f is continuous.
=== Cauchy–Riemann equations ===
In one complex variable, a function {\displaystyle f:\mathbb {C} \to \mathbb {C} } defined on the plane is holomorphic at a point {\displaystyle p\in \mathbb {C} } if and only if its real part {\displaystyle u} and its imaginary part {\displaystyle v} satisfy the so-called Cauchy–Riemann equations at {\displaystyle p}:
{\displaystyle {\frac {\partial u}{\partial x}}(p)={\frac {\partial v}{\partial y}}(p)\quad {\text{ and }}\quad {\frac {\partial u}{\partial y}}(p)=-{\frac {\partial v}{\partial x}}(p)}
In several variables, a function {\displaystyle f:\mathbb {C} ^{n}\to \mathbb {C} } is holomorphic if and only if it is holomorphic in each variable separately, and hence if and only if the real part {\displaystyle u} and the imaginary part {\displaystyle v} of {\displaystyle f} satisfy the Cauchy–Riemann equations:
{\displaystyle \forall i\in \{1,\dots ,n\},\quad {\frac {\partial u}{\partial x_{i}}}={\frac {\partial v}{\partial y_{i}}}\quad {\text{ and }}\quad {\frac {\partial u}{\partial y_{i}}}=-{\frac {\partial v}{\partial x_{i}}}}
Using the formalism of Wirtinger derivatives, this can be reformulated as:
{\displaystyle \forall i\in \{1,\dots ,n\},\quad {\frac {\partial f}{\partial {\overline {z_{i}}}}}=0,}
or even more compactly, using the formalism of complex differential forms, as:
{\displaystyle {\bar {\partial }}f=0.}
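The Wirtinger-derivative criterion can be verified symbolically for a sample function, here f(z1, z2) = z1·z2 written in real coordinates (a sketch using sympy; the function and helper name are arbitrary illustrations, not from the source):

```python
import sympy as sp

x1, y1, x2, y2 = sp.symbols('x1 y1 x2 y2', real=True)
z1, z2 = x1 + sp.I * y1, x2 + sp.I * y2
f = z1 * z2  # a sample holomorphic function of two variables

def dbar(g, x, y):
    """Wirtinger derivative with respect to the conjugate variable:
    d/d(z-bar) = (d/dx + i d/dy) / 2 in the real coordinates x, y."""
    return sp.simplify((sp.diff(g, x) + sp.I * sp.diff(g, y)) / 2)

res1 = dbar(f, x1, y1)  # vanishes, since f is holomorphic in z1
res2 = dbar(f, x2, y2)  # vanishes, since f is holomorphic in z2
```

By contrast, the anti-holomorphic function z1-bar = x1 − i·y1 has Wirtinger derivative 1, so it fails the criterion.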
=== Cauchy's integral formula I (Polydisc version) ===
Prove the sufficiency of two conditions (A) and (B). Let f meets the conditions of being continuous and separately homorphic on domain D. Each disk has a rectifiable curve
γ
{\displaystyle \gamma }
,
γ
ν
{\displaystyle \gamma _{\nu }}
is piecewise smoothness, class
C
1
{\displaystyle {\mathcal {C}}^{1}}
Jordan closed curve. (
ν
=
1
,
2
,
…
,
n
{\displaystyle \nu =1,2,\ldots ,n}
) Let
D
ν
{\displaystyle D_{\nu }}
be the domain surrounded by each
γ
ν
{\displaystyle \gamma _{\nu }}
. Cartesian product closure
D
1
×
D
2
×
⋯
×
D
n
¯
{\displaystyle {\overline {D_{1}\times D_{2}\times \cdots \times D_{n}}}}
is
D
1
¯
×
D
2
¯
×
⋯
×
D
n
¯
∈
D
{\displaystyle {\overline {D_{1}}}\times {\overline {D_{2}}}\times \cdots \times {\overline {D_{n}}}\in D}
Also, take the closed polydisc {\displaystyle {\overline {\Delta }}} so that it becomes {\displaystyle {\overline {\Delta }}\subset {D_{1}\times D_{2}\times \cdots \times D_{n}}}, where
{\displaystyle {\overline {\Delta }}(z,r)=\left\{\zeta =(\zeta _{1},\zeta _{2},\dots ,\zeta _{n})\in \mathbb {C} ^{n};\left|\zeta _{\nu }-z_{\nu }\right|\leq r_{\nu }{\text{ for all }}\nu =1,\dots ,n\right\}}
and {\displaystyle \{z_{\nu }\}_{\nu =1}^{n}} is the center of each disk. Using Cauchy's integral formula of one variable repeatedly,
{\displaystyle {\begin{aligned}f(z_{1},\ldots ,z_{n})&={\frac {1}{2\pi i}}\int _{\partial D_{1}}{\frac {f(\zeta _{1},z_{2},\ldots ,z_{n})}{\zeta _{1}-z_{1}}}\,d\zeta _{1}\\[6pt]&={\frac {1}{(2\pi i)^{2}}}\int _{\partial D_{2}}\,d\zeta _{2}\int _{\partial D_{1}}{\frac {f(\zeta _{1},\zeta _{2},z_{3},\ldots ,z_{n})}{(\zeta _{1}-z_{1})(\zeta _{2}-z_{2})}}\,d\zeta _{1}\\[6pt]&={\frac {1}{(2\pi i)^{n}}}\int _{\partial D_{n}}\,d\zeta _{n}\cdots \int _{\partial D_{2}}\,d\zeta _{2}\int _{\partial D_{1}}{\frac {f(\zeta _{1},\zeta _{2},\ldots ,\zeta _{n})}{(\zeta _{1}-z_{1})(\zeta _{2}-z_{2})\cdots (\zeta _{n}-z_{n})}}\,d\zeta _{1}\end{aligned}}}
Because {\displaystyle \partial D} is a rectifiable closed Jordan curve and f is continuous, the order of integration can be exchanged, and therefore the iterated integral above can be calculated as a multiple integral.
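The repeated one-variable formula can be verified numerically (a sketch with an assumed test function, not part of the article) on the unit bidisc, using the trapezoid rule, which is spectrally accurate for periodic analytic integrands:

```python
import cmath

# Sketch: verify the iterated Cauchy integral formula on the unit polydisc
# for n = 2 with the (spectrally accurate) trapezoid rule on each circle.
# The test function f is an assumed example, entire in both variables.
def f(z1, z2):
    return cmath.exp(z1) * z2 ** 3

def polydisc_cauchy(f, z1, z2, r=1.0, m=128):
    """Approximate (1/(2 pi i))^2 * oint oint f(w1,w2)/((w1-z1)(w2-z2)) dw1 dw2
    over the distinguished boundary |w1| = |w2| = r."""
    total = 0
    for a in range(m):
        w1 = r * cmath.exp(2j * cmath.pi * a / m)
        dw1 = 2j * cmath.pi * w1 / m        # dw1 along the circle
        for b in range(m):
            w2 = r * cmath.exp(2j * cmath.pi * b / m)
            dw2 = 2j * cmath.pi * w2 / m
            total += f(w1, w2) / ((w1 - z1) * (w2 - z2)) * dw1 * dw2
    return total / (2j * cmath.pi) ** 2

z = (0.2 + 0.1j, -0.3 + 0.2j)               # interior point of the bidisc
err = abs(polydisc_cauchy(f, *z) - f(*z))
print(err)  # close to machine precision
```

Note that the integral runs only over the distinguished boundary (the product of the two circles), not over the full topological boundary of the bidisc.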
==== Cauchy's evaluation formula ====
Because the order of integration is interchangeable, from (1) we get that f is a class {\displaystyle {\mathcal {C}}^{\infty }} function.
From (2), if f is holomorphic on the polydisc {\displaystyle \left\{\zeta =(\zeta _{1},\zeta _{2},\dots ,\zeta _{n})\in \mathbb {C} ^{n};|\zeta _{\nu }-z_{\nu }|\leq r_{\nu },{\text{ for all }}\nu =1,\dots ,n\right\}} and {\displaystyle |f|\leq {M}}, the following evaluation equation is obtained:
{\displaystyle \left|{\frac {\partial ^{k_{1}+\cdots +k_{n}}f(\zeta _{1},\zeta _{2},\ldots ,\zeta _{n})}{{\partial z_{1}}^{k_{1}}\cdots \partial {z_{n}}^{k_{n}}}}\right|\leq {\frac {Mk_{1}!\cdots k_{n}!}{{r_{1}}^{k_{1}}\cdots {r_{n}}^{k_{n}}}}}
Therefore, Liouville's theorem holds.
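The evaluation formula can be spot-checked against a function whose mixed derivatives are known in closed form. The following sketch (with an assumed test function, not from the article) uses {\displaystyle f(z_{1},z_{2})=1/(2-z_{1}-z_{2})}, whose derivative of order {\displaystyle (k_{1},k_{2})} at the origin is {\displaystyle (k_{1}+k_{2})!/2^{k_{1}+k_{2}+1}}:

```python
from math import factorial

# Sketch: check Cauchy's evaluation (derivative estimate) at the centre of a
# polydisc for the assumed example f(z1,z2) = 1/(2 - z1 - z2).  Its mixed
# derivative at 0 is (k1+k2)!/2^(k1+k2+1); on the closed polydisc of radii
# r1 = r2 = 1/2 we have |2 - z1 - z2| >= 1, so M = 1.
r1 = r2 = 0.5
M = 1 / (2 - r1 - r2)                   # sup of |f| on the closed polydisc
ok = True
for k1 in range(6):
    for k2 in range(6):
        deriv = factorial(k1 + k2) / 2 ** (k1 + k2 + 1)
        bound = M * factorial(k1) * factorial(k2) / (r1 ** k1 * r2 ** k2)
        ok = ok and deriv <= bound
print(ok)  # True
```

The bound is far from tight here; its strength is uniformity over all multi-indices, which is what drives the Liouville-type conclusion.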
==== Power series expansion of holomorphic functions on polydisc ====
If the function f is holomorphic on the polydisc {\displaystyle \{z=(z_{1},z_{2},\dots ,z_{n})\in \mathbb {C} ^{n};|z_{\nu }-a_{\nu }|<r_{\nu },{\text{ for all }}\nu =1,\dots ,n\}}, then from Cauchy's integral formula it can be uniquely expanded into the following power series:
{\displaystyle {\begin{aligned}&f(z)=\sum _{k_{1},\dots ,k_{n}=0}^{\infty }c_{k_{1},\dots ,k_{n}}(z_{1}-a_{1})^{k_{1}}\cdots (z_{n}-a_{n})^{k_{n}}\ ,\\&c_{k_{1}\cdots k_{n}}={\frac {1}{(2\pi i)^{n}}}\int _{\partial D_{1}}\cdots \int _{\partial D_{n}}{\frac {f(\zeta _{1},\dots ,\zeta _{n})}{(\zeta _{1}-a_{1})^{k_{1}+1}\cdots (\zeta _{n}-a_{n})^{k_{n}+1}}}\,d\zeta _{1}\cdots d\zeta _{n}\end{aligned}}}
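The coefficient integral can be evaluated numerically. The sketch below (with an assumed test function, not part of the article) computes {\displaystyle c_{k_{1}k_{2}}} for {\displaystyle f(z_{1},z_{2})=e^{z_{1}+z_{2}}} at {\displaystyle a=0} and compares with the known Taylor coefficients {\displaystyle 1/(k_{1}!\,k_{2}!)}:

```python
import cmath
from math import factorial

# Sketch: compute the power series coefficients c_{k1 k2} of the assumed
# example f(z1,z2) = exp(z1 + z2) at a = 0 from the Cauchy integral over the
# distinguished boundary |w1| = |w2| = 1, and compare to 1/(k1! * k2!).
def coeff(f, k1, k2, r=1.0, m=64):
    total = 0
    for a in range(m):
        w1 = r * cmath.exp(2j * cmath.pi * a / m)
        dw1 = 2j * cmath.pi * w1 / m
        for b in range(m):
            w2 = r * cmath.exp(2j * cmath.pi * b / m)
            dw2 = 2j * cmath.pi * w2 / m
            total += f(w1, w2) / (w1 ** (k1 + 1) * w2 ** (k2 + 1)) * dw1 * dw2
    return total / (2j * cmath.pi) ** 2

f = lambda z1, z2: cmath.exp(z1 + z2)
err = max(abs(coeff(f, k1, k2) - 1 / (factorial(k1) * factorial(k2)))
          for k1 in range(4) for k2 in range(4))
print(err)  # negligible (only aliasing error from the discretization)
```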
In addition, an f that satisfies the following condition is called an analytic function:
For each point {\displaystyle a=(a_{1},\dots ,a_{n})\in D\subset \mathbb {C} ^{n}}, {\displaystyle f(z)} is expressed as a power series expansion that is convergent on D:
{\displaystyle f(z)=\sum _{k_{1},\dots ,k_{n}=0}^{\infty }c_{k_{1},\dots ,k_{n}}(z_{1}-a_{1})^{k_{1}}\cdots (z_{n}-a_{n})^{k_{n}}\ ,}
We have already explained that holomorphic functions on a polydisc are analytic. Also, from the theorem derived by Weierstrass, we can see that analytic functions on a polydisc (convergent power series) are holomorphic.
If a sequence of functions {\displaystyle f_{v}} converges uniformly on compacta inside a domain D, then the limit function f is also holomorphic on D. Also, each partial derivative of {\displaystyle f_{v}} converges compactly on the domain D to the corresponding derivative of f:
{\displaystyle {\frac {\partial ^{k_{1}+\cdots +k_{n}}f}{\partial {z_{1}}^{k_{1}}\cdots \partial {z_{n}}^{k_{n}}}}=\sum _{v=1}^{\infty }{\frac {\partial ^{k_{1}+\cdots +k_{n}}f_{v}}{\partial {z_{1}}^{k_{1}}\cdots \partial {z_{n}}^{k_{n}}}}}
==== Radius of convergence of power series ====
It is possible to define a combination of positive real numbers {\displaystyle \{r_{\nu }\ (\nu =1,\dots ,n)\}}
such that the power series
{\textstyle \sum _{k_{1},\dots ,k_{n}=0}^{\infty }c_{k_{1},\dots ,k_{n}}(z_{1}-a_{1})^{k_{1}}\cdots (z_{n}-a_{n})^{k_{n}}\ }
converges uniformly at
{\displaystyle \left\{z=(z_{1},z_{2},\dots ,z_{n})\in \mathbb {C} ^{n};|z_{\nu }-a_{\nu }|<r_{\nu },{\text{ for all }}\nu =1,\dots ,n\right\}}
and does not converge uniformly at
{\displaystyle \left\{z=(z_{1},z_{2},\dots ,z_{n})\in \mathbb {C} ^{n};|z_{\nu }-a_{\nu }|>r_{\nu },{\text{ for all }}\nu =1,\dots ,n\right\}}
.
In this way, it is possible to define a combination of radii of convergence analogous to the radius of convergence of one complex variable. This combination is generally not unique, and there are infinitely many such combinations.
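The non-uniqueness can be seen on a concrete series (a sketch, not from the article): {\textstyle \sum _{k}(z_{1}z_{2})^{k}} converges exactly when {\displaystyle |z_{1}z_{2}|<1}, so every pair of radii with {\displaystyle r_{1}r_{2}=1} works.

```python
# Sketch: for the power series sum_k (z1*z2)^k, geometric in the product
# z1*z2, the exact domain of convergence is {|z1*z2| < 1}.  Any pair of
# radii with r1*r2 = 1 is therefore an admissible "combination of radii of
# convergence", illustrating the non-uniqueness.
def partial_sum(z1, z2, n=200):
    return sum((z1 * z2) ** k for k in range(n))

converges = lambda z1, z2: abs(z1 * z2) < 1   # exact criterion, this series

results = []
for r1, r2 in [(1.0, 1.0), (2.0, 0.5)]:       # two different admissible pairs
    inside = converges(0.9 * r1, 0.9 * r2)    # strictly inside: converges
    outside = converges(1.1 * r1, 1.1 * r2)   # strictly outside: diverges
    results.append((inside, outside))
print(results)  # [(True, False), (True, False)]

# At an interior point the partial sums approach 1/(1 - z1*z2):
z = (0.9, 0.9)
print(abs(partial_sum(*z) - 1 / (1 - z[0] * z[1])))  # tiny
```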
==== Laurent series expansion ====
Let {\displaystyle \omega (z)} be holomorphic in the annulus {\displaystyle \left\{z=(z_{1},z_{2},\dots ,z_{n})\in \mathbb {C} ^{n};r_{\nu }<|z|<R_{\nu },{\text{ for all }}\nu =1,\dots ,n\right\}}
and continuous on its boundary circles; then the following expansion exists:
{\displaystyle {\begin{aligned}\omega (z)&=\sum _{k=0}^{\infty }{\frac {1}{k!}}{\frac {1}{(2\pi i)^{n}}}\int _{|\zeta _{\nu }|=R_{\nu }}\cdots \int \omega (\zeta )\times \left[{\frac {d^{k}}{dz^{k}}}{\frac {1}{\zeta -z}}\right]_{z=0}df_{\zeta }\cdot z^{k}\\[6pt]&+\sum _{k=1}^{\infty }{\frac {1}{k!}}{\frac {1}{2\pi i}}\int _{|\zeta _{\nu }|=r_{\nu }}\cdots \int \omega (\zeta )\times \left(0,\cdots ,{\sqrt {\frac {k!}{\alpha _{1}!\cdots \alpha _{n}!}}}\cdot \zeta _{n}^{\alpha _{1}-1}\cdots \zeta _{n}^{\alpha _{n}-1},\cdots 0\right)df_{\zeta }\cdot {\frac {1}{z^{k}}}\ (\alpha _{1}+\cdots +\alpha _{n}=k)\end{aligned}}}
The integral in the second term of the right-hand side is performed so as to see the zero on the left in every plane. Moreover, this series is uniformly convergent in the annulus {\displaystyle r'_{\nu }<|z|<R'_{\nu }}, where {\displaystyle r'_{\nu }>r_{\nu }} and {\displaystyle R'_{\nu }<R_{\nu }}, and so it is possible to integrate term by term.
=== Bochner–Martinelli formula (Cauchy's integral formula II) ===
The Cauchy integral formula holds only for polydiscs, and in the domain of several complex variables, polydiscs are only one of many possible domains, so we introduce the Bochner–Martinelli formula.
Suppose that f is a continuously differentiable function on the closure of a domain D in {\displaystyle \mathbb {C} ^{n}} with piecewise smooth boundary {\displaystyle \partial D}, and let the symbol {\displaystyle \land } denote the exterior or wedge product of differential forms. Then the Bochner–Martinelli formula states that if z is in the domain D then, for {\displaystyle \zeta }, z in {\displaystyle \mathbb {C} ^{n}}, the Bochner–Martinelli kernel {\displaystyle \omega (\zeta ,z)} is a differential form in {\displaystyle \zeta } of bidegree {\displaystyle (n,n-1)}, defined by
{\displaystyle \omega (\zeta ,z)={\frac {(n-1)!}{(2\pi i)^{n}}}{\frac {1}{|z-\zeta |^{2n}}}\sum _{1\leq j\leq n}({\overline {\zeta }}_{j}-{\overline {z}}_{j})\,d{\overline {\zeta }}_{1}\land d\zeta _{1}\land \cdots \land d\zeta _{j}\land \cdots \land d{\overline {\zeta }}_{n}\land d\zeta _{n}}
{\displaystyle \displaystyle f(z)=\int _{\partial D}f(\zeta )\omega (\zeta ,z)-\int _{D}{\overline {\partial }}f(\zeta )\land \omega (\zeta ,z).}
In particular if f is holomorphic the second term vanishes, so
{\displaystyle \displaystyle f(z)=\int _{\partial D}f(\zeta )\omega (\zeta ,z).}
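For {\displaystyle n=1} the kernel reduces to the Cauchy kernel, since {\displaystyle ({\overline {\zeta }}-{\overline {z}})/|\zeta -z|^{2}=1/(\zeta -z)}. The following sketch (assumed test function, not from the article) verifies the reproducing property numerically in that case:

```python
import cmath

# Sketch (n = 1 case): the Bochner-Martinelli kernel reduces to the Cauchy
# kernel, since (conj(w) - conj(z)) / |w - z|^2 = 1 / (w - z).  We verify
# f(z) = int_{|w|=1} f(w) * omega(w, z) numerically for an assumed polynomial.
def bm_integral(f, z, m=256):
    total = 0
    for a in range(m):
        w = cmath.exp(2j * cmath.pi * a / m)
        dw = 2j * cmath.pi * w / m
        kernel = (w.conjugate() - z.conjugate()) / abs(w - z) ** 2
        total += f(w) * kernel * dw / (2j * cmath.pi)
    return total

f = lambda z: z ** 3 - 2 * z + 1
z0 = 0.3 + 0.2j
err = abs(bm_integral(f, z0) - f(z0))
print(err)  # close to machine precision
```

For {\displaystyle n\geq 2} the kernel is no longer holomorphic in z, which is one reason the second (correction) term involving {\displaystyle {\overline {\partial }}f} appears in the general formula.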
=== Identity theorem ===
Holomorphic functions of several complex variables satisfy an identity theorem, as in one variable: two holomorphic functions defined on the same connected open set {\displaystyle D\subset \mathbb {C} ^{n}} which coincide on an open subset N of D are equal on the whole open set D. This result can be proven from the fact that holomorphic functions have power series expansions, and it can also be deduced from the one-variable case. Contrary to the one-variable case, it is possible that two different holomorphic functions coincide on a set which has an accumulation point; for instance, the maps
{\displaystyle f(z_{1},z_{2})=0}
and
{\displaystyle g(z_{1},z_{2})=z_{1}}
coincide on the whole complex line of
{\displaystyle \mathbb {C} ^{2}}
defined by the equation
{\displaystyle z_{1}=0}
.
The maximum principle, inverse function theorem, and implicit function theorem also hold. For a generalized version of the implicit function theorem for complex variables, see the Weierstrass preparation theorem.
=== Biholomorphism ===
From the inverse function theorem, the following mapping can be defined.
For domains U, V of the n-dimensional complex space {\displaystyle \mathbb {C} ^{n}}, if the bijective holomorphic function {\displaystyle \phi :U\to V} has an inverse mapping {\displaystyle \phi ^{-1}:V\to U} that is also holomorphic, then {\displaystyle \phi } is called a biholomorphism of U and V; we also say that U and V are biholomorphically equivalent or that they are biholomorphic.
==== The Riemann mapping theorem does not hold ====
When
{\displaystyle n>1}
, open balls and open polydiscs are not biholomorphically equivalent, that is, there is no biholomorphic mapping between the two. This was proven by Poincaré in 1907 by showing that their automorphism groups have different dimensions as Lie groups. However, even in the case of several complex variables, there are some results similar to the results of the theory of uniformization in one complex variable.
=== Analytic continuation ===
Let U, V be domains in {\displaystyle \mathbb {C} ^{n}} such that {\displaystyle f\in {\mathcal {O}}(U)} and {\displaystyle g\in {\mathcal {O}}(V)}, where {\displaystyle {\mathcal {O}}(U)} is the set (ring) of holomorphic functions on U. Assume that {\displaystyle U,\ V,\ U\cap V\neq \varnothing } and that {\displaystyle W} is a connected component of {\displaystyle U\cap V}. If {\displaystyle f|_{W}=g|_{W}}, then g is said to be the analytic continuation of f to V. From the identity theorem, if g exists, then for each way of choosing W it is unique. When {\displaystyle n\geq 2}, the following phenomenon occurs depending on the shape of the boundary {\displaystyle \partial U}: there exist domains U, V such that all holomorphic functions {\displaystyle f} over the domain U have an analytic continuation {\displaystyle g\in {\mathcal {O}}(V)}. In other words, there may not exist a function {\displaystyle f\in {\mathcal {O}}(U)} that has {\displaystyle \partial U} as its natural boundary. This is called Hartogs's phenomenon. Therefore, investigating when domain boundaries become natural boundaries has become one of the main research themes of several complex variables. In addition, when {\displaystyle n\geq 2}, the above V can intersect U in parts other than W. This contributed to the advancement of the notion of sheaf cohomology.
== Reinhardt domain ==
In polydiscs, Cauchy's integral formula holds and the power series expansion of holomorphic functions is defined, but polydiscs and open unit balls are not biholomorphic because the Riemann mapping theorem does not hold; also, separation of variables is possible in polydiscs, but it does not always hold for an arbitrary domain. Therefore, in order to study the domain of convergence of a power series, it was necessary to place an additional restriction on the domain; this was the Reinhardt domain. Early knowledge of the properties of the field of several complex variables, such as logarithmic convexity and Hartogs's extension theorem, was obtained in Reinhardt domains.
Let {\displaystyle D\subset \mathbb {C} ^{n}} ({\displaystyle n\geq 1}) be a domain, with centre at a point {\displaystyle a=(a_{1},\dots ,a_{n})\in \mathbb {C} ^{n}}, such that, together with each point {\displaystyle z^{0}=(z_{1}^{0},\dots ,z_{n}^{0})\in D}, the domain also contains the set
{\displaystyle \left\{z=(z_{1},\dots ,z_{n});\left|z_{\nu }-a_{\nu }\right|=\left|z_{\nu }^{0}-a_{\nu }\right|,\ \nu =1,\dots ,n\right\}.}
A domain D is called a Reinhardt domain if it satisfies the following condition:
Let {\displaystyle \theta _{\nu }\;(\nu =1,\dots ,n)} be arbitrary real numbers; the domain D is invariant under the rotation:
{\displaystyle \left\{z_{\nu }^{0}-a_{\nu }\right\}\to \left\{e^{i\theta _{\nu }}(z_{\nu }^{0}-a_{\nu })\right\}}
.
The Reinhardt domains which are defined by the following condition: together with every point {\displaystyle z^{0}\in D}, the domain contains the set
{\displaystyle \left\{z=(z_{1},\dots ,z_{n});z=a+\left(z^{0}-a\right)e^{i\theta },\ 0\leq \theta <2\pi \right\}.}
A Reinhardt domain D is called a complete Reinhardt domain with centre at a point a if, together with every point {\displaystyle z^{0}\in D}, it also contains the polydisc
{\displaystyle \left\{z=(z_{1},\dots ,z_{n});\left|z_{\nu }-a_{\nu }\right|\leq \left|z_{\nu }^{0}-a_{\nu }\right|,\ \nu =1,\dots ,n\right\}.}
A complete Reinhardt domain D is star-like with respect to its centre a. Therefore, a complete Reinhardt domain is simply connected; also, when the path of integration lies on the boundary of a complete Reinhardt domain, there is a way to prove Cauchy's integral theorem without using the Jordan curve theorem.
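As a concrete check (a sketch, not from the article), the unit ball in {\displaystyle \mathbb {C} ^{2}} is a complete Reinhardt domain centred at 0: with each of its points it contains every point whose coordinatewise moduli are no larger, in particular all independent rotations of the coordinates.

```python
import cmath, random

# Sketch: spot-check that the unit ball in C^2 is a complete Reinhardt domain
# centred at 0: with each point z0 it contains every z with |z_v| <= |z0_v|
# coordinatewise (in particular, all independent rotations e^{i theta_v}).
random.seed(0)
in_ball = lambda z: abs(z[0]) ** 2 + abs(z[1]) ** 2 < 1

ok = True
for _ in range(1000):
    z0 = (random.uniform(-0.7, 0.7) + 1j * random.uniform(-0.7, 0.7),
          random.uniform(-0.1, 0.1) + 1j * random.uniform(-0.1, 0.1))
    if not in_ball(z0):
        continue                     # keep only points of the open ball
    # shrink the moduli and rotate each coordinate independently
    t1, t2 = random.uniform(0, 1), random.uniform(0, 1)
    th1, th2 = random.uniform(0, 6.28), random.uniform(0, 6.28)
    z = (t1 * z0[0] * cmath.exp(1j * th1), t2 * z0[1] * cmath.exp(1j * th2))
    ok = ok and in_ball(z)
print(ok)  # True
```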
=== Logarithmically-convex ===
For a complete Reinhardt domain to be the domain of convergence of a power series, an additional condition is required, which is called logarithmic convexity.
A Reinhardt domain D is called logarithmically convex if the image
{\displaystyle \lambda (D^{*})}
of the set
{\displaystyle D^{*}=\{z=(z_{1},\dots ,z_{n})\in D;z_{1},\dots ,z_{n}\neq 0\}}
under the mapping
{\displaystyle \lambda ;z\rightarrow \lambda (z)=(\ln |z_{1}|,\dots ,\ln |z_{n}|)}
is a convex set in the real coordinate space
{\displaystyle \mathbb {R} ^{n}}
.
Every such domain in
{\displaystyle \mathbb {C} ^{n}}
is the interior of the set of points of absolute convergence of some power series in
{\textstyle \sum _{k_{1},\dots ,k_{n}=0}^{\infty }c_{k_{1},\dots ,k_{n}}(z_{1}-a_{1})^{k_{1}}\cdots (z_{n}-a_{n})^{k_{n}}\ }
, and conversely: the domain of convergence of every power series in
{\displaystyle z_{1},\dots ,z_{n}}
is a logarithmically-convex Reinhardt domain with centre
{\displaystyle a=0}
.
However, there are examples of complete Reinhardt domains D which are not logarithmically convex.
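Both directions can be illustrated numerically (a sketch with assumed examples, not from the article): the convergence domain {\displaystyle \{|z_{1}z_{2}|<1\}} of {\textstyle \sum _{k}(z_{1}z_{2})^{k}} has the convex logarithmic image {\displaystyle \{x+y<0\}}, while the complete Reinhardt domain {\displaystyle \{|z_{1}|<1,|z_{2}|<1/2\}\cup \{|z_{1}|<1/2,|z_{2}|<1\}} fails the midpoint test in logarithmic coordinates.

```python
import math, random

# Sketch: the log image of {|z1*z2| < 1} under (log|z1|, log|z2|) is the
# half-plane x + y < 0, which passes a random midpoint-convexity test; the
# log image of the union of two polydiscs below fails it.
random.seed(1)
half_plane = lambda x, y: x + y < 0          # log image of {|z1*z2| < 1}

ok = True
for _ in range(1000):
    x1, y1, x2, y2 = (random.uniform(-5, 5) for _ in range(4))
    if half_plane(x1, y1) and half_plane(x2, y2):
        ok = ok and half_plane((x1 + x2) / 2, (y1 + y2) / 2)
print(ok)  # True: that image is convex

c = math.log(0.5)
union = lambda x, y: (x < 0 and y < c) or (x < c and y < 0)  # log image of the union
a, b = (-0.1, -0.8), (-0.8, -0.1)             # both lie in the image
mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)  # (-0.45, -0.45): outside
print(union(*a), union(*b), union(*mid))  # True True False
```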
=== Some results ===
==== Hartogs's extension theorem and Hartogs's phenomenon ====
When examining the domain of convergence on the Reinhardt domain, Hartogs found Hartogs's phenomenon, in which holomorphic functions on some domains in {\displaystyle \mathbb {C} ^{n}} all extend to a larger domain.
On the polydisk consisting of two disks {\displaystyle \Delta ^{2}=\{z\in \mathbb {C} ^{2};|z_{1}|<1,|z_{2}|<1\}}
when {\displaystyle 0<\varepsilon <1}
.
Internal domain of {\displaystyle H_{\varepsilon }=\{z=(z_{1},z_{2})\in \Delta ^{2};|z_{1}|<\varepsilon \ \cup \ 1-\varepsilon <|z_{2}|\}\ (0<\varepsilon <1)}
Hartogs's extension theorem (1906): Let f be a holomorphic function on a set G \ K, where G is a bounded domain (surrounded by a rectifiable closed Jordan curve) on {\displaystyle \mathbb {C} ^{n}} (n ≥ 2) and K is a compact subset of G. If the complement G \ K is connected, then every such holomorphic function f, regardless of how it is chosen, can be extended to a unique holomorphic function on G.
It is also called the Osgood–Brown theorem. It implies that for holomorphic functions of several complex variables, a singularity is an accumulation point, not an isolated point. This means that various properties that hold for holomorphic functions of one complex variable do not hold for holomorphic functions of several complex variables. The nature of these singularities is also derived from the Weierstrass preparation theorem. A generalization of this theorem using the same method as Hartogs was proved in 2007.
From Hartogs's extension theorem, the domain of convergence extends from {\displaystyle H_{\varepsilon }} to {\displaystyle \Delta ^{2}}. Looking at this from the perspective of the Reinhardt domain, {\displaystyle H_{\varepsilon }} is a Reinhardt domain containing the center z = 0, and the domain of convergence of {\displaystyle H_{\varepsilon }} has been extended to the smallest complete Reinhardt domain {\displaystyle \Delta ^{2}} containing {\displaystyle H_{\varepsilon }}.
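The extension mechanism can be sketched numerically (assumed test function, not from the article): values of f on the ring {\displaystyle 1-\varepsilon <|z_{2}|<1} alone determine f on the whole bidisc through a one-variable Cauchy integral in the second coordinate.

```python
import cmath

# Sketch of the Hartogs extension integral: values of f on the Hartogs figure
# H_eps (here only the ring 1 - eps < |z2| < 1 is used) determine f on the
# whole bidisc via
#   F(z1, z2) = (1/(2 pi i)) * oint_{|w| = 1 - delta} f(z1, w)/(w - z2) dw.
# The assumed test function f = 1/(z1 + z2 - 3) is holomorphic on the bidisc,
# and we only ever sample it inside H_eps.
eps, delta = 0.3, 0.1
f = lambda z1, z2: 1 / (z1 + z2 - 3)

def extend(z1, z2, m=256):
    r = 1 - delta                      # circle inside the ring part of H_eps
    assert r > 1 - eps                 # so f is sampled only on H_eps
    total = 0
    for a in range(m):
        w = r * cmath.exp(2j * cmath.pi * a / m)
        total += f(z1, w) / (w - z2) * (2j * cmath.pi * w / m)
    return total / (2j * cmath.pi)

p = (0.5, 0.2)        # |z1| > eps and |z2| < 1 - eps: p is NOT in H_eps
err = abs(extend(*p) - f(*p))
print(err)  # close to machine precision
```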
==== Thullen's classic results ====
Thullen's classical result says that a 2-dimensional bounded Reinhardt domain containing the origin is biholomorphic to one of the following domains, provided that the orbit of the origin by the automorphism group has positive dimension:
{\displaystyle \{(z,w)\in \mathbb {C} ^{2};~|z|<1,~|w|<1\}}
(polydisc);
{\displaystyle \{(z,w)\in \mathbb {C} ^{2};~|z|^{2}+|w|^{2}<1\}}
(unit ball);
{\displaystyle \{(z,w)\in \mathbb {C} ^{2};~|z|^{2}+|w|^{\frac {2}{p}}<1\}\,(p>0,\neq 1)}
(Thullen domain).
==== Sunada's results ====
Toshikazu Sunada (1978) established a generalization of Thullen's result:
Two n-dimensional bounded Reinhardt domains {\displaystyle G_{1}} and {\displaystyle G_{2}} are mutually biholomorphic if and only if there exists a transformation {\displaystyle \varphi :\mathbb {C} ^{n}\to \mathbb {C} ^{n}} given by {\displaystyle z_{i}\mapsto r_{i}z_{\sigma (i)}(r_{i}>0)} ({\displaystyle \sigma } being a permutation of the indices) such that {\displaystyle \varphi (G_{1})=G_{2}}.
== Natural domain of the holomorphic function (domain of holomorphy) ==
When moving from the theory of one complex variable to that of several complex variables, depending on the range of the domain, it may not be possible to define a holomorphic function such that the boundary of the domain becomes a natural boundary. Considering the domains whose boundaries are natural boundaries (in the complex coordinate space {\displaystyle \mathbb {C} ^{n}} these are called domains of holomorphy), the first result on the domain of holomorphy was the holomorphic convexity of H. Cartan and Thullen. Levi's problem shows that the pseudoconvex domain is a domain of holomorphy (first for {\displaystyle \mathbb {C} ^{2}}, later extended to {\displaystyle \mathbb {C} ^{n}}). Kiyoshi Oka's notion of idéal de domaines indéterminés was interpreted in the theory of sheaf cohomology by H. Cartan and developed further by Serre. In sheaf cohomology, the domain of holomorphy has come to be interpreted in the theory of Stein manifolds. The notion of the domain of holomorphy is also considered in other complex manifolds, and furthermore in the complex analytic space which is its generalization.
=== Domain of holomorphy ===
When a function f is holomorphic on a domain {\displaystyle D\subset \mathbb {C} ^{n}} and cannot be extended directly to any domain beyond D, including any point of the domain boundary {\displaystyle \partial D}, the domain D is called the domain of holomorphy of f and the boundary is called the natural boundary of f. In other words, the domain of holomorphy D is the largest domain on which f is holomorphic; the holomorphic f on D cannot be extended any further. For several complex variables, i.e. a domain {\displaystyle D\subset \mathbb {C} ^{n}\ (n\geq 2)}, the boundary may fail to be a natural boundary; Hartogs's extension theorem gives an example of a domain whose boundary is not a natural boundary.
Formally, a domain D in the n-dimensional complex coordinate space {\displaystyle \mathbb {C} ^{n}} is called a domain of holomorphy if there do not exist non-empty domains {\displaystyle U\subset D} and {\displaystyle V\subset \mathbb {C} ^{n}} with {\displaystyle V\not \subset D} and {\displaystyle U\subset D\cap V} such that for every holomorphic function f on D there exists a holomorphic function g on V with {\displaystyle f=g} on U.
For the {\displaystyle n=1} case, every domain ({\displaystyle D\subset \mathbb {C} }) is a domain of holomorphy; we can find a holomorphic function that is not identically 0, but whose zeros accumulate everywhere on the boundary of the domain, which must then be a natural boundary for a domain of definition of its reciprocal.
==== Properties of the domain of holomorphy ====
If {\displaystyle D_{1},\dots ,D_{n}} are domains of holomorphy, then their intersection {\textstyle D=\bigcap _{\nu =1}^{n}D_{\nu }} is also a domain of holomorphy.
If {\displaystyle D_{1}\subseteq D_{2}\subseteq \cdots } is an increasing sequence of domains of holomorphy, then their union {\textstyle D=\bigcup _{n=1}^{\infty }D_{n}} is also a domain of holomorphy (see Behnke–Stein theorem).
If {\displaystyle D_{1}} and {\displaystyle D_{2}} are domains of holomorphy, then {\displaystyle D_{1}\times D_{2}} is a domain of holomorphy.
The first Cousin problem is always solvable in a domain of holomorphy; Cartan showed that the converse of this result is incorrect for {\displaystyle n\geq 3}. This is also true, with additional topological assumptions, for the second Cousin problem.
=== Holomorphically convex hull ===
Let {\displaystyle G\subset \mathbb {C} ^{n}} be a domain, or alternatively for a more general definition, let {\displaystyle G} be an {\displaystyle n}-dimensional complex analytic manifold. Further let {\displaystyle {\mathcal {O}}(G)} stand for the set of holomorphic functions on G. For a compact set {\displaystyle K\subset G}, the holomorphically convex hull of K is
{\displaystyle {\hat {K}}_{G}:=\left\{z\in G;|f(z)|\leq \sup _{w\in K}|f(w)|{\text{ for all }}f\in {\mathcal {O}}(G)\right\}.}
One obtains a narrower concept of polynomially convex hull by taking
{\displaystyle {\mathcal {O}}(G)}
instead to be the set of complex-valued polynomial functions on G. The polynomially convex hull contains the holomorphically convex hull.
The domain {\displaystyle G} is called holomorphically convex if for every compact subset {\displaystyle K\subset G}, {\displaystyle {\hat {K}}_{G}} is also compact in G. Sometimes this is just abbreviated as holomorph-convex.
When {\displaystyle n=1}, every domain {\displaystyle G} is holomorphically convex, since then {\displaystyle {\hat {K}}_{G}} is the union of K with the relatively compact components of {\displaystyle G\setminus K\subset G}.
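As a sketch (assumed example, not from the article): for K the unit circle in {\displaystyle G=\mathbb {C} }, the hull {\displaystyle {\hat {K}}_{G}} fills in the enclosed disk, because by the maximum principle {\displaystyle |f(z_{0})|\leq \sup _{|w|=1}|f(w)|} for every entire f and every {\displaystyle z_{0}} inside.

```python
import cmath, random

# Sketch (n = 1): spot-check, for random polynomials, that an interior point
# of the unit disk satisfies |p(z0)| <= sup_{|w|=1} |p(w)|, i.e. lies in the
# holomorphically convex hull of the unit circle.  The explicit slack bounds
# the error made by maximizing over a finite sample of the circle.
random.seed(4)
M = 512                                       # samples on the circle

def max_on_circle(p):
    return max(abs(p(cmath.exp(2j * cmath.pi * k / M))) for k in range(M))

ok = True
for _ in range(100):
    coeffs = [random.uniform(-1, 1) + 1j * random.uniform(-1, 1) for _ in range(6)]
    p = lambda z, c=coeffs: sum(ck * z ** k for k, ck in enumerate(c))
    # |p'| <= sum k*|c_k| on the circle, so the sampled max is low by at most:
    slack = sum(k * abs(ck) for k, ck in enumerate(coeffs)) * cmath.pi / M
    z0 = 0.5 * cmath.exp(2j * cmath.pi * random.random())   # interior point
    ok = ok and abs(p(z0)) <= max_on_circle(p) + slack
print(ok)  # True
```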
When {\displaystyle n\geq 1}, if a domain D satisfies the above holomorphic convexity, it has the following properties:
{\displaystyle {\text{dist}}(K,D^{c})={\text{dist}}({\hat {K}}_{D},D^{c})}
for every compact subset K in D, where
{\displaystyle {\text{dist}}(K,D^{c})}
denotes the distance between K and
{\displaystyle D^{c}=\mathbb {C} ^{n}\setminus D}
. Also, at this time, D is a domain of holomorphy. Therefore, every convex domain
{\displaystyle (D\subset \mathbb {C} ^{n})}
is a domain of holomorphy.
=== Pseudoconvexity ===
Hartogs showed that
Hartogs (1906): Let D be a Hartogs domain on {\displaystyle \mathbb {C} } and R be a positive function on D such that the set {\displaystyle \Omega } in {\displaystyle \mathbb {C} ^{2}} defined by {\displaystyle z_{1}\in D} and {\displaystyle |z_{2}|<R(z_{1})} is a domain of holomorphy. Then {\displaystyle -\log {R}(z_{1})} is a subharmonic function on D.
If such a relation holds in the domain of holomorphy of several complex variables, it looks like a more manageable condition than holomorphic convexity. The subharmonic function looks like a kind of convex function, so it was named by Levi as a pseudoconvex domain (Hartogs's pseudoconvexity). Pseudoconvex domains (boundaries of pseudoconvexity) are important, as they allow for classification of domains of holomorphy. A domain of holomorphy is a global property; by contrast, pseudoconvexity is a local analytic or local geometric property of the boundary of a domain.
==== Definition of plurisubharmonic function ====
A function
{\displaystyle f\colon D\to {\mathbb {R} }\cup \{-\infty \},}
with domain
{\displaystyle D\subset {\mathbb {C} }^{n}}
is called plurisubharmonic if it is upper semi-continuous, and for every complex line
{\displaystyle \{a+bz;z\in \mathbb {C} \}\subset \mathbb {C} ^{n}}
with
{\displaystyle a,b\in \mathbb {C} ^{n}}
the function
{\displaystyle z\mapsto f(a+bz)}
is a subharmonic function on the set
{\displaystyle \{z\in \mathbb {C} ;a+bz\in D\}.}
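The defining sub-mean-value property along complex lines can be spot-checked numerically (a sketch with an assumed example, not from the article) for the plurisubharmonic function {\displaystyle u(z)=|z_{1}|^{2}+|z_{2}|^{2}}:

```python
import cmath, random

# Sketch: check the sub-mean-value property of plurisubharmonicity for the
# assumed example u(z) = |z1|^2 + |z2|^2 along random complex lines a + b*w:
# the value at the centre never exceeds the average over a circle |w| = r.
random.seed(2)
u = lambda z: abs(z[0]) ** 2 + abs(z[1]) ** 2

def circle_mean(a, b, r=0.5, m=128):
    vals = []
    for k in range(m):
        w = r * cmath.exp(2j * cmath.pi * k / m)
        vals.append(u((a[0] + b[0] * w, a[1] + b[1] * w)))
    return sum(vals) / m

ok = True
for _ in range(200):
    rnd = lambda: random.uniform(-1, 1) + 1j * random.uniform(-1, 1)
    a, b = (rnd(), rnd()), (rnd(), rnd())
    ok = ok and u(a) <= circle_mean(a, b) + 1e-12
print(ok)  # True
```

For this particular u, the circle average equals {\displaystyle u(a)+(|b_{1}|^{2}+|b_{2}|^{2})r^{2}}, so the inequality is strict whenever {\displaystyle b\neq 0}.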
In full generality, the notion can be defined on an arbitrary complex manifold or even a complex analytic space {\displaystyle X} as follows. An upper semi-continuous function {\displaystyle f\colon X\to \mathbb {R} \cup \{-\infty \}} is said to be plurisubharmonic if and only if for any holomorphic map {\displaystyle \varphi \colon \Delta \to X} the function {\displaystyle f\circ \varphi \colon \Delta \to \mathbb {R} \cup \{-\infty \}} is subharmonic, where {\displaystyle \Delta \subset \mathbb {C} } denotes the unit disk.
In one complex variable, a necessary and sufficient condition that a real-valued function {\displaystyle u=u(z)}, twice differentiable with respect to the complex variable z, be subharmonic is {\displaystyle \Delta u=4\left({\frac {\partial ^{2}u}{\partial z\,\partial {\overline {z}}}}\right)\geq 0}
. Therefore, if {\displaystyle u} is of class {\displaystyle {\mathcal {C}}^{2}}, then {\displaystyle u} is plurisubharmonic if and only if the hermitian matrix {\displaystyle H_{u}=(\lambda _{ij}),\lambda _{ij}={\frac {\partial ^{2}u}{\partial z_{i}\,\partial {\bar {z}}_{j}}}} is positive semidefinite.
Equivalently, a {\displaystyle {\mathcal {C}}^{2}}-function u is plurisubharmonic if and only if {\displaystyle {\sqrt {-1}}\partial {\bar {\partial }}u} is a positive (1,1)-form.
===== Strictly plurisubharmonic function =====
When the hermitian matrix of u is positive-definite and u is of class {\displaystyle {\mathcal {C}}^{2}}, we call u a strictly plurisubharmonic function.
==== (Weakly) pseudoconvex (p-pseudoconvex) ====
Weak pseudoconvexity is defined as follows: let {\displaystyle X\subset {\mathbb {C} }^{n}} be a domain. One says that X is pseudoconvex if there exists a continuous plurisubharmonic function {\displaystyle \varphi } on X such that the set {\displaystyle \{z\in X;\varphi (z)<x\}} is a relatively compact subset of X for all real numbers x; i.e., there exists a smooth plurisubharmonic exhaustion function {\displaystyle \psi \in {\text{Psh}}(X)\cap {\mathcal {C}}^{\infty }(X)}. Often, the definition of pseudoconvexity is stated in this form: let X be a complex n-dimensional manifold; then X is said to be weakly pseudoconvex if there exists a smooth plurisubharmonic exhaustion function {\displaystyle \psi \in {\text{Psh}}(X)\cap {\mathcal {C}}^{\infty }(X)}.
==== Strongly (Strictly) pseudoconvex ====
Let X be a complex n-dimensional manifold. X is called strongly (or strictly) pseudoconvex if there exists a smooth strictly plurisubharmonic exhaustion function {\displaystyle \psi \in {\text{Psh}}(X)\cap {\mathcal {C}}^{\infty }(X)}, i.e., {\displaystyle H\psi } is positive definite at every point. Every strongly pseudoconvex domain is a pseudoconvex domain. Strongly pseudoconvex and strictly pseudoconvex (i.e. 1-convex and 1-complete) are often used interchangeably; see Lempert for the technical difference.
==== Levi form ====
===== (Weakly) Levi(–Krzoska) pseudoconvexity =====
If
C
2
{\displaystyle {\mathcal {C}}^{2}}
boundary , it can be shown that D has a defining function; i.e., that there exists
ρ
:
C
n
→
R
{\displaystyle \rho :\mathbb {C} ^{n}\to \mathbb {R} }
which is
C
2
{\displaystyle {\mathcal {C}}^{2}}
so that
D
=
{
ρ
<
0
}
{\displaystyle D=\{\rho <0\}}
, and
∂
D
=
{
ρ
=
0
}
{\displaystyle \partial D=\{\rho =0\}}
. Now, D is pseudoconvex iff for every
p
∈
∂
D
{\displaystyle p\in \partial D}
and
w
{\displaystyle w}
in the complex tangent space at p, that is,
∇
ρ
(
p
)
w
=
∑
j
=
1
n
∂
ρ
(
p
)
∂
z
j
w
j
=
0
{\displaystyle \nabla \rho (p)w=\sum _{j=1}^{n}{\frac {\partial \rho (p)}{\partial z_{j}}}w_{j}=0}
, we have
H
(
ρ
)
=
∑
i
,
j
=
1
n
∂
2
ρ
(
p
)
∂
z
i
∂
z
j
¯
w
i
w
j
¯
≥
0.
{\displaystyle H(\rho )=\sum _{i,j=1}^{n}{\frac {\partial ^{2}\rho (p)}{\partial z_{i}\,\partial {\bar {z_{j}}}}}w_{i}{\bar {w_{j}}}\geq 0.}
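To make the Levi form concrete, the sketch below (a numerical illustration with ad-hoc helper names, not a standard library API) approximates the complex Hessian H(ρ) by finite differences for the defining function ρ(z) = |z1|² + |z2|² − 1 of the unit ball in C², whose Levi form is the identity matrix and hence positive semi-definite on complex tangent vectors:

```python
def rho(x):
    # defining function of the unit ball in C^2: rho = |z1|^2 + |z2|^2 - 1,
    # with z_k = x[2k] + i*x[2k+1]
    return x[0]**2 + x[1]**2 + x[2]**2 + x[3]**2 - 1.0

def d2(f, x, i, j, h=1e-4):
    # central finite difference for the mixed second partial d^2 f / dx_i dx_j
    def shift(x, k, s):
        y = list(x); y[k] += s; return y
    return (f(shift(shift(x, i, h), j, h)) - f(shift(shift(x, i, h), j, -h))
            - f(shift(shift(x, i, -h), j, h)) + f(shift(shift(x, i, -h), j, -h))) / (4 * h * h)

def levi_form(f, x, n=2):
    # complex Hessian H_ij = d^2 f / (dz_i dzbar_j) via Wirtinger derivatives:
    # d^2/dz_i dzbar_j = (1/4)[(dx_i dx_j + dy_i dy_j) + i(dx_i dy_j - dy_i dx_j)]
    H = [[0j] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xi, yi, xj, yj = 2 * i, 2 * i + 1, 2 * j, 2 * j + 1
            H[i][j] = 0.25 * ((d2(f, x, xi, xj) + d2(f, x, yi, yj))
                              + 1j * (d2(f, x, xi, yj) - d2(f, x, yi, xj)))
    return H

p = [1.0, 0.0, 0.0, 0.0]   # boundary point z = (1, 0)
H = levi_form(rho, p)       # should be (numerically) the identity matrix
w = (0j, 1 + 0j)            # complex tangent vector at p
q = sum(H[i][j] * w[i] * w[j].conjugate() for i in range(2) for j in range(2)).real
```

At the boundary point p = (1, 0) the complex tangent space is spanned by w = (0, 1), and the quadratic form q = Σ H_ij w_i w̄_j evaluates to |w|² ≥ 0, as the pseudoconvexity criterion requires.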
If D does not have a
C
2
{\displaystyle {\mathcal {C}}^{2}}
boundary, the following approximation result can be useful.
Proposition 1. If D is pseudoconvex, then there exist bounded, strongly Levi pseudoconvex domains
D
k
⊂
D
{\displaystyle D_{k}\subset D}
with class
C
∞
{\displaystyle {\mathcal {C}}^{\infty }}
-boundary which are relatively compact in D, such that
D
=
⋃
k
=
1
∞
D
k
.
{\displaystyle D=\bigcup _{k=1}^{\infty }D_{k}.}
This is because once we have a
φ
{\displaystyle \varphi }
as in the definition we can actually find a
C
∞
{\displaystyle {\mathcal {C}}^{\infty }}
exhaustion function.
===== Strongly (or Strictly) Levi (–Krzoska) pseudoconvex (a.k.a. Strongly (Strictly) pseudoconvex) =====
When the Levi (–Krzoska) form is positive-definite, it is called strongly Levi (–Krzoska) pseudoconvex or often called simply strongly (or strictly) pseudoconvex.
==== Levi total pseudoconvex ====
If for every boundary point
ρ
{\displaystyle \rho }
of D, there exists an analytic variety
B
{\displaystyle {\mathcal {B}}}
passing through
ρ
{\displaystyle \rho }
which lies entirely outside D in some neighborhood around
ρ
{\displaystyle \rho }
, except the point
ρ
{\displaystyle \rho }
itself, then D is called Levi total pseudoconvex.
==== Oka pseudoconvex ====
===== Family of Oka's disk =====
Let the n functions
φ
:
z
j
=
φ
j
(
u
,
t
)
{\displaystyle \varphi :z_{j}=\varphi _{j}(u,t)}
be continuous on
Δ
:
|
u
|
≤
1
,
0
≤
t
≤
1
{\displaystyle \Delta :|u|\leq 1,\ 0\leq t\leq 1}
, holomorphic in
|
u
|
<
1
{\displaystyle |u|<1}
when the parameter t is fixed in [0, 1], and assume that
∂
φ
j
∂
u
{\displaystyle {\frac {\partial \varphi _{j}}{\partial u}}}
are not all zero at any point on
Δ
{\displaystyle \Delta }
. Then the set
Q
(
t
)
:=
{
Z
j
=
φ
j
(
u
,
t
)
;
|
u
|
≤
1
}
{\displaystyle Q(t):=\{Z_{j}=\varphi _{j}(u,t);|u|\leq 1\}}
is called an analytic disc depending on a parameter t, and
B
(
t
)
:=
{
Z
j
=
φ
j
(
u
,
t
)
;
|
u
|
=
1
}
{\displaystyle B(t):=\{Z_{j}=\varphi _{j}(u,t);|u|=1\}}
is called its shell. If
Q
(
t
)
⊂
D
(
0
<
t
)
{\displaystyle Q(t)\subset D\ (0<t)}
and
B
(
0
)
⊂
D
{\displaystyle B(0)\subset D}
, Q(t) is called a family of Oka's disks.
===== Definition =====
When
Q
(
0
)
⊂
D
{\displaystyle Q(0)\subset D}
holds for every family of Oka's disks, D is called Oka pseudoconvex. In his solution of the Levi problem, Oka proved that an unramified Riemann domain over
C
n
{\displaystyle \mathbb {C} ^{n}}
is a domain of holomorphy (holomorphically convex) if and only if each of its boundary points is Oka pseudoconvex.
==== Locally pseudoconvex (a.k.a. locally Stein, Cartan pseudoconvex, local Levi property) ====
For every point
x
∈
∂
D
{\displaystyle x\in \partial D}
there exist a neighbourhood U of x and a function f holomorphic on
U
∩
D
{\displaystyle U\cap D}
(i.e., U ∩ D is holomorphically convex) such that f cannot be extended to any neighbourhood of x. Equivalently, let
ψ
:
X
→
Y
{\displaystyle \psi :X\to Y}
be a holomorphic map, if every point
y
∈
Y
{\displaystyle y\in Y}
has a neighborhood U such that
ψ
−
1
(
U
)
{\displaystyle \psi ^{-1}(U)}
admits a
C
∞
{\displaystyle {\mathcal {C}}^{\infty }}
-plurisubharmonic exhaustion function (weakly 1-complete); in this situation, X is said to be locally pseudoconvex (or locally Stein) over Y. An older name is Cartan pseudoconvex. In
C
n
{\displaystyle \mathbb {C} ^{n}}
every locally pseudoconvex domain is itself pseudoconvex and hence a domain of holomorphy. For example, Diederich–Fornæss found locally pseudoconvex bounded domains
Ω
{\displaystyle \Omega }
with smooth boundary on non-Kähler manifolds such that
Ω
{\displaystyle \Omega }
is not weakly 1-complete.
=== Conditions equivalent to domain of holomorphy ===
For a domain
D
⊂
C
n
{\displaystyle D\subset \mathbb {C} ^{n}}
the following conditions are equivalent:
D is a domain of holomorphy.
D is holomorphically convex.
D is the union of an increasing sequence of analytic polyhedrons in D.
D is pseudoconvex.
D is locally pseudoconvex.
The implications
1
⇔
2
⇔
3
{\displaystyle 1\Leftrightarrow 2\Leftrightarrow 3}
,
1
⇒
4
{\displaystyle 1\Rightarrow 4}
, and
4
⇒
5
{\displaystyle 4\Rightarrow 5}
are standard results. Proving
5
⇒
1
{\displaystyle 5\Rightarrow 1}
, i.e. constructing a global holomorphic function which admits no extension out of locally defined non-extendable functions, is much harder. This is called the Levi problem (after E. E. Levi). It was solved for unramified Riemann domains over
C
n
{\displaystyle \mathbb {C} ^{n}}
by Kiyoshi Oka (for ramified Riemann domains, however, pseudoconvexity does not characterize holomorphic convexity), and later by Lars Hörmander using methods from functional analysis and partial differential equations (a consequence of the
∂
¯
{\displaystyle {\bar {\partial }}}
-equation solved with L² methods).
== Sheaves ==
The introduction of sheaves into several complex variables allowed the reformulation of and solution to several important problems in the field.
=== Idéal de domaines indéterminés (The predecessor of the notion of the coherent (sheaf)) ===
Oka introduced the notion which he termed "idéal de domaines indéterminés" or "ideal of indeterminate domains". Specifically, it is a set
(
I
)
{\displaystyle (I)}
of pairs
(
f
,
δ
)
{\displaystyle (f,\delta )}
,
f
{\displaystyle f}
holomorphic on a non-empty open set
δ
{\displaystyle \delta }
, such that
If
(
f
,
δ
)
∈
(
I
)
{\displaystyle (f,\delta )\in (I)}
and
(
a
,
δ
′
)
{\displaystyle (a,\delta ')}
is arbitrary, then
(
a
f
,
δ
∩
δ
′
)
∈
(
I
)
{\displaystyle (af,\delta \cap \delta ')\in (I)}
.
For each
(
f
,
δ
)
,
(
f
′
,
δ
′
)
∈
(
I
)
{\displaystyle (f,\delta ),(f',\delta ')\in (I)}
, then
(
f
+
f
′
,
δ
∩
δ
′
)
∈
(
I
)
.
{\displaystyle (f+f',\delta \cap \delta ')\in (I).}
The name "indeterminate domains" comes from the fact that the domains change depending on the pair
(
f
,
δ
)
{\displaystyle (f,\delta )}
. Cartan translated this notion into that of a coherent sheaf (specifically, a coherent analytic sheaf) in sheaf cohomology; the name is due to H. Cartan. Serre (1955) then introduced the notion of the coherent sheaf into algebraic geometry, that is, the notion of the coherent algebraic sheaf. The notion of coherence (coherent sheaf cohomology) helped solve the problems in several complex variables.
=== Coherent sheaf ===
==== Definition ====
The definition of the coherent sheaf is as follows.
: 83–89
A quasi-coherent sheaf on a ringed space
(
X
,
O
X
)
{\displaystyle (X,{\mathcal {O}}_{X})}
is a sheaf
F
{\displaystyle {\mathcal {F}}}
of
O
X
{\displaystyle {\mathcal {O}}_{X}}
-modules which has a local presentation, that is, every point in
X
{\displaystyle X}
has an open neighborhood
U
{\displaystyle U}
in which there is an exact sequence
O
X
⊕
I
|
U
→
O
X
⊕
J
|
U
→
F
|
U
→
0
{\displaystyle {\mathcal {O}}_{X}^{\oplus I}|_{U}\to {\mathcal {O}}_{X}^{\oplus J}|_{U}\to {\mathcal {F}}|_{U}\to 0}
for some (possibly infinite) sets
I
{\displaystyle I}
and
J
{\displaystyle J}
.
A coherent sheaf on a ringed space
(
X
,
O
X
)
{\displaystyle (X,{\mathcal {O}}_{X})}
is a sheaf
F
{\displaystyle {\mathcal {F}}}
satisfying the following two properties:
F
{\displaystyle {\mathcal {F}}}
is of finite type over
O
X
{\displaystyle {\mathcal {O}}_{X}}
, that is, every point in
X
{\displaystyle X}
has an open neighborhood
U
{\displaystyle U}
in
X
{\displaystyle X}
such that there is a surjective morphism
O
X
⊕
n
|
U
→
F
|
U
{\displaystyle {\mathcal {O}}_{X}^{\oplus n}|_{U}\to {\mathcal {F}}|_{U}}
for some natural number
n
{\displaystyle n}
;
for each open set
U
⊆
X
{\displaystyle U\subseteq X}
, integer
n
>
0
{\displaystyle n>0}
, and arbitrary morphism
φ
:
O
X
⊕
n
|
U
→
F
|
U
{\displaystyle \varphi :{\mathcal {O}}_{X}^{\oplus n}|_{U}\to {\mathcal {F}}|_{U}}
of
O
X
{\displaystyle {\mathcal {O}}_{X}}
-modules, the kernel of
φ
{\displaystyle \varphi }
is of finite type.
Morphisms between (quasi-)coherent sheaves are the same as morphisms of sheaves of
O
X
{\displaystyle {\mathcal {O}}_{X}}
-modules.
Also, Jean-Pierre Serre (1955) proves that
If in an exact sequence
0
→
F
1
|
U
→
F
2
|
U
→
F
3
|
U
→
0
{\displaystyle 0\to {\mathcal {F}}_{1}|_{U}\to {\mathcal {F}}_{2}|_{U}\to {\mathcal {F}}_{3}|_{U}\to 0}
of sheaves of
O
{\displaystyle {\mathcal {O}}}
-modules two of the three sheaves
F
j
{\displaystyle {\mathcal {F}}_{j}}
are coherent, then the third is coherent as well.
==== (Oka–Cartan) coherent theorem ====
The Oka–Cartan coherence theorem says that each sheaf meeting one of the following conditions is coherent:
the sheaf
O
:=
O
C
n
{\displaystyle {\mathcal {O}}:={\mathcal {O}}_{\mathbb {C} ^{n}}}
of germs of holomorphic functions on
C
n
{\displaystyle \mathbb {C} ^{n}}
, or the structure sheaf
O
X
{\displaystyle {\mathcal {O}}_{X}}
of a complex submanifold, or of any complex analytic space
(
X
,
O
X
)
{\displaystyle (X,{\mathcal {O}}_{X})}
the ideal sheaf
I
⟨
A
⟩
{\displaystyle {\mathcal {I}}\langle A\rangle }
of an analytic subset A of an open subset of
C
n
{\displaystyle \mathbb {C} ^{n}}
. (Cartan 1950)
the normalization of the structure sheaf of a complex analytic space
By the above theorem of Serre (1955),
O
p
{\displaystyle {\mathcal {O}}^{p}}
is a coherent sheaf; also, (i) is used to prove Cartan's theorems A and B.
=== Cousin problem ===
In the case of one-variable complex functions, Mittag-Leffler's theorem allows one to construct a global meromorphic function from given principal parts (Cousin I problem), and the Weierstrass factorization theorem allows one to construct a global meromorphic function from given zeros or zero-locus (Cousin II problem). However, these theorems do not carry over directly to several complex variables, because the singularities of an analytic function of several complex variables are not isolated points; these problems are called the Cousin problems and are formulated in terms of sheaf cohomology. They were first introduced in special cases by Pierre Cousin in 1895. It was Oka who gave the conditions for solving the first Cousin problem for a domain of holomorphy on the complex coordinate space, and who also solved the second Cousin problem under additional topological assumptions. The Cousin problems concern the analytic properties of complex manifolds, yet the only obstructions to their solution are purely topological; Serre called this the Oka principle. They are now posed, and solved, for an arbitrary complex manifold M, in terms of conditions on M; a manifold satisfying these conditions is one way to define a Stein manifold. The study of the Cousin problems showed that, in several complex variables, global properties can be studied by patching together local data; this insight developed into the theory of sheaf cohomology (e.g. the Cartan seminar).
==== First Cousin problem ====
Without the language of sheaves, the problem can be formulated as follows. On a complex manifold M, one is given several meromorphic functions
f
i
{\displaystyle f_{i}}
along with domains
U
i
{\displaystyle U_{i}}
where they are defined, and where each difference
f
i
−
f
j
{\displaystyle f_{i}-f_{j}}
is holomorphic (wherever the difference is defined). The first Cousin problem then asks for a meromorphic function
f
{\displaystyle f}
on M such that
f
−
f
i
{\displaystyle f-f_{i}}
is holomorphic on
U
i
{\displaystyle U_{i}}
; in other words, that
f
{\displaystyle f}
shares the singular behaviour of the given local functions.
Now, let K be the sheaf of meromorphic functions and O the sheaf of holomorphic functions on M. The first Cousin problem can always be solved if the following map is surjective:
H
0
(
M
,
K
)
→
ϕ
H
0
(
M
,
K
/
O
)
.
{\displaystyle H^{0}(M,\mathbf {K} ){\xrightarrow {\phi }}H^{0}(M,\mathbf {K} /\mathbf {O} ).}
By the long exact cohomology sequence,
H
0
(
M
,
K
)
→
ϕ
H
0
(
M
,
K
/
O
)
→
H
1
(
M
,
O
)
{\displaystyle H^{0}(M,\mathbf {K} ){\xrightarrow {\phi }}H^{0}(M,\mathbf {K} /\mathbf {O} )\to H^{1}(M,\mathbf {O} )}
is exact, and so the first Cousin problem is always solvable provided that the first cohomology group H1(M,O) vanishes. In particular, by Cartan's theorem B, the Cousin problem is always solvable if M is a Stein manifold.
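In one complex variable this reduces to Mittag-Leffler's situation: with finitely many prescribed principal parts, simply summing them already produces a global solution. A toy numerical check (illustrative helper names only):

```python
# principal parts 1/(z - a) prescribed at the poles a in `poles`
poles = [0.0, 1.0, -1.0]

def f(z):
    # global meromorphic function assembled by summing the local principal parts
    return sum(1.0 / (z - a) for a in poles)

def corrected(z, a):
    # f minus its principal part at a: should stay bounded (holomorphic) near a
    return f(z) - 1.0 / (z - a)

# approach the pole at 0; the corrected function converges to a finite value
vals = [corrected(10.0**-k, 0.0) for k in range(3, 9)]
spread = max(vals) - min(vals)
```

Near each pole, subtracting the prescribed principal part leaves a bounded remainder, which is exactly the condition the first Cousin problem asks for.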
==== Second Cousin problem ====
The second Cousin problem starts with a similar set-up to the first, specifying instead that each ratio
f
i
/
f
j
{\displaystyle f_{i}/f_{j}}
is a non-vanishing holomorphic function (where it is defined). It asks for a meromorphic function
f
{\displaystyle f}
on M such that
f
/
f
i
{\displaystyle f/f_{i}}
is holomorphic and non-vanishing.
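In one variable with finitely many prescribed zeros, the second Cousin problem is solved by a finite Weierstrass product; a toy sketch (illustrative names):

```python
# prescribed zero set (each with multiplicity 1) for the one-variable Cousin II problem
zeros = [0.5, -0.25, 1j]

def f(z):
    # finite Weierstrass product: a global holomorphic function with exactly these zeros
    p = 1.0 + 0j
    for a in zeros:
        p *= (z - a)
    return p

# f vanishes at each prescribed point and is non-zero away from them
vals_at_zeros = [abs(f(a)) for a in zeros]
val_elsewhere = abs(f(2.0))
```

In several variables the analogous product construction is unavailable, which is why the topological obstruction in H²(M, Z) appears.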
Let
O
∗
{\displaystyle \mathbf {O} ^{*}}
be the sheaf of holomorphic functions that vanish nowhere, and
K
∗
{\displaystyle \mathbf {K} ^{*}}
the sheaf of meromorphic functions that are not identically zero. These are both then sheaves of abelian groups, and the quotient sheaf
K
∗
/
O
∗
{\displaystyle \mathbf {K} ^{*}/\mathbf {O} ^{*}}
is well-defined. If the following map
ϕ
{\displaystyle \phi }
is surjective, then the second Cousin problem can be solved:
H
0
(
M
,
K
∗
)
→
ϕ
H
0
(
M
,
K
∗
/
O
∗
)
.
{\displaystyle H^{0}(M,\mathbf {K} ^{*}){\xrightarrow {\phi }}H^{0}(M,\mathbf {K} ^{*}/\mathbf {O} ^{*}).}
The long exact sheaf cohomology sequence associated to the quotient is
H
0
(
M
,
K
∗
)
→
ϕ
H
0
(
M
,
K
∗
/
O
∗
)
→
H
1
(
M
,
O
∗
)
{\displaystyle H^{0}(M,\mathbf {K} ^{*}){\xrightarrow {\phi }}H^{0}(M,\mathbf {K} ^{*}/\mathbf {O} ^{*})\to H^{1}(M,\mathbf {O} ^{*})}
so the second Cousin problem is solvable in all cases provided that
H
1
(
M
,
O
∗
)
=
0.
{\displaystyle H^{1}(M,\mathbf {O} ^{*})=0.}
The cohomology group
H
1
(
M
,
O
∗
)
{\displaystyle H^{1}(M,\mathbf {O} ^{*})}
for the multiplicative structure on
O
∗
{\displaystyle \mathbf {O} ^{*}}
can be compared with the cohomology group
H
1
(
M
,
O
)
{\displaystyle H^{1}(M,\mathbf {O} )}
with its additive structure by taking a logarithm. That is, there is an exact sequence of sheaves
0
→
2
π
i
Z
→
O
→
exp
O
∗
→
0
{\displaystyle 0\to 2\pi i\mathbb {Z} \to \mathbf {O} \xrightarrow {\exp } \mathbf {O} ^{*}\to 0}
where the leftmost sheaf is the locally constant sheaf with fiber
2
π
i
Z
{\displaystyle 2\pi i\mathbb {Z} }
. The obstruction to defining a logarithm at the level of H1 is in
H
2
(
M
,
Z
)
{\displaystyle H^{2}(M,\mathbb {Z} )}
, from the long exact cohomology sequence
H
1
(
M
,
O
)
→
H
1
(
M
,
O
∗
)
→
2
π
i
H
2
(
M
,
Z
)
→
H
2
(
M
,
O
)
.
{\displaystyle H^{1}(M,\mathbf {O} )\to H^{1}(M,\mathbf {O} ^{*})\to 2\pi iH^{2}(M,\mathbb {Z} )\to H^{2}(M,\mathbf {O} ).}
When M is a Stein manifold, the middle arrow is an isomorphism because
H
q
(
M
,
O
)
=
0
{\displaystyle H^{q}(M,\mathbf {O} )=0}
for
q
>
0
{\displaystyle q>0}
so that a necessary and sufficient condition in that case for the second Cousin problem to be always solvable is that
H
2
(
M
,
Z
)
=
0.
{\displaystyle H^{2}(M,\mathbb {Z} )=0.}
(This condition is called the Oka principle.)
== Manifolds and analytic varieties with several complex variables ==
=== Stein manifold (non-compact Kähler manifold) ===
Since a non-compact (open) Riemann surface always has a non-constant single-valued holomorphic function, and satisfies the second axiom of countability, the open Riemann surface is in fact a 1-dimensional complex manifold possessing a holomorphic mapping into the complex plane
C
{\displaystyle \mathbb {C} }
. (In fact, Gunning and Narasimhan have shown (1967) that every non-compact Riemann surface actually has a holomorphic immersion into the complex plane. In other words, there is a holomorphic mapping into the complex plane whose derivative never vanishes.) The Whitney embedding theorem tells us that every smooth n-dimensional manifold can be embedded as a smooth submanifold of
R
2
n
{\displaystyle \mathbb {R} ^{2n}}
, whereas it is "rare" for a complex manifold to have a holomorphic embedding into
C
n
{\displaystyle \mathbb {C} ^{n}}
. For example, for an arbitrary compact connected complex manifold X, every holomorphic function on it is constant by Liouville's theorem, and so it cannot have any embedding into complex n-space. That is, for several complex variables, arbitrary complex manifolds do not always have holomorphic functions that are not constants. So, consider the conditions under which a complex manifold has a holomorphic function that is not a constant. Now if we had a holomorphic embedding of X into
C
n
{\displaystyle \mathbb {C} ^{n}}
, then the coordinate functions of
C
n
{\displaystyle \mathbb {C} ^{n}}
would restrict to nonconstant holomorphic functions on X, contradicting compactness, except in the case that X is just a point. Complex manifolds that can be holomorphically embedded into
C
n
{\displaystyle \mathbb {C} ^{n}}
are called Stein manifolds. Also Stein manifolds satisfy the second axiom of countability.
A Stein manifold is a complex submanifold of the vector space of n complex dimensions. They were introduced by and named after Karl Stein (1951). A Stein space is similar to a Stein manifold but is allowed to have singularities. Stein spaces are the analogues of affine varieties or affine schemes in algebraic geometry. If a univalent domain on
C
n
{\displaystyle \mathbb {C} ^{n}}
is regarded as a complex manifold and satisfies the separation condition described later, then it is a Stein manifold precisely when it is holomorphically convex. Stein manifolds thus capture the properties of the domain of definition of the (maximal) analytic continuation of an analytic function.
==== Definition ====
Suppose X is a paracompact complex manifold of complex dimension
n
{\displaystyle n}
and let
O
(
X
)
{\displaystyle {\mathcal {O}}(X)}
denote the ring of holomorphic functions on X. We call X a Stein manifold if the following conditions hold:
X is holomorphically convex, i.e. for every compact subset
K
⊂
X
{\displaystyle K\subset X}
, the so-called holomorphically convex hull,
K
¯
=
{
z
∈
X
;
|
f
(
z
)
|
≤
sup
w
∈
K
|
f
(
w
)
|
,
∀
f
∈
O
(
X
)
}
,
{\displaystyle {\bar {K}}=\left\{z\in X;|f(z)|\leq \sup _{w\in K}|f(w)|,\ \forall f\in {\mathcal {O}}(X)\right\},}
is also a compact subset of X.
X is holomorphically separable, i.e. if
x
≠
y
{\displaystyle x\neq y}
are two points in X, then there exists
f
∈
O
(
X
)
{\displaystyle f\in {\mathcal {O}}(X)}
such that
f
(
x
)
≠
f
(
y
)
.
{\displaystyle f(x)\neq f(y).}
Every point of X has an open neighborhood on which finitely many functions from
O
(
X
)
{\displaystyle {\mathcal {O}}(X)}
form holomorphic local coordinates.
Note that condition (3) can be derived from conditions (1) and (2).
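The holomorphic convexity condition (1) is already nontrivial in one variable: by the maximum modulus principle, the holomorphically convex hull of the unit circle K contains the entire closed disk, since |f| on the disk is bounded by its supremum over K for every holomorphic f. A numerical sketch of this one-variable fact (illustrative only, not the several-variable construction):

```python
import cmath
import random

def f(z):
    # an arbitrary holomorphic (entire) test function
    return cmath.exp(z) + 3 * z**2 - 1j * z

# K = unit circle, sampled densely; sup of |f| over K
boundary = [cmath.exp(2j * cmath.pi * k / 2000) for k in range(2000)]
sup_K = max(abs(f(z)) for z in boundary)

# at interior points of the disk, |f| never exceeds the sup over K
random.seed(1)
inside_ok = True
for _ in range(500):
    r, t = random.random() * 0.95, random.random() * 2 * cmath.pi
    z = r * cmath.exp(1j * t)
    if abs(f(z)) > sup_K + 1e-9:
        inside_ok = False
```

Hence every disk point lies in the hull of K, so the hull of the (compact) circle is the (compact) closed disk, consistent with holomorphic convexity of C.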
==== Every non-compact (open) Riemann surface is a Stein manifold ====
Let X be a connected, non-compact (open) Riemann surface. A deep theorem of Behnke and Stein (1948) asserts that X is a Stein manifold.
Another result, attributed to Hans Grauert and Helmut Röhrl (1956), states moreover that every holomorphic vector bundle on X is trivial. In particular, every line bundle is trivial, so
H
1
(
X
,
O
X
∗
)
=
0
{\displaystyle H^{1}(X,{\mathcal {O}}_{X}^{*})=0}
. The exponential sheaf sequence leads to the following exact sequence:
H
1
(
X
,
O
X
)
⟶
H
1
(
X
,
O
X
∗
)
⟶
H
2
(
X
,
Z
)
⟶
H
2
(
X
,
O
X
)
{\displaystyle H^{1}(X,{\mathcal {O}}_{X})\longrightarrow H^{1}(X,{\mathcal {O}}_{X}^{*})\longrightarrow H^{2}(X,\mathbb {Z} )\longrightarrow H^{2}(X,{\mathcal {O}}_{X})}
Now Cartan's theorem B shows that
H
1
(
X
,
O
X
)
=
H
2
(
X
,
O
X
)
=
0
{\displaystyle H^{1}(X,{\mathcal {O}}_{X})=H^{2}(X,{\mathcal {O}}_{X})=0}
, therefore
H
2
(
X
,
Z
)
=
0
{\displaystyle H^{2}(X,\mathbb {Z} )=0}
.
This is related to the solution of the second (multiplicative) Cousin problem.
==== Levi problems ====
Cartan extended Levi's problem to Stein manifolds.
If a relatively compact open subset
D
⊂
X
{\displaystyle D\subset X}
of a Stein manifold X is locally pseudoconvex, then D is itself a Stein manifold; that is, D is a Stein manifold if and only if it is locally Stein.
This was proved by Bremermann by embedding it in a sufficiently high dimensional
C
n
{\displaystyle \mathbb {C} ^{n}}
, and reducing it to the result of Oka.
Grauert also proved the following for an arbitrary complex manifold M.
If a relatively compact subset
D
⊂
M
{\displaystyle D\subset M}
of an arbitrary complex manifold M is strongly pseudoconvex in M, then D is holomorphically convex; moreover, D is itself a Stein manifold.
Narasimhan extended Levi's problem to complex analytic spaces, the generalization of complex manifolds that allows singularities.
A complex analytic space which admits a continuous strictly plurisubharmonic exhaustion function (i.e., is strongly pseudoconvex) is a Stein space.
Levi's problem remains unresolved in the following cases:
Suppose that X is a singular Stein space,
D
⊂⊂
X
{\displaystyle D\subset \subset X}
. Suppose that for all
p
∈
∂
D
{\displaystyle p\in \partial D}
there is an open neighborhood
U
(
p
)
{\displaystyle U(p)}
so that
U
∩
D
{\displaystyle U\cap D}
is Stein space. Is D itself Stein?
More generally:
Suppose that N is a Stein space, f is injective, and also
f
:
M
→
N
{\displaystyle f:M\to N}
is an unbranched Riemann domain, such that f is a locally pseudoconvex map (i.e. a Stein morphism). Is M itself Stein?: 109
and also,
Suppose that X is a Stein space and
D
=
⋃
n
∈
N
D
n
{\displaystyle D=\bigcup _{n\in \mathbb {N} }D_{n}}
an increasing union of Stein open sets. Is D itself Stein?
This means that the Behnke–Stein theorem, which holds for Stein manifolds, has not yet been established for Stein spaces.
===== K-complete =====
Grauert introduced the concept of K-complete in the proof of Levi's problem.
Let X be a complex manifold; X is K-complete if, for each point
x
0
∈
X
{\displaystyle x_{0}\in X}
there exist finitely many holomorphic maps
f
1
,
…
,
f
k
{\displaystyle f_{1},\dots ,f_{k}}
of X into
C
p
{\displaystyle \mathbb {C} ^{p}}
,
p
=
p
(
x
0
)
{\displaystyle p=p(x_{0})}
, such that
x
0
{\displaystyle x_{0}}
is an isolated point of the set
A
=
{
x
∈
X
;
f
v
(
x
)
=
f
v
(
x
0
)
(
v
=
1
,
…
,
k
)
}
{\displaystyle A=\{x\in X;\ f_{v}(x)=f_{v}(x_{0})\ (v=1,\dots ,k)\}}
. This concept also applies to complex analytic space.
==== Properties and examples of Stein manifolds ====
The standard complex space
C
n
{\displaystyle \mathbb {C} ^{n}}
is a Stein manifold.
Every domain of holomorphy in
C
n
{\displaystyle \mathbb {C} ^{n}}
is a Stein manifold.
It can be shown quite easily that every closed complex submanifold of a Stein manifold is a Stein manifold, too.
The embedding theorem for Stein manifolds states the following: Every Stein manifold X of complex dimension n can be embedded into
C
2
n
+
1
{\displaystyle \mathbb {C} ^{2n+1}}
by a biholomorphic proper map.
These facts imply that a Stein manifold is a closed complex submanifold of complex space, whose complex structure is that of the ambient space (because the embedding is biholomorphic).
Every Stein manifold of (complex) dimension n has the homotopy type of an n-dimensional CW-Complex.
In one complex dimension the Stein condition can be simplified: a connected Riemann surface is a Stein manifold if and only if it is not compact. This can be proved using a version of the Runge theorem for Riemann surfaces, due to Behnke and Stein.
Every Stein manifold X is holomorphically spreadable, i.e. for every point
x
∈
X
{\displaystyle x\in X}
, there are n holomorphic functions defined on all of X which form a local coordinate system when restricted to some open neighborhood of x.
The first Cousin problem can always be solved on a Stein manifold.
Being a Stein manifold is equivalent to being a (complex) strongly pseudoconvex manifold. The latter means that it has a strongly pseudoconvex (or plurisubharmonic) exhaustion function, i.e. a smooth real function
ψ
{\displaystyle \psi }
on X (which can be assumed to be a Morse function) with
i
∂
∂
¯
ψ
>
0
{\displaystyle i\partial {\bar {\partial }}\psi >0}
, such that the subsets
{
z
∈
X
∣
ψ
(
z
)
≤
c
}
{\displaystyle \{z\in X\mid \psi (z)\leq c\}}
are compact in X for every real number c. This is a solution to the so-called Levi problem, named after E. E. Levi (1911). The function
ψ
{\displaystyle \psi }
invites a generalization of Stein manifold to the idea of a corresponding class of compact complex manifolds with boundary called Stein domain. A Stein domain is the preimage
{
z
∣
−
∞
≤
ψ
(
z
)
≤
c
}
{\displaystyle \{z\mid -\infty \leq \psi (z)\leq c\}}
. Some authors call such manifolds therefore strictly pseudoconvex manifolds.
Related to the previous item, another equivalent and more topological definition in complex dimension 2 is the following: a Stein surface is a complex surface X with a real-valued Morse function f on X such that, away from the critical points of f, the field of complex tangencies to the preimage
X
c
=
f
−
1
(
c
)
{\displaystyle X_{c}=f^{-1}(c)}
is a contact structure that induces an orientation on Xc agreeing with the usual orientation as the boundary of
f
−
1
(
−
∞
,
c
)
.
{\displaystyle f^{-1}(-\infty ,c).}
That is,
f
−
1
(
−
∞
,
c
)
{\displaystyle f^{-1}(-\infty ,c)}
is a Stein filling of Xc.
Numerous further characterizations of such manifolds exist, in particular capturing the property of their having "many" holomorphic functions taking values in the complex numbers. See for example Cartan's theorems A and B, relating to sheaf cohomology.
In the GAGA set of analogies, Stein manifolds correspond to affine varieties.
Stein manifolds are in some sense dual to the elliptic manifolds in complex analysis which admit "many" holomorphic functions from the complex numbers into themselves. It is known that a Stein manifold is elliptic if and only if it is fibrant in the sense of so-called "holomorphic homotopy theory".
=== Complex projective varieties (compact complex manifold) ===
Meromorphic functions of one complex variable were studied on compact (closed) Riemann surfaces, since the Riemann–Roch theorem (Riemann's inequality) holds for compact Riemann surfaces (therefore the theory of compact Riemann surfaces can be regarded as the theory of (smooth (non-singular) projective) algebraic curves over
C
{\displaystyle \mathbb {C} }
). In fact, a compact Riemann surface has a non-constant single-valued meromorphic function, and indeed enough meromorphic functions. A basic example of a compact one-dimensional complex manifold is the Riemann sphere
C
^
≅
C
P
1
{\displaystyle {\widehat {\mathbb {C} }}\cong \mathbb {CP} ^{1}}
. However, a compact Riemann surface is always algebraizable (the Riemann existence theorem, Kodaira embedding theorem), whereas it is not easy to verify which compact complex analytic spaces are algebraizable. In fact, Hopf found a class of compact complex manifolds without nonconstant meromorphic functions. On the other hand, a result of Siegel gives necessary conditions for a compact complex manifold to be algebraic. The generalization of the Riemann–Roch theorem to several complex variables was first extended to compact analytic surfaces by Kodaira, who then extended the theorem to three-dimensional and n-dimensional Kähler varieties. Serre formulated the Riemann–Roch theorem as a problem about the dimensions of coherent sheaf cohomology groups, and he also proved Serre duality. Cartan and Serre proved the following property: the cohomology groups of a coherent sheaf on a compact complex manifold M are finite-dimensional. Riemann–Roch on a Riemann surface for a vector bundle was proved by Weil in 1938.
Hirzebruch generalized the theorem to compact complex manifolds in 1954, and Grothendieck generalized it to a relative version (relative statements about morphisms). Next came the generalization to higher dimensions of the result that compact Riemann surfaces are projective; in particular, one considers the conditions under which a compact complex submanifold X embeds into the complex projective space
C
P
n
{\displaystyle \mathbb {CP} ^{n}}
. The vanishing theorem (first introduced by Kodaira in 1953) gives conditions under which sheaf cohomology groups vanish; the condition is a kind of positivity. As an application, the Kodaira embedding theorem says that for a compact Kähler manifold M with a Hodge metric there is a complex-analytic embedding of M into complex projective space of sufficiently high dimension N. In addition, Chow's theorem shows that every complex analytic subspace (subvariety) of complex projective space is algebraic, that is, the common zero set of some homogeneous polynomials; this relationship is one example of what is called Serre's GAGA principle. A complex analytic subspace (variety) of complex projective space thus has both algebraic and analytic properties. Combined with Kodaira's result, a compact Kähler manifold with a Hodge metric embeds as an algebraic variety. This gives an example of a complex manifold with enough meromorphic functions. Broadly, the GAGA principle says that the geometry of projective complex analytic spaces (or manifolds) is equivalent to the geometry of projective complex varieties. The combination of analytic and algebraic methods for complex projective varieties leads to areas such as Hodge theory. Also, the deformation theory of compact complex manifolds has developed as Kodaira–Spencer theory. However, there are compact complex manifolds that cannot be embedded in projective space and are not algebraic. An analogue of the Levi problem on the complex projective space
C
P
n
{\displaystyle \mathbb {CP} ^{n}}
was studied by Takeuchi.
== See also ==
Bicomplex number
Complex geometry
CR manifold
Dolbeault cohomology
Harmonic maps
Harmonic morphisms
Infinite-dimensional holomorphy
Oka–Weil theorem
== Annotation ==
== References ==
=== Inline citations ===
=== Textbooks ===
=== Encyclopedia of Mathematics ===
== Further reading ==
== External links ==
Tasty Bits of Several Complex Variables open source book by Jiří Lebl
Complex Analytic and Differential Geometry (OpenContent book See B2)
Demailly, Jean-Pierre (2012). "Henri Cartan et les fonctions holomorphes de plusieurs variables" (PDF). In Harinck, Pascale; Plagne, Alain; Sabbah, Claude (eds.). Henri Cartan et André Weil. Mathématiciens du XXesiècle. Journées mathématiques X-UPS, Palaiseau, France, May 3–4, 2012. Palaiseau: Les Éditions de l'École Polytechnique. pp. 99–168. ISBN 978-2-7302-1610-4.
Victor Guillemin. 18.117 Topics in Several Complex Variables. Spring 2005. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA.
This article incorporates material from the following PlanetMath articles, which are licensed under the Creative Commons Attribution/Share-Alike License: Reinhardt domain, Holomorphically convex, Domain of holomorphy, polydisc, biholomorphically equivalent, Levi pseudoconvex, Pseudoconvex, exhaustion function. | Wikipedia/Function_of_several_complex_variables |
In the mathematical field of representation theory, a representation of a Lie superalgebra is an action of a Lie superalgebra L on a Z2-graded vector space V, such that if A and B are any two pure elements of L and X and Y are any two pure elements of V, then
(
c
1
A
+
c
2
B
)
⋅
X
=
c
1
A
⋅
X
+
c
2
B
⋅
X
{\displaystyle (c_{1}A+c_{2}B)\cdot X=c_{1}A\cdot X+c_{2}B\cdot X}
{\displaystyle A\cdot (c_{1}X+c_{2}Y)=c_{1}A\cdot X+c_{2}A\cdot Y}
{\displaystyle (-1)^{A\cdot X}=(-1)^{A}(-1)^{X}}
{\displaystyle [A,B]\cdot X=A\cdot (B\cdot X)-(-1)^{AB}B\cdot (A\cdot X).}
Equivalently, a representation of L is a Z2-graded representation of the universal enveloping algebra of L which respects the third equation above.
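The bracket condition above can be checked numerically in a small case. The following sketch (an illustration, not from the article) uses the defining representation of gl(1|1) on C^{1|1}, where the action is ordinary matrix-vector multiplication and the super-bracket of pure matrices is AB − (−1)^{|A||B|}BA, so the compatibility holds by construction:

```python
# Verify [A,B]·X = A·(B·X) - (-1)^{|A||B|} B·(A·X) for the defining
# representation of gl(1|1), where "·" is matrix-vector multiplication.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, x):
    return [sum(A[i][k] * x[k] for k in range(2)) for i in range(2)]

def bracket(A, B, pA, pB):
    """Super-bracket of pure elements with parities pA, pB in {0, 1}."""
    s = (-1) ** (pA * pB)
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - s * BA[i][j] for j in range(2)] for i in range(2)]

# In gl(1|1), even elements are block-diagonal, odd ones block-off-diagonal.
E = [[2, 0], [0, 3]]   # even (parity 0)
F = [[0, 5], [7, 0]]   # odd  (parity 1)
X = [1, 4]             # a test vector in C^{1|1}

# Even-odd pair: the sign is +1, so the bracket is the ordinary commutator.
assert matvec(bracket(E, F, 0, 1), X) == \
    [a - b for a, b in zip(matvec(E, matvec(F, X)), matvec(F, matvec(E, X)))]

# Odd-odd pair: the sign is -1, so the bracket is an anticommutator.
assert matvec(bracket(F, F, 1, 1), X) == \
    [a + b for a, b in zip(matvec(F, matvec(F, X)), matvec(F, matvec(F, X)))]
```

The point of the check is the sign: for two odd elements the bracket that acts correctly is the anticommutator, not the commutator.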
== Unitary representation of a star Lie superalgebra ==
A *-Lie superalgebra is a complex Lie superalgebra equipped with an involutive antilinear map * such that * respects the grading and
[a,b]*=[b*,a*].
A unitary representation of such a Lie algebra is a Z2 graded Hilbert space which is a representation of a Lie superalgebra as above together with the requirement that self-adjoint elements of the Lie superalgebra are represented by Hermitian transformations.
This is a major concept in the study of supersymmetry, together with the representation of a Lie superalgebra on an algebra. Suppose A is a *-algebra on which the Lie superalgebra acts (with the additional requirement that * respects the grading and L[a]* = -(-1)^{La} L*[a*]), and suppose H is the unitary representation above and is also a unitary representation of A.
These three reps are all compatible if for pure elements a in A, |ψ> in H and L in the Lie superalgebra,
L[a|ψ>] = (L[a])|ψ> + (-1)^{La} a(L[|ψ>]).
Sometimes, the Lie superalgebra is embedded within A in the sense that there is a homomorphism from the universal enveloping algebra of the Lie superalgebra to A. In that case, the equation above reduces to
L[a] = La - (-1)^{La} aL.
This approach avoids working directly with a Lie supergroup, and hence avoids the use of auxiliary Grassmann numbers.
== See also ==
Graded vector space
Lie algebra representation
Representation theory of Hopf algebras
In mathematics, and more specifically in graph theory, a directed graph (or digraph) is a graph that is made up of a set of vertices connected by directed edges, often called arcs.
== Definition ==
In formal terms, a directed graph is an ordered pair G = (V, A) where
V is a set whose elements are called vertices, nodes, or points;
A is a set of ordered pairs of vertices, called arcs, directed edges (sometimes simply edges with the corresponding set named E instead of A), arrows, or directed lines.
It differs from an ordinary or undirected graph, in that the latter is defined in terms of unordered pairs of vertices, which are usually called edges, links or lines.
The aforementioned definition does not allow a directed graph to have multiple arrows with the same source and target nodes, but some authors consider a broader definition that allows directed graphs to have such multiple arcs (namely, they allow the arc set to be a multiset). Sometimes these entities are called directed multigraphs (or multidigraphs).
On the other hand, the aforementioned definition allows a directed graph to have loops (that is, arcs that directly connect nodes with themselves), but some authors consider a narrower definition that does not allow directed graphs to have loops.
Directed graphs without loops may be called simple directed graphs, while directed graphs with loops may be called loop-digraphs (see section Types of directed graph).
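The formal definition translates almost verbatim into code. A minimal sketch (names illustrative), using a Python set of ordered pairs so that parallel arcs are excluded automatically:

```python
# A directed graph G = (V, A) as a pair of sets (illustrative sketch).
# Storing arcs as a set of ordered pairs rules out parallel arcs by
# itself; a multiset (e.g. collections.Counter) would give a multidigraph.
V = {1, 2, 3, 4}
A = {(1, 2), (2, 3), (3, 1), (4, 4)}   # (4, 4) is a loop

# Every arc must join vertices of the graph.
assert all(u in V and v in V for (u, v) in A)

def has_loop(arcs):
    return any(u == v for (u, v) in arcs)

def is_simple(arcs):
    # A simple directed graph has no loops (and, stored as a set of
    # pairs, cannot have parallel arcs in the first place).
    return not has_loop(arcs)

assert has_loop(A) and not is_simple(A)
assert is_simple(A - {(4, 4)})
```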
== Types of directed graphs ==
=== Subclasses ===
Symmetric directed graphs are directed graphs where all edges appear twice, one in each direction (that is, for every arrow that belongs to the digraph, the corresponding inverse arrow also belongs to it). (Such an edge is sometimes called "bidirected" and such graphs are sometimes called "bidirected", but this conflicts with the meaning for bidirected graphs.)
Simple directed graphs are directed graphs that have no loops (arrows that directly connect vertices to themselves) and no multiple arrows with same source and target nodes. As already introduced, in case of multiple arrows the entity is usually addressed as directed multigraph. Some authors describe digraphs with loops as loop-digraphs.
Complete directed graphs are simple directed graphs where each pair of vertices is joined by a symmetric pair of directed arcs (it is equivalent to an undirected complete graph with the edges replaced by pairs of inverse arcs). It follows that a complete digraph is symmetric.
Semicomplete multipartite digraphs are simple digraphs in which the vertex set is partitioned into sets such that for every pair of vertices x and y in different sets, there is an arc between x and y. There can be one arc between x and y or two arcs in opposite directions.
Semicomplete digraphs are simple digraphs where there is an arc between each pair of vertices. Every semicomplete digraph is a semicomplete multipartite digraph in a trivial way, with each vertex constituting a set of the partition.
Quasi-transitive digraphs are simple digraphs where for every triple x, y, z of distinct vertices with arcs from x to y and from y to z, there is an arc between x and z. There can be just one arc between x and z or two arcs in opposite directions. A semicomplete digraph is a quasi-transitive digraph. There are extensions of quasi-transitive digraphs called k-quasi-transitive digraphs.
Oriented graphs are directed graphs having no opposite pairs of directed edges (i.e. at most one of (x, y) and (y, x) may be arrows of the graph). It follows that a directed graph is an oriented graph if and only if it has no 2-cycle. Such a graph can be obtained by applying an orientation to an undirected graph.
Tournaments are oriented graphs obtained by choosing a direction for each edge in undirected complete graphs. A tournament is a semicomplete digraph.
A directed graph is acyclic if it has no directed cycles. The usual name for such a digraph is directed acyclic graph (DAG).
Multitrees are DAGs in which there are no two distinct directed paths from the same starting vertex to the same ending vertex.
Oriented trees or polytrees are DAGs formed by orienting the edges of trees (connected, acyclic undirected graphs).
Rooted trees are oriented trees in which all edges of the underlying undirected tree are directed either away from or towards the root (they are called, respectively, arborescences or out-trees, and in-trees).
=== Digraphs with supplementary properties ===
Weighted directed graphs (also known as directed networks) are (simple) directed graphs with weights assigned to their arrows, similarly to weighted graphs (which are also known as undirected networks or weighted networks).
Flow networks are weighted directed graphs where two nodes are distinguished, a source and a sink.
Rooted directed graphs (also known as flow graphs) are digraphs in which a vertex has been distinguished as the root.
Control-flow graphs are rooted digraphs used in computer science as a representation of the paths that might be traversed through a program during its execution.
Signal-flow graphs are directed graphs in which nodes represent system variables and branches (edges, arcs, or arrows) represent functional connections between pairs of nodes.
Flow graphs are digraphs associated with a set of linear algebraic or differential equations.
State diagrams are directed multigraphs that represent finite-state machines.
Commutative diagrams are digraphs used in category theory, where the vertices represent (mathematical) objects and the arrows represent morphisms, with the property that all directed paths with the same start and endpoints lead to the same result by composition.
In the theory of Lie groups, a quiver Q is a directed graph serving as the domain of, and thus characterizing the shape of, a representation V defined as a functor, specifically an object of the functor category FinVctKF(Q) where F(Q) is the free category on Q consisting of paths in Q and FinVctK is the category of finite-dimensional vector spaces over a field K. Representations of a quiver label its vertices with vector spaces and its edges (and hence paths) compatibly with linear transformations between them, and transform via natural transformations.
== Basic terminology ==
An arc (x, y) is considered to be directed from x to y; y is called the head and x is called the tail of the arc; y is said to be a direct successor of x and x is said to be a direct predecessor of y. If a path leads from x to y, then y is said to be a successor of x and reachable from x, and x is said to be a predecessor of y. The arc (y, x) is called the reversed arc of (x, y).
The adjacency matrix of a multidigraph with loops is the integer-valued matrix with rows and columns corresponding to the vertices, where a nondiagonal entry aij is the number of arcs from vertex i to vertex j, and the diagonal entry aii is the number of loops at vertex i. The adjacency matrix of a directed graph is a logical matrix, and is unique up to permutation of rows and columns.
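As an illustrative sketch, the adjacency matrix of a small multidigraph can be built directly from its arc list; for a directed graph without parallel arcs every entry is 0 or 1, giving a logical matrix:

```python
# Build the adjacency matrix of a multidigraph from its arc list
# (illustrative; vertices are indexed 0..n-1).
def adjacency_matrix(n, arcs):
    M = [[0] * n for _ in range(n)]
    for (i, j) in arcs:
        M[i][j] += 1   # a_ij counts arcs from i to j; a_ii counts loops
    return M

arcs = [(0, 1), (0, 1), (1, 2), (2, 2)]   # parallel arcs 0->1, loop at 2
M = adjacency_matrix(3, arcs)
assert M == [[0, 2, 0],
             [0, 0, 1],
             [0, 0, 1]]
```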
Another matrix representation for a directed graph is its incidence matrix.
See direction for more definitions.
== Indegree and outdegree ==
For a vertex, the number of head ends adjacent to a vertex is called the indegree of the vertex and the number of tail ends adjacent to a vertex is its outdegree (called branching factor in trees).
Let G = (V, E) and v ∈ V. The indegree of v is denoted deg−(v) and its outdegree is denoted deg+(v).
A vertex with deg−(v) = 0 is called a source, as it is the origin of each of its outgoing arcs. Similarly, a vertex with deg+(v) = 0 is called a sink, since it is the end of each of its incoming arcs.
The degree sum formula states that, for a directed graph,
{\displaystyle \sum _{v\in V}\deg ^{-}(v)=\sum _{v\in V}\deg ^{+}(v)=|E|.}
If for every vertex v ∈ V, deg+(v) = deg−(v), the graph is called a balanced directed graph.
== Degree sequence ==
The degree sequence of a directed graph is the list of its indegree and outdegree pairs; for the above example we have degree sequence ((2, 0), (2, 2), (0, 2), (1, 1)). The degree sequence is a directed graph invariant so isomorphic directed graphs have the same degree sequence. However, the degree sequence does not, in general, uniquely identify a directed graph; in some cases, non-isomorphic digraphs have the same degree sequence.
The directed graph realization problem is the problem of finding a directed graph with the degree sequence a given sequence of positive integer pairs. (Trailing pairs of zeros may be ignored since they are trivially realized by adding an appropriate number of isolated vertices to the directed graph.) A sequence which is the degree sequence of some directed graph, i.e. for which the directed graph realization problem has a solution, is called a directed graphic or directed graphical sequence. This problem can either be solved by the Kleitman–Wang algorithm or by the Fulkerson–Chen–Anstee theorem.
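As an illustrative sketch, a digraph realizing the degree sequence ((2, 0), (2, 2), (0, 2), (1, 1)) quoted above can be constructed and checked directly (the graph below is one possible realization; it also verifies the degree sum formula):

```python
# Illustrative: a digraph realizing the degree sequence
# ((2, 0), (2, 2), (0, 2), (1, 1)).
V = [1, 2, 3, 4]
E = [(3, 1), (3, 2), (2, 1), (2, 4), (4, 2)]

def indeg(v):
    return sum(1 for (_, y) in E if y == v)

def outdeg(v):
    return sum(1 for (x, _) in E if x == v)

degree_sequence = [(indeg(v), outdeg(v)) for v in V]
assert degree_sequence == [(2, 0), (2, 2), (0, 2), (1, 1)]

# Degree sum formula: sum of indegrees = sum of outdegrees = |E|.
assert sum(indeg(v) for v in V) == sum(outdeg(v) for v in V) == len(E)

# Vertex 3 is a source (indegree 0); vertex 1 is a sink (outdegree 0).
assert indeg(3) == 0 and outdeg(1) == 0
```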
== Directed graph connectivity ==
A directed graph is weakly connected (or just connected) if the undirected underlying graph obtained by replacing all directed edges of the graph with undirected edges is a connected graph.
A directed graph is strongly connected or strong if it contains a directed path from x to y (and from y to x) for every pair of vertices (x, y). The strong components are the maximal strongly connected subgraphs.
A connected rooted graph (or flow graph) is one where there exists a directed path to every vertex from a distinguished root vertex.
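A minimal reachability-based sketch (illustrative, not efficient; Tarjan's or Kosaraju's algorithm is the standard linear-time approach) distinguishing weak from strong connectivity:

```python
# Illustrative reachability-based connectivity tests for a digraph on
# vertices 0..n-1.
def reachable(arcs, s):
    """Vertices reachable from s along directed paths."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for (x, y) in arcs:
            if x == u and y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def strongly_connected(n, arcs):
    return all(reachable(arcs, v) == set(range(n)) for v in range(n))

def weakly_connected(n, arcs):
    # Replace every arc by edges in both directions, then test connectivity.
    sym = set(arcs) | {(y, x) for (x, y) in arcs}
    return reachable(sym, 0) == set(range(n))

cycle = {(0, 1), (1, 2), (2, 0)}   # directed 3-cycle: strongly connected
path  = {(0, 1), (1, 2)}           # directed path: weakly but not strongly
assert strongly_connected(3, cycle)
assert weakly_connected(3, path) and not strongly_connected(3, path)
```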
== See also ==
== Notes ==
== References ==
Bang-Jensen, Jørgen; Gutin, Gregory (2000), Digraphs: Theory, Algorithms and Applications, Springer, ISBN 1-85233-268-9 (the corrected 1st edition of 2007 is now freely available on the authors' site; the 2nd edition appeared in 2009, ISBN 1-84800-997-6).
Bang-Jensen, Jørgen; Gutin, Gregory (2018), Classes of Directed Graphs, Springer, ISBN 978-3319718408.
Bondy, John Adrian; Murty, U. S. R. (1976), Graph Theory with Applications, North-Holland, ISBN 0-444-19451-7.
Diestel, Reinhard (2005), Graph Theory (3rd ed.), Springer, ISBN 3-540-26182-6 (the electronic 3rd edition is freely available on author's site).
Harary, Frank; Norman, Robert Z.; Cartwright, Dorwin (1965), Structural Models: An Introduction to the Theory of Directed Graphs, New York: Wiley.
Number of directed graphs (or digraphs) with n nodes from the On-Line Encyclopedia of Integer Sequences
== External links ==
In mathematics, an element (or member) of a set is any one of the distinct objects that belong to that set. For example, given a set called A containing the first four positive integers (
{\displaystyle A=\{1,2,3,4\}}
), one could say that "3 is an element of A", expressed notationally as
{\displaystyle 3\in A}.
== Sets ==
Writing
{\displaystyle A=\{1,2,3,4\}}
means that the elements of the set A are the numbers 1, 2, 3 and 4. Sets of elements of A, for example
{\displaystyle \{1,2\}}
, are subsets of A.
Sets can themselves be elements. For example, consider the set
{\displaystyle B=\{1,2,\{3,4\}\}}
. The elements of B are not 1, 2, 3, and 4. Rather, there are only three elements of B, namely the numbers 1 and 2, and the set
{\displaystyle \{3,4\}}
.
The elements of a set can be anything. For example, the elements of the set
{\displaystyle C=\{\mathrm {\color {Red}red} ,\mathrm {12} ,B\}}
are the color red, the number 12, and the set B.
In logical terms,
{\displaystyle (x\in y)\leftrightarrow \forall x[P_{x}=y]:x\in {\mathfrak {D}}y}.
== Notation and terminology ==
The binary relation "is an element of", also called set membership, is denoted by the symbol "∈". Writing
{\displaystyle x\in A}
means that "x is an element of A". Equivalent expressions are "x is a member of A", "x belongs to A", "x is in A" and "x lies in A". The expressions "A includes x" and "A contains x" are also used to mean set membership, although some authors use them to mean instead "x is a subset of A". Logician George Boolos strongly urged that "contains" be used for membership only, and "includes" for the subset relation only.
For the relation ∈ , the converse relation ∈T may be written
{\displaystyle A\ni x}
meaning "A contains or includes x".
The negation of set membership is denoted by the symbol "∉". Writing
{\displaystyle x\notin A}
means that "x is not an element of A".
The symbol ∈ was first used by Giuseppe Peano, in his 1889 work Arithmetices principia, nova methodo exposita. Here he wrote on page X:
Signum ∈ significat est. Ita a ∈ b legitur a est quoddam b; …
which means
The symbol ∈ means is. So a ∈ b is read as a is a certain b; …
The symbol itself is a stylized lowercase Greek letter epsilon ("ϵ"), the first letter of the word ἐστί, which means "is".
== Examples ==
Using the sets defined above, namely A = {1, 2, 3, 4}, B = {1, 2, {3, 4}} and C = {red, 12, B}, the following statements are true:
2 ∈ A
5 ∉ A
{3, 4} ∈ B
3 ∉ B
4 ∉ B
yellow ∉ C
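These statements translate directly into Python's membership operator `in`; the nested set must be written as a `frozenset` so that it is hashable, which is an implementation detail of Python rather than part of the mathematics:

```python
# The example statements, written with Python's membership operator.
A = {1, 2, 3, 4}
inner = frozenset({3, 4})    # the set {3, 4} as a single element
B = {1, 2, inner}

assert 2 in A
assert 5 not in A
assert inner in B            # {3, 4} ∈ B
assert 3 not in B and 4 not in B
assert len(B) == 3           # B has exactly three elements
```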
== Cardinality of sets ==
The number of elements in a particular set is a property known as cardinality; informally, this is the size of a set. In the above examples, the cardinality of the set A is 4, while the cardinality of set B and set C are both 3. An infinite set is a set with an infinite number of elements, while a finite set is a set with a finite number of elements. The above examples are examples of finite sets. An example of an infinite set is the set of positive integers {1, 2, 3, 4, ...}.
== Formal relation ==
As a relation, set membership must have a domain and a range. Conventionally the domain is called the universe denoted U. The range is the set of subsets of U called the power set of U and denoted P(U). Thus the relation
{\displaystyle \in }
is a subset of U × P(U). The converse relation
{\displaystyle \ni }
is a subset of P(U) × U.
== See also ==
Identity element
Singleton (mathematics)
== References ==
== Further reading ==
Halmos, Paul R. (1974) [1960], Naive Set Theory, Undergraduate Texts in Mathematics (Hardcover ed.), NY: Springer-Verlag, ISBN 0-387-90092-6 - "Naive" means that it is not fully axiomatized, not that it is silly or easy (Halmos's treatment is neither).
Jech, Thomas (2002), "Set Theory", Stanford Encyclopedia of Philosophy, Metaphysics Research Lab, Stanford University
Suppes, Patrick (1972) [1960], Axiomatic Set Theory, NY: Dover Publications, Inc., ISBN 0-486-61630-4 - Both the notion of set (a collection of members), membership or element-hood, the axiom of extension, the axiom of separation, and the union axiom (Suppes calls it the sum axiom) are needed for a more thorough understanding of "set element".
In mathematics, more specifically in group theory, the character of a group representation is a function on the group that associates to each group element the trace of the corresponding matrix. The character carries the essential information about the representation in a more condensed form. Georg Frobenius initially developed representation theory of finite groups entirely based on the characters, and without any explicit matrix realization of representations themselves. This is possible because a complex representation of a finite group is determined (up to isomorphism) by its character. The situation with representations over a field of positive characteristic, so-called "modular representations", is more delicate, but Richard Brauer developed a powerful theory of characters in this case as well. Many deep theorems on the structure of finite groups use characters of modular representations.
== Applications ==
Characters of irreducible representations encode many important properties of a group and can thus be used to study its structure. Character theory is an essential tool in the classification of finite simple groups. Close to half of the proof of the Feit–Thompson theorem involves intricate calculations with character values. Easier, but still essential, results that use character theory include Burnside's theorem (a purely group-theoretic proof of Burnside's theorem has since been found, but that proof came over half a century after Burnside's original proof), and a theorem of Richard Brauer and Michio Suzuki stating that a finite simple group cannot have a generalized quaternion group as its Sylow 2-subgroup.
== Definitions ==
Let V be a finite-dimensional vector space over a field F and let ρ : G → GL(V) be a representation of a group G on V. The character of ρ is the function χρ : G → F given by
{\displaystyle \chi _{\rho }(g)=\operatorname {Tr} (\rho (g))}
where Tr is the trace.
A character χρ is called irreducible or simple if ρ is an irreducible representation. The degree of the character χ is the dimension of ρ; in characteristic zero this is equal to the value χ(1). A character of degree 1 is called linear. When G is finite and F has characteristic zero, the kernel of the character χρ is the normal subgroup:
{\displaystyle \ker \chi _{\rho }:=\left\lbrace g\in G\mid \chi _{\rho }(g)=\chi _{\rho }(1)\right\rbrace ,}
which is precisely the kernel of the representation ρ. However, the character is not a group homomorphism in general.
== Properties ==
Characters are class functions, that is, they each take a constant value on a given conjugacy class. More precisely, the set of irreducible characters of a given group G into a field F form a basis of the F-vector space of all class functions G → F.
Isomorphic representations have the same characters. Over a field of characteristic 0, two representations are isomorphic if and only if they have the same character.
If a representation is the direct sum of subrepresentations, then the corresponding character is the sum of the characters of those subrepresentations.
If a character of the finite group G is restricted to a subgroup H, then the result is also a character of H.
Every character value χ(g) is a sum of n m-th roots of unity, where n is the degree (that is, the dimension of the associated vector space) of the representation with character χ and m is the order of g. In particular, when F = C, every such character value is an algebraic integer.
If F = C and χ is irreducible, then
{\displaystyle [G:C_{G}(x)]{\frac {\chi (x)}{\chi (1)}}}
is an algebraic integer for all x in G.
If F is algebraically closed and char(F) does not divide the order of G, then the number of irreducible characters of G is equal to the number of conjugacy classes of G. Furthermore, in this case, the degrees of the irreducible characters are divisors of the order of G (and they even divide [G : Z(G)] if F = C).
=== Arithmetic properties ===
Let ρ and σ be representations of G. Then the following identities hold:
{\displaystyle \chi _{\rho \oplus \sigma }=\chi _{\rho }+\chi _{\sigma }}
{\displaystyle \chi _{\rho \otimes \sigma }=\chi _{\rho }\cdot \chi _{\sigma }}
{\displaystyle \chi _{\rho ^{*}}={\overline {\chi _{\rho }}}}
{\displaystyle \chi _{{\scriptscriptstyle {\rm {{Alt}^{2}}}}\rho }(g)={\tfrac {1}{2}}\!\left[\left(\chi _{\rho }(g)\right)^{2}-\chi _{\rho }(g^{2})\right]}
{\displaystyle \chi _{{\scriptscriptstyle {\rm {{Sym}^{2}}}}\rho }(g)={\tfrac {1}{2}}\!\left[\left(\chi _{\rho }(g)\right)^{2}+\chi _{\rho }(g^{2})\right]}
where ρ⊕σ is the direct sum, ρ⊗σ is the tensor product, ρ∗ denotes the conjugate transpose of ρ, and Alt2 is the alternating product Alt2ρ = ρ ∧ ρ and Sym2 is the symmetric square, which is determined by
{\displaystyle \rho \otimes \rho =\left(\rho \wedge \rho \right)\oplus {\textrm {Sym}}^{2}\rho .}
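The direct-sum and tensor-product rules follow from two trace identities that hold for arbitrary square matrices, tr(A ⊕ B) = tr A + tr B and tr(A ⊗ B) = (tr A)(tr B). An illustrative check with arbitrary small matrices (these are not actual group representations, but the identities are matrix-level facts):

```python
# Trace identities behind the character rules for direct sums and
# tensor (Kronecker) products, checked on explicit small matrices.
def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def direct_sum(A, B):
    n, m = len(A), len(B)
    top = [row + [0] * m for row in A]
    bot = [[0] * n + row for row in B]
    return top + bot

def kron(A, B):
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

rho   = [[0, -1], [1, 0]]    # trace 0
sigma = [[2, 1], [0, 3]]     # trace 5

assert trace(direct_sum(rho, sigma)) == trace(rho) + trace(sigma)
assert trace(kron(rho, sigma)) == trace(rho) * trace(sigma)
```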
== Character tables ==
The irreducible complex characters of a finite group form a character table which encodes much useful information about the group G in a compact form. Each row is labelled by an irreducible representation and the entries in the row are the characters of the representation on the respective conjugacy class of G. The columns are labelled by (representatives of) the conjugacy classes of G. It is customary to label the first row by the character of the trivial representation, which is the trivial action of G on a 1-dimensional vector space by
{\displaystyle \rho (g)=1} for all {\displaystyle g\in G}
. Each entry in the first row is therefore 1. Similarly, it is customary to label the first column by the identity. Therefore, the first column contains the degree of each irreducible character.
Here is the character table of
{\displaystyle C_{3}=\langle u\mid u^{3}=1\rangle ,}
the cyclic group with three elements and generator u:
where ω is a primitive third root of unity.
The character table is always square, because the number of irreducible representations is equal to the number of conjugacy classes.
=== Orthogonality relations ===
The space of complex-valued class functions of a finite group G has a natural inner product:
{\displaystyle \left\langle \alpha ,\beta \right\rangle :={\frac {1}{|G|}}\sum _{g\in G}\alpha (g){\overline {\beta (g)}}}
where β(g) is the complex conjugate of β(g). With respect to this inner product, the irreducible characters form an orthonormal basis for the space of class-functions, and this yields the orthogonality relation for the rows of the character table:
{\displaystyle \left\langle \chi _{i},\chi _{j}\right\rangle ={\begin{cases}0&{\mbox{ if }}i\neq j,\\1&{\mbox{ if }}i=j.\end{cases}}}
For g, h in G, applying the same inner product to the columns of the character table yields:
{\displaystyle \sum _{\chi _{i}}\chi _{i}(g){\overline {\chi _{i}(h)}}={\begin{cases}\left|C_{G}(g)\right|,&{\mbox{ if }}g,h{\mbox{ are conjugate }}\\0&{\mbox{ otherwise.}}\end{cases}}}
where the sum is over all of the irreducible characters χi of G and the symbol |CG(g)| denotes the order of the centralizer of g. Note that since g and h are conjugate iff they are in the same column of the character table, this implies that the columns of the character table are orthogonal.
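As an illustrative check, both orthogonality relations can be verified numerically for the character table of C3 given above (ω a primitive third root of unity; since C3 is abelian, every centralizer is the whole group, of order 3):

```python
# Numerical check of the orthogonality relations for the C3 character table.
import cmath

w = cmath.exp(2j * cmath.pi / 3)   # primitive third root of unity
# Rows: trivial character, then the characters u -> w and u -> w^2,
# evaluated on the classes {1}, {u}, {u^2}.
table = [
    [1, 1, 1],
    [1, w, w**2],
    [1, w**2, w**4],
]

def inner(a, b, order=3):
    return sum(x * y.conjugate() for x, y in zip(a, b)) / order

# Row orthogonality: <chi_i, chi_j> = 1 if i == j else 0.
for i in range(3):
    for j in range(3):
        expected = 1 if i == j else 0
        assert abs(inner(table[i], table[j]) - expected) < 1e-12

# Column orthogonality: a column paired with itself gives |C_G(g)| = 3.
for col in range(3):
    s = sum(table[r][col] * table[r][col].conjugate() for r in range(3))
    assert abs(s - 3) < 1e-12
```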
The orthogonality relations can aid many computations including:
Decomposing an unknown character as a linear combination of irreducible characters.
Constructing the complete character table when only some of the irreducible characters are known.
Finding the orders of the centralizers of representatives of the conjugacy classes of a group.
Finding the order of the group.
=== Character table properties ===
Certain properties of the group G can be deduced from its character table:
The order of G is given by the sum of the squares of the entries of the first column (the degrees of the irreducible characters). More generally, the sum of the squares of the absolute values of the entries in any column gives the order of the centralizer of an element of the corresponding conjugacy class.
All normal subgroups of G (and thus whether or not G is simple) can be recognised from its character table. The kernel of a character χ is the set of elements g in G for which χ(g) = χ(1); this is a normal subgroup of G. Each normal subgroup of G is the intersection of the kernels of some of the irreducible characters of G.
The commutator subgroup of G is the intersection of the kernels of the linear characters of G.
If G is finite, then since the character table is square and has as many rows as conjugacy classes, it follows that G is abelian iff each conjugacy class is a singleton iff the character table of G is
{\displaystyle |G|\!\times \!|G|}
iff each irreducible character is linear.
It follows, using some results of Richard Brauer from modular representation theory, that the prime divisors of the orders of the elements of each conjugacy class of a finite group can be deduced from its character table (an observation of Graham Higman).
The character table does not in general determine the group up to isomorphism: for example, the quaternion group Q and the dihedral group of 8 elements, D4, have the same character table. Brauer asked whether the character table, together with the knowledge of how the powers of elements of its conjugacy classes are distributed, determines a finite group up to isomorphism. In 1964, this was answered in the negative by E. C. Dade.
The linear representations of G are themselves a group under the tensor product, since the tensor product of 1-dimensional vector spaces is again 1-dimensional. That is, if
{\displaystyle \rho _{1}:G\to V_{1}} and {\displaystyle \rho _{2}:G\to V_{2}}
are linear representations, then
{\displaystyle \rho _{1}\otimes \rho _{2}(g)=(\rho _{1}(g)\otimes \rho _{2}(g))}
defines a new linear representation. This gives rise to a group of linear characters, called the character group under the operation
{\displaystyle [\chi _{1}*\chi _{2}](g)=\chi _{1}(g)\chi _{2}(g)}
. This group is connected to Dirichlet characters and Fourier analysis.
== Induced characters and Frobenius reciprocity ==
The characters discussed in this section are assumed to be complex-valued. Let H be a subgroup of the finite group G. Given a character χ of G, let χH denote its restriction to H. Let θ be a character of H. Ferdinand Georg Frobenius showed how to construct a character of G from θ, using what is now known as Frobenius reciprocity. Since the irreducible characters of G form an orthonormal basis for the space of complex-valued class functions of G, there is a unique class function θG of G with the property that
{\displaystyle \langle \theta ^{G},\chi \rangle _{G}=\langle \theta ,\chi _{H}\rangle _{H}}
for each irreducible character χ of G (the leftmost inner product is for class functions of G and the rightmost inner product is for class functions of H). Since the restriction of a character of G to the subgroup H is again a character of H, this definition makes it clear that θG is a non-negative integer combination of irreducible characters of G, so is indeed a character of G. It is known as the character of G induced from θ. The defining formula of Frobenius reciprocity can be extended to general complex-valued class functions.
Given a matrix representation ρ of H, Frobenius later gave an explicit way to construct a matrix representation of G, known as the representation induced from ρ, and written analogously as ρG. This led to an alternative description of the induced character θG. This induced character vanishes on all elements of G which are not conjugate to any element of H. Since the induced character is a class function of G, it is only now necessary to describe its values on elements of H. If one writes G as a disjoint union of right cosets of H, say
{\displaystyle G=Ht_{1}\cup \ldots \cup Ht_{n},}
then, given an element h of H, we have:
{\displaystyle \theta ^{G}(h)=\sum _{i\ :\ t_{i}ht_{i}^{-1}\in H}\theta \left(t_{i}ht_{i}^{-1}\right).}
Because θ is a class function of H, this value does not depend on the particular choice of coset representatives.
This alternative description of the induced character sometimes allows explicit computation from relatively little information about the embedding of H in G, and is often useful for calculation of particular character tables. When θ is the trivial character of H, the induced character obtained is known as the permutation character of G (on the cosets of H).
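As an illustrative computation (the particular group and subgroup are chosen here, not taken from the article): inducing the trivial character of the order-2 subgroup H generated by the transposition (0 1) in S3 via the coset-representative formula yields the permutation character of S3 on the three cosets of H, with values 3, 1, 0 on the identity, the transpositions, and the 3-cycles respectively:

```python
# Induce the trivial character of H = <(0 1)> from S3 using the
# coset-representative formula (illustrative sketch).
from itertools import permutations

def compose(p, q):            # (p*q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(3))

def inverse(p):
    inv = [0] * 3
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))
e = (0, 1, 2)
H = [e, (1, 0, 2)]            # subgroup generated by the transposition (0 1)

# Pick right coset representatives t until the cosets Ht cover G.
reps, covered = [], set()
for t in G:
    if t not in covered:
        reps.append(t)
        covered |= {compose(h, t) for h in H}

theta = {h: 1 for h in H}     # trivial character of H

def induced(g):
    # theta^G(g) = sum over reps t with t g t^{-1} in H of theta(t g t^{-1})
    total = 0
    for t in reps:
        c = compose(compose(t, g), inverse(t))
        if c in theta:
            total += theta[c]
    return total

values = {g: induced(g) for g in G}
assert values[e] == 3                              # [G : H] at the identity
assert all(values[g] == 1 for g in [(1, 0, 2), (0, 2, 1), (2, 1, 0)])
assert all(values[g] == 0 for g in [(1, 2, 0), (2, 0, 1)])
```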
The general technique of character induction and later refinements found numerous applications in finite group theory and elsewhere in mathematics, in the hands of mathematicians such as Emil Artin, Richard Brauer, Walter Feit and Michio Suzuki, as well as Frobenius himself.
== Mackey decomposition ==
The Mackey decomposition was defined and explored by George Mackey in the context of Lie groups, but is a powerful tool in the character theory and representation theory of finite groups. Its basic form concerns the way a character (or module) induced from a subgroup H of a finite group G behaves on restriction back to a (possibly different) subgroup K of G, and makes use of the decomposition of G into (H, K)-double cosets.
If
{\textstyle G=\bigcup _{t\in T}HtK}
is a disjoint union, and θ is a complex class function of H, then Mackey's formula states that
{\displaystyle \left(\theta ^{G}\right)_{K}=\sum _{t\in T}\left(\left[\theta ^{t}\right]_{t^{-1}Ht\cap K}\right)^{K},}
where θt is the class function of t−1Ht defined by θt(t−1ht) = θ(h) for all h in H. There is a similar formula for the restriction of an induced module to a subgroup, which holds for representations over any ring, and has applications in a wide variety of algebraic and topological contexts.
Mackey decomposition, in conjunction with Frobenius reciprocity, yields a well-known and useful formula for the inner product of two class functions θ and ψ induced from respective subgroups H and K, whose utility lies in the fact that it only depends on how conjugates of H and K intersect each other. The formula (with its derivation) is:
{\displaystyle {\begin{aligned}\left\langle \theta ^{G},\psi ^{G}\right\rangle &=\left\langle \left(\theta ^{G}\right)_{K},\psi \right\rangle \\&=\sum _{t\in T}\left\langle \left(\left[\theta ^{t}\right]_{t^{-1}Ht\cap K}\right)^{K},\psi \right\rangle \\&=\sum _{t\in T}\left\langle \left(\theta ^{t}\right)_{t^{-1}Ht\cap K},\psi _{t^{-1}Ht\cap K}\right\rangle ,\end{aligned}}}
(where T is a full set of (H, K)-double coset representatives, as before). This formula is often used when θ and ψ are linear characters, in which case all the inner products appearing in the right-hand sum are either 1 or 0, depending on whether or not the linear characters θ^t and ψ have the same restriction to t^−1Ht ∩ K. If θ and ψ are both trivial characters, then the inner product simplifies to |T|.
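The linear-character case can be illustrated in an abelian group, where conjugation is trivial and double cosets are ordinary cosets of H + K. The sketch below (hypothetical setup, not from the source) takes G = Z/12 with subgroups of orders 4 and 3 and linear characters θ, ψ on them, and checks the induced inner product against the double-coset sum.

```python
import cmath

n, H, K = 12, [0, 3, 6, 9], [0, 4, 8]   # G = Z/12, written additively
G = list(range(n))

# Hypothetical linear characters: theta on H ≅ Z/4, psi on K ≅ Z/3
theta = {h: cmath.exp(2j * cmath.pi * (h // 3) / 4) for h in H}
psi   = {k: cmath.exp(2j * cmath.pi * (k // 4) / 3) for k in K}

def induce(f, sub):
    # G abelian, so conjugation is trivial: f^G(g) = (|G|/|sub|) * f0(g)
    return {g: (n // len(sub)) * f.get(g, 0) for g in G}

def inner(a, b, dom):
    return sum(a[g] * b[g].conjugate() for g in dom) / len(dom)

# (H, K)-double cosets in an abelian group are the cosets of H + K
reps, seen = [], set()
for t in G:
    if t not in seen:
        reps.append(t)
        seen.update((h + t + k) % n for h in H for k in K)

lhs = inner(induce(theta, H), induce(psi, K), G)

# Mackey sum: theta^t = theta since G is abelian, and
# t^-1 H t ∩ K = H ∩ K for every representative t
rhs = 0
inter = [g for g in H if g in K]
for t in reps:
    rhs += inner({g: theta[g] for g in inter},
                 {g: psi[g] for g in inter}, inter)

assert abs(lhs - rhs) < 1e-9
```

Here H + K is all of Z/12, so there is one double coset; θ and ψ both restrict trivially to H ∩ K = {0}, so the single summand (and hence the inner product) equals 1.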
== "Twisted" dimension ==
One may interpret the character of a representation as the "twisted" dimension of a vector space. Treating the character as a function of the elements of the group χ(g), its value at the identity is the dimension of the space, since χ(1) = Tr(ρ(1)) = Tr(IV) = dim(V). Accordingly, one can view the other values of the character as "twisted" dimensions.
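A minimal numeric illustration of this viewpoint, using the two-dimensional real representation of Z/3 by rotations (an assumed example, not from the source): the character at the identity is the honest dimension 2, and at the other elements it is the "twisted" value 2 cos(120°) = −1.

```python
import numpy as np

# Two-dimensional real representation of Z/3: k ↦ rotation by 120°·k
def rho(k):
    a = 2 * np.pi * k / 3
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

chi = [np.trace(rho(k)) for k in range(3)]

# chi(1) = Tr(I_V) = dim(V) recovers the dimension of the space ...
assert abs(chi[0] - 2.0) < 1e-12
# ... while the other values are "twisted" dimensions: 2*cos(120°) = -1
assert abs(chi[1] - (-1.0)) < 1e-9
```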
One can find analogs or generalizations of statements about dimensions to statements about characters or representations. A sophisticated example of this occurs in the theory of monstrous moonshine: the j-invariant is the graded dimension of an infinite-dimensional graded representation of the Monster group, and replacing the dimension with the character gives the McKay–Thompson series for each element of the Monster group.
== Characters of Lie groups and Lie algebras ==
If {\displaystyle G} is a Lie group and {\displaystyle \rho } a finite-dimensional representation of {\displaystyle G}, the character {\displaystyle \chi _{\rho }} of {\displaystyle \rho } is defined precisely as for any group as {\displaystyle \chi _{\rho }(g)=\operatorname {Tr} (\rho (g))}.
Meanwhile, if {\displaystyle {\mathfrak {g}}} is a Lie algebra and {\displaystyle \rho } a finite-dimensional representation of {\displaystyle {\mathfrak {g}}}, we can define the character {\displaystyle \chi _{\rho }} by {\displaystyle \chi _{\rho }(X)=\operatorname {Tr} (e^{\rho (X)})}.
The character will satisfy {\displaystyle \chi _{\rho }(\operatorname {Ad} _{g}(X))=\chi _{\rho }(X)} for all {\displaystyle g} in the associated Lie group {\displaystyle G} and all {\displaystyle X\in {\mathfrak {g}}}. If we have a Lie group representation and an associated Lie algebra representation, the character {\displaystyle \chi _{\rho }} of the Lie algebra representation is related to the character {\displaystyle \mathrm {X} _{\rho }} of the group representation by the formula {\displaystyle \chi _{\rho }(X)=\mathrm {X} _{\rho }(e^{X})}.
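These definitions can be checked concretely. The sketch below (an assumed example using the defining representation of su(2), where ρ is the identity map) computes χ_ρ(X) = Tr(e^{ρ(X)}) via an eigendecomposition-based matrix exponential and verifies the Ad-invariance property.

```python
import numpy as np

def expm(A):
    # Matrix exponential via eigendecomposition (A assumed diagonalizable)
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

# Defining representation of su(2): rho is the identity map
X = np.array([[1j, 0], [0, -1j]])            # an element of su(2)
g = expm(0.7 * np.array([[0, 1], [-1, 0]]))  # a group element of SU(2)

chi = lambda A: np.trace(expm(A))

# chi_rho(X) = Tr(e^X) = e^{i} + e^{-i} = 2 cos(1)
assert abs(chi(X) - 2 * np.cos(1)) < 1e-9

# Ad-invariance: chi_rho(Ad_g(X)) = Tr(e^{g X g^-1}) = chi_rho(X)
assert abs(chi(g @ X @ np.linalg.inv(g)) - chi(X)) < 1e-9

# Relation to the group character: chi_rho(X) = X_rho(e^X); for the
# defining representation both sides are literally Tr(e^X).
```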
Suppose now that {\displaystyle {\mathfrak {g}}} is a complex semisimple Lie algebra with Cartan subalgebra {\displaystyle {\mathfrak {h}}}. The value of the character {\displaystyle \chi _{\rho }} of an irreducible representation {\displaystyle \rho } of {\displaystyle {\mathfrak {g}}} is determined by its values on {\displaystyle {\mathfrak {h}}}. The restriction of the character to {\displaystyle {\mathfrak {h}}} can easily be computed in terms of the weight spaces, as follows:
{\displaystyle \chi _{\rho }(H)=\sum _{\lambda }m_{\lambda }e^{\lambda (H)},\quad H\in {\mathfrak {h}}},
where the sum is over all weights {\displaystyle \lambda } of {\displaystyle \rho } and where {\displaystyle m_{\lambda }} is the multiplicity of {\displaystyle \lambda }.
The (restriction to {\displaystyle {\mathfrak {h}}} of the) character can be computed more explicitly by the Weyl character formula.
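For sl(2,C) this is easy to make concrete: the irreducible representation of highest weight m has weights m, m−2, ..., −m, each with multiplicity 1, and the Weyl character formula reduces to a ratio of hyperbolic sines. The sketch below (with an arbitrary choice of m and evaluation point) compares the weight-space sum against both the trace definition and the Weyl formula.

```python
import numpy as np

m, h = 3, 0.4    # highest weight m; evaluate at H = h * diag(1, -1)

# Weights of the (m+1)-dimensional irreducible sl(2,C)-representation:
# m, m-2, ..., -m, each with multiplicity m_lambda = 1
weights = [m - 2 * k for k in range(m + 1)]

# chi_rho(H) = sum over weights of m_lambda * e^{lambda(H)}
chi_weight_sum = sum(np.exp(w * h) for w in weights)

# Same value as Tr(e^{rho(H)}) with rho(H) = diag(m*h, (m-2)*h, ..., -m*h)
chi_trace = np.trace(np.diag(np.exp(np.array(weights) * h)))
assert abs(chi_weight_sum - chi_trace) < 1e-9

# Weyl character formula, specialized to sl(2,C):
# chi(H) = sinh((m+1)h) / sinh(h)
assert abs(chi_weight_sum - np.sinh((m + 1) * h) / np.sinh(h)) < 1e-9
```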
== See also ==
Irreducible representation § Applications in theoretical physics and chemistry
Association schemes, a combinatorial generalization of group-character theory.
Clifford theory, introduced by A. H. Clifford in 1937, yields information about the restriction of a complex irreducible character of a finite group G to a normal subgroup N.
Frobenius formula
Real element, a group element g such that χ(g) is a real number for all characters χ
== References ==
== External links ==
Character at PlanetMath.