Geometric function theory is the study of geometric properties of analytic functions. A fundamental result in the theory is the Riemann mapping theorem.
== Topics in geometric function theory ==
The following are some of the most important topics in geometric function theory:
=== Conformal maps ===
A conformal map is a function which preserves angles locally. In the most common case the function has a domain and range in the complex plane.
More formally, a map $f:U\to V$ with $U,V\subset \mathbb {C} ^{n}$ is called conformal (or angle-preserving) at a point $u_{0}$ if it preserves oriented angles between curves through $u_{0}$ with respect to their orientation (i.e., not just the magnitude of the angle). Conformal maps preserve both angles and the shapes of infinitesimally small figures, but not necessarily their size or curvature.
=== Quasiconformal maps ===
In mathematical complex analysis, a quasiconformal mapping, introduced by Grötzsch (1928) and named by Ahlfors (1935), is a homeomorphism between plane domains which to first order takes small circles to small ellipses of bounded eccentricity.
Intuitively, let f : D → D′ be an orientation-preserving homeomorphism between open sets in the plane. If f is continuously differentiable, then it is K-quasiconformal if the derivative of f at every point maps circles to ellipses with eccentricity bounded by K.
If K is 1, then the function is conformal.
=== Analytic continuation ===
Analytic continuation is a technique to extend the domain of a given analytic function. Analytic continuation often succeeds in defining further values of a function, for example in a new region where an infinite series representation in terms of which it is initially defined becomes divergent.
The step-wise continuation technique may, however, come up against difficulties. These may have an essentially topological nature, leading to inconsistencies (defining more than one value). They may alternatively have to do with the presence of mathematical singularities. The case of several complex variables is rather different, since singularities then cannot be isolated points, and its investigation was a major reason for the development of sheaf cohomology.
=== Geometric properties of polynomials and algebraic functions ===
Topics in this area include Riemann surfaces for algebraic functions and zeros of algebraic functions.
==== Riemann surface ====
A Riemann surface, first studied by and named after Bernhard Riemann, is a one-dimensional complex manifold. Riemann surfaces can be thought of as deformed versions of the complex plane: locally near every point they look like patches of the complex plane, but the global topology can be quite different. For example, they can look like a sphere or a torus or several sheets glued together.
The main point of Riemann surfaces is that holomorphic functions may be defined between them. Riemann surfaces are nowadays considered the natural setting for studying the global behavior of these functions, especially multi-valued functions such as the square root and other algebraic functions, or the logarithm.
=== Extremal problems ===
Topics in this area include the maximum principle, Schwarz's lemma, the Lindelöf principle, and their analogues and generalizations.
=== Univalent and multivalent functions ===
A holomorphic function on an open subset of the complex plane is called univalent if it is injective.
One can prove that if $G$ and $\Omega$ are two open connected sets in the complex plane, and $f:G\to \Omega$ is a univalent function such that $f(G)=\Omega$ (that is, $f$ is surjective), then the derivative of $f$ is never zero, $f$ is invertible, and its inverse $f^{-1}$ is also holomorphic. Moreover, one has by the chain rule

$$(f^{-1})'(f(z))={\frac {1}{f'(z)}}$$

for all $z$ in $G$.
Alternate terms in common use are schlicht (German for "plain, simple") and simple. It is a remarkable fact, fundamental to the theory of univalent functions, that univalence is essentially preserved under uniform convergence.
== Important theorems ==
=== Riemann mapping theorem ===
Let $z_{0}$ be a point in a simply-connected region $D_{1}$ ($D_{1}\neq \mathbb {C}$) having at least two boundary points. Then there exists a unique analytic function $w=f(z)$ mapping $D_{1}$ bijectively onto the open unit disk $|w|<1$ such that $f(z_{0})=0$ and $f'(z_{0})>0$.
Although Riemann's mapping theorem demonstrates the existence of a mapping function, it does not actually exhibit this function. An example is given below.
Consider $D_{1}$ and $D_{2}$ as two simply connected regions different from $\mathbb {C}$. The Riemann mapping theorem provides the existence of $w=f(z)$ mapping $D_{1}$ onto the unit disk and the existence of $w=g(z)$ mapping $D_{2}$ onto the unit disk. Thus $g^{-1}f$ is a one-to-one mapping of $D_{1}$ onto $D_{2}$. If we can show that $g^{-1}$, and consequently the composition, is analytic, we then have a conformal mapping of $D_{1}$ onto $D_{2}$, proving that "any two simply connected regions different from the whole plane $\mathbb {C}$ can be mapped conformally onto each other."
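Although the theorem is non-constructive, explicit mapping functions are known for many special domains. For instance, the Cayley transform $w=(z-i)/(z+i)$ maps the upper half-plane onto the unit disk. The following sketch (Python with NumPy; an illustration, not part of the theorem) checks this numerically at random sample points:

```python
import numpy as np

def cayley(z):
    """Cayley transform: maps the upper half-plane {Im z > 0} onto the unit disk."""
    return (z - 1j) / (z + 1j)

# Random points in the upper half-plane should land strictly inside |w| < 1.
rng = np.random.default_rng(0)
z = rng.uniform(-10, 10, 1000) + 1j * rng.uniform(1e-3, 10, 1000)
w = cayley(z)
print(np.abs(w).max() < 1)   # True
```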
=== Schwarz's Lemma ===
The Schwarz lemma, named after Hermann Amandus Schwarz, is a result in complex analysis about holomorphic functions from the open unit disk to itself. The lemma is less celebrated than stronger theorems, such as the Riemann mapping theorem, which it helps to prove. It is however one of the simplest results capturing the rigidity of holomorphic functions.
==== Statement ====
Schwarz Lemma. Let D = {z : |z| < 1} be the open unit disk in the complex plane C centered at the origin and let f : D → D be a holomorphic map such that f(0) = 0.
Then, |f(z)| ≤ |z| for all z in D and |f′(0)| ≤ 1.
Moreover, if |f(z)| = |z| for some non-zero z or if |f′(0)| = 1, then f(z) = az for some a in C with |a| (necessarily) equal to 1.
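As a concrete illustration, a Blaschke product of degree two with a zero at the origin is a holomorphic self-map of the disk fixing 0, so the lemma applies to it. The sketch below (Python with NumPy; the parameter a is an arbitrary choice, any |a| < 1 works) verifies the bound |f(z)| ≤ |z| at random points of the disk:

```python
import numpy as np

a = 0.4 + 0.2j   # any point with |a| < 1

def f(z):
    """Degree-2 Blaschke product: a holomorphic self-map of the disk with f(0) = 0."""
    return z * (z - a) / (1 - np.conj(a) * z)

rng = np.random.default_rng(1)
z = np.sqrt(rng.uniform(0, 1, 1000)) * np.exp(1j * rng.uniform(0, 2 * np.pi, 1000))
print(np.all(np.abs(f(z)) <= np.abs(z) + 1e-12))   # True
print(abs(a))   # |f'(0)| = |a| <= 1, consistent with the lemma
```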
=== Maximum principle ===
The maximum principle is a property of solutions to certain partial differential equations, of the elliptic and parabolic types. Roughly speaking, it says that the maximum of a function in a domain is to be found on the boundary of that domain. Specifically, the strong maximum principle says that if a function achieves its maximum in the interior of the domain, the function is uniformly a constant. The weak maximum principle says that the maximum of the function is to be found on the boundary, but may re-occur in the interior as well. Other, even weaker maximum principles exist which merely bound a function in terms of its maximum on the boundary.
=== Riemann-Hurwitz formula ===
The Riemann–Hurwitz formula, named after Bernhard Riemann and Adolf Hurwitz, describes the relationship of the Euler characteristics of two surfaces when one is a ramified covering of the other. It therefore connects ramification with algebraic topology, in this case. It is a prototype result for many others, and is often applied in the theory of Riemann surfaces (which is its origin) and algebraic curves.
==== Statement ====
For an orientable surface S the Euler characteristic χ(S) is

$$2-2g,$$

where g is the genus (the number of handles), since the Betti numbers are $1, 2g, 1, 0, 0, \dots$. In the case of an (unramified) covering map of surfaces

$$\pi :S'\to S$$

that is surjective and of degree N, we should have the formula

$$\chi (S')=N\cdot \chi (S).$$
That is because each simplex of S should be covered by exactly N in S′ — at least if we use a fine enough triangulation of S, as we are entitled to do since the Euler characteristic is a topological invariant. What the Riemann–Hurwitz formula does is to add in a correction to allow for ramification (sheets coming together).
Now assume that S and S′ are Riemann surfaces, and that the map π is complex analytic. The map π is said to be ramified at a point P in S′ if there exist analytic coordinates near P and π(P) such that π takes the form $\pi (z)=z^{n}$, and n > 1. An equivalent way of thinking about this is that there exists a small neighborhood U of P such that π(P) has exactly one preimage in U, but the image of any other point in U has exactly n preimages in U. The number n is called the ramification index at P and also denoted by $e_{P}$. In calculating the Euler characteristic of S′ we notice the loss of $e_{P}-1$ copies of P above π(P) (that is, in the inverse image of π(P)). Now let us choose triangulations of S and S′ with vertices at the branch and ramification points, respectively, and use these to compute the Euler characteristics. Then S′ will have the same number of d-dimensional faces for d different from zero, but fewer than expected vertices. Therefore, we find a "corrected" formula
$$\chi (S')=N\cdot \chi (S)-\sum _{P\in S'}(e_{P}-1)$$

(all but finitely many P have $e_{P}=1$, so this is quite safe). This formula is known as the Riemann–Hurwitz formula and also as Hurwitz's theorem.
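The formula is easy to test on standard examples. The sketch below (plain Python; the helper names are illustrative) checks it for the degree-2 map $z\mapsto z^{2}$ on the Riemann sphere, which is ramified at 0 and ∞ with $e_{P}=2$, and for a hyperelliptic curve presented as a degree-2 cover of the sphere:

```python
def euler_char(genus):
    """Euler characteristic 2 - 2g of a closed orientable surface of genus g."""
    return 2 - 2 * genus

def riemann_hurwitz(chi_base, degree, ram_indices):
    """chi(S') = N * chi(S) - sum over P of (e_P - 1)."""
    return degree * chi_base - sum(e - 1 for e in ram_indices)

# z -> z^2 on the Riemann sphere: degree 2, ramified at 0 and infinity.
print(riemann_hurwitz(euler_char(0), 2, [2, 2]) == euler_char(0))   # True

# Hyperelliptic curve y^2 = p(z), deg p = 2g + 2: a degree-2 cover of the
# sphere with 2g + 2 branch points of index 2 has genus g upstairs.
g = 3
print(riemann_hurwitz(euler_char(0), 2, [2] * (2 * g + 2)) == euler_char(g))  # True
```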
== References ==
Ahlfors, Lars (1935), "Zur Theorie der Überlagerungsflächen", Acta Mathematica (in German), 65 (1): 157–194, doi:10.1007/BF02420945, ISSN 0001-5962, JFM 61.0365.03, Zbl 0012.17204.
Grötzsch, Herbert (1928), "Über einige Extremalprobleme der konformen Abbildung. I, II.", Berichte über die Verhandlungen der Königlich Sächsischen Gesellschaft der Wissenschaften zu Leipzig. Mathematisch-Physische Classe (in German), 80: 367–376, 497–502, JFM 54.0378.01.
Hurwitz–Courant, Vorlesungen über allgemeine Funktionentheorie, 1922 (4th ed., appendix by H. Röhrl, vol. 3, Grundlehren der mathematischen Wissenschaften, Springer, 1964).
Krantz, Steven (2006). Geometric Function Theory: Explorations in Complex Analysis. Springer. ISBN 0-8176-4339-7.
Bulboacă, T.; Cho, N. E.; Kanas, S. A. R. (2012). "New Trends in Geometric Function Theory 2011" (PDF). International Journal of Mathematics and Mathematical Sciences. 2012: 1–2. doi:10.1155/2012/976374.
Ahlfors, Lars (2010). Conformal Invariants: Topics in Geometric Function Theory. AMS Chelsea Publishing. ISBN 978-0821852705.
In complex analysis, a complex-valued function $f$ of a complex variable $z$:

is said to be holomorphic at a point $a$ if it is differentiable at every point within some open disk centered at $a$, and
is said to be analytic at $a$ if in some open disk centered at $a$ it can be expanded as a convergent power series

$$f(z)=\sum _{n=0}^{\infty }c_{n}(z-a)^{n}$$

(this implies that the radius of convergence is positive).
One of the most important theorems of complex analysis is that holomorphic functions are analytic and vice versa. Among the corollaries of this theorem are
the identity theorem that two holomorphic functions that agree at every point of an infinite set $S$ with an accumulation point inside the intersection of their domains also agree everywhere in every connected open subset of their domains that contains the set $S$, and
the fact that, since power series are infinitely differentiable, so are holomorphic functions (this is in contrast to the case of real differentiable functions), and
the fact that the radius of convergence is always the distance from the center $a$ to the nearest non-removable singularity; if there are no singularities (i.e., if $f$ is an entire function), then the radius of convergence is infinite. Strictly speaking, this is not a corollary of the theorem but rather a by-product of the proof.
no bump function on the complex plane can be entire. In particular, on any connected open subset of the complex plane, there can be no bump function defined on that set which is holomorphic on the set. This has important ramifications for the study of complex manifolds, as it precludes the use of partitions of unity. In contrast, a partition of unity is a tool which can be used on any real manifold.
== Proof ==
The argument, first given by Cauchy, hinges on Cauchy's integral formula and the power series expansion of the expression

$$\frac {1}{w-z}.$$

Let $D$ be an open disk centered at $a$ and suppose $f$ is differentiable everywhere within an open neighborhood containing the closure of $D$. Let $C$ be the positively oriented (i.e., counterclockwise) circle which is the boundary of $D$ and let $z$ be a point in $D$. Starting with Cauchy's integral formula, we have
$$\begin{aligned}f(z)&={1 \over 2\pi i}\int _{C}{f(w) \over w-z}\,\mathrm {d} w\\[10pt]&={1 \over 2\pi i}\int _{C}{f(w) \over (w-a)-(z-a)}\,\mathrm {d} w\\[10pt]&={1 \over 2\pi i}\int _{C}{1 \over w-a}\cdot {1 \over 1-{z-a \over w-a}}f(w)\,\mathrm {d} w\\[10pt]&={1 \over 2\pi i}\int _{C}{1 \over w-a}\cdot {\sum _{n=0}^{\infty }\left({z-a \over w-a}\right)^{n}}f(w)\,\mathrm {d} w\\[10pt]&=\sum _{n=0}^{\infty }{1 \over 2\pi i}\int _{C}{(z-a)^{n} \over (w-a)^{n+1}}f(w)\,\mathrm {d} w.\end{aligned}$$
Interchange of the integral and infinite sum is justified by observing that $f(w)/(w-a)$ is bounded on $C$ by some positive number $M$, while for all $w$ in $C$

$$\left|{\frac {z-a}{w-a}}\right|\leq r<1$$

for some positive $r$ as well. We therefore have
$$\left|{(z-a)^{n} \over (w-a)^{n+1}}f(w)\right|\leq Mr^{n}$$

on $C$, and as the Weierstrass M-test shows the series converges uniformly over $C$, the sum and the integral may be interchanged.
As the factor $(z-a)^{n}$ does not depend on the variable of integration $w$, it may be factored out to yield

$$f(z)=\sum _{n=0}^{\infty }(z-a)^{n}{1 \over 2\pi i}\int _{C}{f(w) \over (w-a)^{n+1}}\,\mathrm {d} w,$$

which has the desired form of a power series in $z$:
$$f(z)=\sum _{n=0}^{\infty }c_{n}(z-a)^{n}$$

with coefficients

$$c_{n}={1 \over 2\pi i}\int _{C}{f(w) \over (w-a)^{n+1}}\,\mathrm {d} w.$$
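The coefficient formula lends itself to a direct numerical check: parametrizing C as a circle turns the contour integral into a periodic integral, for which the trapezoidal rule is spectrally accurate. A sketch in Python with NumPy (the radius and sample count are arbitrary choices):

```python
import math
import numpy as np

def taylor_coeff(f, a, n, radius=1.0, samples=2048):
    """Approximate c_n = (1/(2*pi*i)) * integral_C f(w)/(w-a)^(n+1) dw by the
    trapezoidal rule on the circle w = a + r*e^{it}."""
    t = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    w = a + radius * np.exp(1j * t)
    dw_dt = 1j * radius * np.exp(1j * t)
    integrand = f(w) / (w - a) ** (n + 1) * dw_dt
    return integrand.mean() / 1j   # mean * 2*pi, divided by 2*pi*i

# For f = exp and a = 0, the coefficients must be 1/n!.
for n in range(6):
    print(n, taylor_coeff(np.exp, 0.0, n).real, 1.0 / math.factorial(n))
```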
== Remarks ==
Since power series can be differentiated term-wise, applying the above argument in the reverse direction and the power series expression for

$$\frac {1}{(w-z)^{n+1}}$$

gives

$$f^{(n)}(a)={n! \over 2\pi i}\int _{C}{f(w) \over (w-a)^{n+1}}\,dw.$$

This is a Cauchy integral formula for derivatives. Therefore the power series obtained above is the Taylor series of $f$.
The argument works if $z$ is any point that is closer to the center $a$ than is any singularity of $f$. Therefore, the radius of convergence of the Taylor series cannot be smaller than the distance from $a$ to the nearest singularity (nor can it be larger, since power series have no singularities in the interiors of their circles of convergence).
A special case of the identity theorem follows from the preceding remark. If two holomorphic functions agree on a (possibly quite small) open neighborhood $U$ of $a$, then they coincide on the open disk $B_{d}(a)$, where $d$ is the distance from $a$ to the nearest singularity.
== External links ==
"Existence of power series". PlanetMath. | Wikipedia/Proof_that_holomorphic_functions_are_analytic |
In mathematics, the FBI transform or Fourier–Bros–Iagolnitzer transform is a generalization of the Fourier transform developed by the French mathematical physicists Jacques Bros and Daniel Iagolnitzer in order to characterise the local analyticity of functions (or distributions) on Rn. The transform provides an alternative approach to analytic wave front sets of distributions, developed independently by the Japanese mathematicians Mikio Sato, Masaki Kashiwara and Takahiro Kawai in their approach to microlocal analysis. It can also be used to prove the analyticity of solutions of analytic elliptic partial differential equations as well as a version of the classical uniqueness theorem, strengthening the Cauchy–Kowalevski theorem, due to the Swedish mathematician Erik Albert Holmgren (1872–1943).
== Definitions ==
The Fourier transform of a Schwartz function f in S(Rn) is defined by

$$({\mathcal {F}}f)(t)=(2\pi )^{-n/2}\int _{\mathbf {R} ^{n}}f(x)e^{-ix\cdot t}\,dx.$$
The FBI transform of f is defined for a ≥ 0 by

$$({\mathcal {F}}_{a}f)(t,y)=(2\pi )^{-n/2}\int _{\mathbf {R} ^{n}}f(x)e^{-a|x-y|^{2}/2}e^{-ix\cdot t}\,dx.$$

Thus, when a = 0, it essentially coincides with the Fourier transform.
The same formulas can be used to define the Fourier and FBI transforms of tempered distributions in S′(Rn).
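In one dimension the integrals can be approximated by straightforward quadrature. A sketch in Python with NumPy (the truncation length and sample count are ad hoc choices, adequate for rapidly decaying integrands); at a = 0 the result should match the ordinary Fourier transform, and the Gaussian $e^{-x^{2}/2}$ is its own transform in this normalization:

```python
import numpy as np

def fbi_1d(f, t, y, a, L=30.0, samples=200_001):
    """Numerical FBI transform for n = 1:
    (F_a f)(t, y) = (2*pi)^(-1/2) * int f(x) exp(-a(x-y)^2/2) exp(-i x t) dx,
    truncated to [-L, L] and evaluated by a Riemann sum."""
    x = np.linspace(-L, L, samples)
    g = f(x) * np.exp(-a * (x - y) ** 2 / 2) * np.exp(-1j * x * t)
    return g.sum() * (x[1] - x[0]) / np.sqrt(2 * np.pi)

f = lambda x: np.exp(-x ** 2 / 2)
for t in (0.0, 1.0, 2.0):
    print(t, fbi_1d(f, t, y=0.0, a=0.0).real, np.exp(-t ** 2 / 2))
```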
== Inversion formula ==
The Fourier inversion formula

$$f(x)={\mathcal {F}}^{2}f(-x)$$

allows a function f to be recovered from its Fourier transform. In particular,

$$f(x)=(2\pi )^{-n/2}\int _{\mathbf {R} ^{n}}e^{it\cdot x}{\mathcal {F}}(f)(t)\,dt.$$
Similarly, at a positive value of a, f can be recovered from its FBI transform by the inversion formula

$$f(x)=(2\pi )^{-n/2}\int _{\mathbf {R} ^{n}}e^{it\cdot x}e^{a|x-y|^{2}/2}{\mathcal {F}}_{a}(f)(t,y)\,dt.$$
== Criterion for local analyticity ==
Bros and Iagolnitzer showed that a distribution f is locally equal to a real analytic function at y, in the direction ξ, if and only if its FBI transform satisfies an inequality of the form

$$|({\mathcal {F}}_{|\xi |}f)(\xi ,y)|\leq Ce^{-\varepsilon |\xi |},$$

for |ξ| sufficiently large.
== Holmgren's uniqueness theorem ==
A simple consequence of the Bros and Iagolnitzer characterisation of local analyticity is the following regularity result of Lars Hörmander and Mikio Sato (Sjöstrand (1982)).
Theorem. Let P be an elliptic partial differential operator with analytic coefficients defined on an open subset X of Rn. If Pf is analytic in X, then so too is f.
When "analytic" is replaced by "smooth" in this theorem, the result is just Hermann Weyl's classical lemma on elliptic regularity, usually proved using Sobolev spaces (Warner 1983). It is a special case of more general results involving the analytic wave front set (see below), which imply Holmgren's classical strengthening of the Cauchy–Kowalevski theorem on linear partial differential equations with real analytic coefficients. In modern language, Holmgren's uniquess theorem states that any distributional solution of such a system of equations must be analytic and therefore unique, by the Cauchy–Kowalevski theorem.
== The analytic wave front set ==
The analytic wave front set or singular spectrum WFA(f) of a distribution f (or more generally of a hyperfunction) can be defined in terms of the FBI transform (Hörmander (1983)) as the complement of the conical set of points (x, λ ξ) (λ > 0) such that the FBI transform satisfies the Bros–Iagolnitzer inequality
$$|({\mathcal {F}}_{|\xi |}f)(\xi ,y)|\leq Ce^{-\varepsilon |\xi |},$$
for y the point at which one would like to test for analyticity, and |ξ| sufficiently large and pointing in the direction one would like to look for the wave front, that is, the direction at which the singularity at y, if it exists, propagates. J.M. Bony (Sjöstrand (1982), Hörmander (1983)) proved that this definition coincided with other definitions introduced independently by Sato, Kashiwara and Kawai and by Hörmander. If P is an mth order linear differential operator having analytic coefficients
$$P=\sum _{|\alpha |\leq m}a_{\alpha }(x)D^{\alpha },$$
with principal symbol
$$\sigma _{P}(x,\xi )=\sum _{|\alpha |=m}a_{\alpha }(x)\xi ^{\alpha },$$
and characteristic variety
$${\rm {char}}\,P=\{(x,\xi ):\xi \neq 0,\,\sigma _{P}(x,\xi )=0\},$$
then
$$\operatorname {WF} _{A}(Pf)\subseteq \operatorname {WF} _{A}(f)$$

$$\operatorname {WF} _{A}(f)\subseteq \operatorname {WF} _{A}(Pf)\cup \operatorname {char} P.$$
In particular, when P is elliptic, char P = ∅, so that WFA(Pf) = WFA(f). This is a strengthening of the analytic version of elliptic regularity mentioned above.
== References ==
Folland, Gerald B. (1989), Harmonic Analysis in Phase Space, Annals of Mathematics Studies, vol. 122, Princeton University Press, ISBN 0-691-08528-5
Gårding, Lars (1998), Mathematics and Mathematicians: Mathematics in Sweden Before 1950, American Mathematical Society, ISBN 0-8218-0612-2
Hörmander, Lars (1983), Analysis of Partial Differential Operators I, Springer-Verlag, ISBN 3-540-12104-8 (Chapter 9.6, The Analytic Wavefront Set.)
Iagolnitzer, Daniel (1975), Microlocal essential support of a distribution and local decompositions – an introduction. In Hyperfunctions and theoretical physics, Lecture Notes in Mathematics, vol. 449, Springer-Verlag, pp. 121–132
Krantz, Steven; Parks, Harold R. (1992), A Primer of Real Analytic Functions, Birkhäuser, ISBN 0-8176-4264-1. 2nd ed., Birkhäuser (2002), ISBN 0-8176-4264-1.
Sjöstrand, Johannes (1982), "Singularités analytiques microlocales. [Microlocal analytic singularities]", Astérisque, 95: 1–166
Trèves, François (1992), Hypo-analytic structures: Local theory, Princeton Mathematical Series, vol. 40, Princeton University Press, ISBN 0-691-08744-X (Chapter 9, FBI Transform in a Hypo-Analytic Manifold.)
Warner, Frank (1983), Foundations of differential geometry and Lie groups, Graduate texts in mathematics, vol. 94, Springer-Verlag, ISBN 0-387-90894-3
In the field of complex analysis in mathematics, the Cauchy–Riemann equations, named after Augustin Cauchy and Bernhard Riemann, consist of a system of two partial differential equations which form a necessary and sufficient condition for a complex function of a complex variable to be complex differentiable.
These equations are

$${\frac {\partial u}{\partial x}}={\frac {\partial v}{\partial y}}\qquad \text{(1a)}$$

and

$${\frac {\partial u}{\partial y}}=-{\frac {\partial v}{\partial x}}\qquad \text{(1b)}$$

where u(x, y) and v(x, y) are real bivariate differentiable functions.
Typically, u and v are respectively the real and imaginary parts of a complex-valued function f(x + iy) = f(x, y) = u(x, y) + iv(x, y) of a single complex variable z = x + iy where x and y are real variables; u and v are real differentiable functions of the real variables. Then f is complex differentiable at a complex point if and only if the partial derivatives of u and v satisfy the Cauchy–Riemann equations at that point.
A holomorphic function is a complex function that is differentiable at every point of some open subset of the complex plane $\mathbb {C}$. It has been proved that holomorphic functions are analytic and analytic complex functions are complex-differentiable. In particular, holomorphic functions are infinitely complex-differentiable.
This equivalence between differentiability and analyticity is the starting point of all complex analysis.
== History ==
The Cauchy–Riemann equations first appeared in the work of Jean le Rond d'Alembert. Later, Leonhard Euler connected this system to the analytic functions. Cauchy then used these equations to construct his theory of functions. Riemann's dissertation on the theory of functions appeared in 1851.
== Simple example ==
Suppose that $z=x+iy$. The complex-valued function $f(z)=z^{2}$ is differentiable at any point z in the complex plane.

$$f(z)=(x+iy)^{2}=x^{2}-y^{2}+2ixy$$
The real part $u(x,y)$ and the imaginary part $v(x,y)$ are

$$\begin{aligned}u(x,y)&=x^{2}-y^{2}\\v(x,y)&=2xy\end{aligned}$$

and their partial derivatives are

$$u_{x}=2x;\quad u_{y}=-2y;\quad v_{x}=2y;\quad v_{y}=2x$$
We see that indeed the Cauchy–Riemann equations are satisfied, $u_{x}=v_{y}$ and $u_{y}=-v_{x}$.
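The same computation can be delegated to a computer algebra system; a short sketch using SymPy (an illustration, not part of the article):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = x**2 - y**2   # real part of z^2
v = 2 * x * y     # imaginary part of z^2

# Cauchy-Riemann: u_x = v_y and u_y = -v_x; both differences reduce to 0.
print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))   # 0
print(sp.simplify(sp.diff(u, y) + sp.diff(v, x)))   # 0
```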
== Interpretation and reformulation ==
The Cauchy–Riemann equations are one way of looking at the condition for a function to be differentiable in the sense of complex analysis: in other words, they encapsulate the notion of a function of a complex variable by means of conventional differential calculus. In the theory there are several other major ways of looking at this notion, and the translation of the condition into other language is often needed.
=== Conformal mappings ===
First, the Cauchy–Riemann equations may be written in complex form

$$i\,{\frac {\partial f}{\partial x}}={\frac {\partial f}{\partial y}}.\qquad \text{(2)}$$
In this form, the equations correspond structurally to the condition that the Jacobian matrix is of the form

$$\begin{pmatrix}a&-b\\b&a\end{pmatrix},$$

where $a=\partial u/\partial x=\partial v/\partial y$ and $b=\partial v/\partial x=-\partial u/\partial y$. A matrix of this form is the matrix representation of a complex number. Geometrically, such a matrix is always the composition of a rotation with a scaling, and in particular preserves angles. The Jacobian of a function f(z) takes infinitesimal line segments at the intersection of two curves in z and rotates them to the corresponding segments in f(z). Consequently, a function satisfying the Cauchy–Riemann equations, with a nonzero derivative, preserves the angle between curves in the plane. That is, the Cauchy–Riemann equations are the conditions for a function to be conformal.
Moreover, because the composition of a conformal transformation with another conformal transformation is also conformal, the composition of a solution of the Cauchy–Riemann equations with a conformal map must itself solve the Cauchy–Riemann equations. Thus the Cauchy–Riemann equations are conformally invariant.
=== Complex differentiability ===
Let $f(z)=u(z)+i\cdot v(z)$, where $u$ and $v$ are real-valued functions, be a complex-valued function of a complex variable $z=x+iy$, where $x$ and $y$ are real variables.
Since $f(z)=f(x+iy)=f(x,y)$, the function can also be regarded as a function of the real variables $x$ and $y$. Then, the complex derivative of $f$ at a point $z_{0}=x_{0}+iy_{0}$ is defined by
$$f'(z_{0})=\lim _{\underset {h\in \mathbb {C} }{h\to 0}}{\frac {f(z_{0}+h)-f(z_{0})}{h}}$$
provided this limit exists (that is, the limit exists along every path approaching $z_{0}$, and does not depend on the chosen path).
A fundamental result of complex analysis is that $f$ is complex differentiable at $z_{0}$ (that is, it has a complex derivative), if and only if the bivariate real functions $u(x+iy)$ and $v(x+iy)$ are differentiable at $(x_{0},y_{0})$, and satisfy the Cauchy–Riemann equations at this point.
In fact, if the complex derivative exists at $z_{0}$, then it may be computed by taking the limit at $z_{0}$ along the real axis and the imaginary axis, and the two limits must be equal. Along the real axis, the limit is

$$\lim _{\underset {h\in \mathbb {R} }{h\to 0}}{\frac {f(z_{0}+h)-f(z_{0})}{h}}=\left.{\frac {\partial f}{\partial x}}\right\vert _{z_{0}}$$
and along the imaginary axis, the limit is

$$\lim _{\underset {h\in \mathbb {R} }{h\to 0}}{\frac {f(z_{0}+ih)-f(z_{0})}{ih}}=\left.{\frac {1}{i}}{\frac {\partial f}{\partial y}}\right\vert _{z_{0}}.$$
So, the equality of the derivatives implies

$$i\left.{\frac {\partial f}{\partial x}}\right\vert _{z_{0}}=\left.{\frac {\partial f}{\partial y}}\right\vert _{z_{0}}$$

which is the complex form of the Cauchy–Riemann equations (2) at $z_{0}$.
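The path-dependence that this argument exploits is easy to observe numerically: for a holomorphic function the difference quotients along the two axes agree, while for $f(z)={\bar {z}}$ they do not. A sketch in Python (the step size h is an arbitrary small number):

```python
import numpy as np

def diff_quotients(f, z0, h=1e-6):
    """Difference quotients of f at z0 along the real and imaginary axes."""
    along_real = (f(z0 + h) - f(z0)) / h
    along_imag = (f(z0 + 1j * h) - f(z0)) / (1j * h)
    return along_real, along_imag

z0 = 1.0 + 2.0j
print(diff_quotients(lambda z: z**2, z0))   # both approximately 2*z0
print(diff_quotients(np.conj, z0))          # 1 vs -1: the limits disagree
```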
(Note that if $f$ is complex differentiable at $z_{0}$, it is also real differentiable, and the Jacobian of $f$ at $z_{0}$ is the complex scalar $f'(z_{0})$, regarded as a real-linear map of $\mathbb {C}$, since the limit

$$|f(z)-f(z_{0})-f'(z_{0})(z-z_{0})|/|z-z_{0}|\to 0$$

as $z\to z_{0}$.)
Conversely, if f is differentiable at $z_{0}$ (in the real sense) and satisfies the Cauchy–Riemann equations there, then it is complex differentiable at this point. Assume that f, as a function of two real variables x and y, is differentiable at z0 (real differentiable). This is equivalent to the existence of the following linear approximation

$$\Delta f(z_{0})=f(z_{0}+\Delta z)-f(z_{0})=f_{x}\,\Delta x+f_{y}\,\Delta y+\eta (\Delta z)$$
where $f_{x}=\left.{\frac {\partial f}{\partial x}}\right\vert _{z_{0}}$, $f_{y}=\left.{\frac {\partial f}{\partial y}}\right\vert _{z_{0}}$, $z=x+iy$, and $\eta (\Delta z)/|\Delta z|\to 0$ as Δz → 0.
Since $\Delta z+\Delta {\bar {z}}=2\,\Delta x$ and $\Delta z-\Delta {\bar {z}}=2i\,\Delta y$, the above can be re-written as
$$\Delta f(z_{0})={\frac {f_{x}-if_{y}}{2}}\,\Delta z+{\frac {f_{x}+if_{y}}{2}}\,\Delta {\bar {z}}+\eta (\Delta z)$$

$${\frac {\Delta f}{\Delta z}}={\frac {f_{x}-if_{y}}{2}}+{\frac {f_{x}+if_{y}}{2}}\cdot {\frac {\Delta {\bar {z}}}{\Delta z}}+{\frac {\eta (\Delta z)}{\Delta z}},\qquad (\Delta z\neq 0).$$
Now, if $\Delta z$ is real, $\Delta {\bar {z}}/\Delta z=1$, while if it is imaginary, then $\Delta {\bar {z}}/\Delta z=-1$. Therefore, the second term is independent of the path of the limit $\Delta z\to 0$ when (and only when) it vanishes identically: $f_{x}+if_{y}=0$, which is precisely the Cauchy–Riemann equations in the complex form. This proof also shows that, in that case,
$$\left.{\frac {df}{dz}}\right|_{z_{0}}=\lim _{\Delta z\to 0}{\frac {\Delta f}{\Delta z}}={\frac {f_{x}-if_{y}}{2}}.$$
Note that the hypothesis of real differentiability at the point $z_{0}$ is essential and cannot be dispensed with. For example, the function $f(x,y)={\sqrt {|xy|}}$, regarded as a complex function with imaginary part identically zero, has both partial derivatives at $(x_{0},y_{0})=(0,0)$, and it moreover satisfies the Cauchy–Riemann equations at that point, but it is not differentiable in the sense of real functions (of several variables), and so the first condition, that of real differentiability, is not met. Therefore, this function is not complex differentiable.
Some sources state a sufficient condition for complex differentiability at a point $z_{0}$ as, in addition to the Cauchy–Riemann equations, the partial derivatives of $u$ and $v$ being continuous at the point, because this continuity condition ensures the existence of the aforementioned linear approximation. Note that it is not a necessary condition for complex differentiability. For example, the function $f(z)=z^{2}e^{i/|z|}$ is complex differentiable at 0, but its real and imaginary parts have discontinuous partial derivatives there. Since complex differentiability is usually considered in an open set, where it in fact implies continuity of all partial derivatives (see below), this distinction is often elided in the literature.
=== Independence of the complex conjugate ===
The above proof suggests another interpretation of the Cauchy–Riemann equations. The complex conjugate of $z$, denoted ${\bar {z}}$, is defined by ${\overline {x+iy}}:=x-iy$ for real variables $x$ and $y$. Defining the two Wirtinger derivatives as

$${\frac {\partial }{\partial z}}={\frac {1}{2}}\left({\frac {\partial }{\partial x}}-i{\frac {\partial }{\partial y}}\right),\qquad {\frac {\partial }{\partial {\bar {z}}}}={\frac {1}{2}}\left({\frac {\partial }{\partial x}}+i{\frac {\partial }{\partial y}}\right),$$
the Cauchy–Riemann equations can then be written as a single equation

$${\frac {\partial f}{\partial {\bar {z}}}}=0,$$

and the complex derivative of $f$ in that case is ${\frac {df}{dz}}={\frac {\partial f}{\partial z}}$. In this form, the Cauchy–Riemann equations can be interpreted as the statement that a complex function $f$ of a complex variable $z$ is independent of the variable ${\bar {z}}$. As such, we can view analytic functions as true functions of one complex variable ($z$) instead of complex functions of two real variables ($x$ and $y$).
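This point of view can be checked symbolically: the Wirtinger derivative $\partial /\partial {\bar {z}}$ annihilates $z^{2}$ but not $z{\bar {z}}=|z|^{2}$. A sketch using SymPy:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

def d_dzbar(expr):
    """Wirtinger derivative d/d(z-bar) = (1/2)(d/dx + i d/dy)."""
    return sp.simplify((sp.diff(expr, x) + sp.I * sp.diff(expr, y)) / 2)

print(d_dzbar(z**2))                  # 0: z^2 is holomorphic
print(d_dzbar(z * sp.conjugate(z)))   # x + I*y, i.e. z: |z|^2 is not
```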
=== Physical interpretation ===
A standard physical interpretation of the Cauchy–Riemann equations going back to Riemann's work on function theory is that u represents a velocity potential of an incompressible steady fluid flow in the plane, and v is its stream function. Suppose that the pair of (twice continuously differentiable) functions u and v satisfies the Cauchy–Riemann equations. We will take u to be a velocity potential, meaning that we imagine a flow of fluid in the plane such that the velocity vector of the fluid at each point of the plane is equal to the gradient of u, defined by
$$\nabla u={\frac {\partial u}{\partial x}}\mathbf {i} +{\frac {\partial u}{\partial y}}\mathbf {j} .$$
By differentiating the Cauchy–Riemann equations for the functions u and v, with the symmetry of second derivatives, one shows that u solves Laplace's equation:
$${\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}=0.$$
That is, u is a harmonic function. This means that the divergence of the gradient is zero, and so the fluid is incompressible.
The function v also satisfies the Laplace equation, by a similar analysis. Also, the Cauchy–Riemann equations imply that the dot product $\nabla u\cdot \nabla v=0$, since

$$\nabla u\cdot \nabla v={\frac {\partial u}{\partial x}}\cdot {\frac {\partial v}{\partial x}}+{\frac {\partial u}{\partial y}}\cdot {\frac {\partial v}{\partial y}}={\frac {\partial u}{\partial x}}\cdot {\frac {\partial v}{\partial x}}-{\frac {\partial u}{\partial x}}\cdot {\frac {\partial v}{\partial x}}=0;$$

i.e., the direction of the maximum slope of u and that of v are orthogonal to each other. This implies that the gradient of u must point along the $v={\text{const}}$ curves; so these are the streamlines of the flow. The $u={\text{const}}$ curves are the equipotential curves of the flow.
A holomorphic function can therefore be visualized by plotting the two families of level curves $u={\text{const}}$ and $v={\text{const}}$. Near points where the gradient of u (or, equivalently, v) is not zero, these families form an orthogonal family of curves. At the points where $\nabla u=0$, the stationary points of the flow, the equipotential curves of $u={\text{const}}$ intersect. The streamlines also intersect at the same point, bisecting the angles formed by the equipotential curves.
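For the flow associated with $f(z)=z^{2}$, these facts are quick to confirm symbolically; a sketch using SymPy:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = x**2 - y**2   # velocity potential for f(z) = z^2
v = 2 * x * y     # its stream function

laplacian = lambda w: sp.diff(w, x, 2) + sp.diff(w, y, 2)
print(laplacian(u), laplacian(v))   # 0 0: both are harmonic

grad = lambda w: sp.Matrix([sp.diff(w, x), sp.diff(w, y)])
print(sp.simplify(grad(u).dot(grad(v))))   # 0: the level curves are orthogonal
```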
=== Harmonic vector field ===
Another interpretation of the Cauchy–Riemann equations can be found in Pólya & Szegő. Suppose that u and v satisfy the Cauchy–Riemann equations in an open subset of R2, and consider the vector field

$${\bar {f}}={\begin{bmatrix}u\\-v\end{bmatrix}}$$
regarded as a (real) two-component vector. Then the second Cauchy–Riemann equation (1b) asserts that ${\bar {f}}$ is irrotational (its curl is 0):

$${\frac {\partial (-v)}{\partial x}}-{\frac {\partial u}{\partial y}}=0.$$
The first Cauchy–Riemann equation (1a) asserts that the vector field is solenoidal (or divergence-free):

$${\frac {\partial u}{\partial x}}+{\frac {\partial (-v)}{\partial y}}=0.$$
Owing respectively to Green's theorem and the divergence theorem, such a field is necessarily a conservative one, and it is free from sources or sinks, having net flux equal to zero through any open domain without holes. (These two observations combine as real and imaginary parts in Cauchy's integral theorem.) In fluid dynamics, such a vector field is a potential flow. In magnetostatics, such vector fields model static magnetic fields on a region of the plane containing no current. In electrostatics, they model static electric fields in a region of the plane containing no electric charge.
This interpretation can equivalently be restated in the language of differential forms. The pair u and v satisfy the Cauchy–Riemann equations if and only if the one-form $v\,dx+u\,dy$ is both closed and coclosed (a harmonic differential form).
=== Preservation of complex structure ===
Another formulation of the Cauchy–Riemann equations involves the complex structure in the plane, given by

$$J={\begin{bmatrix}0&-1\\1&0\end{bmatrix}}.$$

This is a complex structure in the sense that the square of J is the negative of the 2×2 identity matrix: $J^{2}=-I$.
As above, if u(x,y) and v(x,y) are two functions in the plane, put

$$f(x,y)={\begin{bmatrix}u(x,y)\\v(x,y)\end{bmatrix}}.$$
The Jacobian matrix of f is the matrix of partial derivatives

$$Df(x,y)={\begin{bmatrix}{\dfrac {\partial u}{\partial x}}&{\dfrac {\partial u}{\partial y}}\\[5pt]{\dfrac {\partial v}{\partial x}}&{\dfrac {\partial v}{\partial y}}\end{bmatrix}}$$
Then the pair of functions u, v satisfies the Cauchy–Riemann equations if and only if the 2×2 matrix Df commutes with J.
This interpretation is useful in symplectic geometry, where it is the starting point for the study of pseudoholomorphic curves.
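The commutation criterion is likewise mechanical to verify for a concrete pair; a sketch using SymPy, with $u+iv=z^{2}$ as the test function:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = x**2 - y**2
v = 2 * x * y   # u + i v = z^2 satisfies the Cauchy-Riemann equations

Df = sp.Matrix([[sp.diff(u, x), sp.diff(u, y)],
                [sp.diff(v, x), sp.diff(v, y)]])
J = sp.Matrix([[0, -1], [1, 0]])
print(sp.simplify(Df * J - J * Df))   # the zero matrix
```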
=== Other representations ===
Other representations of the Cauchy–Riemann equations occasionally arise in other coordinate systems. If (1a) and (1b) hold for a differentiable pair of functions u and v, then so do
$${\frac {\partial u}{\partial n}}={\frac {\partial v}{\partial s}},\quad {\frac {\partial v}{\partial n}}=-{\frac {\partial u}{\partial s}}$$
for any coordinate system (n(x, y), s(x, y)) such that the pair $(\nabla n,\nabla s)$ is orthonormal and positively oriented. As a consequence, in particular, in the system of coordinates given by the polar representation $z=re^{i\theta }$, the equations then take the form
$${\partial u \over \partial r}={1 \over r}{\partial v \over \partial \theta },\quad {\partial v \over \partial r}=-{1 \over r}{\partial u \over \partial \theta }.$$
Combining these into one equation for f gives

$${\partial f \over \partial r}={1 \over ir}{\partial f \over \partial \theta }.$$
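The polar form can be verified symbolically for, say, $z^{3}=r^{3}\cos 3\theta +ir^{3}\sin 3\theta$; a sketch using SymPy:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
theta = sp.symbols('theta', real=True)
u = r**3 * sp.cos(3 * theta)   # real part of z^3 in polar coordinates
v = r**3 * sp.sin(3 * theta)   # imaginary part of z^3

print(sp.simplify(sp.diff(u, r) - sp.diff(v, theta) / r))   # 0
print(sp.simplify(sp.diff(v, r) + sp.diff(u, theta) / r))   # 0
```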
The inhomogeneous Cauchy–Riemann equations consist of the two equations for a pair of unknown functions u(x, y) and v(x, y) of two real variables

$$\begin{aligned}{\frac {\partial u}{\partial x}}-{\frac {\partial v}{\partial y}}&=\alpha (x,y)\\[4pt]{\frac {\partial u}{\partial y}}+{\frac {\partial v}{\partial x}}&=\beta (x,y)\end{aligned}$$
for some given functions α(x, y) and β(x, y) defined in an open subset of R2. These equations are usually combined into a single equation

$${\frac {\partial f}{\partial {\bar {z}}}}=\varphi (z,{\bar {z}})$$
where f = u + iv and 𝜑 = (α + iβ)/2.
If 𝜑 is $C^{k}$, then the inhomogeneous equation is explicitly solvable in any bounded domain D, provided 𝜑 is continuous on the closure of D. Indeed, by the Cauchy integral formula,
$$f\left(\zeta ,{\bar {\zeta }}\right)={\frac {1}{2\pi i}}\iint _{D}\varphi \left(z,{\bar {z}}\right)\,{\frac {dz\wedge d{\bar {z}}}{z-\zeta }}$$
for all ζ ∈ D.
== Generalizations ==
=== Goursat's theorem and its generalizations ===
Suppose that f = u + iv is a complex-valued function which is differentiable as a function $f:\mathbb {R} ^{2}\to \mathbb {R} ^{2}$. Then Goursat's theorem asserts that f is analytic in an open complex domain Ω if and only if it satisfies the Cauchy–Riemann equations in the domain. In particular, continuous differentiability of f need not be assumed.
The hypotheses of Goursat's theorem can be weakened significantly. If f = u + iv is continuous in an open set Ω and the partial derivatives of f with respect to x and y exist in Ω, and satisfy the Cauchy–Riemann equations throughout Ω, then f is holomorphic (and thus analytic). This result is the Looman–Menchoff theorem.
The hypothesis that f obey the Cauchy–Riemann equations throughout the domain Ω is essential. It is possible to construct a continuous function satisfying the Cauchy–Riemann equations at a point, but which is not analytic at the point (e.g., $f(z)=z^{5}/|z|^{4}$). Similarly, some additional assumption is needed besides the Cauchy–Riemann equations (such as continuity), as the following example illustrates
$$f(z)={\begin{cases}\exp \left(-z^{-4}\right)&{\text{if }}z\neq 0\\0&{\text{if }}z=0\end{cases}}$$
which satisfies the Cauchy–Riemann equations everywhere, but fails to be continuous at z = 0.
Nevertheless, if a function satisfies the Cauchy–Riemann equations in an open set in a weak sense, then the function is analytic. More precisely:
If f(z) is locally integrable in an open domain $\Omega \subset \mathbb {C}$, and satisfies the Cauchy–Riemann equations weakly, then f agrees almost everywhere with an analytic function in Ω.
This is in fact a special case of a more general result on the regularity of solutions of hypoelliptic partial differential equations.
=== Several variables ===
There are Cauchy–Riemann equations, appropriately generalized, in the theory of several complex variables. They form a significant overdetermined system of PDEs. This is done using a straightforward generalization of the Wirtinger derivative, where the function in question is required to have the (partial) Wirtinger derivative with respect to each complex variable vanish.
=== Complex differential forms ===
As often formulated, the d-bar operator ${\bar {\partial }}$ annihilates holomorphic functions. This generalizes most directly the formulation

$${\partial f \over \partial {\bar {z}}}=0,$$
where

$${\partial f \over \partial {\bar {z}}}={1 \over 2}\left({\partial f \over \partial x}+i{\partial f \over \partial y}\right).$$
=== Bäcklund transform ===
Viewed as conjugate harmonic functions, the Cauchy–Riemann equations are a simple example of a Bäcklund transform. More complicated, generally non-linear Bäcklund transforms, such as in the sine-Gordon equation, are of great interest in the theory of solitons and integrable systems.
=== Definition in Clifford algebra ===
In the Clifford algebra $C\ell (2)$, the complex number $z=x+iy$ is represented as $z\equiv x+Jy$, where $J\equiv \sigma _{1}\sigma _{2}$ ($\sigma _{1}^{2}=\sigma _{2}^{2}=1$, $\sigma _{1}\sigma _{2}+\sigma _{2}\sigma _{1}=0$, so $J^{2}=-1$). The Dirac operator in this Clifford algebra is defined as $\nabla \equiv \sigma _{1}\partial _{x}+\sigma _{2}\partial _{y}$. The function $f=u+Jv$ is considered analytic if and only if $\nabla f=0$, which can be calculated in the following way:
$$\begin{aligned}0&=\nabla f=(\sigma _{1}\partial _{x}+\sigma _{2}\partial _{y})(u+\sigma _{1}\sigma _{2}v)\\[4pt]&=\sigma _{1}\partial _{x}u+\underbrace {\sigma _{1}\sigma _{1}\sigma _{2}} _{=\sigma _{2}}\partial _{x}v+\sigma _{2}\partial _{y}u+\underbrace {\sigma _{2}\sigma _{1}\sigma _{2}} _{=-\sigma _{1}}\partial _{y}v\end{aligned}$$
Grouping by $\sigma _{1}$ and $\sigma _{2}$:

$$\nabla f=\sigma _{1}(\partial _{x}u-\partial _{y}v)+\sigma _{2}(\partial _{x}v+\partial _{y}u)=0\Leftrightarrow {\begin{cases}\partial _{x}u-\partial _{y}v=0\\[4pt]\partial _{x}v+\partial _{y}u=0\end{cases}}$$
Hence, in traditional notation:

$${\begin{cases}{\dfrac {\partial u}{\partial x}}={\dfrac {\partial v}{\partial y}}\\[12pt]{\dfrac {\partial u}{\partial y}}=-{\dfrac {\partial v}{\partial x}}\end{cases}}$$
=== Conformal mappings in higher dimensions ===
Let Ω be an open set in the Euclidean space $\mathbb {R} ^{n}$. The equation for an orientation-preserving mapping $f:\Omega \to \mathbb {R} ^{n}$ to be a conformal mapping (that is, angle-preserving) is that

$$Df^{\mathsf {T}}Df=(\det(Df))^{2/n}I$$

where Df is the Jacobian matrix, with transpose $Df^{\mathsf {T}}$, and I denotes the identity matrix. For n = 2, this system is equivalent to the standard Cauchy–Riemann equations of complex variables, and the solutions are holomorphic functions. In dimension n > 2, this is still sometimes called the Cauchy–Riemann system, and Liouville's theorem implies, under suitable smoothness assumptions, that any such mapping is a Möbius transformation.
=== Lie pseudogroups ===
One might instead seek to generalize the Cauchy–Riemann equations by asking, more generally, when the solutions of a system of PDEs are closed under composition. The theory of Lie pseudogroups addresses these kinds of questions.
== See also ==
List of complex analysis topics
Cauchy integral theorem
Morera's theorem
Wirtinger derivatives
== External links ==
Weisstein, Eric W. "Cauchy–Riemann Equations". MathWorld.
Cauchy–Riemann Equations Module by John H. Mathews
In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966).
This function satisfies the initial condition $f(0)=0$, the symmetry condition $f(1-x)=1-f(x)$ for $0\leq x\leq 1$, and the functional differential equation

$$f'(x)=2f(2x)$$

for $0\leq x\leq 1/2$.
It follows that $f(x)$ is monotone increasing for $0\leq x\leq 1$, with $f(1/2)=1/2$ and $f(1)=1$, and $f'(1-x)=f'(x)$ and $f'(x)+f'({\tfrac {1}{2}}-x)=2$.
It was also written down as the Fourier transform of

$${\hat {f}}(z)=\prod _{m=1}^{\infty }\left(\cos {\frac {\pi z}{2^{m}}}\right)^{m}$$

by Børge Jessen and Aurel Wintner (1935).
The Fabius function is defined on the unit interval, and is given by the cumulative distribution function of

$$\sum _{n=1}^{\infty }2^{-n}\xi _{n},$$

where the ξn are independent uniformly distributed random variables on the unit interval. That distribution has an expectation of ${\tfrac {1}{2}}$ and a variance of ${\tfrac {1}{36}}$.
There is a unique extension of f to the real numbers that satisfies the same differential equation for all x. This extension can be defined by f (x) = 0 for x ≤ 0, f (x + 1) = 1 − f (x) for 0 ≤ x ≤ 1, and f (x + 2r) = −f (x) for 0 ≤ x ≤ 2r with r a positive integer. The sequence of intervals within which this function is positive or negative follows the same pattern as the Thue–Morse sequence.
The Rvachëv up function is closely related:

$$u(t)={\begin{cases}F(t+1),&|t|<1\\0,&|t|\geq 1\end{cases}}$$

which fulfills the delay differential equation

$${\frac {d}{dt}}u(t)=2u(2t+1)-2u(2t-1).$$
== Values ==
The Fabius function is constant zero for all non-positive arguments, and assumes rational values at positive dyadic rational arguments. For example:
$f(1)=1$
$f({\tfrac {1}{2}})={\tfrac {1}{2}}$
$f({\tfrac {1}{4}})={\tfrac {5}{72}}$
$f({\tfrac {1}{8}})={\tfrac {1}{288}}$
$f({\tfrac {1}{16}})={\tfrac {143}{2073600}}$
$f({\tfrac {1}{32}})={\tfrac {19}{33177600}}$
$f({\tfrac {1}{64}})={\tfrac {1153}{561842749440}}$
$f({\tfrac {1}{128}})={\tfrac {583}{179789679820800}}$
with the numerators listed in OEIS: A272755 and denominators in OEIS: A272757.
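Because the function is the cumulative distribution function of $\sum 2^{-n}\xi _{n}$, these tabulated values can be sanity-checked by Monte Carlo simulation; a sketch in Python with NumPy (the sample size and truncation depth are arbitrary choices, so agreement is only to a few decimal places):

```python
import numpy as np
from fractions import Fraction

# Empirical CDF of sum_{n>=1} 2^{-n} xi_n, truncated at 30 terms; the tail
# beyond that is below 2^{-30} and does not affect these comparisons visibly.
rng = np.random.default_rng(0)
samples = rng.uniform(size=(200_000, 30)) @ (0.5 ** np.arange(1, 31))

for x, exact in [(1/2, Fraction(1, 2)), (1/4, Fraction(5, 72)), (1/8, Fraction(1, 288))]:
    print(x, (samples <= x).mean(), float(exact))
```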
== References ==
Fabius, J. (1966), "A probabilistic example of a nowhere analytic C ∞-function", Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 5 (2): 173–174, doi:10.1007/bf00536652, MR 0197656, S2CID 122126180
Jessen, Børge; Wintner, Aurel (1935), "Distribution functions and the Riemann zeta function", Trans. Amer. Math. Soc., 38: 48–88, doi:10.1090/S0002-9947-1935-1501802-5, MR 1501802
Dimitrov, Youri (2006). Polynomially-divided solutions of bipartite self-differential functional equations (Thesis).
Arias de Reyna, Juan (2017). "Arithmetic of the Fabius function". arXiv:1702.06487 [math.NT].
Arias de Reyna, Juan (2017). "An infinitely differentiable function with compact support: Definition and properties". arXiv:1702.05442 [math.CA]. (an English translation of the author's paper published in Spanish in 1982)
Alkauskas, Giedrius (2001), "Dirichlet series associated with Thue-Morse sequence", preprint.
Rvachev, V. L., Rvachev, V. A., "Non-classical methods of the approximation theory in boundary value problems", Naukova Dumka, Kiev (1979) (in Russian).
In mathematics, smooth functions (also called infinitely differentiable functions) and analytic functions are two very important types of functions. One can easily prove that any analytic function of a real argument is smooth. The converse is not true, as demonstrated with the counterexample below.
One of the most important applications of smooth functions with compact support is the construction of so-called mollifiers, which are important in theories of generalized functions, such as Laurent Schwartz's theory of distributions.
The existence of smooth but non-analytic functions represents one of the main differences between differential geometry and analytic geometry. In terms of sheaf theory, this difference can be stated as follows: the sheaf of differentiable functions on a differentiable manifold is fine, in contrast with the analytic case.
The functions below are generally used to build up partitions of unity on differentiable manifolds.
== An example function ==
=== Definition of the function ===
Consider the function

$$f(x)={\begin{cases}e^{-{\frac {1}{x}}}&{\text{if }}x>0,\\0&{\text{if }}x\leq 0,\end{cases}}$$
defined for every real number x.
=== The function is smooth ===
The function f has continuous derivatives of all orders at every point x of the real line. The formula for these derivatives is

$$f^{(n)}(x)={\begin{cases}{\frac {p_{n}(x)}{x^{2n}}}\,f(x)&{\text{if }}x>0,\\0&{\text{if }}x\leq 0,\end{cases}}$$
where $p_{n}(x)$ is a polynomial of degree n − 1 given recursively by $p_{1}(x)=1$ and

$$p_{n+1}(x)=x^{2}p_{n}'(x)-(2nx-1)p_{n}(x)$$
for any positive integer n. From this formula, it is not completely clear that the derivatives are continuous at 0; this follows from the one-sided limit

$$\lim _{x\searrow 0}{\frac {e^{-{\frac {1}{x}}}}{x^{m}}}=0$$

for any nonnegative integer m.
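Both the recursion for $p_{n}$ and the closed form of the derivatives can be confirmed symbolically for small n; a sketch using SymPy:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.exp(-1 / x)

# Build p_1, ..., p_5 via the recursion p_{n+1} = x^2 p_n' - (2nx - 1) p_n.
p = [None, sp.Integer(1)]
for n in range(1, 5):
    p.append(sp.expand(x**2 * sp.diff(p[n], x) - (2 * n * x - 1) * p[n]))

# Compare with direct differentiation: f^(n)(x) = p_n(x) / x^(2n) * f(x).
for n in range(1, 6):
    direct = sp.diff(f, x, n)
    claimed = p[n] / x**(2 * n) * f
    print(n, sp.simplify(direct - claimed))   # 0 for every n
```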
=== The function is not analytic ===
As seen earlier, the function f is smooth, and all its derivatives at the origin are 0. Therefore, the Taylor series of f at the origin converges everywhere to the zero function,
$$\sum _{n=0}^{\infty }{\frac {f^{(n)}(0)}{n!}}x^{n}=\sum _{n=0}^{\infty }{\frac {0}{n!}}x^{n}=0,\qquad x\in \mathbb {R} ,$$
and so the Taylor series does not equal f(x) for x > 0. Consequently, f is not analytic at the origin.
=== Smooth transition functions ===
The function
{\displaystyle g(x)={\frac {f(x)}{f(x)+f(1-x)}},\qquad x\in \mathbb {R} ,}
has a strictly positive denominator everywhere on the real line, hence g is also smooth. Furthermore, g(x) = 0 for x ≤ 0 and g(x) = 1 for x ≥ 1, hence it provides a smooth transition from the level 0 to the level 1 in the unit interval [0, 1]. To have the smooth transition in the real interval [a, b] with a < b, consider the function
{\displaystyle \mathbb {R} \ni x\mapsto g{\Bigl (}{\frac {x-a}{b-a}}{\Bigr )}.}
For real numbers a < b < c < d, the smooth function
{\displaystyle \mathbb {R} \ni x\mapsto g{\Bigl (}{\frac {x-a}{b-a}}{\Bigr )}\,g{\Bigl (}{\frac {d-x}{d-c}}{\Bigr )}}
equals 1 on the closed interval [b, c] and vanishes outside the open interval (a, d), hence it can serve as a bump function.
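A direct numerical sketch of f, g, and the resulting bump (a minimal illustration assuming numpy; the inner guard in f only avoids a division-by-zero warning):

```python
import numpy as np

def f(x):
    # e^{-1/x} for x > 0 and 0 for x <= 0; the inner where avoids dividing by 0
    return np.where(x > 0, np.exp(-1.0 / np.where(x > 0, x, 1.0)), 0.0)

def g(x):
    # smooth transition: 0 for x <= 0, 1 for x >= 1
    return f(x) / (f(x) + f(1.0 - x))

def bump(x, a, b, c, d):
    # 1 on [b, c], 0 outside (a, d); requires a < b < c < d
    return g((x - a) / (b - a)) * g((d - x) / (d - c))

xs = np.linspace(-1.0, 4.0, 11)
print(bump(xs, 0.0, 1.0, 2.0, 3.0))
```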
== A smooth function that is nowhere real analytic ==
A more pathological example is an infinitely differentiable function which is not analytic at any point. It can be constructed by means of a Fourier series as follows. Define for all
{\displaystyle x\in \mathbb {R} }
{\displaystyle F(x):=\sum _{k\in \mathbb {N} }e^{-{\sqrt {2^{k}}}}\cos(2^{k}x)\ .}
Since the series
{\displaystyle \sum _{k\in \mathbb {N} }e^{-{\sqrt {2^{k}}}}{(2^{k})}^{n}}
converges for all
{\displaystyle n\in \mathbb {N} }, this function is easily seen to be of class C∞, by a standard inductive application of the Weierstrass M-test to demonstrate uniform convergence of each series of derivatives.
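Because the weights e^{−√(2^k)} decay extremely fast, F can be evaluated to machine precision from a modest partial sum; a small numpy sketch (with the summation index taken from k = 1, an inessential choice):

```python
import numpy as np

def F(x, terms=40):
    # partial sum of sum_k exp(-sqrt(2^k)) cos(2^k x); beyond ~40 terms the
    # weights underflow double precision, so the truncation is harmless
    k = np.arange(1, terms + 1)
    return np.exp(-np.sqrt(2.0**k)) @ np.cos(np.outer(2.0**k, x))

print(F(np.linspace(0.0, np.pi, 5)))
```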
We now show that {\displaystyle F(x)} is not analytic at any dyadic rational multiple of π, that is, at any {\displaystyle x:=\pi \cdot p\cdot 2^{-q}} with {\displaystyle p\in \mathbb {Z} } and {\displaystyle q\in \mathbb {N} }. Since the sum of the first {\displaystyle q} terms is analytic, we need only consider {\displaystyle F_{>q}(x)}, the sum of the terms with {\displaystyle k>q}. For all orders of derivation {\displaystyle n=2^{m}} with {\displaystyle m\in \mathbb {N} }, {\displaystyle m\geq 2} and {\displaystyle m>q/2} we have
{\displaystyle F_{>q}^{(n)}(x):=\sum _{k\in \mathbb {N} \atop k>q}e^{-{\sqrt {2^{k}}}}{(2^{k})}^{n}\cos(2^{k}x)=\sum _{k\in \mathbb {N} \atop k>q}e^{-{\sqrt {2^{k}}}}{(2^{k})}^{n}\geq e^{-n}n^{2n}\quad (\mathrm {as} \;n\to \infty )}
where we used the fact that {\displaystyle \cos(2^{k}x)=1} for all {\displaystyle 2^{k}>2^{q}}, and we bounded the first sum from below by the term with {\displaystyle 2^{k}=2^{2m}=n^{2}}.
As a consequence, at any such {\displaystyle x\in \mathbb {R} }
{\displaystyle \limsup _{n\to \infty }\left({\frac {|F_{>q}^{(n)}(x)|}{n!}}\right)^{1/n}=+\infty \,,}
so that the radius of convergence of the Taylor series of {\displaystyle F_{>q}} at {\displaystyle x} is 0 by the Cauchy–Hadamard formula. Since the set of analyticity of a function is an open set, and since dyadic rationals are dense, we conclude that {\displaystyle F_{>q}}, and hence {\displaystyle F}, is nowhere analytic in {\displaystyle \mathbb {R} }.
== Application to Taylor series ==
For every sequence α0, α1, α2, . . . of real or complex numbers, the following construction shows the existence of a smooth function F on the real line which has these numbers as derivatives at the origin. In particular, every sequence of numbers can appear as the coefficients of the Taylor series of a smooth function. This result is known as Borel's lemma, after Émile Borel.
With the smooth transition function g as above, define
{\displaystyle h(x)=g(2+x)\,g(2-x),\qquad x\in \mathbb {R} .}
This function h is also smooth; it equals 1 on the closed interval [−1,1] and vanishes outside the open interval (−2,2). Using h, define for every natural number n (including zero) the smooth function
{\displaystyle \psi _{n}(x)=x^{n}\,h(x),\qquad x\in \mathbb {R} ,}
which agrees with the monomial xn on [−1,1] and vanishes outside the interval (−2,2). Hence, the k-th derivative of ψn at the origin satisfies
{\displaystyle \psi _{n}^{(k)}(0)={\begin{cases}n!&{\text{if }}k=n,\\0&{\text{otherwise,}}\end{cases}}\quad k,n\in \mathbb {N} _{0},}
and the boundedness theorem implies that ψn and every derivative of ψn are bounded. Therefore, the constants
{\displaystyle \lambda _{n}=\max {\bigl \{}1,|\alpha _{n}|,\|\psi _{n}\|_{\infty },\|\psi _{n}^{(1)}\|_{\infty },\ldots ,\|\psi _{n}^{(n)}\|_{\infty }{\bigr \}},\qquad n\in \mathbb {N} _{0},}
involving the supremum norm of ψn and its first n derivatives, are well-defined real numbers. Define the scaled functions
{\displaystyle f_{n}(x)={\frac {\alpha _{n}}{n!\,\lambda _{n}^{n}}}\psi _{n}(\lambda _{n}x),\qquad n\in \mathbb {N} _{0},\;x\in \mathbb {R} .}
By repeated application of the chain rule,
{\displaystyle f_{n}^{(k)}(x)={\frac {\alpha _{n}}{n!\,\lambda _{n}^{n-k}}}\psi _{n}^{(k)}(\lambda _{n}x),\qquad k,n\in \mathbb {N} _{0},\;x\in \mathbb {R} ,}
and, using the previous result for the k-th derivative of ψn at zero,
{\displaystyle f_{n}^{(k)}(0)={\begin{cases}\alpha _{n}&{\text{if }}k=n,\\0&{\text{otherwise,}}\end{cases}}\qquad k,n\in \mathbb {N} _{0}.}
It remains to show that the function
{\displaystyle F(x)=\sum _{n=0}^{\infty }f_{n}(x),\qquad x\in \mathbb {R} ,}
is well defined and can be differentiated term-by-term infinitely many times. To this end, observe that for every k
{\displaystyle \sum _{n=0}^{\infty }\|f_{n}^{(k)}\|_{\infty }\leq \sum _{n=0}^{k+1}{\frac {|\alpha _{n}|}{n!\,\lambda _{n}^{n-k}}}\|\psi _{n}^{(k)}\|_{\infty }+\sum _{n=k+2}^{\infty }{\frac {1}{n!}}\underbrace {\frac {1}{\lambda _{n}^{n-k-2}}} _{\leq \,1}\underbrace {\frac {|\alpha _{n}|}{\lambda _{n}}} _{\leq \,1}\underbrace {\frac {\|\psi _{n}^{(k)}\|_{\infty }}{\lambda _{n}}} _{\leq \,1}<\infty ,}
where the remaining infinite series converges by the ratio test.
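The construction is concrete enough to prototype numerically. The sketch below is illustrative only: it estimates the sup norms on a grid and, for brevity, omits the derivative norms that enter the true λn, so it is a simplified stand-in for the construction rather than a faithful implementation:

```python
import numpy as np
from math import factorial

def f(x):
    return np.where(x > 0, np.exp(-1.0 / np.where(x > 0, x, 1.0)), 0.0)

def g(x):
    return f(x) / (f(x) + f(1.0 - x))

def h(x):
    # 1 on [-1, 1], 0 outside (-2, 2)
    return g(2.0 + x) * g(2.0 - x)

def psi(n, x):
    # agrees with x^n on [-1, 1], vanishes outside (-2, 2)
    return x**n * h(x)

grid = np.linspace(-2.0, 2.0, 40_001)
alpha = [2.0, -3.0, 5.0, 7.0]    # any target derivatives at the origin
lam = [max(1.0, abs(a), np.abs(psi(n, grid)).max()) for n, a in enumerate(alpha)]

def F(x):
    return sum(alpha[n] / (factorial(n) * lam[n]**n) * psi(n, lam[n] * x)
               for n in range(len(alpha)))

print(F(np.array([0.0])))        # F(0) = alpha_0 = 2
```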
== Application to higher dimensions ==
For every radius r > 0,
{\displaystyle \mathbb {R} ^{n}\ni x\mapsto \Psi _{r}(x)=f(r^{2}-\|x\|^{2})}
with Euclidean norm ||x|| defines a smooth function on n-dimensional Euclidean space with support in the ball of radius r, but
{\displaystyle \Psi _{r}(0)>0}.
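A sketch of this function in code (numpy, illustrative):

```python
import numpy as np

def Psi(x, r):
    # smooth on R^n, positive at the origin, supported in the ball ||x|| <= r
    sq = r**2 - float(np.sum(np.asarray(x)**2))
    return np.exp(-1.0 / sq) if sq > 0 else 0.0

print(Psi([0.0, 0.0, 0.0], 1.0))   # e^{-1} > 0
print(Psi([1.0, 0.0, 0.0], 1.0))   # 0 (on the boundary sphere)
```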
== Complex analysis ==
This pathology cannot occur with differentiable functions of a complex variable rather than of a real variable. Indeed, all holomorphic functions are analytic, so that the failure of the function f defined in this article to be analytic in spite of its being infinitely differentiable is an indication of one of the most dramatic differences between real-variable and complex-variable analysis.
Note that although the function f has derivatives of all orders over the real line, the analytic continuation of f from the positive half-line x > 0 to the complex plane, that is, the function
{\displaystyle \mathbb {C} \setminus \{0\}\ni z\mapsto e^{-{\frac {1}{z}}}\in \mathbb {C} ,}
has an essential singularity at the origin, and hence is not even continuous, much less analytic. By the great Picard theorem, it attains every complex value (with the exception of zero) infinitely many times in every neighbourhood of the origin.
== See also ==
Bump function
Fabius function
Flat function
Mollifier
== Notes ==
== External links ==
"Infinitely-differentiable function that is not analytic". PlanetMath. | Wikipedia/Non-analytic_smooth_function |
In physics and chemistry, a degree of freedom is an independent physical parameter in the chosen parameterization of a physical system. More formally, given a parameterization of a physical system, the number of degrees of freedom is the smallest number {\textstyle n} of parameters whose values need to be known in order for it always to be possible to determine the values of all parameters in the chosen parameterization. In this case, any set of {\textstyle n} such parameters is called a set of degrees of freedom.
The location of a particle in three-dimensional space requires three position coordinates. Similarly, the direction and speed at which a particle moves can be described in terms of three velocity components, each in reference to the three dimensions of space. So, if the time evolution of the system is deterministic (where the state at one instant uniquely determines its past and future position and velocity as a function of time), such a system has six degrees of freedom. If the motion of the particle is constrained to a lower number of dimensions – for example, the particle must move along a wire or on a fixed surface – then the system has fewer than six degrees of freedom. On the other hand, a system with an extended object that can rotate or vibrate can have more than six degrees of freedom.
In classical mechanics, the state of a point particle at any given time is often described with position and velocity coordinates in the Lagrangian formalism, or with position and momentum coordinates in the Hamiltonian formalism.
In statistical mechanics, a degree of freedom is a single scalar number describing the microstate of a system. The specification of all microstates of a system is a point in the system's phase space.
In the 3D ideal chain model in chemistry, two angles are necessary to describe the orientation of each monomer.
It is often useful to specify quadratic degrees of freedom. These are degrees of freedom that contribute in a quadratic function to the energy of the system.
Depending on what one is counting, there are several different ways that degrees of freedom can be defined, each with a different value.
== Thermodynamic degrees of freedom for gases ==
By the equipartition theorem, the internal energy per mole of gas equals cv T, where T is the absolute temperature and the specific heat at constant volume is cv = (f)(R/2). R = 8.314 J/(K mol) is the universal gas constant, and "f" is the number of thermodynamic (quadratic) degrees of freedom, counting the number of ways in which energy can occur.
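For instance (a trivial numerical check; the helper name is ours):

```python
R = 8.314  # universal gas constant, J/(K mol)

def c_v(f):
    # molar specific heat at constant volume from f quadratic degrees of freedom
    return f * R / 2.0

print(c_v(3))  # monatomic gas: ~12.47 J/(K mol)
print(c_v(5))  # diatomic gas at room temperature: ~20.79 J/(K mol)
```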
Any atom or molecule has three degrees of freedom associated with translational motion (kinetic energy) of the center of mass with respect to the x, y, and z axes. These are the only degrees of freedom for a monoatomic species, such as noble gas atoms.
For a structure consisting of two or more atoms, the whole structure also has rotational kinetic energy, where the whole structure turns about an axis.
A linear molecule, where all atoms lie along a single axis, such as any diatomic molecule and some other molecules like carbon dioxide (CO2), has two rotational degrees of freedom, because it can rotate about either of two axes perpendicular to the molecular axis.
A nonlinear molecule, where the atoms do not lie along a single axis, like water (H2O), has three rotational degrees of freedom, because it can rotate around any of three perpendicular axes.
In special cases, such as adsorbed large molecules, the rotational degrees of freedom can be limited to only one.
A structure consisting of two or more atoms also has vibrational energy, where the individual atoms move with respect to one another. A diatomic molecule has one molecular vibration mode: the two atoms oscillate back and forth with the chemical bond between them acting as a spring. A molecule with N atoms has more complicated modes of molecular vibration, with 3N − 5 vibrational modes for a linear molecule and 3N − 6 modes for a nonlinear molecule.
As specific examples, the linear CO2 molecule has 4 modes of oscillation, and the nonlinear water molecule has 3 modes of oscillation.
Each vibrational mode has two energy terms: the kinetic energy of the moving atoms and the potential energy of the spring-like chemical bond(s).
Therefore, the number of vibrational energy terms is 2(3N − 5) modes for a linear molecule and is 2(3N − 6) modes for a nonlinear molecule.
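The mode count is easy to tabulate (a small sketch; the function name is ours):

```python
def vibrational_modes(n_atoms, linear):
    # 3N - 5 vibrational modes for a linear molecule, 3N - 6 for a nonlinear one
    return 3 * n_atoms - (5 if linear else 6)

print(vibrational_modes(2, linear=True))    # diatomic: 1
print(vibrational_modes(3, linear=True))    # CO2: 4
print(vibrational_modes(3, linear=False))   # H2O: 3
```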
Both the rotational and vibrational modes are quantized, requiring a minimum temperature to be activated. The "rotational temperature" to activate the rotational degrees of freedom is less than 100 K for many gases. For N2 and O2, it is less than 3 K.
The "vibrational temperature" necessary for substantial vibration is between 103 K and 104 K, 3521 K for N2 and 2156 K for O2. Typical atmospheric temperatures are not high enough to activate vibration in N2 and O2, which comprise most of the atmosphere. (See the next figure.) However, the much less abundant greenhouse gases keep the troposphere warm by absorbing infrared from the Earth's surface, which excites their vibrational modes.
Much of this energy is reradiated back to the surface in the infrared through the "greenhouse effect."
Because room temperature (≈298 K) is over the typical rotational temperature but lower than the typical vibrational temperature, only the translational and rotational degrees of freedom contribute, in equal amounts, to the heat capacity ratio. This is why γ≈5/3 for monatomic gases and γ≈7/5 for diatomic gases at room temperature.
Since the air is dominated by diatomic gases (with nitrogen and oxygen contributing about 99%), its molar internal energy is close to cv T = (5/2)RT, determined by the 5 degrees of freedom exhibited by diatomic gases.
For 140 K < T < 380 K, cv differs from (5/2) Rd by less than 1%.
Only at temperatures well above temperatures in the troposphere and stratosphere do some molecules have enough energy to activate the vibrational modes of N2 and O2. The specific heat at constant volume, cv, increases slowly toward (7/2) R as temperature increases above T = 400 K, where cv is 1.3% above (5/2) Rd = 717.5 J/(K kg).
=== Counting the minimum number of co-ordinates to specify a position ===
One can also count degrees of freedom using the minimum number of coordinates required to specify a position. This is done as follows:
For a single particle we need 2 coordinates in a 2-D plane to specify its position and 3 coordinates in 3-D space. Thus its degree of freedom in a 3-D space is 3.
For a body consisting of 2 particles (e.g., a diatomic molecule) in a 3-D space with constant distance between them (say, d), we can show (below) that it has 5 degrees of freedom.
Let's say one particle in this body has coordinate (x1, y1, z1) and the other has coordinate (x2, y2, z2) with z2 unknown. Application of the formula for distance between two coordinates
{\displaystyle d={\sqrt {(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}+(z_{2}-z_{1})^{2}}}}
results in one equation with one unknown, which we can solve for z2.
One of x1, x2, y1, y2, z1, or z2 can be unknown.
Contrary to the classical equipartition theorem, at room temperature the vibrational motion of molecules typically makes negligible contributions to the heat capacity. These degrees of freedom are frozen out because the spacing between the energy eigenvalues exceeds the energy corresponding to ambient temperatures (kBT).
== Independent degrees of freedom ==
The set of degrees of freedom X1, ... , XN of a system is independent if the energy associated with the set can be written in the following form:
{\displaystyle E=\sum _{i=1}^{N}E_{i}(X_{i}),}
where Ei is a function of the sole variable Xi.
Example: if X1 and X2 are two degrees of freedom, and E is the associated energy:
If {\displaystyle E=X_{1}^{4}+X_{2}^{4}}, then the two degrees of freedom are independent.
If {\displaystyle E=X_{1}^{4}+X_{1}X_{2}+X_{2}^{4}}, then the two degrees of freedom are not independent. The term involving the product of X1 and X2 is a coupling term that describes an interaction between the two degrees of freedom.
For i from 1 to N, the value of the ith degree of freedom Xi is distributed according to the Boltzmann distribution. Its probability density function is the following:
{\displaystyle p_{i}(X_{i})={\frac {e^{-{\frac {E_{i}}{k_{\text{B}}T}}}}{\displaystyle \int dX_{i}\,e^{-{\frac {E_{i}}{k_{\text{B}}T}}}}},}
In this section, and throughout the article, the brackets {\displaystyle \langle \rangle } denote the mean of the quantity they enclose.
The internal energy of the system is the sum of the average energies associated with each of the degrees of freedom:
{\displaystyle \langle E\rangle =\sum _{i=1}^{N}\langle E_{i}\rangle .}
== Quadratic degrees of freedom ==
A degree of freedom Xi is quadratic if the energy terms associated with this degree of freedom can be written as
{\displaystyle E=\alpha _{i}\,X_{i}^{2}+\beta _{i}\,X_{i}Y,}
where Y is a linear combination of other quadratic degrees of freedom.
Example: if X1 and X2 are two degrees of freedom, and E is the associated energy:
If {\displaystyle E=X_{1}^{4}+X_{1}^{3}X_{2}+X_{2}^{4}}, then the two degrees of freedom are neither independent nor quadratic.
If {\displaystyle E=X_{1}^{4}+X_{2}^{4}}, then the two degrees of freedom are independent but non-quadratic.
If {\displaystyle E=X_{1}^{2}+X_{1}X_{2}+2X_{2}^{2}}, then the two degrees of freedom are not independent but are quadratic.
If {\displaystyle E=X_{1}^{2}+2X_{2}^{2}}, then the two degrees of freedom are independent and quadratic.
For example, in Newtonian mechanics, the dynamics of a system of quadratic degrees of freedom are controlled by a set of homogeneous linear differential equations with constant coefficients.
=== Quadratic and independent degree of freedom ===
X1, ... , XN are quadratic and independent degrees of freedom if the energy associated with a microstate of the system they represent can be written as:
{\displaystyle E=\sum _{i=1}^{N}\alpha _{i}X_{i}^{2}}
=== Equipartition theorem ===
In the classical limit of statistical mechanics, at thermodynamic equilibrium, the internal energy of a system of N quadratic and independent degrees of freedom is:
{\displaystyle U=\langle E\rangle =N\,{\frac {k_{\text{B}}T}{2}}}
Here, the mean energy associated with a degree of freedom is:
{\displaystyle \langle E_{i}\rangle =\int dX_{i}\,\alpha _{i}X_{i}^{2}\,p_{i}(X_{i})={\frac {\displaystyle \int dX_{i}\,\alpha _{i}X_{i}^{2}\,e^{-{\frac {\alpha _{i}X_{i}^{2}}{k_{\text{B}}T}}}}{\displaystyle \int dX_{i}\,e^{-{\frac {\alpha _{i}X_{i}^{2}}{k_{\text{B}}T}}}}}}
{\displaystyle \langle E_{i}\rangle ={\frac {k_{\text{B}}T}{2}}{\frac {\displaystyle \int dx\,x^{2}\,e^{-{\frac {x^{2}}{2}}}}{\displaystyle \int dx\,e^{-{\frac {x^{2}}{2}}}}}={\frac {k_{\text{B}}T}{2}}}
Since the degrees of freedom are independent, the internal energy of the system is equal to the sum of the mean energy associated with each degree of freedom, which demonstrates the result.
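The Gaussian average in the last step can also be checked by sampling the Boltzmann distribution directly (a Monte Carlo sketch assuming numpy; units chosen so that kBT = 1):

```python
import numpy as np

rng = np.random.default_rng(0)
kT, alpha_i = 1.0, 2.5

# for E_i = alpha_i X^2, the Boltzmann density is a Gaussian with
# variance kT / (2 alpha_i)
X = rng.normal(0.0, np.sqrt(kT / (2.0 * alpha_i)), size=1_000_000)
print(np.mean(alpha_i * X**2))   # ~ kT/2 = 0.5
```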
== Generalizations ==
The description of a system's state as a point in its phase space, although mathematically convenient, is thought to be fundamentally inaccurate. In quantum mechanics, the motion degrees of freedom are superseded by the concept of wave function, and operators which correspond to other degrees of freedom have discrete spectra. For example, the intrinsic angular momentum operator (which corresponds to the rotational freedom) for an electron or photon has only two eigenvalues. This discreteness becomes apparent when action has an order of magnitude of the Planck constant, and individual degrees of freedom can be distinguished.
== References ==
In mathematics, physics and engineering, the sinc function (pronounced "sink"), denoted by sinc(x), has two forms, normalized and unnormalized.
In mathematics, the historical unnormalized sinc function is defined for x ≠ 0 by
{\displaystyle \operatorname {sinc} (x)={\frac {\sin x}{x}}.}
Alternatively, the unnormalized sinc function is often called the sampling function, indicated as Sa(x).
In digital signal processing and information theory, the normalized sinc function is commonly defined for x ≠ 0 by
{\displaystyle \operatorname {sinc} (x)={\frac {\sin(\pi x)}{\pi x}}.}
In either case, the value at x = 0 is defined to be the limiting value
{\displaystyle \operatorname {sinc} (0):=\lim _{x\to 0}{\frac {\sin(ax)}{ax}}=1}
for all real a ≠ 0 (the limit can be proven using the squeeze theorem).
The normalization causes the definite integral of the function over the real numbers to equal 1 (whereas the same integral of the unnormalized sinc function has a value of π). As a further useful property, the zeros of the normalized sinc function are the nonzero integer values of x.
The normalized sinc function is the Fourier transform of the rectangular function with no scaling. It is used in the concept of reconstructing a continuous bandlimited signal from uniformly spaced samples of that signal.
The only difference between the two definitions is in the scaling of the independent variable (the x axis) by a factor of π. In both cases, the value of the function at the removable singularity at zero is understood to be the limit value 1. The sinc function is then analytic everywhere and hence an entire function.
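One practical caution when experimenting: NumPy's built-in np.sinc is the normalized form, so the unnormalized function must be obtained by rescaling the argument (a two-line sketch):

```python
import numpy as np

def sinc_unnormalized(x):
    # sin(x)/x with the removable singularity filled in;
    # np.sinc(u) is the normalized sin(pi u)/(pi u)
    return np.sinc(np.asarray(x) / np.pi)

print(np.sinc(0.5))              # 2/pi ~ 0.6366 (normalized)
print(sinc_unnormalized(np.pi))  # ~0 up to rounding: first zero of sin(x)/x
```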
The function has also been called the cardinal sine or sine cardinal function. The term sinc was introduced by Philip M. Woodward in his 1952 article "Information theory and inverse probability in telecommunication", in which he said that the function "occurs so often in Fourier analysis and its applications that it does seem to merit some notation of its own", and his 1953 book Probability and Information Theory, with Applications to Radar.
The function itself was first mathematically derived in this form by Lord Rayleigh in his expression (Rayleigh's formula) for the zeroth-order spherical Bessel function of the first kind.
== Properties ==
The zero crossings of the unnormalized sinc are at non-zero integer multiples of π, while zero crossings of the normalized sinc occur at non-zero integers.
The local maxima and minima of the unnormalized sinc correspond to its intersections with the cosine function. That is, sin(ξ)/ξ = cos(ξ) for all points ξ where the derivative of sin(x)/x is zero and thus a local extremum is reached. This follows from the derivative of the sinc function:
{\displaystyle {\frac {d}{dx}}\operatorname {sinc} (x)={\begin{cases}{\dfrac {\cos(x)-\operatorname {sinc} (x)}{x}},&x\neq 0\\0,&x=0\end{cases}}.}
The first few terms of the infinite series for the x coordinate of the n-th extremum with positive x coordinate are
{\displaystyle x_{n}=q-q^{-1}-{\frac {2}{3}}q^{-3}-{\frac {13}{15}}q^{-5}-{\frac {146}{105}}q^{-7}-\cdots ,}
where
{\displaystyle q=\left(n+{\frac {1}{2}}\right)\pi ,}
and where odd n lead to a local minimum, and even n to a local maximum. Because of symmetry around the y axis, there exist extrema with x coordinates −xn. In addition, there is an absolute maximum at ξ0 = (0, 1).
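Numerically, the truncated series already matches a root-finder applied to the derivative (a sketch assuming scipy is available):

```python
import numpy as np
from scipy.optimize import brentq

def d_sinc(x):
    # derivative of sin(x)/x; its positive roots are the extrema x_n
    return (x * np.cos(x) - np.sin(x)) / x**2

n = 1
q = (n + 0.5) * np.pi
series = q - 1/q - (2/3) * q**-3 - (13/15) * q**-5 - (146/105) * q**-7
root = brentq(d_sinc, n * np.pi, (n + 1) * np.pi)
print(series, root)   # both ~ 4.49341, the first local minimum
```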
The normalized sinc function has a simple representation as the infinite product:
{\displaystyle {\frac {\sin(\pi x)}{\pi x}}=\prod _{n=1}^{\infty }\left(1-{\frac {x^{2}}{n^{2}}}\right)}
and is related to the gamma function Γ(x) through Euler's reflection formula:
{\displaystyle {\frac {\sin(\pi x)}{\pi x}}={\frac {1}{\Gamma (1+x)\Gamma (1-x)}}.}
Euler discovered that
{\displaystyle {\frac {\sin(x)}{x}}=\prod _{n=1}^{\infty }\cos \left({\frac {x}{2^{n}}}\right),}
and because of the product-to-sum identity
{\displaystyle \prod _{n=1}^{k}\cos \left({\frac {x}{2^{n}}}\right)={\frac {1}{2^{k-1}}}\sum _{n=1}^{2^{k-1}}\cos \left({\frac {n-1/2}{2^{k-1}}}x\right),\quad \forall k\geq 1,}
Euler's product can be recast as a sum
{\displaystyle {\frac {\sin(x)}{x}}=\lim _{N\to \infty }{\frac {1}{N}}\sum _{n=1}^{N}\cos \left({\frac {n-1/2}{N}}x\right).}
The continuous Fourier transform of the normalized sinc (to ordinary frequency) is rect(f):
{\displaystyle \int _{-\infty }^{\infty }\operatorname {sinc} (t)\,e^{-i2\pi ft}\,dt=\operatorname {rect} (f),}
where the rectangular function is 1 for argument between −1/2 and 1/2, and zero otherwise. This corresponds to the fact that the sinc filter is the ideal (brick-wall, meaning rectangular frequency response) low-pass filter.
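A brute-force check of this transform pair on a truncated domain (a numpy sketch; the tails converge slowly, so the printed values are only approximate):

```python
import numpy as np

t = np.linspace(-1000.0, 1000.0, 2_000_001)
dt = t[1] - t[0]
for f in (0.0, 0.25, 0.75):
    # Riemann-sum approximation of the Fourier integral at frequency f
    val = dt * np.sum(np.sinc(t) * np.cos(2.0 * np.pi * f * t))
    print(f, val)   # ~1 for |f| < 1/2, ~0 for |f| > 1/2
```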
This Fourier integral, including the special case
{\displaystyle \int _{-\infty }^{\infty }{\frac {\sin(\pi x)}{\pi x}}\,dx=\operatorname {rect} (0)=1}
is an improper integral (see Dirichlet integral) and not a convergent Lebesgue integral, as
{\displaystyle \int _{-\infty }^{\infty }\left|{\frac {\sin(\pi x)}{\pi x}}\right|\,dx=+\infty .}
The normalized sinc function has properties that make it ideal in relationship to interpolation of sampled bandlimited functions:
It is an interpolating function, i.e., sinc(0) = 1, and sinc(k) = 0 for nonzero integer k.
The functions xk(t) = sinc(t − k) (k integer) form an orthonormal basis for bandlimited functions in the function space L2(R), with highest angular frequency ωH = π (that is, highest cycle frequency fH = 1/2).
Other properties of the two sinc functions include:
The unnormalized sinc is the zeroth-order spherical Bessel function of the first kind, j0(x). The normalized sinc is j0(πx).
The antiderivative of the unnormalized sinc is the sine integral, Si(x):
{\displaystyle \int _{0}^{x}{\frac {\sin(\theta )}{\theta }}\,d\theta =\operatorname {Si} (x).}
λ sinc(λx) (not normalized) is one of two linearly independent solutions to the linear ordinary differential equation
{\displaystyle x{\frac {d^{2}y}{dx^{2}}}+2{\frac {dy}{dx}}+\lambda ^{2}xy=0.}
The other is cos(λx)/x, which is not bounded at x = 0, unlike its sinc function counterpart.
Using normalized sinc,
{\displaystyle \int _{-\infty }^{\infty }{\frac {\sin ^{2}(\theta )}{\theta ^{2}}}\,d\theta =\pi \quad \Rightarrow \quad \int _{-\infty }^{\infty }\operatorname {sinc} ^{2}(x)\,dx=1,}
{\displaystyle \int _{-\infty }^{\infty }{\frac {\sin(\theta )}{\theta }}\,d\theta =\int _{-\infty }^{\infty }\left({\frac {\sin(\theta )}{\theta }}\right)^{2}\,d\theta =\pi .}
{\displaystyle \int _{-\infty }^{\infty }{\frac {\sin ^{3}(\theta )}{\theta ^{3}}}\,d\theta ={\frac {3\pi }{4}}.}
{\displaystyle \int _{-\infty }^{\infty }{\frac {\sin ^{4}(\theta )}{\theta ^{4}}}\,d\theta ={\frac {2\pi }{3}}.}
The following improper integral involves the (not normalized) sinc function:
{\displaystyle \int _{0}^{\infty }{\frac {dx}{x^{n}+1}}=1+2\sum _{k=1}^{\infty }{\frac {(-1)^{k+1}}{(kn)^{2}-1}}={\frac {1}{\operatorname {sinc} ({\frac {\pi }{n}})}}.}
== Relationship to the Dirac delta distribution ==
The normalized sinc function can be used as a nascent delta function, meaning that the following weak limit holds:
{\displaystyle \lim _{a\to 0}{\frac {\sin \left({\frac {\pi x}{a}}\right)}{\pi x}}=\lim _{a\to 0}{\frac {1}{a}}\operatorname {sinc} \left({\frac {x}{a}}\right)=\delta (x).}
This is not an ordinary limit, since the left side does not converge. Rather, it means that
{\displaystyle \lim _{a\to 0}\int _{-\infty }^{\infty }{\frac {1}{a}}\operatorname {sinc} \left({\frac {x}{a}}\right)\varphi (x)\,dx=\varphi (0)}
for every Schwartz function, as can be seen from the Fourier inversion theorem.
In the above expression, as a → 0, the number of oscillations per unit length of the sinc function approaches infinity. Nevertheless, the expression always oscillates inside an envelope of ±1/πx, regardless of the value of a.
This complicates the informal picture of δ(x) as being zero for all x except at the point x = 0, and illustrates the problem of thinking of the delta function as a function rather than as a distribution. A similar situation is found in the Gibbs phenomenon.
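The weak limit can be observed directly by integrating the kernel against a test function (a numpy sketch; the Gaussian test function is an arbitrary illustrative choice):

```python
import numpy as np

phi = lambda x: np.exp(-x**2)        # test function with phi(0) = 1
x = np.linspace(-200.0, 200.0, 4_000_001)
dx = x[1] - x[0]
for a in (1.0, 0.1, 0.01):
    kernel = np.sinc(x / a) / a       # np.sinc is normalized: sin(pi u)/(pi u)
    print(a, dx * np.sum(kernel * phi(x)))   # -> phi(0) = 1 as a -> 0
```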
We can also make an immediate connection with the standard Dirac representation of {\displaystyle \delta (x)} by writing {\displaystyle b=1/a} and
{\displaystyle \lim _{b\to \infty }{\frac {\sin \left(b\pi x\right)}{\pi x}}=\lim _{b\to \infty }{\frac {1}{2\pi }}\int _{-b\pi }^{b\pi }e^{ikx}dk={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{ikx}dk=\delta (x),}
which makes clear the recovery of the delta as an infinite bandwidth limit of the integral.
== Summation ==
All sums in this section refer to the unnormalized sinc function.
The sum of sinc(n) over integer n from 1 to ∞ equals π − 1/2:
{\displaystyle \sum _{n=1}^{\infty }\operatorname {sinc} (n)=\operatorname {sinc} (1)+\operatorname {sinc} (2)+\operatorname {sinc} (3)+\operatorname {sinc} (4)+\cdots ={\frac {\pi -1}{2}}.}
The sum of the squares also equals π − 1/2:
{\displaystyle \sum _{n=1}^{\infty }\operatorname {sinc} ^{2}(n)=\operatorname {sinc} ^{2}(1)+\operatorname {sinc} ^{2}(2)+\operatorname {sinc} ^{2}(3)+\operatorname {sinc} ^{2}(4)+\cdots ={\frac {\pi -1}{2}}.}
When the signs of the addends alternate and begin with +, the sum equals 1/2:
{\displaystyle \sum _{n=1}^{\infty }(-1)^{n+1}\,\operatorname {sinc} (n)=\operatorname {sinc} (1)-\operatorname {sinc} (2)+\operatorname {sinc} (3)-\operatorname {sinc} (4)+\cdots ={\frac {1}{2}}.}
The alternating sums of the squares and cubes also equal 1/2:
{\displaystyle \sum _{n=1}^{\infty }(-1)^{n+1}\,\operatorname {sinc} ^{2}(n)=\operatorname {sinc} ^{2}(1)-\operatorname {sinc} ^{2}(2)+\operatorname {sinc} ^{2}(3)-\operatorname {sinc} ^{2}(4)+\cdots ={\frac {1}{2}},}
{\displaystyle \sum _{n=1}^{\infty }(-1)^{n+1}\,\operatorname {sinc} ^{3}(n)=\operatorname {sinc} ^{3}(1)-\operatorname {sinc} ^{3}(2)+\operatorname {sinc} ^{3}(3)-\operatorname {sinc} ^{3}(4)+\cdots ={\frac {1}{2}}.}
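These values are easy to confirm with partial sums (a numpy sketch; the first two series converge slowly, so a large cutoff is used):

```python
import numpy as np

n = np.arange(1, 2_000_001)
s = np.sin(n) / n                        # unnormalized sinc(n)
alt = np.where(n % 2 == 1, 1.0, -1.0)    # signs +, -, +, -, ...
print(s.sum(), (np.pi - 1) / 2)          # both ~ 1.07080
print((s**2).sum(), (np.pi - 1) / 2)
print((alt * s).sum(), 0.5)
print((alt * s**3).sum(), 0.5)
```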
== Series expansion ==
The Taylor series of the unnormalized sinc function can be obtained from that of the sine (which also yields its value of 1 at x = 0):
{\displaystyle {\frac {\sin x}{x}}=\sum _{n=0}^{\infty }{\frac {(-1)^{n}x^{2n}}{(2n+1)!}}=1-{\frac {x^{2}}{3!}}+{\frac {x^{4}}{5!}}-{\frac {x^{6}}{7!}}+\cdots }
The series converges for all x. The normalized version follows easily:
{\displaystyle {\frac {\sin \pi x}{\pi x}}=1-{\frac {\pi ^{2}x^{2}}{3!}}+{\frac {\pi ^{4}x^{4}}{5!}}-{\frac {\pi ^{6}x^{6}}{7!}}+\cdots }
Euler famously compared this series to the expansion of the infinite product form to solve the Basel problem.
== Higher dimensions ==
The product of 1-D sinc functions readily provides a multivariate sinc function for the square Cartesian grid (lattice): sincC(x, y) = sinc(x) sinc(y), whose Fourier transform is the indicator function of a square in the frequency space (i.e., the brick wall defined in 2-D space). The sinc function for a non-Cartesian lattice (e.g., a hexagonal lattice) is a function whose Fourier transform is the indicator function of the Brillouin zone of that lattice. For example, the sinc function for the hexagonal lattice has as its Fourier transform the indicator function of the unit hexagon in the frequency space. For a non-Cartesian lattice this function cannot be obtained by a simple tensor product. However, explicit formulas for the sinc function of the hexagonal, body-centered cubic, face-centered cubic and other higher-dimensional lattices can be derived using the geometric properties of Brillouin zones and their connection to zonotopes.
For example, a hexagonal lattice can be generated by the (integer) linear span of the vectors
{\displaystyle \mathbf {u} _{1}={\begin{bmatrix}{\frac {1}{2}}\\{\frac {\sqrt {3}}{2}}\end{bmatrix}}\quad {\text{and}}\quad \mathbf {u} _{2}={\begin{bmatrix}{\frac {1}{2}}\\-{\frac {\sqrt {3}}{2}}\end{bmatrix}}.}
Denoting
{\displaystyle {\boldsymbol {\xi }}_{1}={\tfrac {2}{3}}\mathbf {u} _{1},\quad {\boldsymbol {\xi }}_{2}={\tfrac {2}{3}}\mathbf {u} _{2},\quad {\boldsymbol {\xi }}_{3}=-{\tfrac {2}{3}}(\mathbf {u} _{1}+\mathbf {u} _{2}),\quad \mathbf {x} ={\begin{bmatrix}x\\y\end{bmatrix}},}
one can derive the sinc function for this hexagonal lattice as
{\displaystyle {\begin{aligned}\operatorname {sinc} _{\text{H}}(\mathbf {x} )={\tfrac {1}{3}}{\big (}&\cos \left(\pi {\boldsymbol {\xi }}_{1}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{2}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{3}\cdot \mathbf {x} \right)\\&{}+\cos \left(\pi {\boldsymbol {\xi }}_{2}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{3}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{1}\cdot \mathbf {x} \right)\\&{}+\cos \left(\pi {\boldsymbol {\xi }}_{3}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{1}\cdot \mathbf {x} \right)\operatorname {sinc} \left({\boldsymbol {\xi }}_{2}\cdot \mathbf {x} \right){\big )}.\end{aligned}}}
This construction can be used to design Lanczos windows for general multidimensional lattices.
== Sinhc ==
Some authors, by analogy, define the hyperbolic sine cardinal function:
{\displaystyle \mathrm {sinhc} (x)={\begin{cases}{\displaystyle {\frac {\sinh(x)}{x}},}&{\text{if }}x\neq 0\\{\displaystyle 1,}&{\text{if }}x=0\end{cases}}}
== See also ==
Anti-aliasing filter – Mathematical transformation reducing the damage caused by aliasing
Borwein integral – Type of mathematical integrals
Dirichlet integral – Integral of sin(x)/x from 0 to infinity
Lanczos resampling – Application of a mathematical formula
List of mathematical functions
Shannon wavelet
Sinc filter – Ideal low-pass filter or averaging filter
Sinc numerical methods
Trigonometric functions of matrices – Important functions in solving differential equations
Trigonometric integral – Special function defined by an integral
Whittaker–Shannon interpolation formula – Signal (re-)construction algorithm
Winkel tripel projection – Pseudoazimuthal compromise map projection (cartography)
== References ==
== Further reading ==
Stenger, Frank (1993). Numerical Methods Based on Sinc and Analytic Functions. Springer Series on Computational Mathematics. Vol. 20. Springer-Verlag New York, Inc. doi:10.1007/978-1-4612-2706-9. ISBN 9781461276371.
== External links ==
Weisstein, Eric W. "Sinc Function". MathWorld.
In mathematics, an integral is the continuous analog of a sum, which is used to calculate areas, volumes, and their generalizations. Integration, the process of computing an integral, is one of the two fundamental operations of calculus, the other being differentiation. Integration was initially used to solve problems in mathematics and physics, such as finding the area under a curve, or determining displacement from velocity. Usage of integration expanded to a wide variety of scientific fields thereafter.
A definite integral computes the signed area of the region in the plane that is bounded by the graph of a given function between two points in the real line. Conventionally, areas above the horizontal axis of the plane are positive while areas below are negative. Integrals also refer to the concept of an antiderivative, a function whose derivative is the given function; in this case, they are also called indefinite integrals. The fundamental theorem of calculus relates definite integration to differentiation and provides a method to compute the definite integral of a function when its antiderivative is known; differentiation and integration are inverse operations.
Although methods of calculating areas and volumes dated from ancient Greek mathematics, the principles of integration were formulated independently by Isaac Newton and Gottfried Wilhelm Leibniz in the late 17th century, who thought of the area under a curve as an infinite sum of rectangles of infinitesimal width. Bernhard Riemann later gave a rigorous definition of integrals, which is based on a limiting procedure that approximates the area of a curvilinear region by breaking the region into infinitesimally thin vertical slabs. In the early 20th century, Henri Lebesgue generalized Riemann's formulation by introducing what is now referred to as the Lebesgue integral; it is more general than Riemann's in the sense that a wider class of functions are Lebesgue-integrable.
Integrals may be generalized depending on the type of the function as well as the domain over which the integration is performed. For example, a line integral is defined for functions of two or more variables, and the interval of integration is replaced by a curve connecting two points in space. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space.
== History ==
=== Pre-calculus integration ===
The first documented systematic technique capable of determining integrals is the method of exhaustion of the ancient Greek astronomer Eudoxus and philosopher Democritus (ca. 370 BC), which sought to find areas and volumes by breaking them up into an infinite number of divisions for which the area or volume was known. This method was further developed and employed by Archimedes in the 3rd century BC and used to calculate the area of a circle, the surface area and volume of a sphere, area of an ellipse, the area under a parabola, the volume of a segment of a paraboloid of revolution, the volume of a segment of a hyperboloid of revolution, and the area of a spiral.
A similar method was independently developed in China around the 3rd century AD by Liu Hui, who used it to find the area of the circle. This method was later used in the 5th century by Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng to find the volume of a sphere.
In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965 – c. 1040 AD) derived a formula for the sum of fourth powers. Alhazen determined the equations to calculate the area enclosed by the curve represented by
{\displaystyle y=x^{k}} (which translates to the integral {\displaystyle \int x^{k}\,dx} in contemporary notation), for any given non-negative integer value of {\displaystyle k}. He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid.
The next significant advances in integral calculus did not begin to appear until the 17th century. At this time, the work of Cavalieri with his method of indivisibles, and work by Fermat, began to lay the foundations of modern calculus, with Cavalieri computing the integrals of xn up to degree n = 9 in Cavalieri's quadrature formula. The case n = −1 required the invention of a function, the hyperbolic logarithm, achieved by quadrature of the hyperbola in 1647.
Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers and fractional powers.
=== Leibniz and Newton ===
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Leibniz and Newton. The theorem demonstrates a connection between integration and differentiation. This connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Leibniz and Newton developed. Given the name infinitesimal calculus, it allowed for precise analysis of functions with continuous domains. This framework eventually became modern calculus, whose notation for integrals is drawn directly from the work of Leibniz.
=== Formalization ===
While Newton and Leibniz provided a systematic approach to integration, their work lacked a degree of rigour. Bishop Berkeley memorably attacked the vanishing increments used by Newton, calling them "ghosts of departed quantities". Calculus acquired a firmer footing with the development of limits. Integration was first rigorously formalized, using limits, by Riemann. Although all bounded piecewise continuous functions are Riemann-integrable on a bounded interval, subsequently more general functions were considered—particularly in the context of Fourier analysis—to which Riemann's definition does not apply, and Lebesgue formulated a different definition of integral, founded in measure theory (a subfield of real analysis). Other definitions of integral, extending Riemann's and Lebesgue's approaches, were proposed. These approaches based on the real number system are the ones most common today, but alternative approaches exist, such as a definition of integral as the standard part of an infinite Riemann sum, based on the hyperreal number system.
=== Historical notation ===
The notation for the indefinite integral was introduced by Gottfried Wilhelm Leibniz in 1675. He adapted the integral symbol, ∫, from the letter ſ (long s), standing for summa (written as ſumma; Latin for "sum" or "total"). The modern notation for the definite integral, with limits above and below the integral sign, was first used by Joseph Fourier in Mémoires of the French Academy around 1819–1820, reprinted in his book of 1822.
Isaac Newton used a small vertical bar above a variable to indicate integration, or placed the variable inside a box. The vertical bar was easily confused with ẋ or x′, which are used to indicate differentiation, and the box notation was difficult for printers to reproduce, so these notations were not widely adopted.
=== First use of the term ===
The term was first printed in Latin by Jacob Bernoulli in 1690: "Ergo et horum Integralia aequantur".
== Terminology and notation ==
In general, the integral of a real-valued function f(x) with respect to a real variable x on an interval [a, b] is written as
{\displaystyle \int _{a}^{b}f(x)\,dx.}
The integral sign ∫ represents integration. The symbol dx, called the differential of the variable x, indicates that the variable of integration is x. The function f(x) is called the integrand, the points a and b are called the limits (or bounds) of integration, and the integral is said to be over the interval [a, b], called the interval of integration.
A function is said to be integrable if its integral over its domain is finite. If limits are specified, the integral is called a definite integral.
When the limits are omitted, as in
{\displaystyle \int f(x)\,dx,}
the integral is called an indefinite integral, which represents a class of functions (the antiderivative) whose derivative is the integrand. The fundamental theorem of calculus relates the evaluation of definite integrals to indefinite integrals. There are several extensions of the notation for integrals to encompass integration on unbounded domains and/or in multiple dimensions (see later sections of this article).
In advanced settings, it is not uncommon to leave out dx when only the simple Riemann integral is being used, or the exact type of integral is immaterial. For instance, one might write
{\textstyle \int _{a}^{b}(c_{1}f+c_{2}g)=c_{1}\int _{a}^{b}f+c_{2}\int _{a}^{b}g}
to express the linearity of the integral, a property shared by the Riemann integral and all generalizations thereof.
== Interpretations ==
Integrals appear in many practical situations. For instance, from the length, width and depth of a swimming pool which is rectangular with a flat bottom, one can determine the volume of water it can contain, the area of its surface, and the length of its edge. But if it is oval with a rounded bottom, integrals are required to find exact and rigorous values for these quantities. In each case, one may divide the sought quantity into infinitely many infinitesimal pieces, then sum the pieces to achieve an accurate approximation.
As another example, to find the area of the region bounded by the graph of the function f(x) = {\textstyle {\sqrt {x}}} between x = 0 and x = 1, one can divide the interval into five pieces (0, 1/5, 2/5, ..., 1), then construct rectangles using the right end height of each piece (thus √0, √1/5, √2/5, ..., √1) and sum their areas to get the approximation
{\displaystyle \textstyle {\sqrt {\frac {1}{5}}}\left({\frac {1}{5}}-0\right)+{\sqrt {\frac {2}{5}}}\left({\frac {2}{5}}-{\frac {1}{5}}\right)+\cdots +{\sqrt {\frac {5}{5}}}\left({\frac {5}{5}}-{\frac {4}{5}}\right)\approx 0.7497,}
which is larger than the exact value. Alternatively, when replacing these subintervals by ones with the left end height of each piece, the approximation one gets is too low: with twelve such subintervals the approximated area is only 0.6203. However, when the number of pieces increases to infinity, it will reach a limit which is the exact value of the area sought (in this case, 2/3). One writes
{\displaystyle \int _{0}^{1}{\sqrt {x}}\,dx={\frac {2}{3}},}
which means 2/3 is the result of a weighted sum of function values, √x, multiplied by the infinitesimal step widths, denoted by dx, on the interval [0, 1].
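The passage above is exactly a Riemann sum, which the following sketch reproduces (numpy assumed; the helper name riemann_sum is ours):

```python
import numpy as np

def riemann_sum(f, a, b, n, rule="right"):
    # approximate the integral of f over [a, b] with n equal subintervals,
    # tagging each subinterval at its left or right endpoint
    edges = np.linspace(a, b, n + 1)
    tags = edges[1:] if rule == "right" else edges[:-1]
    return float(np.sum(f(tags) * (edges[1:] - edges[:-1])))

print(riemann_sum(np.sqrt, 0, 1, 5))             # ~0.7497, too high
print(riemann_sum(np.sqrt, 0, 1, 12, "left"))    # ~0.6203, too low
print(riemann_sum(np.sqrt, 0, 1, 1_000_000))     # -> 2/3
```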
== Formal definitions ==
There are many ways of formally defining an integral, not all of which are equivalent. The differences exist mostly to deal with differing special cases which may not be integrable under other definitions, but are also occasionally for pedagogical reasons. The most commonly used definitions are Riemann integrals and Lebesgue integrals.
=== Riemann integral ===
The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. A tagged partition of a closed interval [a, b] on the real line is a finite sequence
{\displaystyle a=x_{0}\leq t_{1}\leq x_{1}\leq t_{2}\leq x_{2}\leq \cdots \leq x_{n-1}\leq t_{n}\leq x_{n}=b.\,\!}
This partitions the interval [a, b] into n sub-intervals [xi−1, xi] indexed by i, each of which is "tagged" with a specific point ti ∈ [xi−1, xi]. A Riemann sum of a function f with respect to such a tagged partition is defined as
{\displaystyle \sum _{i=1}^{n}f(t_{i})\,\Delta _{i};}
thus each term of the sum is the area of a rectangle with height equal to the function value at the chosen point of the given sub-interval, and width the same as the width of sub-interval, Δi = xi−xi−1. The mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, maxi=1...n Δi. The Riemann integral of a function f over the interval [a, b] is equal to S if:
For all {\displaystyle \varepsilon >0} there exists {\displaystyle \delta >0} such that, for any tagged partition of {\displaystyle [a,b]} with mesh less than {\displaystyle \delta },
{\displaystyle \left|S-\sum _{i=1}^{n}f(t_{i})\,\Delta _{i}\right|<\varepsilon .}
When the chosen tags are the maximum (respectively, minimum) value of the function in each interval, the Riemann sum becomes an upper (respectively, lower) Darboux sum, suggesting the close connection between the Riemann integral and the Darboux integral.
=== Lebesgue integral ===
It is often of interest, both in theory and applications, to be able to pass to the limit under the integral. For instance, a sequence of functions can frequently be constructed that approximate, in a suitable sense, the solution to a problem. Then the integral of the solution function should be the limit of the integrals of the approximations. However, many functions that can be obtained as limits are not Riemann-integrable, and so such limit theorems do not hold with the Riemann integral. Therefore, it is of great importance to have a definition of the integral that allows a wider class of functions to be integrated.
Such an integral is the Lebesgue integral, that exploits the following fact to enlarge the class of integrable functions: if the values of a function are rearranged over the domain, the integral of a function should remain the same. Thus Henri Lebesgue introduced the integral bearing his name, explaining this integral thus in a letter to Paul Montel:
I have to pay a certain sum, which I have collected in my pocket. I take the bills and coins out of my pocket and give them to the creditor in the order I find them until I have reached the total sum. This is the Riemann integral. But I can proceed differently. After I have taken all the money out of my pocket I order the bills and coins according to identical values and then I pay the several heaps one after the other to the creditor. This is my integral.
As Folland puts it, "To compute the Riemann integral of f, one partitions the domain [a, b] into subintervals", while in the Lebesgue integral, "one is in effect partitioning the range of f ". The definition of the Lebesgue integral thus begins with a measure, μ. In the simplest case, the Lebesgue measure μ(A) of an interval A = [a, b] is its width, b − a, so that the Lebesgue integral agrees with the (proper) Riemann integral when both exist. In more complicated cases, the sets being measured can be highly fragmented, with no continuity and no resemblance to intervals.
Using the "partitioning the range of f " philosophy, the integral of a non-negative function f : R → R should be the sum over t of the areas between a thin horizontal strip between y = t and y = t + dt. This area is just μ{ x : f(x) > t} dt. Let f∗(t) = μ{ x : f(x) > t }. The Lebesgue integral of f is then defined by
{\displaystyle \int f=\int _{0}^{\infty }f^{*}(t)\,dt}
where the integral on the right is an ordinary improper Riemann integral (f∗ is a strictly decreasing positive function, and therefore has a well-defined improper Riemann integral). For a suitable class of functions (the measurable functions) this defines the Lebesgue integral.
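For a concrete instance of the "partitioning the range" formula, take f(x) = x on [0, 1], so that f∗(t) = μ{x : f(x) > t} = 1 − t for 0 ≤ t ≤ 1 (a short numerical sketch using the trapezoid rule):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100_001)
f_star = 1.0 - t            # Lebesgue measure of {x in [0, 1] : x > t}
dt = t[1] - t[0]
integral = dt * (f_star.sum() - 0.5 * (f_star[0] + f_star[-1]))
print(integral)             # 0.5, matching the Riemann integral of x on [0, 1]
```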
A general measurable function f is Lebesgue-integrable if the sum of the absolute values of the areas of the regions between the graph of f and the x-axis is finite:
{\displaystyle \int _{E}|f|\,d\mu <+\infty .}
In that case, the integral is, as in the Riemannian case, the difference between the area above the x-axis and the area below the x-axis:
{\displaystyle \int _{E}f\,d\mu =\int _{E}f^{+}\,d\mu -\int _{E}f^{-}\,d\mu }
where
{\displaystyle {\begin{alignedat}{3}&f^{+}(x)&&{}={}\max\{f(x),0\}&&{}={}{\begin{cases}f(x),&{\text{if }}f(x)>0,\\0,&{\text{otherwise,}}\end{cases}}\\&f^{-}(x)&&{}={}\max\{-f(x),0\}&&{}={}{\begin{cases}-f(x),&{\text{if }}f(x)<0,\\0,&{\text{otherwise.}}\end{cases}}\end{alignedat}}}
=== Other integrals ===
Although the Riemann and Lebesgue integrals are the most widely used definitions of the integral, a number of others exist, including:
The Darboux integral, which is defined by Darboux sums (restricted Riemann sums) yet is equivalent to the Riemann integral. A function is Darboux-integrable if and only if it is Riemann-integrable. Darboux integrals have the advantage of being easier to define than Riemann integrals.
The Riemann–Stieltjes integral, an extension of the Riemann integral which integrates with respect to a function as opposed to a variable.
The Lebesgue–Stieltjes integral, further developed by Johann Radon, which generalizes both the Riemann–Stieltjes and Lebesgue integrals.
The Daniell integral, which subsumes the Lebesgue integral and Lebesgue–Stieltjes integral without depending on measures.
The Haar integral, used for integration on locally compact topological groups, introduced by Alfréd Haar in 1933.
The Henstock–Kurzweil integral, variously defined by Arnaud Denjoy, Oskar Perron, and (most elegantly, as the gauge integral) Jaroslav Kurzweil, and developed by Ralph Henstock.
The Khinchin integral, named after Aleksandr Khinchin.
The Itô integral and Stratonovich integral, which define integration with respect to semimartingales such as Brownian motion.
The Young integral, which is a kind of Riemann–Stieltjes integral with respect to certain functions of unbounded variation.
The rough path integral, which is defined for functions equipped with some additional "rough path" structure and generalizes stochastic integration against both semimartingales and processes such as the fractional Brownian motion.
The Choquet integral, a subadditive or superadditive integral created by the French mathematician Gustave Choquet in 1953.
The Bochner integral, a generalization of the Lebesgue integral to functions that take values in a Banach space.
== Properties ==
=== Linearity ===
The collection of Riemann-integrable functions on a closed interval [a, b] forms a vector space under the operations of pointwise addition and multiplication by a scalar, and the operation of integration
{\displaystyle f\mapsto \int _{a}^{b}f(x)\;dx}
is a linear functional on this vector space. Thus, the collection of integrable functions is closed under taking linear combinations, and the integral of a linear combination is the linear combination of the integrals:
{\displaystyle \int _{a}^{b}(\alpha f+\beta g)(x)\,dx=\alpha \int _{a}^{b}f(x)\,dx+\beta \int _{a}^{b}g(x)\,dx.\,}
Similarly, the set of real-valued Lebesgue-integrable functions on a given measure space E with measure μ is closed under taking linear combinations and hence form a vector space, and the Lebesgue integral
{\displaystyle f\mapsto \int _{E}f\,d\mu }
is a linear functional on this vector space, so that:
{\displaystyle \int _{E}(\alpha f+\beta g)\,d\mu =\alpha \int _{E}f\,d\mu +\beta \int _{E}g\,d\mu .}
More generally, consider the vector space of all measurable functions on a measure space (E,μ), taking values in a locally compact complete topological vector space V over a locally compact topological field K, f : E → V. Then one may define an abstract integration map assigning to each function f an element of V or the symbol ∞,
{\displaystyle f\mapsto \int _{E}f\,d\mu ,\,}
that is compatible with linear combinations. In this situation, the linearity holds for the subspace of functions whose integral is an element of V (i.e. "finite"). The most important special cases arise when K is R, C, or a finite extension of the field Qp of p-adic numbers, and V is a finite-dimensional vector space over K, and when K = C and V is a complex Hilbert space.
Linearity, together with some natural continuity properties and normalization for a certain class of "simple" functions, may be used to give an alternative definition of the integral. This is the approach of Daniell for the case of real-valued functions on a set X, generalized by Nicolas Bourbaki to functions with values in a locally compact topological vector space. See Hildebrandt 1953 for an axiomatic characterization of the integral.
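As a concrete illustration of linearity (an added sketch, not from the original article; it assumes NumPy and SciPy are available, and the functions f, g and coefficients are arbitrary choices):

```python
# Numerical sanity check that integration is a linear functional.
from scipy.integrate import quad
import numpy as np

f = np.sin          # arbitrary integrable function
g = np.exp          # arbitrary integrable function
alpha, beta = 2.0, -3.0

lhs, _ = quad(lambda x: alpha * f(x) + beta * g(x), 0.0, 1.0)
rhs = alpha * quad(f, 0.0, 1.0)[0] + beta * quad(g, 0.0, 1.0)[0]
# integral of the linear combination = linear combination of the integrals
assert abs(lhs - rhs) < 1e-10
```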
=== Inequalities ===
A number of general inequalities hold for Riemann-integrable functions defined on a closed and bounded interval [a, b] and can be generalized to other notions of integral (Lebesgue and Daniell).
Upper and lower bounds. An integrable function f on [a, b], is necessarily bounded on that interval. Thus there are real numbers m and M so that m ≤ f (x) ≤ M for all x in [a, b]. Since the lower and upper sums of f over [a, b] are therefore bounded by, respectively, m(b − a) and M(b − a), it follows that
{\displaystyle m(b-a)\leq \int _{a}^{b}f(x)\,dx\leq M(b-a).}
Inequalities between functions. If f(x) ≤ g(x) for each x in [a, b] then each of the upper and lower sums of f is bounded above by the upper and lower sums, respectively, of g. Thus
{\displaystyle \int _{a}^{b}f(x)\,dx\leq \int _{a}^{b}g(x)\,dx.}
This is a generalization of the above inequalities, as M(b − a) is the integral of the constant function with value M over [a, b]. In addition, if the inequality between functions is strict, then the inequality between integrals is also strict. That is, if f(x) < g(x) for each x in [a, b], then
{\displaystyle \int _{a}^{b}f(x)\,dx<\int _{a}^{b}g(x)\,dx.}
Subintervals. If [c, d] is a subinterval of [a, b] and f (x) is non-negative for all x, then
{\displaystyle \int _{c}^{d}f(x)\,dx\leq \int _{a}^{b}f(x)\,dx.}
Products and absolute values of functions. If f and g are two functions, then we may consider their pointwise products and powers, and absolute values:
{\displaystyle (fg)(x)=f(x)g(x),\;f^{2}(x)=(f(x))^{2},\;|f|(x)=|f(x)|.}
If f is Riemann-integrable on [a, b] then the same is true for |f|, and
{\displaystyle \left|\int _{a}^{b}f(x)\,dx\right|\leq \int _{a}^{b}|f(x)|\,dx.}
Moreover, if f and g are both Riemann-integrable then fg is also Riemann-integrable, and
{\displaystyle \left(\int _{a}^{b}(fg)(x)\,dx\right)^{2}\leq \left(\int _{a}^{b}f(x)^{2}\,dx\right)\left(\int _{a}^{b}g(x)^{2}\,dx\right).}
This inequality, known as the Cauchy–Schwarz inequality, plays a prominent role in Hilbert space theory, where the left hand side is interpreted as the inner product of two square-integrable functions f and g on the interval [a, b].
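The following Python sketch (an addition for illustration; the functions sin and cos on [0, π] are arbitrary choices, and SciPy is assumed) checks the Cauchy–Schwarz inequality numerically:

```python
# Verify (∫ fg)^2 <= (∫ f^2)(∫ g^2) for one concrete pair of functions.
from scipy.integrate import quad
import numpy as np

a, b = 0.0, np.pi
f, g = np.sin, np.cos

lhs = quad(lambda x: f(x) * g(x), a, b)[0] ** 2
rhs = quad(lambda x: f(x) ** 2, a, b)[0] * quad(lambda x: g(x) ** 2, a, b)[0]
assert lhs <= rhs + 1e-12   # here lhs = 0 and rhs = (pi/2)^2
```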
Hölder's inequality. Suppose that p and q are two real numbers, 1 ≤ p, q ≤ ∞ with 1/p + 1/q = 1, and f and g are two Riemann-integrable functions. Then the functions |f|p and |g|q are also integrable and the following Hölder's inequality holds:
{\displaystyle \left|\int f(x)g(x)\,dx\right|\leq \left(\int \left|f(x)\right|^{p}\,dx\right)^{1/p}\left(\int \left|g(x)\right|^{q}\,dx\right)^{1/q}.}
For p = q = 2, Hölder's inequality becomes the Cauchy–Schwarz inequality.
Minkowski inequality. Suppose that p ≥ 1 is a real number and f and g are Riemann-integrable functions. Then | f |p, | g |p and | f + g |p are also Riemann-integrable and the following Minkowski inequality holds:
{\displaystyle \left(\int \left|f(x)+g(x)\right|^{p}\,dx\right)^{1/p}\leq \left(\int \left|f(x)\right|^{p}\,dx\right)^{1/p}+\left(\int \left|g(x)\right|^{p}\,dx\right)^{1/p}.}
An analogue of this inequality for Lebesgue integral is used in construction of Lp spaces.
=== Conventions ===
In this section, f is a real-valued Riemann-integrable function. The integral
{\displaystyle \int _{a}^{b}f(x)\,dx}
over an interval [a, b] is defined if a < b. This means that the upper and lower sums of the function f are evaluated on a partition a = x0 ≤ x1 ≤ . . . ≤ xn = b whose values xi are increasing. Geometrically, this signifies that integration takes place "left to right", evaluating f within intervals [x i , x i +1] where an interval with a higher index lies to the right of one with a lower index. The values a and b, the end-points of the interval, are called the limits of integration of f. Integrals can also be defined if a > b:
{\displaystyle \int _{a}^{b}f(x)\,dx=-\int _{b}^{a}f(x)\,dx.}
With a = b, this implies:
{\displaystyle \int _{a}^{a}f(x)\,dx=0.}
The first convention is necessary in consideration of taking integrals over subintervals of [a, b]; the second says that an integral taken over a degenerate interval, or a point, should be zero. One reason for the first convention is that the integrability of f on an interval [a, b] implies that f is integrable on any subinterval [c, d], but in particular integrals have the property that if c is any element of [a, b], then:
{\displaystyle \int _{a}^{b}f(x)\,dx=\int _{a}^{c}f(x)\,dx+\int _{c}^{b}f(x)\,dx.}
With the first convention, the resulting relation
{\displaystyle {\begin{aligned}\int _{a}^{c}f(x)\,dx&{}=\int _{a}^{b}f(x)\,dx-\int _{c}^{b}f(x)\,dx\\&{}=\int _{a}^{b}f(x)\,dx+\int _{b}^{c}f(x)\,dx\end{aligned}}}
is then well-defined for any cyclic permutation of a, b, and c.
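Both conventions can be checked numerically; the sketch below (an illustration with an arbitrary integrand, assuming SciPy) confirms additivity over subintervals and the sign flip when the limits are reversed:

```python
# Check additivity over [a, c] and [c, b], and the orientation convention.
from scipy.integrate import quad

f = lambda x: x ** 2          # arbitrary integrand
a, c, b = 0.0, 1.0, 2.0

total = quad(f, a, b)[0]
split = quad(f, a, c)[0] + quad(f, c, b)[0]
assert abs(total - split) < 1e-12            # additivity
assert abs(quad(f, b, a)[0] + total) < 1e-12  # reversed limits flip the sign
```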
== Fundamental theorem of calculus ==
The fundamental theorem of calculus is the statement that differentiation and integration are inverse operations: if a continuous function is first integrated and then differentiated, the original function is retrieved. An important consequence, sometimes called the second fundamental theorem of calculus, allows one to compute integrals by using an antiderivative of the function to be integrated.
=== First theorem ===
Let f be a continuous real-valued function defined on a closed interval [a, b]. Let F be the function defined, for all x in [a, b], by
{\displaystyle F(x)=\int _{a}^{x}f(t)\,dt.}
Then, F is continuous on [a, b], differentiable on the open interval (a, b), and
{\displaystyle F'(x)=f(x)}
for all x in (a, b).
=== Second theorem ===
Let f be a real-valued function defined on a closed interval [a, b] that admits an antiderivative F on [a, b]. That is, f and F are functions such that for all x in [a, b],
{\displaystyle f(x)=F'(x).}
If f is integrable on [a, b] then
{\displaystyle \int _{a}^{b}f(x)\,dx=F(b)-F(a).}
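The second theorem in action, as a small Python sketch (an added illustration; F(x) = −cos x is an antiderivative of f(x) = sin x, and SciPy is used only to cross-check):

```python
# Compute an integral from an antiderivative and compare with quadrature.
import math
from scipy.integrate import quad

a, b = 0.0, math.pi
exact = (-math.cos(b)) - (-math.cos(a))   # F(b) - F(a) = 2
numeric = quad(math.sin, a, b)[0]
assert abs(exact - numeric) < 1e-10
```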
== Extensions ==
=== Improper integrals ===
A "proper" Riemann integral assumes the integrand is defined and finite on a closed and bounded interval, bracketed by the limits of integration. An improper integral occurs when one or more of these conditions is not satisfied. In some cases such integrals may be defined by considering the limit of a sequence of proper Riemann integrals on progressively larger intervals.
If the interval is unbounded, for instance at its upper end, then the improper integral is the limit as that endpoint goes to infinity:
{\displaystyle \int _{a}^{\infty }f(x)\,dx=\lim _{b\to \infty }\int _{a}^{b}f(x)\,dx.}
If the integrand is only defined or finite on a half-open interval, for instance (a, b], then again a limit may provide a finite result:
{\displaystyle \int _{a}^{b}f(x)\,dx=\lim _{\varepsilon \to 0}\int _{a+\varepsilon }^{b}f(x)\,dx.}
That is, the improper integral is the limit of proper integrals as one endpoint of the interval of integration approaches either a specified real number, or ∞, or −∞. In more complicated cases, limits are required at both endpoints, or at interior points.
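A sketch of both kinds of improper integral (an added illustration assuming SciPy and NumPy; the integrands and truncation points are arbitrary choices):

```python
# Improper integrals as limits of proper Riemann integrals.
import numpy as np
from scipy.integrate import quad

# Unbounded interval: integral of e^(-x) on [0, inf) equals 1.
direct = quad(lambda x: np.exp(-x), 0.0, np.inf)[0]
by_limit = quad(lambda x: np.exp(-x), 0.0, 50.0)[0]   # large b approximates the limit
assert abs(direct - 1.0) < 1e-9 and abs(by_limit - 1.0) < 1e-9

# Integrand singular at the left endpoint: integral of 1/sqrt(x) on (0, 1] is 2.
eps_values = [10.0 ** (-k) for k in range(1, 8)]
approx = [quad(lambda x: x ** -0.5, eps, 1.0)[0] for eps in eps_values]
assert abs(approx[-1] - 2.0) < 1e-3   # converges like 2*sqrt(eps)
```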
=== Multiple integration ===
Just as the definite integral of a positive function of one variable represents the area of the region between the graph of the function and the x-axis, the double integral of a positive function of two variables represents the volume of the region between the surface defined by the function and the plane that contains its domain. For example, a function in two dimensions depends on two real variables, x and y, and the integral of a function f over the rectangle R given as the Cartesian product of two intervals
{\displaystyle R=[a,b]\times [c,d]}
can be written
{\displaystyle \int _{R}f(x,y)\,dA}
where the differential dA indicates that integration is taken with respect to area. This double integral can be defined using Riemann sums, and represents the (signed) volume under the graph of z = f(x,y) over the domain R. Under suitable conditions (e.g., if f is continuous), Fubini's theorem states that this integral can be expressed as an equivalent iterated integral
{\displaystyle \int _{a}^{b}\left[\int _{c}^{d}f(x,y)\,dy\right]\,dx.}
This reduces the problem of computing a double integral to computing one-dimensional integrals. Because of this, another notation for the integral over R uses a double integral sign:
{\displaystyle \iint _{R}f(x,y)\,dA.}
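Fubini's theorem can be checked numerically; the sketch below (an added illustration with an arbitrary integrand, assuming SciPy) compares the double integral with the iterated one-dimensional integrals:

```python
# Double integral over a rectangle vs. the iterated integral (Fubini).
from scipy.integrate import quad, dblquad

f = lambda x, y: x * y ** 2
a, b, c, d = 0.0, 1.0, 0.0, 2.0

# dblquad integrates func(y, x) with y as the inner variable: note the order.
double = dblquad(lambda y, x: f(x, y), a, b, c, d)[0]
iterated = quad(lambda x: quad(lambda y: f(x, y), c, d)[0], a, b)[0]
assert abs(double - iterated) < 1e-10   # both equal (1/2)*(8/3) = 4/3
```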
Integration over more general domains is possible. The integral of a function f, with respect to volume, over an n-dimensional region D of
{\displaystyle \mathbb {R} ^{n}}
is denoted by symbols such as:
{\displaystyle \int _{D}f(\mathbf {x} )d^{n}\mathbf {x} \ =\int _{D}f\,dV.}
=== Line integrals and surface integrals ===
The concept of an integral can be extended to more general domains of integration, such as curved lines and surfaces inside higher-dimensional spaces. Such integrals are known as line integrals and surface integrals respectively. These have important applications in physics, as when dealing with vector fields.
A line integral (sometimes called a path integral) is an integral where the function to be integrated is evaluated along a curve. Various different line integrals are in use. In the case of a closed curve it is also called a contour integral.
The function to be integrated may be a scalar field or a vector field. The value of the line integral is the sum of values of the field at all points on the curve, weighted by some scalar function on the curve (commonly arc length or, for a vector field, the scalar product of the vector field with a differential vector in the curve). This weighting distinguishes the line integral from simpler integrals defined on intervals. Many simple formulas in physics have natural continuous analogs in terms of line integrals; for example, the fact that work is equal to force, F, multiplied by displacement, s, may be expressed (in terms of vector quantities) as:
{\displaystyle W=\mathbf {F} \cdot \mathbf {s} .}
For an object moving along a path C in a vector field F such as an electric field or gravitational field, the total work done by the field on the object is obtained by summing up the differential work done in moving from s to s + ds. This gives the line integral
{\displaystyle W=\int _{C}\mathbf {F} \cdot d\mathbf {s} .}
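In practice such a line integral is computed by parametrizing the path. The following sketch (an added illustration; the field F = (y, x) and the quarter-circle path are arbitrary choices, with NumPy and SciPy assumed) evaluates W = ∫ F(r(t)) · r′(t) dt:

```python
# Work done by a vector field along a parametrized curve.
import numpy as np
from scipy.integrate import quad

F = lambda x, y: np.array([y, x])                 # example vector field
r = lambda t: np.array([np.cos(t), np.sin(t)])    # path C: quarter unit circle
dr = lambda t: np.array([-np.sin(t), np.cos(t)])  # r'(t)

integrand = lambda t: F(*r(t)) @ dr(t)            # F(r(t)) . r'(t)
W = quad(integrand, 0.0, np.pi / 2)[0]
# F = (y, x) is the gradient of xy, so W = xy(end) - xy(start) = 0 - 0 = 0.
assert abs(W) < 1e-12
```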
A surface integral generalizes double integrals to integration over a surface (which may be a curved set in space); it can be thought of as the double integral analog of the line integral. The function to be integrated may be a scalar field or a vector field. The value of the surface integral is the sum of the field at all points on the surface. This can be achieved by splitting the surface into surface elements, which provide the partitioning for Riemann sums.
For an example of applications of surface integrals, consider a vector field v on a surface S; that is, for each point x in S, v(x) is a vector. Imagine that a fluid flows through S, such that v(x) determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through S in unit amount of time. To find the flux, one needs to take the dot product of v with the unit surface normal to S at each point, which gives a scalar field that is integrated over the surface:
{\displaystyle \int _{S}{\mathbf {v} }\cdot \,d{\mathbf {S} }.}
The fluid flux in this example may be from a physical fluid such as water or air, or from electrical or magnetic flux. Thus surface integrals have applications in physics, particularly with the classical theory of electromagnetism.
=== Contour integrals ===
In complex analysis, the integrand is a complex-valued function of a complex variable z instead of a real function of a real variable x. When a complex function is integrated along a curve {\displaystyle \gamma } in the complex plane, the integral is denoted as follows
{\displaystyle \int _{\gamma }f(z)\,dz.}
This is known as a contour integral.
=== Integrals of differential forms ===
A differential form is a mathematical concept in the fields of multivariable calculus, differential topology, and tensors. Differential forms are organized by degree. For example, a one-form is a weighted sum of the differentials of the coordinates, such as:
{\displaystyle E(x,y,z)\,dx+F(x,y,z)\,dy+G(x,y,z)\,dz}
where E, F, G are functions in three dimensions. A differential one-form can be integrated over an oriented path, and the resulting integral is just another way of writing a line integral. Here the basic differentials dx, dy, dz measure infinitesimal oriented lengths parallel to the three coordinate axes.
A differential two-form is a sum of the form
{\displaystyle G(x,y,z)\,dx\wedge dy+E(x,y,z)\,dy\wedge dz+F(x,y,z)\,dz\wedge dx.}
Here the basic two-forms
{\displaystyle dx\wedge dy,dz\wedge dx,dy\wedge dz}
measure oriented areas parallel to the coordinate two-planes. The symbol {\displaystyle \wedge }
denotes the wedge product, which is similar to the cross product in the sense that the wedge product of two forms representing oriented lengths represents an oriented area. A two-form can be integrated over an oriented surface, and the resulting integral is equivalent to the surface integral giving the flux of
{\displaystyle E\mathbf {i} +F\mathbf {j} +G\mathbf {k} }.
Unlike the cross product and three-dimensional vector calculus, the wedge product and the calculus of differential forms make sense in arbitrary dimension and on more general manifolds (curves, surfaces, and their higher-dimensional analogs). The exterior derivative plays the role of the gradient and curl of vector calculus, and Stokes' theorem simultaneously generalizes the three theorems of vector calculus: the divergence theorem, Green's theorem, and the Kelvin–Stokes theorem.
=== Summations ===
The discrete equivalent of integration is summation. Summations and integrals can be put on the same foundations using the theory of Lebesgue integrals or time-scale calculus.
=== Functional integrals ===
An integration that is performed not over a variable (or, in physics, over a space or time dimension), but over a space of functions, is referred to as a functional integral.
== Applications ==
Integrals are used extensively in many areas. For example, in probability theory, integrals are used to determine the probability of some random variable falling within a certain range. Moreover, the integral under an entire probability density function must equal 1, which provides a test of whether a function with no negative values could be a density function or not.
Integrals can be used for computing the area of a two-dimensional region that has a curved boundary, as well as computing the volume of a three-dimensional object that has a curved boundary. The area of a two-dimensional region can be calculated using the aforementioned definite integral. The volume of a three-dimensional object such as a disc or washer can be computed by disc integration using the equation for the volume of a cylinder,
{\displaystyle \pi r^{2}h}, where {\displaystyle r} is the radius. In the case of a simple disc created by rotating a curve about the x-axis, the radius is given by f(x), and its height is the differential dx. Using an integral with bounds a and b, the volume of the disc is equal to:
{\displaystyle \pi \int _{a}^{b}f^{2}(x)\,dx.}
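For instance, rotating f(x) = √(r² − x²) about the x-axis sweeps out a sphere, and the disc-integration formula recovers its volume. A minimal Python sketch of this (an added illustration, assuming SciPy):

```python
# Volume of a sphere of radius r by disc integration: pi * integral of f(x)^2.
import math
from scipy.integrate import quad

r = 2.0
f = lambda x: math.sqrt(max(r * r - x * x, 0.0))   # semicircle profile
volume = math.pi * quad(lambda x: f(x) ** 2, -r, r)[0]
assert abs(volume - 4.0 / 3.0 * math.pi * r ** 3) < 1e-9
```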
Integrals are also used in physics, in areas like kinematics to find quantities like displacement, time, and velocity. For example, in rectilinear motion, the displacement of an object over the time interval
{\displaystyle [a,b]}
is given by
{\displaystyle x(b)-x(a)=\int _{a}^{b}v(t)\,dt,}
where {\displaystyle v(t)}
is the velocity expressed as a function of time. The work done by a force
{\displaystyle F(x)}
(given as a function of position) from an initial position
{\displaystyle A}
to a final position
{\displaystyle B}
is:
{\displaystyle W_{A\rightarrow B}=\int _{A}^{B}F(x)\,dx.}
Integrals are also used in thermodynamics, where thermodynamic integration is used to calculate the difference in free energy between two given states.
== Computation ==
=== Analytical ===
The most basic technique for computing definite integrals of one real variable is based on the fundamental theorem of calculus. Let f(x) be the function of x to be integrated over a given interval [a, b]. Then, find an antiderivative of f; that is, a function F such that F′ = f on the interval. Provided the integrand and integral have no singularities on the path of integration, by the fundamental theorem of calculus,
{\displaystyle \int _{a}^{b}f(x)\,dx=F(b)-F(a).}
Sometimes it is necessary to use one of the many techniques that have been developed to evaluate integrals. Most of these techniques rewrite one integral as a different one which is hopefully more tractable. Techniques include integration by substitution, integration by parts, integration by trigonometric substitution, and integration by partial fractions.
Alternative methods exist to compute more complex integrals. Many nonelementary integrals can be expanded in a Taylor series and integrated term by term. Occasionally, the resulting infinite series can be summed analytically. The method of convolution using Meijer G-functions can also be used, assuming that the integrand can be written as a product of Meijer G-functions. There are also many less common ways of calculating definite integrals; for instance, Parseval's identity can be used to transform an integral over a rectangular region into an infinite sum. Occasionally, an integral can be evaluated by a trick; for an example of this, see Gaussian integral.
Computations of volumes of solids of revolution can usually be done with disk integration or shell integration.
Specific results which have been worked out by various techniques are collected in the list of integrals.
=== Symbolic ===
Many problems in mathematics, physics, and engineering involve integration where an explicit formula for the integral is desired. Extensive tables of integrals have been compiled and published over the years for this purpose. With the spread of computers, many professionals, educators, and students have turned to computer algebra systems that are specifically designed to perform difficult or tedious tasks, including integration. Symbolic integration has been one of the motivations for the development of the first such systems, like Macsyma and Maple.
A major mathematical difficulty in symbolic integration is that in many cases, a relatively simple function does not have an integral that can be expressed in closed form involving only elementary functions, including rational and exponential functions, logarithm, trigonometric functions and inverse trigonometric functions, and the operations of multiplication and composition. The Risch algorithm provides a general criterion to determine whether the antiderivative of an elementary function is elementary and to compute the integral if it is elementary. However, functions with closed expressions of antiderivatives are the exception, and consequently, computer algebra systems have no hope of being able to find an antiderivative for a randomly constructed elementary function. On the positive side, if the 'building blocks' for antiderivatives are fixed in advance, it may still be possible to decide whether the antiderivative of a given function can be expressed using these blocks and operations of multiplication and composition and to find the symbolic answer whenever it exists. The Risch algorithm, implemented in Mathematica, Maple and other computer algebra systems, does just that for functions and antiderivatives built from rational functions, radicals, logarithm, and exponential functions.
Some special integrands occur often enough to warrant special study. In particular, it may be useful to have, in the set of antiderivatives, the special functions (like the Legendre functions, the hypergeometric function, the gamma function, the incomplete gamma function and so on). Extending Risch's algorithm to include such functions is possible but challenging and has been an active research subject.
More recently a new approach has emerged, using D-finite functions, which are the solutions of linear differential equations with polynomial coefficients. Most of the elementary and special functions are D-finite, and the integral of a D-finite function is also a D-finite function. This provides an algorithm to express the antiderivative of a D-finite function as the solution of a differential equation. This theory also allows one to compute the definite integral of a D-finite function as the sum of a series given by the first coefficients and provides an algorithm to compute any coefficient.
Rule-based integration systems facilitate integration. Rubi, a computer algebra system rule-based integrator, pattern matches an extensive system of symbolic integration rules to integrate a wide variety of integrands. This system uses over 6600 integration rules to compute integrals. The method of brackets is a generalization of Ramanujan's master theorem that can be applied to a wide range of univariate and multivariate integrals. A set of rules are applied to the coefficients and exponential terms of the integrand's power series expansion to determine the integral. The method is closely related to the Mellin transform.
=== Numerical ===
Definite integrals may be approximated using several methods of numerical integration. The rectangle method divides the region under the function into a series of rectangles whose heights are function values and multiplies by the step width to find the sum. A better approach, the trapezoidal rule, replaces the rectangles used in a Riemann sum with trapezoids. The trapezoidal rule weights the first and last values by one half, then multiplies by the step width to obtain a better approximation. The idea behind the trapezoidal rule, that more accurate approximations to the function yield better approximations to the integral, can be carried further: Simpson's rule approximates the integrand by a piecewise quadratic function.
Riemann sums, the trapezoidal rule, and Simpson's rule are examples of a family of quadrature rules called the Newton–Cotes formulas. The degree n Newton–Cotes quadrature rule approximates the integrand on each subinterval by a degree n polynomial. This polynomial is chosen to interpolate the values of the function on the interval. Higher degree Newton–Cotes approximations can be more accurate, but they require more function evaluations, and they can suffer from numerical inaccuracy due to Runge's phenomenon. One solution to this problem is Clenshaw–Curtis quadrature, in which the integrand is approximated by expanding it in terms of Chebyshev polynomials.
Romberg's method halves the step widths incrementally, giving trapezoid approximations denoted by T(h0), T(h1), and so on, where hk+1 is half of hk. For each new step size, only half the new function values need to be computed; the others carry over from the previous size. It then interpolates a polynomial through the approximations and extrapolates to T(0). Gaussian quadrature evaluates the function at the roots of a set of orthogonal polynomials. An n-point Gaussian method is exact for polynomials of degree up to 2n − 1.
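A minimal sketch of two of these rules (an added illustration; the test integral ∫₀^π sin x dx = 2 and the 16 subintervals are arbitrary choices):

```python
# Composite trapezoidal rule vs. composite Simpson's rule.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def simpson(f, a, b, n):           # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return h * s / 3.0

exact = 2.0
print(abs(trapezoid(math.sin, 0, math.pi, 16) - exact))  # error shrinks like h**2
print(abs(simpson(math.sin, 0, math.pi, 16) - exact))    # error shrinks like h**4
```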
The computation of higher-dimensional integrals (for example, volume calculations) makes important use of such alternatives as Monte Carlo integration.
=== Mechanical ===
The area of an arbitrary two-dimensional shape can be determined using a measuring instrument called a planimeter. The volume of irregular objects can be measured with precision by the fluid displaced as the object is submerged.
=== Geometrical ===
Area can sometimes be found via geometrical compass-and-straightedge constructions of an equivalent square.
=== Integration by differentiation ===
Kempf, Jackson and Morales demonstrated mathematical relations that allow an integral to be calculated by means of differentiation. Their calculus involves the Dirac delta function and the partial derivative operator
{\displaystyle \partial _{x}}. This can also be applied to functional integrals, allowing them to be computed by functional differentiation.
== Examples ==
=== Using the fundamental theorem of calculus ===
The fundamental theorem of calculus allows straightforward calculations of basic functions:
{\displaystyle \int _{0}^{\pi }\sin(x)\,dx=-\cos(x){\big |}_{x=0}^{x=\pi }=-\cos(\pi )-{\big (}-\cos(0){\big )}=2.}
== See also ==
Integral equation – Equations with an unknown function under an integral sign
Integral symbol – Mathematical symbol used to denote integrals and antiderivatives
Lists of integrals
== External links ==
"Integral", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Online Integral Calculator, Wolfram Alpha.
=== Online books ===
Keisler, H. Jerome, Elementary Calculus: An Approach Using Infinitesimals, University of Wisconsin
Stroyan, K. D., A Brief Introduction to Infinitesimal Calculus, University of Iowa
Mauch, Sean, Sean's Applied Math Book, CIT, an online textbook that includes a complete introduction to calculus
Crowell, Benjamin, Calculus, Fullerton College, an online textbook
Garrett, Paul, Notes on First-Year Calculus
Hussain, Faraz, Understanding Calculus, an online textbook
Johnson, William Woolsey (1909) Elementary Treatise on Integral Calculus, link from HathiTrust.
Kowalk, W. P., Integration Theory, University of Oldenburg. A new concept to an old problem. Online textbook
Sloughter, Dan, Difference Equations to Differential Equations, an introduction to calculus
Numerical Methods of Integration at Holistic Numerical Methods Institute
P. S. Wang, Evaluation of Definite Integrals by Symbolic Manipulation (1972) — a cookbook of definite integral techniques
In mathematics, the trigonometric functions (also called circular functions, angle functions or goniometric functions) are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in all sciences that are related to geometry, such as navigation, solid mechanics, celestial mechanics, geodesy, and many others. They are among the simplest periodic functions, and as such are also widely used for studying periodic phenomena through Fourier analysis.
The trigonometric functions most widely used in modern mathematics are the sine, the cosine, and the tangent functions. Their reciprocals are respectively the cosecant, the secant, and the cotangent functions, which are less used. Each of these six trigonometric functions has a corresponding inverse function, and an analog among the hyperbolic functions.
The oldest definitions of trigonometric functions, related to right-angle triangles, define them only for acute angles. To extend the sine and cosine functions to functions whose domain is the whole real line, geometrical definitions using the standard unit circle (i.e., a circle with radius 1 unit) are often used; then the domain of the other functions is the real line with some isolated points removed. Modern definitions express trigonometric functions as infinite series or as solutions of differential equations. This allows extending the domain of sine and cosine functions to the whole complex plane, and the domain of the other trigonometric functions to the complex plane with some isolated points removed.
== Notation ==
Conventionally, an abbreviation of each trigonometric function's name is used as its symbol in formulas. Today, the most common versions of these abbreviations are "sin" for sine, "cos" for cosine, "tan" or "tg" for tangent, "sec" for secant, "csc" or "cosec" for cosecant, and "cot" or "ctg" for cotangent. Historically, these abbreviations were first used in prose sentences to indicate particular line segments or their lengths related to an arc of an arbitrary circle, and later to indicate ratios of lengths, but as the function concept developed in the 17th–18th century, they began to be considered as functions of real-number-valued angle measures, and written with functional notation, for example sin(x). Parentheses are still often omitted to reduce clutter, but are sometimes necessary; for example the expression
{\displaystyle \sin x+y} would typically be interpreted to mean {\displaystyle (\sin x)+y,} so parentheses are required to express {\displaystyle \sin(x+y).}
A positive integer appearing as a superscript after the symbol of the function denotes exponentiation, not function composition. For example
{\displaystyle \sin ^{2}x} and {\displaystyle \sin ^{2}(x)} denote {\displaystyle (\sin x)^{2},} not {\displaystyle \sin(\sin x).} This differs from the (historically later) general functional notation in which {\displaystyle f^{2}(x)=(f\circ f)(x)=f(f(x)).}
In contrast, the superscript {\displaystyle -1} is commonly used to denote the inverse function, not the reciprocal. For example {\displaystyle \sin ^{-1}x} and {\displaystyle \sin ^{-1}(x)} denote the inverse trigonometric function alternatively written {\displaystyle \arcsin x\,.} The equation {\displaystyle \theta =\sin ^{-1}x} implies {\displaystyle \sin \theta =x,} not {\displaystyle \theta \cdot \sin x=1.}
In this case, the superscript could be considered as denoting a composed or iterated function, but negative superscripts other than {\displaystyle {-1}} are not in common use.
== Right-angled triangle definitions ==
If the acute angle θ is given, then any right triangles that have an angle of θ are similar to each other. This means that the ratio of any two side lengths depends only on θ. Thus these six ratios define six functions of θ, which are the trigonometric functions. In the following definitions, the hypotenuse is the length of the side opposite the right angle, opposite represents the side opposite the given angle θ, and adjacent represents the side between the angle θ and the right angle.
Various mnemonics can be used to remember these definitions.
In a right-angled triangle, the sum of the two acute angles is a right angle, that is, 90° or π/2 radians. Therefore {\displaystyle \sin(\theta )} and {\displaystyle \cos(90^{\circ }-\theta )} represent the same ratio, and thus are equal. This identity and analogous relationships between the other trigonometric functions are summarized in the following table.
== Radians versus degrees ==
In geometric applications, the argument of a trigonometric function is generally the measure of an angle. For this purpose, any angular unit is convenient. One common unit is degrees, in which a right angle is 90° and a complete turn is 360° (particularly in elementary mathematics).
However, in calculus and mathematical analysis, the trigonometric functions are generally regarded more abstractly as functions of real or complex numbers, rather than angles. In fact, the functions sin and cos can be defined for all complex numbers in terms of the exponential function, via power series, or as solutions to differential equations given particular initial values (see below), without reference to any geometric notions. The other four trigonometric functions (tan, cot, sec, csc) can be defined as quotients and reciprocals of sin and cos, except where zero occurs in the denominator. It can be proved, for real arguments, that these definitions coincide with elementary geometric definitions if the argument is regarded as an angle in radians. Moreover, these definitions result in simple expressions for the derivatives and indefinite integrals for the trigonometric functions. Thus, in settings beyond elementary geometry, radians are regarded as the mathematically natural unit for describing angle measures.
When radians (rad) are employed, the angle is given as the length of the arc of the unit circle subtended by it: the angle that subtends an arc of length 1 on the unit circle is 1 rad (≈ 57.3°), and a complete turn (360°) is an angle of 2π (≈ 6.28) rad. For real number x, the notation sin x, cos x, etc. refers to the value of the trigonometric functions evaluated at an angle of x rad. If units of degrees are intended, the degree sign must be explicitly shown (sin x°, cos x°, etc.). Using this standard notation, the argument x for the trigonometric functions satisfies the relationship x = (180x/π)°, so that, for example, sin π = sin 180° when we take x = π. In this way, the degree symbol can be regarded as a mathematical constant such that 1° = π/180 ≈ 0.0175.
== Unit-circle definitions ==
The six trigonometric functions can be defined as coordinate values of points on the Euclidean plane that are related to the unit circle, which is the circle of radius one centered at the origin O of this coordinate system. While right-angled triangle definitions allow for the definition of the trigonometric functions for angles between 0 and {\textstyle {\frac {\pi }{2}}} radians (90°), the unit circle definitions allow the domain of trigonometric functions to be extended to all positive and negative real numbers.
Let {\displaystyle {\mathcal {L}}} be the ray obtained by rotating by an angle θ the positive half of the x-axis (counterclockwise rotation for {\displaystyle \theta >0,} and clockwise rotation for {\displaystyle \theta <0}). This ray intersects the unit circle at the point {\displaystyle \mathrm {A} =(x_{\mathrm {A} },y_{\mathrm {A} }).} The ray {\displaystyle {\mathcal {L}},} extended to a line if necessary, intersects the line of equation {\displaystyle x=1} at point {\displaystyle \mathrm {B} =(1,y_{\mathrm {B} }),} and the line of equation {\displaystyle y=1} at point {\displaystyle \mathrm {C} =(x_{\mathrm {C} },1).} The tangent line to the unit circle at the point A is perpendicular to {\displaystyle {\mathcal {L}},} and intersects the y- and x-axes at points {\displaystyle \mathrm {D} =(0,y_{\mathrm {D} })} and {\displaystyle \mathrm {E} =(x_{\mathrm {E} },0).}
The coordinates of these points give the values of all trigonometric functions for any arbitrary real value of θ in the following manner.
The trigonometric functions cos and sin are defined, respectively, as the x- and y-coordinate values of point A. That is,
{\displaystyle \cos \theta =x_{\mathrm {A} }\quad } and {\displaystyle \quad \sin \theta =y_{\mathrm {A} }.}
In the range {\displaystyle 0\leq \theta \leq \pi /2}, this definition coincides with the right-angled triangle definition, by taking the right-angled triangle to have the unit radius OA as hypotenuse. And since the equation {\displaystyle x^{2}+y^{2}=1} holds for all points {\displaystyle \mathrm {P} =(x,y)} on the unit circle, this definition of cosine and sine also satisfies the Pythagorean identity
{\displaystyle \cos ^{2}\theta +\sin ^{2}\theta =1.}
The other trigonometric functions can be found along the unit circle as
{\displaystyle \tan \theta =y_{\mathrm {B} }\quad } and {\displaystyle \quad \cot \theta =x_{\mathrm {C} },}
{\displaystyle \csc \theta \ =y_{\mathrm {D} }\quad } and {\displaystyle \quad \sec \theta =x_{\mathrm {E} }.}
By applying the Pythagorean identity and geometric proof methods, these definitions can readily be shown to coincide with the definitions of tangent, cotangent, secant and cosecant in terms of sine and cosine, that is
{\displaystyle \tan \theta ={\frac {\sin \theta }{\cos \theta }},\quad \cot \theta ={\frac {\cos \theta }{\sin \theta }},\quad \sec \theta ={\frac {1}{\cos \theta }},\quad \csc \theta ={\frac {1}{\sin \theta }}.}
Since a rotation of an angle of {\displaystyle \pm 2\pi } does not change the position or size of a shape, the points A, B, C, D, and E are the same for two angles whose difference is an integer multiple of {\displaystyle 2\pi }. Thus trigonometric functions are periodic functions with period {\displaystyle 2\pi }. That is, the equalities
{\displaystyle \sin \theta =\sin \left(\theta +2k\pi \right)\quad } and {\displaystyle \quad \cos \theta =\cos \left(\theta +2k\pi \right)}
hold for any angle θ and any integer k. The same is true for the four other trigonometric functions. By observing the sign and the monotonicity of the functions sine, cosine, cosecant, and secant in the four quadrants, one can show that {\displaystyle 2\pi } is the smallest value for which they are periodic (i.e., {\displaystyle 2\pi } is the fundamental period of these functions). However, after a rotation by an angle {\displaystyle \pi }, the points B and C already return to their original position, so that the tangent function and the cotangent function have a fundamental period of {\displaystyle \pi }. That is, the equalities
{\displaystyle \tan \theta =\tan(\theta +k\pi )\quad } and {\displaystyle \quad \cot \theta =\cot(\theta +k\pi )}
hold for any angle θ and any integer k.
== Algebraic values ==
The algebraic expressions for the most important angles are as follows:
{\displaystyle \sin 0=\sin 0^{\circ }\quad ={\frac {\sqrt {0}}{2}}=0} (zero angle)
{\displaystyle \sin {\frac {\pi }{6}}=\sin 30^{\circ }={\frac {\sqrt {1}}{2}}={\frac {1}{2}}}
{\displaystyle \sin {\frac {\pi }{4}}=\sin 45^{\circ }={\frac {\sqrt {2}}{2}}={\frac {1}{\sqrt {2}}}}
{\displaystyle \sin {\frac {\pi }{3}}=\sin 60^{\circ }={\frac {\sqrt {3}}{2}}}
{\displaystyle \sin {\frac {\pi }{2}}=\sin 90^{\circ }={\frac {\sqrt {4}}{2}}=1} (right angle)
Writing the numerators as square roots of consecutive non-negative integers, with a denominator of 2, provides an easy way to remember the values.
Such simple expressions generally do not exist for other angles which are rational multiples of a right angle.
For an angle which, measured in degrees, is a multiple of three, the exact trigonometric values of the sine and the cosine may be expressed in terms of square roots. These values of the sine and the cosine may thus be constructed by ruler and compass.
For an angle of an integer number of degrees, the sine and the cosine may be expressed in terms of square roots and the cube root of a non-real complex number. Galois theory allows a proof that, if the angle is not a multiple of 3°, non-real cube roots are unavoidable.
For an angle which, expressed in degrees, is a rational number, the sine and the cosine are algebraic numbers, which may be expressed in terms of nth roots. This results from the fact that the Galois groups of the cyclotomic polynomials are cyclic.
If an angle, expressed in degrees, is not a rational number, then either the angle or both the sine and the cosine are transcendental numbers. This is a corollary of Baker's theorem, proved in 1966.
If the sine of an angle is a rational number then the cosine is not necessarily a rational number, and vice-versa. However if the tangent of an angle is rational then both the sine and cosine of the double angle will be rational.
=== Simple algebraic values ===
The following table lists the sines, cosines, and tangents of multiples of 15 degrees from 0 to 90 degrees.
== Definitions in analysis ==
G. H. Hardy noted in his 1908 work A Course of Pure Mathematics that the definition of the trigonometric functions in terms of the unit circle is not satisfactory, because it depends implicitly on a notion of angle that can be measured by a real number. Thus in modern analysis, trigonometric functions are usually constructed without reference to geometry.
Various ways exist in the literature for defining the trigonometric functions in a manner suitable for analysis; they include:
Using the "geometry" of the unit circle, which requires formulating the arc length of a circle (or area of a sector) analytically.
By a power series, which is particularly well-suited to complex variables.
By using an infinite product expansion.
By inverting the inverse trigonometric functions, which can be defined as integrals of algebraic or rational functions.
As solutions of a differential equation.
=== Definition by differential equations ===
Sine and cosine can be defined as the unique solution to the initial value problem:
{\displaystyle {\frac {d}{dx}}\sin x=\cos x,\ {\frac {d}{dx}}\cos x=-\sin x,\ \sin(0)=0,\ \cos(0)=1.}
Differentiating again, {\textstyle {\frac {d^{2}}{dx^{2}}}\sin x={\frac {d}{dx}}\cos x=-\sin x} and {\textstyle {\frac {d^{2}}{dx^{2}}}\cos x=-{\frac {d}{dx}}\sin x=-\cos x}, so both sine and cosine are solutions of the same ordinary differential equation
{\displaystyle y''+y=0\,.}
Sine is the unique solution with y(0) = 0 and y′(0) = 1; cosine is the unique solution with y(0) = 1 and y′(0) = 0.
One can then prove, as a theorem, that solutions {\displaystyle \cos ,\sin } are periodic, having the same period. Writing this period as {\displaystyle 2\pi } is then a definition of the real number {\displaystyle \pi } which is independent of geometry.
Applying the quotient rule to the tangent {\displaystyle \tan x=\sin x/\cos x},
{\displaystyle {\frac {d}{dx}}\tan x={\frac {\cos ^{2}x+\sin ^{2}x}{\cos ^{2}x}}=1+\tan ^{2}x\,,}
so the tangent function satisfies the ordinary differential equation
{\displaystyle y'=1+y^{2}\,.}
It is the unique solution with y(0) = 0.
=== Power series expansion ===
The basic trigonometric functions can be defined by the following power series expansions. These series are also known as the Taylor series or Maclaurin series of these trigonometric functions:
{\displaystyle {\begin{aligned}\sin x&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \\[6mu]&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}\\[8pt]\cos x&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots \\[6mu]&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}.\end{aligned}}}
The radius of convergence of these series is infinite. Therefore, the sine and the cosine can be extended to entire functions (also called "sine" and "cosine"), which are (by definition) complex-valued functions that are defined and holomorphic on the whole complex plane.
Term-by-term differentiation shows that the sine and cosine defined by the series obey the differential equation discussed previously, and conversely one can obtain these series from elementary recursion relations derived from the differential equation.
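A small Python sketch of these expansions (an added illustration; the number of terms and the sample points are arbitrary choices) shows that partial sums of the sine series already match the library function closely for moderate arguments, consistent with the infinite radius of convergence:

```python
# Partial sums of the Maclaurin series for sine vs. math.sin.
import math

def sin_series(x, terms=12):
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

for x in (0.5, 2.0, 4.0):
    assert abs(sin_series(x) - math.sin(x)) < 1e-9
```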
Being defined as fractions of entire functions, the other trigonometric functions may be extended to meromorphic functions, that is functions that are holomorphic in the whole complex plane, except some isolated points called poles. Here, the poles are the numbers of the form {\textstyle (2k+1){\frac {\pi }{2}}} for the tangent and the secant, or {\displaystyle k\pi } for the cotangent and the cosecant, where k is an arbitrary integer.
Recurrence relations may also be computed for the coefficients of the Taylor series of the other trigonometric functions. These series have a finite radius of convergence. Their coefficients have a combinatorial interpretation: they enumerate alternating permutations of finite sets.
More precisely, defining
Un, the nth up/down number,
Bn, the nth Bernoulli number, and
En, the nth Euler number,
one has the following series expansions:
{\displaystyle {\begin{aligned}\tan x&{}=\sum _{n=0}^{\infty }{\frac {U_{2n+1}}{(2n+1)!}}x^{2n+1}\\[8mu]&{}=\sum _{n=1}^{\infty }{\frac {(-1)^{n-1}2^{2n}\left(2^{2n}-1\right)B_{2n}}{(2n)!}}x^{2n-1}\\[5mu]&{}=x+{\frac {1}{3}}x^{3}+{\frac {2}{15}}x^{5}+{\frac {17}{315}}x^{7}+\cdots ,\qquad {\text{for }}|x|<{\frac {\pi }{2}}.\end{aligned}}}
{\displaystyle {\begin{aligned}\csc x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n+1}2\left(2^{2n-1}-1\right)B_{2n}}{(2n)!}}x^{2n-1}\\[5mu]&=x^{-1}+{\frac {1}{6}}x+{\frac {7}{360}}x^{3}+{\frac {31}{15120}}x^{5}+\cdots ,\qquad {\text{for }}0<|x|<\pi .\end{aligned}}}
{\displaystyle {\begin{aligned}\sec x&=\sum _{n=0}^{\infty }{\frac {U_{2n}}{(2n)!}}x^{2n}=\sum _{n=0}^{\infty }{\frac {(-1)^{n}E_{2n}}{(2n)!}}x^{2n}\\[5mu]&=1+{\frac {1}{2}}x^{2}+{\frac {5}{24}}x^{4}+{\frac {61}{720}}x^{6}+\cdots ,\qquad {\text{for }}|x|<{\frac {\pi }{2}}.\end{aligned}}}
{\displaystyle {\begin{aligned}\cot x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}2^{2n}B_{2n}}{(2n)!}}x^{2n-1}\\[5mu]&=x^{-1}-{\frac {1}{3}}x-{\frac {1}{45}}x^{3}-{\frac {2}{945}}x^{5}-\cdots ,\qquad {\text{for }}0<|x|<\pi .\end{aligned}}}
=== Continued fraction expansion ===
The following continued fractions are valid in the whole complex plane:
{\displaystyle \sin x={\cfrac {x}{1+{\cfrac {x^{2}}{2\cdot 3-x^{2}+{\cfrac {2\cdot 3x^{2}}{4\cdot 5-x^{2}+{\cfrac {4\cdot 5x^{2}}{6\cdot 7-x^{2}+\ddots }}}}}}}}}
{\displaystyle \cos x={\cfrac {1}{1+{\cfrac {x^{2}}{1\cdot 2-x^{2}+{\cfrac {1\cdot 2x^{2}}{3\cdot 4-x^{2}+{\cfrac {3\cdot 4x^{2}}{5\cdot 6-x^{2}+\ddots }}}}}}}}}
{\displaystyle \tan x={\cfrac {x}{1-{\cfrac {x^{2}}{3-{\cfrac {x^{2}}{5-{\cfrac {x^{2}}{7-\ddots }}}}}}}}={\cfrac {1}{{\cfrac {1}{x}}-{\cfrac {1}{{\cfrac {3}{x}}-{\cfrac {1}{{\cfrac {5}{x}}-{\cfrac {1}{{\cfrac {7}{x}}-\ddots }}}}}}}}}
The last one was used in the historically first proof that π is irrational.
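A truncated version of the tangent continued fraction is easy to evaluate from the bottom up; the sketch below (an added illustration, with an arbitrary truncation depth) matches the library tangent to machine precision for moderate arguments:

```python
# Evaluate the continued fraction tan x = x / (1 - x^2/(3 - x^2/(5 - ...))).
import math

def tan_cf(x, depth=12):
    value = 2 * depth + 1                 # innermost partial denominator
    for k in range(depth - 1, -1, -1):    # work outward: (2k+1) - x^2 / value
        value = (2 * k + 1) - x * x / value
    return x / value

assert abs(tan_cf(1.0) - math.tan(1.0)) < 1e-12
```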
=== Partial fraction expansion ===
There is a series representation as partial fraction expansion where just translated reciprocal functions are summed up, such that the poles of the cotangent function and the reciprocal functions match:
{\displaystyle \pi \cot \pi x=\lim _{N\to \infty }\sum _{n=-N}^{N}{\frac {1}{x+n}}.}
This identity can be proved with the Herglotz trick.
Combining the (−n)th with the nth term leads to an absolutely convergent series:
{\displaystyle \pi \cot \pi x={\frac {1}{x}}+2x\sum _{n=1}^{\infty }{\frac {1}{x^{2}-n^{2}}}.}
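The absolutely convergent form can be checked by direct summation; in this sketch (an added illustration, with x = 0.3 and the truncation N chosen arbitrarily) the tail of the series is of order 2x/N, so a loose tolerance is used:

```python
# Partial-fraction series for pi * cot(pi x), truncated at N terms.
import math

x = 0.3
N = 100_000
series = 1.0 / x + 2.0 * x * sum(1.0 / (x * x - n * n) for n in range(1, N + 1))
assert abs(series - math.pi / math.tan(math.pi * x)) < 1e-4
```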
Similarly, one can find a partial fraction expansion for the secant, cosecant and tangent functions:
{\displaystyle \pi \csc \pi x=\sum _{n=-\infty }^{\infty }{\frac {(-1)^{n}}{x+n}}={\frac {1}{x}}+2x\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{x^{2}-n^{2}}},}
{\displaystyle \pi ^{2}\csc ^{2}\pi x=\sum _{n=-\infty }^{\infty }{\frac {1}{(x+n)^{2}}},}
{\displaystyle \pi \sec \pi x=\sum _{n=0}^{\infty }(-1)^{n}{\frac {(2n+1)}{(n+{\tfrac {1}{2}})^{2}-x^{2}}},}
{\displaystyle \pi \tan \pi x=2x\sum _{n=0}^{\infty }{\frac {1}{(n+{\tfrac {1}{2}})^{2}-x^{2}}}.}
=== Infinite product expansion ===
The following infinite product for the sine is due to Leonhard Euler, and is of great importance in complex analysis:
{\displaystyle \sin z=z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}\pi ^{2}}}\right),\quad z\in \mathbb {C} .}
This may be obtained from the partial fraction decomposition of {\displaystyle \cot z} given above, which is the logarithmic derivative of {\displaystyle \sin z}. From this, it can be deduced also that
{\displaystyle \cos z=\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{(n-1/2)^{2}\pi ^{2}}}\right),\quad z\in \mathbb {C} .}
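Euler's product converges only slowly (the relative error of a partial product is roughly z²/(π²N)), which the following sketch makes visible (an added illustration; z and the number of factors are arbitrary choices):

```python
# Partial products of Euler's sine product vs. math.sin.
import math

z = 1.3
N = 100_000
prod = z
for n in range(1, N + 1):
    prod *= 1.0 - z * z / (n * n * math.pi * math.pi)
assert abs(prod - math.sin(z)) < 1e-4   # tolerance loose: convergence is ~1/N
```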
=== Euler's formula and the exponential function ===
Euler's formula relates sine and cosine to the exponential function:
{\displaystyle e^{ix}=\cos x+i\sin x.}
This formula is commonly considered for real values of x, but it remains true for all complex values.
Proof: Let {\displaystyle f_{1}(x)=\cos x+i\sin x,} and {\displaystyle f_{2}(x)=e^{ix}.}
One has {\displaystyle df_{j}(x)/dx=if_{j}(x)} for j = 1, 2. The quotient rule thus implies that
{\displaystyle d/dx\,(f_{1}(x)/f_{2}(x))=0}. Therefore, {\displaystyle f_{1}(x)/f_{2}(x)} is a constant function, which equals 1, as {\displaystyle f_{1}(0)=f_{2}(0)=1.}
This proves the formula.
One has
{\displaystyle {\begin{aligned}e^{ix}&=\cos x+i\sin x\\[5pt]e^{-ix}&=\cos x-i\sin x.\end{aligned}}}
Solving this linear system in sine and cosine, one can express them in terms of the exponential function:
{\displaystyle {\begin{aligned}\sin x&={\frac {e^{ix}-e^{-ix}}{2i}}\\[5pt]\cos x&={\frac {e^{ix}+e^{-ix}}{2}}.\end{aligned}}}
When x is real, this may be rewritten as
{\displaystyle \cos x=\operatorname {Re} \left(e^{ix}\right),\qquad \sin x=\operatorname {Im} \left(e^{ix}\right).}
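These identities, including their validity for complex arguments, can be checked directly with Python's standard library (an added sketch; the sample arguments are arbitrary):

```python
# Euler's formula and the exponential expressions for sine, for real and
# complex arguments, using the cmath module.
import cmath

for z in (0.7, 2.0 + 1.5j):
    assert abs(cmath.exp(1j * z) - (cmath.cos(z) + 1j * cmath.sin(z))) < 1e-12
    assert abs(cmath.sin(z) - (cmath.exp(1j * z) - cmath.exp(-1j * z)) / 2j) < 1e-12
```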
Most trigonometric identities can be proved by expressing trigonometric functions in terms of the complex exponential function by using above formulas, and then using the identity
{\displaystyle e^{a+b}=e^{a}e^{b}}
for simplifying the result.
Euler's formula can also be used to define the basic trigonometric functions directly, as follows, using the language of topological groups. The set {\displaystyle U} of complex numbers of unit modulus is a compact and connected topological group, which has a neighborhood of the identity that is homeomorphic to the real line. Therefore, it is isomorphic as a topological group to the one-dimensional torus group {\displaystyle \mathbb {R} /\mathbb {Z} }, via an isomorphism {\displaystyle e:\mathbb {R} /\mathbb {Z} \to U.} In pedestrian terms {\displaystyle e(t)=\exp(2\pi it)}, and this isomorphism is unique up to taking complex conjugates.
For a nonzero real number {\displaystyle a} (the base), the function {\displaystyle t\mapsto e(t/a)} defines an isomorphism of the group {\displaystyle \mathbb {R} /a\mathbb {Z} \to U}. The real and imaginary parts of {\displaystyle e(t/a)} are the cosine and sine, where {\displaystyle a} is used as the base for measuring angles. For example, when {\displaystyle a=2\pi }, we get the measure in radians, and the usual trigonometric functions. When {\displaystyle a=360}, we get the sine and cosine of angles measured in degrees.
Note that {\displaystyle a=2\pi } is the unique value at which the derivative {\displaystyle {\frac {d}{dt}}e(t/a)} becomes a unit vector with positive imaginary part at {\displaystyle t=0}. This fact can, in turn, be used to define the constant {\displaystyle 2\pi }.
=== Definition via integration ===
Another way to define the trigonometric functions in analysis is using integration. For a real number {\displaystyle t}, put
{\displaystyle \theta (t)=\int _{0}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\arctan t}
where this equation defines the arctangent function. Also, {\displaystyle \pi } is defined by
{\displaystyle {\frac {1}{2}}\pi =\int _{0}^{\infty }{\frac {d\tau }{1+\tau ^{2}}}}
a definition that goes back to Karl Weierstrass.
On the interval {\displaystyle -\pi /2<\theta <\pi /2}, the trigonometric functions are defined by inverting the relation {\displaystyle \theta =\arctan t}. Thus we define the trigonometric functions by
{\displaystyle \tan \theta =t,\quad \cos \theta =(1+t^{2})^{-1/2},\quad \sin \theta =t(1+t^{2})^{-1/2}}
where the point {\displaystyle (t,\theta )} is on the graph of {\displaystyle \theta =\arctan t} and the positive square root is taken.
This defines the trigonometric functions on {\displaystyle (-\pi /2,\pi /2)}. The definition can be extended to all real numbers by first observing that, as {\displaystyle \theta \to \pi /2}, {\displaystyle t\to \infty }, and so {\displaystyle \cos \theta =(1+t^{2})^{-1/2}\to 0} and {\displaystyle \sin \theta =t(1+t^{2})^{-1/2}\to 1}. Thus {\displaystyle \cos \theta } and {\displaystyle \sin \theta } are extended continuously so that {\displaystyle \cos(\pi /2)=0,\sin(\pi /2)=1}. Now the conditions {\displaystyle \cos(\theta +\pi )=-\cos(\theta )} and {\displaystyle \sin(\theta +\pi )=-\sin(\theta )} define the sine and cosine as periodic functions with period {\displaystyle 2\pi }, for all real numbers.
To prove the basic properties of sine and cosine, including the fact that sine and cosine are analytic, one may first establish the addition formulae. First,
{\displaystyle \arctan s+\arctan t=\arctan {\frac {s+t}{1-st}}}
holds, provided {\displaystyle \arctan s+\arctan t\in (-\pi /2,\pi /2)}, since
, since
arctan
s
+
arctan
t
=
∫
−
s
t
d
τ
1
+
τ
2
=
∫
0
s
+
t
1
−
s
t
d
τ
1
+
τ
2
{\displaystyle \arctan s+\arctan t=\int _{-s}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\int _{0}^{\frac {s+t}{1-st}}{\frac {d\tau }{1+\tau ^{2}}}}
after the substitution {\displaystyle \tau \to {\frac {s+\tau }{1-s\tau }}}. In particular, the limiting case as {\displaystyle s\to \infty } gives
{\displaystyle \arctan t+{\frac {\pi }{2}}=\arctan(-1/t),\quad t\in (-\infty ,0).}
Thus we have
{\displaystyle \sin \left(\theta +{\frac {\pi }{2}}\right)={\frac {-1}{t{\sqrt {1+(-1/t)^{2}}}}}={\frac {-1}{\sqrt {1+t^{2}}}}=-\cos(\theta )}
and
{\displaystyle \cos \left(\theta +{\frac {\pi }{2}}\right)={\frac {1}{\sqrt {1+(-1/t)^{2}}}}={\frac {t}{\sqrt {1+t^{2}}}}=\sin(\theta ).}
So the sine and cosine functions are related by translation over a quarter period {\displaystyle \pi /2}.
=== Definitions using functional equations ===
One can also define the trigonometric functions using various functional equations.
For example, the sine and the cosine form the unique pair of continuous functions that satisfy the difference formula
{\displaystyle \cos(x-y)=\cos x\cos y+\sin x\sin y\,}
and the added condition
{\displaystyle 0<x\cos x<\sin x<x\quad {\text{ for }}\quad 0<x<1.}
=== In the complex plane ===
The sine and cosine of a complex number {\displaystyle z=x+iy} can be expressed in terms of real sines, cosines, and hyperbolic functions as follows:
{\displaystyle {\begin{aligned}\sin z&=\sin x\cosh y+i\cos x\sinh y\\[5pt]\cos z&=\cos x\cosh y-i\sin x\sinh y\end{aligned}}}
By taking advantage of domain coloring, it is possible to graph the trigonometric functions as complex-valued functions. Various features unique to the complex functions can be seen from the graph; for example, the sine and cosine functions can be seen to be unbounded as the imaginary part of {\displaystyle z} becomes larger (since the color white represents infinity), and the fact that the functions contain simple zeros or poles is apparent from the fact that the hue cycles around each zero or pole exactly once. Comparing these graphs with those of the corresponding hyperbolic functions highlights the relationships between the two.
== Periodicity and asymptotes ==
The sine and cosine functions are periodic, with period {\displaystyle 2\pi }, which is the smallest positive period:
{\displaystyle \sin(z+2\pi )=\sin(z),\quad \cos(z+2\pi )=\cos(z).}
Consequently, the cosecant and secant also have {\displaystyle 2\pi } as their period.
The functions sine and cosine also have semiperiods {\displaystyle \pi }, and
{\displaystyle \sin(z+\pi )=-\sin(z),\quad \cos(z+\pi )=-\cos(z)}
and consequently
{\displaystyle \tan(z+\pi )=\tan(z),\quad \cot(z+\pi )=\cot(z).}
Also,
{\displaystyle \sin(x+\pi /2)=\cos(x),\quad \cos(x+\pi /2)=-\sin(x)}
(see Complementary angles).
The function {\displaystyle \sin(z)} has a unique zero (at {\displaystyle z=0}) in the strip {\displaystyle -\pi <\Re (z)<\pi }. The function {\displaystyle \cos(z)} has the pair of zeros {\displaystyle z=\pm \pi /2} in the same strip. Because of the periodicity, the zeros of sine are
{\displaystyle \pi \mathbb {Z} =\left\{\dots ,-2\pi ,-\pi ,0,\pi ,2\pi ,\dots \right\}\subset \mathbb {C} .}
The zeros of cosine are
{\displaystyle {\frac {\pi }{2}}+\pi \mathbb {Z} =\left\{\dots ,-{\frac {3\pi }{2}},-{\frac {\pi }{2}},{\frac {\pi }{2}},{\frac {3\pi }{2}},\dots \right\}\subset \mathbb {C} .}
All of the zeros are simple zeros, and both functions have derivative {\displaystyle \pm 1} at each of the zeros.
The tangent function {\displaystyle \tan(z)=\sin(z)/\cos(z)} has a simple zero at {\displaystyle z=0} and vertical asymptotes at {\displaystyle z=\pm \pi /2}, where it has a simple pole of residue {\displaystyle -1}. Again, owing to the periodicity, the zeros are all the integer multiples of {\displaystyle \pi } and the poles are odd multiples of {\displaystyle \pi /2}, all having the same residue. The poles correspond to vertical asymptotes
{\displaystyle \lim _{x\to (\pi /2)^{-}}\tan(x)=+\infty ,\quad \lim _{x\to (\pi /2)^{+}}\tan(x)=-\infty .}
The cotangent function {\displaystyle \cot(z)=\cos(z)/\sin(z)} has a simple pole of residue 1 at the integer multiples of {\displaystyle \pi } and simple zeros at odd multiples of {\displaystyle \pi /2}. The poles correspond to vertical asymptotes
{\displaystyle \lim _{x\to 0^{-}}\cot(x)=-\infty ,\quad \lim _{x\to 0^{+}}\cot(x)=+\infty .}
== Basic identities ==
Many identities interrelate the trigonometric functions. This section contains the most basic ones; for more identities, see List of trigonometric identities. These identities may be proved geometrically from the unit-circle definitions or the right-angled-triangle definitions (although, for the latter definitions, care must be taken for angles that are not in the interval [0, π/2], see Proofs of trigonometric identities). For non-geometrical proofs using only tools of calculus, one may use directly the differential equations, in a way that is similar to that of the above proof of Euler's identity. One can also use Euler's identity for expressing all trigonometric functions in terms of complex exponentials and using properties of the exponential function.
=== Parity ===
The cosine and the secant are even functions; the other trigonometric functions are odd functions. That is:
{\displaystyle {\begin{aligned}\sin(-x)&=-\sin x\\\cos(-x)&=\cos x\\\tan(-x)&=-\tan x\\\cot(-x)&=-\cot x\\\csc(-x)&=-\csc x\\\sec(-x)&=\sec x.\end{aligned}}}
=== Periods ===
All trigonometric functions are periodic functions of period 2π. This is the smallest period, except for the tangent and the cotangent, which have π as smallest period. This means that, for every integer k, one has
{\displaystyle {\begin{array}{lrl}\sin(x+&2k\pi )&=\sin x\\\cos(x+&2k\pi )&=\cos x\\\tan(x+&k\pi )&=\tan x\\\cot(x+&k\pi )&=\cot x\\\csc(x+&2k\pi )&=\csc x\\\sec(x+&2k\pi )&=\sec x.\end{array}}}
See Periodicity and asymptotes.
=== Pythagorean identity ===
The Pythagorean identity is the expression of the Pythagorean theorem in terms of trigonometric functions. It is
{\displaystyle \sin ^{2}x+\cos ^{2}x=1}.
Dividing through by either {\displaystyle \cos ^{2}x} or {\displaystyle \sin ^{2}x} gives
{\displaystyle \tan ^{2}x+1=\sec ^{2}x}
{\displaystyle 1+\cot ^{2}x=\csc ^{2}x}
and
{\displaystyle \sec ^{2}x+\csc ^{2}x=\sec ^{2}x\csc ^{2}x}.
=== Sum and difference formulas ===
The sum and difference formulas allow expanding the sine, the cosine, and the tangent of a sum or a difference of two angles in terms of sines and cosines and tangents of the angles themselves. These can be derived geometrically, using arguments that date to Ptolemy (see Angle sum and difference identities). One can also produce them algebraically using Euler's formula.
Sum
{\displaystyle {\begin{aligned}\sin \left(x+y\right)&=\sin x\cos y+\cos x\sin y,\\[5mu]\cos \left(x+y\right)&=\cos x\cos y-\sin x\sin y,\\[5mu]\tan(x+y)&={\frac {\tan x+\tan y}{1-\tan x\tan y}}.\end{aligned}}}
Difference
{\displaystyle {\begin{aligned}\sin \left(x-y\right)&=\sin x\cos y-\cos x\sin y,\\[5mu]\cos \left(x-y\right)&=\cos x\cos y+\sin x\sin y,\\[5mu]\tan(x-y)&={\frac {\tan x-\tan y}{1+\tan x\tan y}}.\end{aligned}}}
When the two angles are equal, the sum formulas reduce to simpler equations known as the double-angle formulae.
{\displaystyle {\begin{aligned}\sin 2x&=2\sin x\cos x={\frac {2\tan x}{1+\tan ^{2}x}},\\[5mu]\cos 2x&=\cos ^{2}x-\sin ^{2}x=2\cos ^{2}x-1=1-2\sin ^{2}x={\frac {1-\tan ^{2}x}{1+\tan ^{2}x}},\\[5mu]\tan 2x&={\frac {2\tan x}{1-\tan ^{2}x}}.\end{aligned}}}
These identities can be used to derive the product-to-sum identities.
By setting {\displaystyle t=\tan {\tfrac {1}{2}}\theta ,} all trigonometric functions of {\displaystyle \theta } can be expressed as rational fractions of {\displaystyle t}:
{\displaystyle {\begin{aligned}\sin \theta &={\frac {2t}{1+t^{2}}},\\[5mu]\cos \theta &={\frac {1-t^{2}}{1+t^{2}}},\\[5mu]\tan \theta &={\frac {2t}{1-t^{2}}}.\end{aligned}}}
Together with {\displaystyle d\theta ={\frac {2}{1+t^{2}}}\,dt,}
this is the tangent half-angle substitution, which reduces the computation of integrals and antiderivatives of trigonometric functions to that of rational fractions.
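As a short worked example (chosen for illustration), since {\displaystyle 1+\cos \theta ={\frac {2}{1+t^{2}}}}, the substitution gives
{\displaystyle \int {\frac {d\theta }{1+\cos \theta }}=\int {\frac {1+t^{2}}{2}}\cdot {\frac {2\,dt}{1+t^{2}}}=\int dt=t+C=\tan {\tfrac {1}{2}}\theta +C.}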
=== Derivatives and antiderivatives ===
The derivatives of trigonometric functions result from those of sine and cosine by applying the quotient rule. The values given for the antiderivatives in the following table can be verified by differentiating them. The number C is a constant of integration.
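For the six basic functions, the standard entries are:
{\displaystyle {\begin{aligned}&{\frac {d}{dx}}\sin x=\cos x,&&\int \sin x\,dx=-\cos x+C,\\&{\frac {d}{dx}}\cos x=-\sin x,&&\int \cos x\,dx=\sin x+C,\\&{\frac {d}{dx}}\tan x=\sec ^{2}x,&&\int \tan x\,dx=\ln \left|\sec x\right|+C,\\&{\frac {d}{dx}}\cot x=-\csc ^{2}x,&&\int \cot x\,dx=\ln \left|\sin x\right|+C,\\&{\frac {d}{dx}}\sec x=\sec x\tan x,&&\int \sec x\,dx=\ln \left|\sec x+\tan x\right|+C,\\&{\frac {d}{dx}}\csc x=-\csc x\cot x,&&\int \csc x\,dx=-\ln \left|\csc x+\cot x\right|+C.\end{aligned}}}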
Note: For {\displaystyle 0<x<\pi } the integral of {\displaystyle \csc x} can also be written as {\displaystyle -\operatorname {arsinh} (\cot x),} and, for {\displaystyle -\pi /2<x<\pi /2}, the integral of {\displaystyle \sec x} as {\displaystyle \operatorname {arsinh} (\tan x),} where {\displaystyle \operatorname {arsinh} } is the inverse hyperbolic sine.
Alternatively, the derivatives of the 'co-functions' can be obtained using trigonometric identities and the chain rule:
{\displaystyle {\begin{aligned}{\frac {d\cos x}{dx}}&={\frac {d}{dx}}\sin(\pi /2-x)=-\cos(\pi /2-x)=-\sin x\,,\\{\frac {d\csc x}{dx}}&={\frac {d}{dx}}\sec(\pi /2-x)=-\sec(\pi /2-x)\tan(\pi /2-x)=-\csc x\cot x\,,\\{\frac {d\cot x}{dx}}&={\frac {d}{dx}}\tan(\pi /2-x)=-\sec ^{2}(\pi /2-x)=-\csc ^{2}x\,.\end{aligned}}}
== Inverse functions ==
The trigonometric functions are periodic, and hence not injective, so strictly speaking, they do not have an inverse function. However, on each interval on which a trigonometric function is monotonic, one can define an inverse function, and this defines inverse trigonometric functions as multivalued functions. To define a true inverse function, one must restrict the domain to an interval where the function is monotonic, and is thus bijective from this interval to its image by the function. The common choice for this interval, called the set of principal values, is given in the following table. As usual, the inverse trigonometric functions are denoted with the prefix "arc" before the name or abbreviation of the function.
The notations sin−1, cos−1, etc. are often used for arcsin and arccos, etc. When this notation is used, inverse functions could be confused with multiplicative inverses. The notation with the "arc" prefix avoids such a confusion, though "arcsec" for arcsecant can be confused with "arcsecond".
Just like the sine and cosine, the inverse trigonometric functions can also be expressed in terms of infinite series. They can also be expressed in terms of complex logarithms.
== Applications ==
=== Angles and sides of a triangle ===
In this section A, B, C denote the three (interior) angles of a triangle, and a, b, c denote the lengths of the respective opposite edges. They are related by various formulas, which are named by the trigonometric functions they involve.
==== Law of sines ====
The law of sines states that for an arbitrary triangle with sides a, b, and c and angles opposite those sides A, B and C:
{\displaystyle {\frac {\sin A}{a}}={\frac {\sin B}{b}}={\frac {\sin C}{c}}={\frac {2\Delta }{abc}},}
where Δ is the area of the triangle,
or, equivalently,
{\displaystyle {\frac {a}{\sin A}}={\frac {b}{\sin B}}={\frac {c}{\sin C}}=2R,}
where R is the triangle's circumradius.
It can be proved by dividing the triangle into two right ones and using the above definition of sine. The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. This is a common situation occurring in triangulation, a technique to determine unknown distances by measuring two angles and an accessible enclosed distance.
==== Law of cosines ====
The law of cosines (also known as the cosine formula or cosine rule) is an extension of the Pythagorean theorem:
{\displaystyle c^{2}=a^{2}+b^{2}-2ab\cos C,}
or equivalently,
{\displaystyle \cos C={\frac {a^{2}+b^{2}-c^{2}}{2ab}}.}
In this formula the angle at C is opposite to the side c. This theorem can be proved by dividing the triangle into two right ones and using the Pythagorean theorem.
The law of cosines can be used to determine a side of a triangle if two sides and the angle between them are known. It can also be used to find the cosine of an angle (and consequently the angle itself) if the lengths of all the sides are known.
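A small numerical illustration of the two laws (a sketch in Python; the side lengths and the angle are made-up example values):

import math

# Two sides and the included angle: the law of cosines gives the third side.
a, b = 3.0, 4.0
C = math.radians(60)
c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(C))

# Rearranged, the law of cosines recovers the angle from the three sides.
C_check = math.acos((a * a + b * b - c * c) / (2 * a * b))

# The law of sines then gives the remaining angles (A is acute here).
A = math.asin(a * math.sin(C) / c)
B = math.pi - A - C
print(c, math.degrees(C_check), math.degrees(A), math.degrees(B))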
==== Law of tangents ====
The law of tangents says that:
{\displaystyle {\frac {\tan {\frac {A-B}{2}}}{\tan {\frac {A+B}{2}}}}={\frac {a-b}{a+b}}}.
==== Law of cotangents ====
If s is the triangle's semiperimeter, (a + b + c)/2, and r is the radius of the triangle's incircle, then rs is the triangle's area. Therefore Heron's formula implies that:
{\displaystyle r={\sqrt {{\frac {1}{s}}(s-a)(s-b)(s-c)}}}.
The law of cotangents says that:
{\displaystyle \cot {\frac {A}{2}}={\frac {s-a}{r}}}
It follows that
{\displaystyle {\frac {\cot {\dfrac {A}{2}}}{s-a}}={\frac {\cot {\dfrac {B}{2}}}{s-b}}={\frac {\cot {\dfrac {C}{2}}}{s-c}}={\frac {1}{r}}.}
=== Periodic functions ===
The trigonometric functions are also important in physics. The sine and the cosine functions, for example, are used to describe simple harmonic motion, which models many natural phenomena, such as the movement of a mass attached to a spring and, for small angles, the pendular motion of a mass hanging by a string. The sine and cosine functions are one-dimensional projections of uniform circular motion.
Trigonometric functions also prove to be useful in the study of general periodic functions. The characteristic wave patterns of periodic functions are useful for modeling recurring phenomena such as sound or light waves.
Under rather general conditions, a periodic function f (x) can be expressed as a sum of sine waves or cosine waves in a Fourier series. Denoting the sine or cosine basis functions by φk, the expansion of the periodic function f (t) takes the form:
{\displaystyle f(t)=\sum _{k=1}^{\infty }c_{k}\varphi _{k}(t).}
For example, the square wave can be written as the Fourier series
{\displaystyle f_{\text{square}}(t)={\frac {4}{\pi }}\sum _{k=1}^{\infty }{\sin {\big (}(2k-1)t{\big )} \over 2k-1}.}
In the animation of a square wave at top right it can be seen that just a few terms already produce a fairly good approximation. The superposition of several terms in the expansion of a sawtooth wave are shown underneath.
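A minimal sketch in Python of these partial sums (the evaluation point and term counts are arbitrary):

import math

def square_wave_partial(t: float, n_terms: int) -> float:
    # Partial sum of the Fourier series (4/pi) * sum sin((2k-1)t)/(2k-1).
    return (4 / math.pi) * sum(
        math.sin((2 * k - 1) * t) / (2 * k - 1) for k in range(1, n_terms + 1)
    )

# On (0, pi) the square wave equals 1; a few terms already come close.
for n in (1, 5, 50):
    print(n, square_wave_partial(math.pi / 2, n))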
== History ==
While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was defined by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). The functions of sine and versine (1 – cosine) are closely related to the jyā and koti-jyā functions used in Gupta period Indian astronomy (Aryabhatiya, Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin. (See Aryabhata's sine table.)
All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines, used in solving triangles. Al-Khwārizmī (c. 780–850) produced tables of sines and cosines. Circa 860, Habash al-Hasib al-Marwazi defined the tangent and the cotangent, and produced their tables. Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) defined the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. The trigonometric functions were later studied by mathematicians including Omar Khayyám, Bhāskara II, Nasir al-Din al-Tusi, Jamshīd al-Kāshī (14th century), Ulugh Beg (14th century), Regiomontanus (1464), Rheticus, and Rheticus' student Valentinus Otho.
Madhava of Sangamagrama (c. 1400) made early strides in the analysis of trigonometric functions in terms of infinite series. (See Madhava series and Madhava's sine table.)
The tangent function was brought to Europe by Giovanni Bianchini in 1467 in trigonometry tables he created to support the calculation of stellar coordinates.
The terms tangent and secant were first introduced by the Danish mathematician Thomas Fincke in his book Geometria rotundi (1583).
The 17th century French mathematician Albert Girard made the first published use of the abbreviations sin, cos, and tan in his book Trigonométrie.
In a paper published in 1682, Gottfried Leibniz proved that sin x is not an algebraic function of x. Though defined as ratios of sides of a right triangle, and thus appearing to be rational functions, Leibniz's result established that they are actually transcendental functions of their argument. The task of assimilating circular functions into algebraic expressions was accomplished by Euler in his Introduction to the Analysis of the Infinite (1748). His method was to show that the sine and cosine functions are alternating series formed from the even and odd terms respectively of the exponential series. He presented "Euler's formula", as well as near-modern abbreviations (sin., cos., tang., cot., sec., and cosec.).
A few functions were common historically, but are now seldom used, such as the chord, versine (which appeared in the earliest tables), haversine, coversine, half-tangent (tangent of half an angle), and exsecant. List of trigonometric identities shows more relations between these functions.
{\displaystyle {\begin{aligned}\operatorname {crd} \theta &=2\sin {\tfrac {1}{2}}\theta ,\\[5mu]\operatorname {vers} \theta &=1-\cos \theta =2\sin ^{2}{\tfrac {1}{2}}\theta ,\\[5mu]\operatorname {hav} \theta &={\tfrac {1}{2}}\operatorname {vers} \theta =\sin ^{2}{\tfrac {1}{2}}\theta ,\\[5mu]\operatorname {covers} \theta &=1-\sin \theta =\operatorname {vers} {\bigl (}{\tfrac {1}{2}}\pi -\theta {\bigr )},\\[5mu]\operatorname {exsec} \theta &=\sec \theta -1.\end{aligned}}}
Historically, trigonometric functions were often combined with logarithms in compound functions like the logarithmic sine, logarithmic cosine, logarithmic secant, logarithmic cosecant, logarithmic tangent and logarithmic cotangent.
== Etymology ==
The word sine derives from Latin sinus, meaning "bend; bay", and more specifically "the hanging fold of the upper part of a toga", "the bosom of a garment", which was chosen as the translation of what was interpreted as the Arabic word jaib, meaning "pocket" or "fold" in the twelfth-century translations of works by Al-Battani and al-Khwārizmī into Medieval Latin.
The choice was based on a misreading of the Arabic written form j-y-b (جيب), which itself originated as a transliteration from Sanskrit jīvā, which along with its synonym jyā (the standard Sanskrit term for the sine) translates to "bowstring", being in turn adopted from Ancient Greek χορδή "string".
The word tangent comes from Latin tangens meaning "touching", since the line touches the circle of unit radius, whereas secant stems from Latin secans—"cutting"—since the line cuts the circle.
The prefix "co-" (in "cosine", "cotangent", "cosecant") is found in Edmund Gunter's Canon triangulorum (1620), which defines the cosinus as an abbreviation of the sinus complementi (sine of the complementary angle) and proceeds to define the cotangens similarly.
== See also ==
== Notes ==
== References ==
== External links ==
"Trigonometric functions", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Visionlearning Module on Wave Mathematics
GonioLab Visualization of the unit circle, trigonometric and hyperbolic functions
q-Sine Article about the q-analog of sin at MathWorld
q-Cosine Article about the q-analog of cos at MathWorld | Wikipedia/Tangent_function |
In set theory, a continuous function is a sequence of ordinals such that the values assumed at limit stages are the limits (limit suprema and limit infima) of all values at previous stages. More formally, let γ be an ordinal, and
{\displaystyle s:=\langle s_{\alpha }|\alpha <\gamma \rangle }
be a γ-sequence of ordinals. Then s is continuous if at every limit ordinal β < γ,
{\displaystyle s_{\beta }=\limsup\{s_{\alpha }:\alpha <\beta \}=\inf\{\sup\{s_{\alpha }:\delta \leq \alpha <\beta \}:\delta <\beta \}}
and
{\displaystyle s_{\beta }=\liminf\{s_{\alpha }:\alpha <\beta \}=\sup\{\inf\{s_{\alpha }:\delta \leq \alpha <\beta \}:\delta <\beta \}\,.}
Alternatively, if s is an increasing function then s is continuous if s: γ → range(s) is a continuous function when the sets are each equipped with the order topology. These continuous functions are often used in cofinalities and cardinal numbers.
A normal function is a function that is both continuous and strictly increasing.
== References == | Wikipedia/Continuous_function_(set_theory) |
In axiomatic set theory, a function f : Ord → Ord is called normal (or a normal function) if it is continuous (with respect to the order topology) and strictly monotonically increasing. This is equivalent to the following two conditions:
For every limit ordinal γ (i.e. γ is neither zero nor a successor), it is the case that f (γ) = sup{f (ν) : ν < γ}.
For all ordinals α < β, it is the case that f (α) < f (β).
== Examples ==
A simple normal function is given by f (α) = 1 + α (see ordinal arithmetic). But f (α) = α + 1 is not normal because it is not continuous at any limit ordinal (for example, {\displaystyle f(\omega )=\omega +1\neq \omega =\sup\{f(n):n<\omega \}}). If β is a fixed ordinal, then the functions f (α) = β + α, f (α) = β × α (for β ≥ 1), and f (α) = β^α (for β ≥ 2) are all normal.
More important examples of normal functions are given by the aleph numbers {\displaystyle f(\alpha )=\aleph _{\alpha }}, which connect ordinal and cardinal numbers, and by the beth numbers {\displaystyle f(\alpha )=\beth _{\alpha }}.
== Properties ==
If f is normal, then for any ordinal α,
f (α) ≥ α.
Proof: If not, choose γ minimal such that f (γ) < γ. Since f is strictly monotonically increasing, f (f (γ)) < f (γ), contradicting minimality of γ.
Furthermore, for any non-empty set S of ordinals, we have
f (sup S) = sup f (S).
Proof: "≥" follows from the monotonicity of f and the definition of the supremum. For "≤", set δ = sup S and consider three cases:
if δ = 0, then S = {0} and sup f (S) = f (0);
if δ = ν + 1 is a successor, then there exists s in S with ν < s, so that δ ≤ s. Therefore, f (δ) ≤ f (s), which implies f (δ) ≤ sup f (S);
if δ is a nonzero limit, pick any ν < δ, and an s in S such that ν < s (possible since δ = sup S). Therefore, f (ν) < f (s) so that f (ν) < sup f (S), yielding f (δ) = sup {f (ν) : ν < δ} ≤ sup f (S), as desired.
Every normal function f has arbitrarily large fixed points; see the fixed-point lemma for normal functions for a proof. One can create a normal function f ′ : Ord → Ord, called the derivative of f, such that f ′(α) is the α-th fixed point of f. For a hierarchy of normal functions, see Veblen functions.
== Notes ==
== References == | Wikipedia/Normal_function |
In mathematics, a multiplicative inverse or reciprocal for a number x, denoted by 1/x or x−1, is a number which when multiplied by x yields the multiplicative identity, 1. The multiplicative inverse of a fraction a/b is b/a. For the multiplicative inverse of a real number, divide 1 by the number. For example, the reciprocal of 5 is one fifth (1/5 or 0.2), and the reciprocal of 0.25 is 1 divided by 0.25, or 4. The reciprocal function, the function f(x) that maps x to 1/x, is one of the simplest examples of a function which is its own inverse (an involution).
Multiplying by a number is the same as dividing by its reciprocal and vice versa. For example, multiplication by 4/5 (or 0.8) will give the same result as division by 5/4 (or 1.25). Therefore, multiplication by a number followed by multiplication by its reciprocal yields the original number (since the product of the number and its reciprocal is 1).
The term reciprocal was in common use at least as far back as the third edition of Encyclopædia Britannica (1797) to describe two numbers whose product is 1; geometrical quantities in inverse proportion are described as reciprocall in a 1570 translation of Euclid's Elements.
In the phrase multiplicative inverse, the qualifier multiplicative is often omitted and then tacitly understood (in contrast to the additive inverse). Multiplicative inverses can be defined over many mathematical domains as well as numbers. In these cases it can happen that ab ≠ ba; then "inverse" typically implies that an element is both a left and right inverse.
The notation f −1 is sometimes also used for the inverse function of the function f, which is for most functions not equal to the multiplicative inverse. For example, the multiplicative inverse 1/(sin x) = (sin x)−1 is the cosecant of x, and not the inverse sine of x denoted by sin−1 x or arcsin x. The terminology difference reciprocal versus inverse is not sufficient to make this distinction, since many authors prefer the opposite naming convention, probably for historical reasons (for example in French, the inverse function is preferably called the bijection réciproque).
== Examples and counterexamples ==
In the real numbers, zero does not have a reciprocal (division by zero is undefined) because no real number multiplied by 0 produces 1 (the product of any number with zero is zero). With the exception of zero, reciprocals of every real number are real, reciprocals of every rational number are rational, and reciprocals of every complex number are complex. The property that every element other than zero has a multiplicative inverse is part of the definition of a field, of which these are all examples. On the other hand, no integer other than 1 and −1 has an integer reciprocal, and so the integers are not a field.
In modular arithmetic, the modular multiplicative inverse of a is also defined: it is the number x such that ax ≡ 1 (mod n). This multiplicative inverse exists if and only if a and n are coprime. For example, the inverse of 3 modulo 11 is 4 because 4 ⋅ 3 ≡ 1 (mod 11). The extended Euclidean algorithm may be used to compute it.
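A minimal sketch in Python of this computation via the extended Euclidean algorithm (Python 3.8+ also offers the built-in pow(a, -1, n)):

def modular_inverse(a: int, n: int) -> int:
    # Extended Euclidean algorithm, tracking only the coefficient of a.
    old_r, r = a % n, n
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a and n are not coprime; no inverse exists")
    return old_s % n

print(modular_inverse(3, 11))  # 4, since 4 * 3 = 12 = 1 (mod 11)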
The sedenions are an algebra in which every nonzero element has a multiplicative inverse, but which nonetheless has divisors of zero, that is, nonzero elements x, y such that xy = 0.
A square matrix has an inverse if and only if its determinant has an inverse in the coefficient ring. The linear map that has the matrix A−1 with respect to some base is then the inverse function of the map having A as matrix in the same base. Thus, the two distinct notions of the inverse of a function are strongly related in this case, but they still do not coincide, since the multiplicative inverse of Ax would be (Ax)−1, not A−1x.
These two notions of an inverse function do sometimes coincide, for example for the function {\displaystyle f(x)=x^{i}=e^{i\ln(x)}} where {\displaystyle \ln } is the principal branch of the complex logarithm and {\displaystyle e^{-\pi }<|x|<e^{\pi }}:
{\displaystyle ((1/f)\circ f)(x)=(1/f)(f(x))=1/(f(f(x)))=1/e^{i\ln(e^{i\ln(x)})}=1/e^{ii\ln(x)}=1/e^{-\ln(x)}=x}.
The trigonometric functions are related by the reciprocal identity: the cotangent is the reciprocal of the tangent; the secant is the reciprocal of the cosine; the cosecant is the reciprocal of the sine.
A ring in which every nonzero element has a multiplicative inverse is a division ring; likewise an algebra in which this holds is a division algebra.
== Complex numbers ==
As mentioned above, the reciprocal of every nonzero complex number {\displaystyle z=a+bi} is complex. It can be found by multiplying both top and bottom of 1/z by its complex conjugate {\displaystyle {\bar {z}}=a-bi} and using the property that {\displaystyle z{\bar {z}}=\|z\|^{2}}, the absolute value of z squared, which is the real number {\displaystyle a^{2}+b^{2}}:
{\displaystyle {\frac {1}{z}}={\frac {\bar {z}}{z{\bar {z}}}}={\frac {\bar {z}}{\|z\|^{2}}}={\frac {a-bi}{a^{2}+b^{2}}}={\frac {a}{a^{2}+b^{2}}}-{\frac {b}{a^{2}+b^{2}}}i.}
The intuition is that {\displaystyle {\frac {\bar {z}}{\|z\|}}} gives us the complex conjugate with a magnitude reduced to a value of 1, so dividing again by {\displaystyle \|z\|} ensures that the magnitude is now equal to the reciprocal of the original magnitude as well, hence:
{\displaystyle {\frac {1}{z}}={\frac {\bar {z}}{\|z\|^{2}}}}
In particular, if {\displaystyle \|z\|=1} (z has unit magnitude), then {\displaystyle 1/z={\bar {z}}}. Consequently, the imaginary units, ±i, have additive inverse equal to multiplicative inverse, and are the only complex numbers with this property. For example, additive and multiplicative inverses of i are −(i) = −i and 1/i = −i, respectively.
For a complex number in polar form z = r(cos φ + i sin φ), the reciprocal simply takes the reciprocal of the magnitude and the negative of the angle:
{\displaystyle {\frac {1}{z}}={\frac {1}{r}}\left(\cos(-\varphi )+i\sin(-\varphi )\right).}
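Both forms are easy to verify numerically; a minimal sketch in Python (the sample value z = 3 + 4i is arbitrary):

import cmath
import math

z = 3 + 4j
rect = z.conjugate() / abs(z) ** 2           # rectangular: conj(z)/||z||^2
r, phi = abs(z), cmath.phase(z)              # polar form z = r*exp(i*phi)
polar = (1 / r) * complex(math.cos(-phi), math.sin(-phi))
print(rect, polar, 1 / z)                    # all three give (0.12-0.16j)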
== Calculus ==
In real calculus, the derivative of 1/x = x−1 is given by the power rule with the power −1:
{\displaystyle {\frac {d}{dx}}x^{-1}=(-1)x^{(-1)-1}=-x^{-2}=-{\frac {1}{x^{2}}}.}
The power rule for integrals (Cavalieri's quadrature formula) cannot be used to compute the integral of 1/x, because doing so would result in division by 0:
{\displaystyle \int {\frac {dx}{x}}={\frac {x^{0}}{0}}+C}
Instead the integral is given by:
{\displaystyle \int _{1}^{a}{\frac {dx}{x}}=\ln a,}
{\displaystyle \int {\frac {dx}{x}}=\ln x+C.}
where ln is the natural logarithm. To show this, note that {\textstyle {\frac {d}{dy}}e^{y}=e^{y}}, so if {\displaystyle x=e^{y}} and {\displaystyle y=\ln x}, we have:
{\displaystyle {\begin{aligned}&{\frac {dx}{dy}}=x\quad \Rightarrow \quad {\frac {dx}{x}}=dy\\[10mu]&\quad \Rightarrow \quad \int {\frac {dx}{x}}=\int dy=y+C=\ln x+C.\end{aligned}}}
== Algorithms ==
The reciprocal may be computed by hand with the use of long division.
Computing the reciprocal is important in many division algorithms, since the quotient a/b can be computed by first computing 1/b and then multiplying it by a. Noting that {\displaystyle f(x)=1/x-b} has a zero at x = 1/b, Newton's method can find that zero, starting with a guess {\displaystyle x_{0}} and iterating using the rule:
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}-{\frac {1/x_{n}-b}{-1/x_{n}^{2}}}=2x_{n}-bx_{n}^{2}=x_{n}(2-bx_{n}).}
This continues until the desired precision is reached. For example, suppose we wish to compute 1/17 ≈ 0.0588 with 3 digits of precision. Taking x0 = 0.1, the following sequence is produced:
x1 = 0.1(2 − 17 × 0.1) = 0.03
x2 = 0.03(2 − 17 × 0.03) = 0.0447
x3 = 0.0447(2 − 17 × 0.0447) ≈ 0.0554
x4 = 0.0554(2 − 17 × 0.0554) ≈ 0.0586
x5 = 0.0586(2 − 17 × 0.0586) ≈ 0.0588
A typical initial guess can be found by rounding b to a nearby power of 2, then using bit shifts to compute its reciprocal.
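A minimal sketch of the iteration in Python (the tolerance is an arbitrary stopping choice):

def reciprocal(b: float, x0: float, tol: float = 1e-12) -> float:
    # Newton's iteration x -> x*(2 - b*x) converges to 1/b
    # whenever the initial residual |1 - b*x0| is below 1.
    x = x0
    while abs(1 - b * x) > tol:
        x = x * (2 - b * x)
    return x

print(reciprocal(17, 0.1))  # 0.0588235294... = 1/17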
In constructive mathematics, for a real number x to have a reciprocal, it is not sufficient that x ≠ 0. There must instead be given a rational number r such that 0 < r < |x|. In terms of the approximation algorithm described above, this is needed to prove that the change in y will eventually become arbitrarily small.
This iteration can also be generalized to a wider sort of inverses; for example, matrix inverses.
== Reciprocals of irrational numbers ==
Every real or complex number excluding zero has a reciprocal, and reciprocals of certain irrational numbers can have important special properties. Examples include the reciprocal of e (≈ 0.367879) and the golden ratio's reciprocal (≈ 0.618034). The first reciprocal is special because no other positive number can produce a lower number when put to the power of itself;
{\displaystyle f(1/e)} is the global minimum of {\displaystyle f(x)=x^{x}}
. The second number is the only positive number that is equal to its reciprocal plus one:
{\displaystyle \varphi =1/\varphi +1}
. Its additive inverse is the only negative number that is equal to its reciprocal minus one:
{\displaystyle -\varphi =-1/\varphi -1}.
The function {\textstyle f(n)=n+{\sqrt {n^{2}+1}},n\in \mathbb {N} ,n>0} gives an infinite number of irrational numbers that differ from their reciprocal by an integer. For example, {\displaystyle f(2)} is the irrational {\displaystyle 2+{\sqrt {5}}}. Its reciprocal {\displaystyle 1/(2+{\sqrt {5}})} is {\displaystyle -2+{\sqrt {5}}}, exactly {\displaystyle 4} less. Such irrational numbers share an evident property: they have the same fractional part as their reciprocal, since these numbers differ by an integer.
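A quick numerical check of the n = 2 case (a sketch in Python):

import math

x = 2 + math.sqrt(5)   # f(2)
r = 1 / x              # equals -2 + sqrt(5)
print(x - r)           # 4.000...: the two differ by an integer
print(x % 1, r % 1)    # equal fractional parts, ~0.2360679...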
The reciprocal function plays an important role in simple continued fractions, which have a number of remarkable properties relating to the representation of (both rational and) irrational numbers.
== Further remarks ==
If the multiplication is associative, an element x with a multiplicative inverse cannot be a zero divisor (x is a zero divisor if xy = 0 for some nonzero y). To see this, it is sufficient to multiply the equation xy = 0 by the inverse of x (on the left), and then simplify using associativity. In the absence of associativity, the sedenions provide a counterexample.
The converse does not hold: an element which is not a zero divisor is not guaranteed to have a multiplicative inverse.
Within Z, all integers except −1, 0, 1 provide examples; they are not zero divisors nor do they have inverses in Z.
If the ring or algebra is finite, however, then all elements a which are not zero divisors do have a (left and right) inverse. For, first observe that the map f(x) = ax must be injective: f(x) = f(y) implies x = y:
{\displaystyle {\begin{aligned}ax&=ay&\quad \Rightarrow &\quad ax-ay=0\\&&\quad \Rightarrow &\quad a(x-y)=0\\&&\quad \Rightarrow &\quad x-y=0\\&&\quad \Rightarrow &\quad x=y.\end{aligned}}}
Distinct elements map to distinct elements, so the image consists of the same finite number of elements, and the map is necessarily surjective. Specifically, f (namely multiplication by a) must map some element x to 1, ax = 1, so that x is an inverse for a.
== Applications ==
The expansion of the reciprocal 1/q in any base can also act as a source of pseudo-random numbers, if q is a "suitable" safe prime, a prime of the form 2p + 1 where p is also a prime. A sequence of pseudo-random numbers of length q − 1 will be produced by the expansion.
== See also ==
Division (mathematics)
Exponential decay
Fraction
Group (mathematics)
Hyperbola
Inverse distribution
List of sums of reciprocals
Repeating decimal
6-sphere coordinates
Unit fractions – reciprocals of integers
Zeros and poles
== Notes ==
== References ==
Maximally Periodic Reciprocals, Matthews R.A.J. Bulletin of the Institute of Mathematics and its Applications vol 28 pp 147–148 1992 | Wikipedia/Reciprocal_function |
In category theory, a branch of mathematics, a diagram is the categorical analogue of an indexed family in set theory. The primary difference is that in the categorical setting one has morphisms that also need indexing. An indexed family of sets is a collection of sets, indexed by a fixed set; equivalently, a function from a fixed index set to the class of sets. A diagram is a collection of objects and morphisms, indexed by a fixed category; equivalently, a functor from a fixed index category to some category.
== Definition ==
Formally, a diagram of type J in a category C is a (covariant) functor
The category J is called the index category or the scheme of the diagram D; the functor is sometimes called a J-shaped diagram. The actual objects and morphisms in J are largely irrelevant; only the way in which they are interrelated matters. The diagram D is thought of as indexing a collection of objects and morphisms in C patterned on J.
Although, technically, there is no difference between an individual diagram and a functor or between a scheme and a category, the change in terminology reflects a change in perspective, just as in the set theoretic case: one fixes the index category, and allows the functor (and, secondarily, the target category) to vary.
One is most often interested in the case where the scheme J is a small or even finite category. A diagram is said to be small or finite whenever J is.
A morphism of diagrams of type J in a category C is a natural transformation between functors. One can then interpret the category of diagrams of type J in C as the functor category CJ, and a diagram is then an object in this category.
== Examples ==
Given any object A in C, one has the constant diagram, which is the diagram that maps all objects in J to A, and all morphisms of J to the identity morphism on A. Notationally, one often uses an underbar to denote the constant diagram: thus, for any object {\displaystyle A} in C, one has the constant diagram {\displaystyle {\underline {A}}}.
If J is a (small) discrete category, then a diagram of type J is essentially just an indexed family of objects in C (indexed by J). When used in the construction of the limit, the result is the product; for the colimit, one gets the coproduct. So, for example, when J is the discrete category with two objects, the resulting limit is just the binary product.
If J = −1 ← 0 → +1, then a diagram of type J (A ← B → C) is a span, and its colimit is a pushout. If one were to "forget" that the diagram had object B and the two arrows B → A, B → C, the resulting diagram would simply be the discrete category with the two objects A and C, and the colimit would simply be the binary coproduct. Thus, this example shows an important way in which the idea of the diagram generalizes that of the index set in set theory: by including the morphisms B → A, B → C, one discovers additional structure in constructions built from the diagram, structure that would not be evident if one only had an index set with no relations between the objects in the index.
Dual to the above, if J = −1 → 0 ← +1, then a diagram of type J (A → B ← C) is a cospan, and its limit is a pullback.
The index {\displaystyle J=0\rightrightarrows 1} is called "two parallel morphisms", or sometimes the free quiver or the walking quiver. A diagram of type {\displaystyle J} ({\displaystyle (f,g\colon X\to Y)}) is then a quiver; its limit is an equalizer, and its colimit is a coequalizer.
If J is a poset category, then a diagram of type J is a family of objects Di together with a unique morphism fij : Di → Dj whenever i ≤ j. If J is directed then a diagram of type J is called a direct system of objects and morphisms. If the diagram is contravariant then it is called an inverse system.
== Cones and limits ==
A cone with vertex N of a diagram D : J → C is a morphism from the constant diagram Δ(N) to D. The constant diagram is the diagram which sends every object of J to an object N of C and every morphism to the identity morphism on N.
The limit of a diagram D is a universal cone to D. That is, a cone through which all other cones uniquely factor. If the limit exists in a category C for all diagrams of type J one obtains a functor
which sends each diagram to its limit.
Dually, the colimit of diagram D is a universal cone from D. If the colimit exists for all diagrams of type J one has a functor
which sends each diagram to its colimit.
The universal functor of a diagram is the diagonal functor; its right adjoint is the limit and its left adjoint is the colimit. A cone can be thought of as a natural transformation from the diagonal functor to some arbitrary diagram.
== Commutative diagrams ==
Diagrams and functor categories are often visualized by commutative diagrams, particularly if the index category is a finite poset category with few elements: one draws a commutative diagram with a node for every object in the index category, and an arrow for a generating set of morphisms, omitting identity maps and morphisms that can be expressed as compositions. The commutativity corresponds to the uniqueness of a map between two objects in a poset category. Conversely, every commutative diagram represents a diagram (a functor from a poset index category) in this way.
Not every diagram commutes, as not every index category is a poset category:
most simply, the diagram of a single object with an endomorphism ({\displaystyle f\colon X\to X}), or with two parallel arrows ({\displaystyle \bullet \rightrightarrows \bullet }; {\displaystyle f,g\colon X\to Y}) need not commute. Further, diagrams may be impossible to draw (because they are infinite) or simply messy (because there are too many objects or morphisms); however, schematic commutative diagrams (for subcategories of the index category, or with ellipses, such as for a directed system) are used to clarify such complex diagrams.
== See also ==
Diagonal functor
Direct system
Inverse system
== References ==
== External links ==
Diagram Chasing at MathWorld
WildCats is a category theory package for Mathematica. Manipulation and visualization of objects, morphisms, commutative diagrams, categories, functors, natural transformations. | Wikipedia/Diagram_(category_theory) |
In discrete mathematics, a direction-preserving function (or mapping) is a function on a discrete space, such as the integer grid, that (informally) does not change too drastically between two adjacent points. It can be considered a discrete analogue of a continuous function.
The concept was first defined by Iimura. Some variants of it were later defined by Yang, Chen and Deng, Herings, van-der-Laan, Talman and Yang, and others.
== Basic concepts ==
We focus on functions {\displaystyle f:X\to \mathbb {R} ^{n}}, where the domain X is a finite subset of the Euclidean space {\displaystyle \mathbb {R} ^{n}}. ch(X) denotes the convex hull of X.
There are many variants of direction-preservation properties, depending on how exactly one defines the "drastic change" and the "adjacent points". Regarding the "drastic change" there are two main variants:
Direction preservation (DP) means that, if x and y are adjacent, then for all {\displaystyle i\in [n]}: {\displaystyle f_{i}(x)\cdot f_{i}(y)\geq 0}. In words: every component of the function f must not switch signs between adjacent points.
Gross direction preservation (GDP) means that, if x and y are adjacent, then {\displaystyle f(x)\cdot f(y)\geq 0}. In words: the direction of the function f (as a vector) does not change by more than 90 degrees between adjacent points. Note that DP implies GDP but not vice versa.
Regarding the "adjacent points" there are several variants:
Hypercubic means that x and y are adjacent iff they are contained in some axes-parallel hypercube of side-length 1.
Simplicial means that x and y are adjacent iff they are vertices of the same simplex, in some triangulation of the domain. Usually, simplicial adjacency is much stronger than hypercubic adjacency; accordingly, hypercubic DP is much stronger than simplicial DP.
Specific definitions are presented below. All examples below are for {\displaystyle n=2} dimensions and for X = { (2,6), (2,7), (3, 6), (3, 7) }.
== Properties and examples ==
=== Hypercubic direction-preservation ===
A cell is a subset of {\displaystyle \mathbb {R} ^{n}} that can be expressed by {\displaystyle k+[0,1]^{n}} for some {\displaystyle k\in \mathbb {Z} ^{n}}. For example, the square {\displaystyle [2,3]\times [6,7]} is a cell.
Two points in {\displaystyle \mathbb {R} ^{n}} are called cell connected if there is a cell that contains both of them.
Hypercubic direction-preservation properties require that the function does not change too drastically in cell-connected points (points in the same hypercubic cell).
f is called hypercubic direction preserving (HDP) if, for any pair of cell-connected points x,y in X, for all {\displaystyle i\in [n]}: {\displaystyle f_{i}(x)\cdot f_{i}(y)\geq 0}. The term locally direction-preserving (LDP) is often used instead. The function fa on the right is DP.
Some authors: Def.1 use a variant requiring that, for any pair of cell-connected points x,y in X, for all {\displaystyle i\in [n]}: {\displaystyle (f_{i}(x)-x_{i})\cdot (f_{i}(y)-y_{i})\geq 0}. A function f(x) is HDP by the second variant, iff the function g(x):=f(x)-x is HDP by the first variant.
f is called hypercubic gross direction preserving (HGDP), or locally gross direction preserving (LGDP), if for any pair of cell-connected points x,y in X, {\displaystyle f(x)\cdot f(y)\geq 0}.: Def.2.2 Every HDP function is HGDP, but the converse is not true. The function fb is HGDP, since the scalar product of every two vectors in the table is non-negative. But it is not HDP, since the second component switches sign between (2,6) and (3,6): {\displaystyle f_{2}^{b}(2,6)\cdot f_{2}^{b}(3,6)=-1<0}.
Some authors use a variant requiring that, for any pair of cell-connected points x,y in X, {\displaystyle (f(x)-x)\cdot (f(y)-y)\geq 0}. A function f(x) is HGDP by the second variant, iff the function g(x):=f(x)-x is HGDP by the first variant.
=== Simplicial direction-preservation ===
A simplex is called integral if all its vertices have integer coordinates, and they all lie in the same cell (so the difference between coordinates of different vertices is at most 1).
A triangulation of some subset of {\displaystyle \mathbb {R} ^{n}} is called integral if all its simplices are integral.
Given a triangulation, two points are called simplicially connected if there is a simplex of the triangulation that contains both of them.
Note that, in an integral triangulation, any two simplicially-connected points are also cell-connected, but the converse is not true. For example, consider the cell {\displaystyle [2,3]\times [6,7]}. Consider the integral triangulation that partitions it into two triangles: {(2,6),(2,7),(3,7)} and {(2,6),(3,6),(3,7)}. The points (2,7) and (3,6) are cell-connected but not simplicially-connected.
Simplicial direction-preservation properties assume some fixed integral triangulation of the input domain. They require that the function does not change too drastically in simplicially-connected points (points in the same simplex of the triangulation). This is, in general, a much weaker requirement than hypercubic direction-preservation.
f is called simplicial direction preserving (SDP) if, for some integral triangulation of X, for any pair of simplicially-connected points x,y in X, for all {\displaystyle i\in [n]}: {\displaystyle (f_{i}(x)-x_{i})\cdot (f_{i}(y)-y_{i})\geq 0}.: Def.4
f is called simplicially gross direction preserving (SGDP) or simplicially-local gross direction preserving (SLGDP) if there exists an integral triangulation of ch(X) such that, for any pair of simplicially-connected points x,y in X, {\displaystyle f(x)\cdot f(y)\geq 0}.
Every HGDP function is SGDP, but HGDP is much stronger: it is equivalent to SGDP w.r.t. all possible integral triangulations of ch(X), whereas SGDP relates to a single triangulation.: Def.2.3 As an example, the function fc on the right is SGDP by the triangulation that partitions the cell into the two triangles {(2,6),(2,7),(3,7)} and {(2,6),(3,6),(3,7)}, since in each triangle, the scalar product of every two vectors is non-negative. But it is not HGDP, since {\displaystyle f^{c}(3,6)\cdot f^{c}(2,7)=-1<0}.
== References == | Wikipedia/Direction-preserving_function |
In mathematics, the Dirichlet function is the indicator function {\displaystyle \mathbf {1} _{\mathbb {Q} }} of the set of rational numbers {\displaystyle \mathbb {Q} }, i.e. {\displaystyle \mathbf {1} _{\mathbb {Q} }(x)=1} if x is a rational number and {\displaystyle \mathbf {1} _{\mathbb {Q} }(x)=0} if x is not a rational number (i.e. is an irrational number).
{\displaystyle \mathbf {1} _{\mathbb {Q} }(x)={\begin{cases}1&x\in \mathbb {Q} \\0&x\notin \mathbb {Q} \end{cases}}}
It is named after the mathematician Peter Gustav Lejeune Dirichlet. It is an example of a pathological function which provides counterexamples to many situations.
== Topological properties ==
== Periodicity ==
For any real number x and any positive rational number T, {\displaystyle \mathbf {1} _{\mathbb {Q} }(x+T)=\mathbf {1} _{\mathbb {Q} }(x)}. The Dirichlet function is therefore an example of a real periodic function which is not constant but whose set of periods, the set of rational numbers, is a dense subset of {\displaystyle \mathbb {R} }.
== Integration properties ==
== See also ==
Thomae's function, a variation that is discontinuous only at the rational numbers
== References == | Wikipedia/Dirichlet's_function |
In mathematics, the Weierstrass function, named after its discoverer, Karl Weierstrass, is an example of a real-valued function that is continuous everywhere but differentiable nowhere. It is also an example of a fractal curve.
The Weierstrass function has historically served the role of a pathological function, being the first published example (1872) specifically concocted to challenge the notion that every continuous function is differentiable except on a set of isolated points. Weierstrass's demonstration that continuity did not imply almost-everywhere differentiability upended mathematics, overturning several proofs that relied on geometric intuition and vague definitions of smoothness. These types of functions were disliked by contemporaries: Charles Hermite, on finding that one class of function he was working on had such a property, described it as a "lamentable scourge". The functions were difficult to visualize until the arrival of computers in the next century, and the results did not gain wide acceptance until practical applications such as models of Brownian motion necessitated infinitely jagged functions (nowadays known as fractal curves).
== Construction ==
In Weierstrass's original paper, the function was defined as a Fourier series:
{\displaystyle f(x)=\sum _{n=0}^{\infty }a^{n}\cos(b^{n}\pi x),}
where {\textstyle 0<a<1}, {\textstyle b} is a positive odd integer, and
{\displaystyle ab>1+{\frac {3}{2}}\pi .}
The minimum value of {\textstyle b} for which there exists {\textstyle 0<a<1} such that these constraints are satisfied is {\textstyle b=7}. This construction, along with the proof that the function is not differentiable at any point, was first delivered by Weierstrass in a paper presented to the Königliche Akademie der Wissenschaften on 18 July 1872.
Despite being differentiable nowhere, the function is continuous: since the terms of the infinite series which defines it are bounded by {\textstyle \pm a^{n}} and this has finite sum for {\textstyle 0<a<1}, convergence of the sum of the terms is uniform by the Weierstrass M-test with {\textstyle M_{n}=a^{n}}. Since each partial sum is continuous, by the uniform limit theorem, it follows that {\textstyle f} is continuous. Additionally, since each partial sum is uniformly continuous, it follows that {\textstyle f} is also uniformly continuous.
It might be expected that a continuous function must have a derivative, or that the set of points where it is not differentiable should be countably infinite or finite. According to Weierstrass in his paper, earlier mathematicians including Gauss had often assumed that this was true. This might be because it is difficult to draw or visualise a continuous function whose set of nondifferentiable points is something other than a countable set of points. Analogous results for better behaved classes of continuous functions do exist, for example the Lipschitz functions, whose set of non-differentiability points must be a Lebesgue null set (Rademacher's theorem). When we try to draw a general continuous function, we usually draw the graph of a function which is Lipschitz or otherwise well-behaved. Moreover, the fact that the set of non-differentiability points for a monotone function is measure-zero implies that the rapid oscillations of Weierstrass' function are necessary to ensure that it is nowhere-differentiable.
The Weierstrass function was one of the first fractals studied, although this term was not used until much later. The function has detail at every level, so zooming in on a piece of the curve does not show it getting progressively closer to a straight line. Rather, between any two points, no matter how close, the function fails to be monotone.
The computation of the Hausdorff dimension $D$ of the graph of the classical Weierstrass function was an open problem until 2018, while it was generally believed that $D=2+\log_{b}(a)<2$. That D is strictly less than 2 follows from the conditions on $a$ and $b$ from above. Only after more than 30 years was this proved rigorously.
The term Weierstrass function is often used in real analysis to refer to any function with similar properties and construction to Weierstrass's original example. For example, the cosine function can be replaced in the infinite series by a piecewise linear "zigzag" function. G. H. Hardy showed that the function of the above construction is nowhere differentiable with the assumptions $0<a<1$, $ab\geq 1$.
== Riemann function ==
The Weierstrass function is based on the earlier Riemann function, claimed to be differentiable nowhere. Occasionally, this function has also been called the Weierstrass function.
$$f(x)=\sum_{n=1}^{\infty}\frac{\sin(n^{2}x)}{n^{2}}.$$
While Bernhard Riemann strongly claimed that the function is differentiable nowhere, Riemann published no evidence of this, and Weierstrass noted that he did not find any evidence of it surviving either in Riemann's papers or through his students.
In 1916, G. H. Hardy confirmed that the function does not have a finite derivative at any value of $\pi x$ where x is irrational or is rational of the form either $\tfrac{2A}{4B+1}$ or $\tfrac{2A+1}{2B}$, where A and B are integers. In 1969, Joseph Gerver found that the Riemann function has a well-defined derivative at every value of x that can be expressed in the form $\tfrac{2A+1}{2B+1}\pi$ with integer A and B, that is, at rational multiples of $\pi$ with odd numerator and denominator. At these points the function has a derivative of $-\tfrac{1}{2}$. In 1971, Gerver showed that the function has no finite derivative at the values of x that can be expressed in the form $\tfrac{2A}{2B+1}\pi$, completing the problem of the differentiability of the Riemann function.
As the Riemann function is differentiable only on a null set of points, it is differentiable almost nowhere.
== Hölder continuity ==
It is convenient to write the Weierstrass function equivalently as
$$W_{\alpha}(x)=\sum_{n=0}^{\infty}b^{-n\alpha}\cos(b^{n}\pi x)$$
for $\alpha=-\frac{\ln(a)}{\ln(b)}$. Then $W_{\alpha}(x)$ is Hölder continuous of exponent α, which is to say that there is a constant C such that
$$|W_{\alpha}(x)-W_{\alpha}(y)|\leq C|x-y|^{\alpha}$$
for all $x$ and $y$. Moreover, $W_{1}$ is Hölder continuous of all orders $\alpha<1$ but not Lipschitz continuous.
== Density of nowhere-differentiable functions ==
It turns out that the Weierstrass function is far from being an isolated example: although it is "pathological", it is also "typical" of continuous functions:
In a topological sense: the set of nowhere-differentiable real-valued functions on [0, 1] is comeager in the vector space C([0, 1]; R) of all continuous real-valued functions on [0, 1] with the topology of uniform convergence.
In a measure-theoretic sense: when the space C([0, 1]; R) is equipped with classical Wiener measure γ, the collection of functions that are differentiable at even a single point of [0, 1] has γ-measure zero. The same is true even if one takes finite-dimensional "slices" of C([0, 1]; R), in the sense that the nowhere-differentiable functions form a prevalent subset of C([0, 1]; R).
== See also ==
Blancmange curve
Koch snowflake
Nowhere continuous function
== Notes ==
== References ==
David, Claire (2018), "Bypassing dynamical systems : A simple way to get the box-counting dimension of the graph of the Weierstrass function", Proceedings of the International Geometry Center, 11 (2), Academy of Sciences of Ukraine: 53–68, arXiv:1711.10349, doi:10.15673/tmgc.v11i2.1028
Falconer, K. (1984), The Geometry of Fractal Sets, Cambridge Tracts in Mathematics, vol. 85, Cambridge: Cambridge University Press, ISBN 978-0-521-33705-2
Gelbaum, Bernard R.; Olmsted, John M. H. (2003) [1964], Counterexamples in Analysis, Dover Books on Mathematics, Dover Publications, ISBN 978-0-486-42875-8
Hardy, G. H. (1916), "Weierstrass's nondifferentiable function" (PDF), Transactions of the American Mathematical Society, 17 (3), American Mathematical Society: 301–325, doi:10.2307/1989005, JSTOR 1989005
Weierstrass, Karl (18 July 1872), Über continuirliche Functionen eines reellen Arguments, die für keinen Werth des letzteren einen bestimmten Differentialquotienten besitzen, Königlich Preußische Akademie der Wissenschaften
Weierstrass, Karl (1895), "Über continuirliche Functionen eines reellen Arguments, die für keinen Werth des letzteren einen bestimmten Differentialquotienten besitzen", Mathematische Werke von Karl Weierstrass, vol. 2, Berlin, Germany: Mayer & Müller, pp. 71–74
English translation: Edgar, Gerald A. (1993), "On continuous functions of a real argument that do not possess a well-defined derivative for any value of their argument", Classics on Fractals, Studies in Nonlinearity, Addison-Wesley Publishing Company, pp. 3–9, ISBN 978-0-201-58701-2
== External links ==
Weisstein, Eric W. "Weierstrass function". MathWorld. (a different Weierstrass Function which is also continuous and nowhere differentiable)
Nowhere differentiable continuous function proof of existence using Banach's contraction principle.
Nowhere monotonic continuous function proof of existence using the Baire category theorem.
Johan Thim, "Continuous Nowhere Differentiable Functions", Master's thesis, Luleå University of Technology, 2003. Archived from the original on 22 February 2017. Retrieved 28 July 2006.
Weierstrass function in the complex plane Archived 24 September 2009 at the Wayback Machine Beautiful fractal.
"Simple Proofs of Nowhere-Differentiability for Weierstrass's Function and Cases of Slow Growth", Journal of Fourier Analysis and Applications, Volume 16, Number 1 (SpringerLink).
Weierstrass functions: continuous but not differentiable anywhere
The Weierstrass Function by Brent Nelson at Berkeley, showing non-differentiability.
In mathematics, a function $f:\mathbb{R}\to\mathbb{R}$ is symmetrically continuous at a point x if
$$\lim_{h\to 0}f(x+h)-f(x-h)=0.$$
The usual definition of continuity implies symmetric continuity, but the converse is not true. For example, the function $x^{-2}$ is symmetrically continuous at $x=0$, but not continuous there.
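A quick numerical illustration (a sketch of ours, not from the article): the symmetric difference of $f(x)=x^{-2}$ at 0 cancels exactly for every h ≠ 0, even though f blows up at 0.

```python
def f(x):
    # f(x) = x**-2, which is unbounded near 0 and hence not continuous there.
    return x ** -2

# The symmetric difference f(0 + h) - f(0 - h) is exactly 0 for every h != 0,
# since (-h)**-2 == h**-2; so the symmetric-continuity limit is trivially 0.
for h in (1.0, 1e-3, 1e-9):
    print(h, f(0 + h) - f(0 - h))
```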
Also, symmetric differentiability implies symmetric continuity, but the converse is not true, just as usual continuity does not imply differentiability.
The set of symmetrically continuous functions, with the usual addition and scalar multiplication, can easily be shown to have the structure of a vector space over $\mathbb{R}$, similarly to the continuous functions, which form a linear subspace within it.
In mathematics, a metric space is a set together with a notion of distance between its elements, usually called points. The distance is measured by a function called a metric or distance function. Metric spaces are a general setting for studying many of the concepts of mathematical analysis and geometry.
The most familiar example of a metric space is 3-dimensional Euclidean space with its usual notion of distance. Other well-known examples are a sphere equipped with the angular distance and the hyperbolic plane. A metric may correspond to a metaphorical, rather than physical, notion of distance: for example, the set of 100-character Unicode strings can be equipped with the Hamming distance, which measures the number of characters that need to be changed to get from one string to another.
Since they are very general, metric spaces are a tool used in many different branches of mathematics. Many types of mathematical objects have a natural notion of distance and therefore admit the structure of a metric space, including Riemannian manifolds, normed vector spaces, and graphs. In abstract algebra, the p-adic numbers arise as elements of the completion of a metric structure on the rational numbers. Metric spaces are also studied in their own right in metric geometry and analysis on metric spaces.
Many of the basic notions of mathematical analysis, including balls, completeness, as well as uniform, Lipschitz, and Hölder continuity, can be defined in the setting of metric spaces. Other notions, such as continuity, compactness, and open and closed sets, can be defined for metric spaces, but also in the even more general setting of topological spaces.
== Definition and illustration ==
=== Motivation ===
To see the utility of different notions of distance, consider the surface of the Earth as a set of points. We can measure the distance between two such points by the length of the shortest path along the surface, "as the crow flies"; this is particularly useful for shipping and aviation. We can also measure the straight-line distance between two points through the Earth's interior; this notion is, for example, natural in seismology, since it roughly corresponds to the length of time it takes for seismic waves to travel between those two points.
The notion of distance encoded by the metric space axioms has relatively few requirements. This generality gives metric spaces a lot of flexibility. At the same time, the notion is strong enough to encode many intuitive facts about what distance means. This means that general results about metric spaces can be applied in many different contexts.
Like many fundamental mathematical concepts, the metric on a metric space can be interpreted in many different ways. A particular metric may not be best thought of as measuring physical distance, but, instead, as the cost of changing from one state to another (as with Wasserstein metrics on spaces of measures) or the degree of difference between two objects (for example, the Hamming distance between two strings of characters, or the Gromov–Hausdorff distance between metric spaces themselves).
=== Definition ===
Formally, a metric space is an ordered pair (M, d) where M is a set and d is a metric on M, i.e., a function
$$d\colon M\times M\to\mathbb{R}$$
satisfying the following axioms for all points $x,y,z\in M$:
The distance from a point to itself is zero: $d(x,x)=0$.
(Positivity) The distance between two distinct points is always positive: if $x\neq y$, then $d(x,y)>0$.
(Symmetry) The distance from x to y is always the same as the distance from y to x: $d(x,y)=d(y,x)$.
The triangle inequality holds: $d(x,z)\leq d(x,y)+d(y,z)$.
This is a natural property of both physical and metaphorical notions of distance: you can arrive at z from x by taking a detour through y, but this will not make your journey any shorter than the direct path.
If the metric d is unambiguous, one often refers by abuse of notation to "the metric space M".
By taking all axioms except the second, one can show that distance is always non-negative:
$$0=d(x,x)\leq d(x,y)+d(y,x)=2d(x,y).$$
Therefore the second axiom can be weakened to "if $x\neq y$, then $d(x,y)\neq 0$" and combined with the first to give $d(x,y)=0\iff x=y$.
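The axioms can be checked mechanically on a finite set of points. The following Python sketch (a brute-force checker; the function name is ours) verifies symmetry, identity of indiscernibles, and the triangle inequality, with non-negativity following automatically as shown above.

```python
import itertools
import math

def is_metric(points, d):
    """Brute-force check of the metric axioms on a finite point set."""
    for x, y in itertools.product(points, repeat=2):
        if d(x, y) != d(y, x):              # symmetry
            return False
        if (d(x, y) == 0) != (x == y):      # identity of indiscernibles
            return False
    # triangle inequality over all triples (non-negativity then follows)
    return all(d(x, z) <= d(x, y) + d(y, z)
               for x, y, z in itertools.product(points, repeat=3))

points = [(0, 0), (1, 0), (0, 1), (2, 3)]
print(is_metric(points, math.dist))  # True: Euclidean distance is a metric
```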
=== Simple examples ===
==== The real numbers ====
The real numbers with the distance function $d(x,y)=|y-x|$ given by the absolute difference form a metric space. Many properties of metric spaces and functions between them are generalizations of concepts in real analysis and coincide with those concepts when applied to the real line.
==== Metrics on Euclidean spaces ====
The Euclidean plane $\mathbb{R}^{2}$ can be equipped with many different metrics. The Euclidean distance familiar from school mathematics can be defined by
$$d_{2}((x_{1},y_{1}),(x_{2},y_{2}))={\sqrt{(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}}}.$$
The taxicab or Manhattan distance is defined by
$$d_{1}((x_{1},y_{1}),(x_{2},y_{2}))=|x_{2}-x_{1}|+|y_{2}-y_{1}|$$
and can be thought of as the distance you need to travel along horizontal and vertical lines to get from one point to the other, as illustrated at the top of the article.
The maximum, $L^{\infty}$, or Chebyshev distance is defined by
$$d_{\infty}((x_{1},y_{1}),(x_{2},y_{2}))=\max\{|x_{2}-x_{1}|,|y_{2}-y_{1}|\}.$$
This distance does not have an easy explanation in terms of paths in the plane, but it still satisfies the metric space axioms. It can be thought of similarly to the number of moves a king would have to make on a chess board to travel from one point to another on the given space.
In fact, these three distances, while they have distinct properties, are similar in some ways. Informally, points that are close in one are close in the others, too. This observation can be quantified with the formula
$$d_{\infty}(p,q)\leq d_{2}(p,q)\leq d_{1}(p,q)\leq 2d_{\infty}(p,q),$$
which holds for every pair of points $p,q\in\mathbb{R}^{2}$.
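These comparisons are easy to spot-check numerically; the following sketch (illustrative, with helper names of ours) samples random pairs of points in the plane.

```python
import math
import random

def d1(p, q):    # taxicab distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d2(p, q):    # Euclidean distance
    return math.hypot(p[0] - q[0], p[1] - q[1])

def dinf(p, q):  # Chebyshev distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

# Spot-check the chain d_inf <= d_2 <= d_1 <= 2*d_inf on random point pairs.
random.seed(0)
for _ in range(1000):
    p = (random.uniform(-5, 5), random.uniform(-5, 5))
    q = (random.uniform(-5, 5), random.uniform(-5, 5))
    assert dinf(p, q) <= d2(p, q) <= d1(p, q) <= 2 * dinf(p, q) + 1e-12
```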
A radically different distance can be defined by setting
$$d(p,q)={\begin{cases}0,&{\text{if }}p=q,\\1,&{\text{otherwise.}}\end{cases}}$$
Using Iverson brackets, $d(p,q)=[p\neq q]$.
In this discrete metric, all distinct points are 1 unit apart: none of them are close to each other, and none of them are very far away from each other either. Intuitively, the discrete metric no longer remembers that the set is a plane, but treats it just as an undifferentiated set of points.
All of these metrics make sense on $\mathbb{R}^{n}$ as well as on $\mathbb{R}^{2}$.
==== Subspaces ====
Given a metric space (M, d) and a subset $A\subseteq M$, we can consider A to be a metric space by measuring distances the same way we would in M. Formally, the induced metric on A is a function
$$d_{A}:A\times A\to\mathbb{R}$$
defined by
$$d_{A}(x,y)=d(x,y).$$
For example, if we take the two-dimensional sphere S2 as a subset of $\mathbb{R}^{3}$, the Euclidean metric on $\mathbb{R}^{3}$ induces the straight-line metric on S2 described above. Two more useful examples are the open interval (0, 1) and the closed interval [0, 1] thought of as subspaces of the real line.
== History ==
Arthur Cayley, in his article "On Distance", extended metric concepts beyond Euclidean geometry into domains bounded by a conic in a projective space. His distance was given by the logarithm of a cross-ratio. Any projectivity leaving the conic stable also leaves the cross-ratio constant, so isometries are implicit. This method provides models for elliptic geometry and hyperbolic geometry, and Felix Klein, in several publications, established the field of non-Euclidean geometry through the use of the Cayley–Klein metric.
The idea of an abstract space with metric properties was addressed in 1906 by René Maurice Fréchet and the term metric space was coined by Felix Hausdorff in 1914.
Fréchet's work laid the foundation for understanding convergence, continuity, and other key concepts in non-geometric spaces. This allowed mathematicians to study functions and sequences in a broader and more flexible way. This was important for the growing field of functional analysis. Mathematicians like Hausdorff and Stefan Banach further refined and expanded the framework of metric spaces. Hausdorff introduced topological spaces as a generalization of metric spaces. Banach's work in functional analysis heavily relied on the metric structure. Over time, metric spaces became a central part of modern mathematics. They have influenced various fields including topology, geometry, and applied mathematics. Metric spaces continue to play a crucial role in the study of abstract mathematical concepts.
== Basic notions ==
A distance function is enough to define notions of closeness and convergence that were first developed in real analysis. Properties that depend on the structure of a metric space are referred to as metric properties. Every metric space is also a topological space, and some metric properties can also be rephrased without reference to distance in the language of topology; that is, they are really topological properties.
=== The topology of a metric space ===
For any point x in a metric space M and any real number r > 0, the open ball of radius r around x is defined to be the set of points that are strictly less than distance r from x:
$$B_{r}(x)=\{y\in M:d(x,y)<r\}.$$
This is a natural way to define a set of points that are relatively close to x. Therefore, a set $N\subseteq M$ is a neighborhood of x (informally, it contains all points "close enough" to x) if it contains an open ball of radius r around x for some r > 0.
An open set is a set which is a neighborhood of all its points. It follows that the open balls form a base for a topology on M. In other words, the open sets of M are exactly the unions of open balls. As in any topology, closed sets are the complements of open sets. Sets may be both open and closed as well as neither open nor closed.
This topology does not carry all the information about the metric space. For example, the distances d1, d2, and d∞ defined above all induce the same topology on $\mathbb{R}^{2}$, although they behave differently in many respects. Similarly, $\mathbb{R}$ with the Euclidean metric and its subspace the interval (0, 1) with the induced metric are homeomorphic but have very different metric properties.
Conversely, not every topological space can be given a metric. Topological spaces which are compatible with a metric are called metrizable and are particularly well-behaved in many ways: in particular, they are paracompact Hausdorff spaces (hence normal) and first-countable. The Nagata–Smirnov metrization theorem gives a characterization of metrizability in terms of other topological properties, without reference to metrics.
=== Convergence ===
Convergence of sequences in Euclidean space is defined as follows:
A sequence (xn) converges to a point x if for every ε > 0 there is an integer N such that for all n > N, d(xn, x) < ε.
Convergence of sequences in a topological space is defined as follows:
A sequence (xn) converges to a point x if for every open set U containing x there is an integer N such that for all n > N, $x_{n}\in U$.
In metric spaces, both of these definitions make sense and they are equivalent. This is a general pattern for topological properties of metric spaces: while they can be defined in a purely topological way, there is often a way that uses the metric which is easier to state or more familiar from real analysis.
=== Completeness ===
Informally, a metric space is complete if it has no "missing points": every sequence that looks like it should converge to something actually converges.
To make this precise: a sequence (xn) in a metric space M is Cauchy if for every ε > 0 there is an integer N such that for all m, n > N, d(xm, xn) < ε. By the triangle inequality, any convergent sequence is Cauchy: if xm and xn are both less than ε away from the limit, then they are less than 2ε away from each other. If the converse is true—every Cauchy sequence in M converges—then M is complete.
Euclidean spaces are complete, as is $\mathbb{R}^{2}$ with the other metrics described above. Two examples of spaces which are not complete are (0, 1) and the rationals, each with the metric induced from $\mathbb{R}$. One can think of (0, 1) as "missing" its endpoints 0 and 1. The rationals are missing all the irrationals, since any irrational has a sequence of rationals converging to it in $\mathbb{R}$ (for example, its successive decimal approximations). These examples show that completeness is not a topological property, since $\mathbb{R}$ is complete but the homeomorphic space (0, 1) is not.
This notion of "missing points" can be made precise. In fact, every metric space has a unique completion, which is a complete space that contains the given space as a dense subset. For example, [0, 1] is the completion of (0, 1), and the real numbers are the completion of the rationals.
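The following sketch (ours, not the article's) exhibits the decimal truncations of √2 as a Cauchy sequence of rationals: the terms cluster ever closer together, yet the only candidate limit, √2, lies outside $\mathbb{Q}$.

```python
import math
from fractions import Fraction

# Decimal truncations of sqrt(2): exact rational numbers 1.4, 1.41, 1.414, ...
approx = [Fraction(int(math.sqrt(2) * 10 ** k), 10 ** k) for k in range(1, 10)]

# Successive gaps shrink like 10**-k, so the sequence is Cauchy in Q,
# but its limit in R is the irrational number sqrt(2).
for a, b in zip(approx, approx[1:]):
    print(float(b - a))
```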
Since complete spaces are generally easier to work with, completions are important throughout mathematics. For example, in abstract algebra, the p-adic numbers are defined as the completion of the rationals under a different metric. Completion is particularly common as a tool in functional analysis. Often one has a set of nice functions and a way of measuring distances between them. Taking the completion of this metric space gives a new set of functions which may be less nice, but nevertheless useful because they behave similarly to the original nice functions in important ways. For example, weak solutions to differential equations typically live in a completion (a Sobolev space) rather than the original space of nice functions for which the differential equation actually makes sense.
=== Bounded and totally bounded spaces ===
A metric space M is bounded if there is an r such that no pair of points in M is more than distance r apart. The least such r is called the diameter of M.
The space M is called precompact or totally bounded if for every r > 0 there is a finite cover of M by open balls of radius r. Every totally bounded space is bounded. To see this, start with a finite cover by r-balls for some arbitrary r. Since the subset of M consisting of the centers of these balls is finite, it has finite diameter, say D. By the triangle inequality, the diameter of the whole space is at most D + 2r. The converse does not hold: an example of a metric space that is bounded but not totally bounded is $\mathbb{R}^{2}$ (or any other infinite set) with the discrete metric.
=== Compactness ===
Compactness is a topological property which generalizes the properties of a closed and bounded subset of Euclidean space. There are several equivalent definitions of compactness in metric spaces:
A metric space M is compact if every open cover has a finite subcover (the usual topological definition).
A metric space M is compact if every sequence has a convergent subsequence. (For general topological spaces this is called sequential compactness and is not equivalent to compactness.)
A metric space M is compact if it is complete and totally bounded. (This definition is written in terms of metric properties and does not make sense for a general topological space, but it is nevertheless topologically invariant since it is equivalent to compactness.)
One example of a compact space is the closed interval [0, 1].
Compactness is important for similar reasons to completeness: it makes it easy to find limits. Another important tool is Lebesgue's number lemma, which shows that for any open cover of a compact space, every point is relatively deep inside one of the sets of the cover.
== Functions between metric spaces ==
Unlike in the case of topological spaces or algebraic structures such as groups or rings, there is no single "right" type of structure-preserving function between metric spaces. Instead, one works with different types of functions depending on one's goals. Throughout this section, suppose that $(M_{1},d_{1})$ and $(M_{2},d_{2})$ are two metric spaces. The words "function" and "map" are used interchangeably.
=== Isometries ===
One interpretation of a "structure-preserving" map is one that fully preserves the distance function:
A function $f:M_{1}\to M_{2}$ is distance-preserving if for every pair of points x and y in M1,
$$d_{2}(f(x),f(y))=d_{1}(x,y).$$
It follows from the metric space axioms that a distance-preserving function is injective. A bijective distance-preserving function is called an isometry. One perhaps non-obvious example of an isometry between spaces described in this article is the map $f:(\mathbb{R}^{2},d_{1})\to(\mathbb{R}^{2},d_{\infty})$ defined by
$$f(x,y)=(x+y,x-y).$$
If there is an isometry between the spaces M1 and M2, they are said to be isometric. Metric spaces that are isometric are essentially identical.
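This can be verified numerically; the sketch below (helper names ours) checks the identity $d_{\infty}(f(p),f(q))=d_{1}(p,q)$ on random points, which rests on the fact that $\max(|a+b|,|a-b|)=|a|+|b|$.

```python
import random

def d1(p, q):    # taxicab distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def dinf(p, q):  # Chebyshev distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def f(p):        # the candidate isometry (R^2, d1) -> (R^2, d_inf)
    return (p[0] + p[1], p[0] - p[1])

random.seed(0)
for _ in range(1000):
    p = (random.uniform(-5, 5), random.uniform(-5, 5))
    q = (random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs(dinf(f(p), f(q)) - d1(p, q)) < 1e-12
```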
=== Continuous maps ===
On the other end of the spectrum, one can forget entirely about the metric structure and study continuous maps, which only preserve topological structure. There are several equivalent definitions of continuity for metric spaces. The most important are:
Topological definition. A function $f\colon M_{1}\to M_{2}$ is continuous if for every open set U in M2, the preimage $f^{-1}(U)$ is open.
Sequential continuity. A function $f\colon M_{1}\to M_{2}$ is continuous if whenever a sequence (xn) converges to a point x in M1, the sequence $f(x_{1}),f(x_{2}),\ldots$ converges to the point f(x) in M2.
(These first two definitions are not equivalent for all topological spaces.)
ε–δ definition. A function $f\colon M_{1}\to M_{2}$ is continuous if for every point x in M1 and every ε > 0 there exists δ > 0 such that for all y in M1 we have
$$d_{1}(x,y)<\delta\implies d_{2}(f(x),f(y))<\varepsilon.$$
A homeomorphism is a continuous bijection whose inverse is also continuous; if there is a homeomorphism between M1 and M2, they are said to be homeomorphic. Homeomorphic spaces are the same from the point of view of topology, but may have very different metric properties. For example, $\mathbb{R}$ is unbounded and complete, while (0, 1) is bounded but not complete.
=== Uniformly continuous maps ===
A function $f\colon M_{1}\to M_{2}$ is uniformly continuous if for every real number ε > 0 there exists δ > 0 such that for all points x and y in M1 with $d_{1}(x,y)<\delta$, we have
$$d_{2}(f(x),f(y))<\varepsilon.$$
The only difference between this definition and the ε–δ definition of continuity is the order of quantifiers: the choice of δ must depend only on ε and not on the point x. However, this subtle change makes a big difference. For example, uniformly continuous maps take Cauchy sequences in M1 to Cauchy sequences in M2. In other words, uniform continuity preserves some metric properties which are not purely topological.
On the other hand, the Heine–Cantor theorem states that if M1 is compact, then every continuous map is uniformly continuous. In other words, uniform continuity cannot distinguish any non-topological features of compact metric spaces.
=== Lipschitz maps and contractions ===
A Lipschitz map is one that stretches distances by at most a bounded factor. Formally, given a real number K > 0, the map
$f\colon M_{1}\to M_{2}$ is K-Lipschitz if
$$d_{2}(f(x),f(y))\leq K\,d_{1}(x,y)\quad{\text{for all}}\quad x,y\in M_{1}.$$
Lipschitz maps are particularly important in metric geometry, since they provide more flexibility than distance-preserving maps, but still make essential use of the metric. For example, a curve in a metric space is rectifiable (has finite length) if and only if it has a Lipschitz reparametrization.
A 1-Lipschitz map is sometimes called a nonexpanding or metric map. Metric maps are commonly taken to be the morphisms of the category of metric spaces.
A K-Lipschitz map for K < 1 is called a contraction. The Banach fixed-point theorem states that if M is a complete metric space, then every contraction $f:M\to M$ admits a unique fixed point. If the metric space M is compact, the result holds for a slightly weaker condition on f: a map $f:M\to M$ admits a unique fixed point if
$$d(f(x),f(y))<d(x,y)\quad{\text{for all}}\quad x\neq y\in M.$$
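A classic concrete instance (our example, not one given in the article): $\cos$ maps $[0,1]$ into itself and $|\cos'|\le\sin 1<1$ there, so it is a contraction on a complete space, and fixed-point iteration converges to its unique fixed point (the Dottie number).

```python
import math

# Fixed-point iteration for the contraction f(x) = cos(x) on [0, 1].
# Banach's theorem guarantees convergence to the unique fixed point.
x = 0.5
for _ in range(100):
    x = math.cos(x)

print(x)                      # ~0.7390851332151607 (the Dottie number)
print(abs(math.cos(x) - x))   # residual is ~0 at the fixed point
```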
=== Quasi-isometries ===
A quasi-isometry is a map that preserves the "large-scale structure" of a metric space. Quasi-isometries need not be continuous. For example, $\mathbb{R}^{2}$ and its subspace $\mathbb{Z}^{2}$ are quasi-isometric, even though one is connected and the other is discrete. The equivalence relation of quasi-isometry is important in geometric group theory: the Švarc–Milnor lemma states that all spaces on which a group acts geometrically are quasi-isometric.
Formally, the map $f\colon M_{1}\to M_{2}$ is a quasi-isometric embedding if there exist constants A ≥ 1 and B ≥ 0 such that
$${\frac{1}{A}}d_{2}(f(x),f(y))-B\leq d_{1}(x,y)\leq Ad_{2}(f(x),f(y))+B\quad{\text{for all}}\quad x,y\in M_{1}.$$
It is a quasi-isometry if in addition it is quasi-surjective, i.e. there is a constant C ≥ 0 such that every point in $M_{2}$ is at distance at most C from some point in the image $f(M_{1})$.
=== Notions of metric space equivalence ===
Given two metric spaces $(M_{1},d_{1})$ and $(M_{2},d_{2})$:
They are called homeomorphic (topologically isomorphic) if there is a homeomorphism between them (i.e., a continuous bijection with a continuous inverse). If $M_{1}=M_{2}$ and the identity map is a homeomorphism, then $d_{1}$ and $d_{2}$ are said to be topologically equivalent.
They are called uniformic (uniformly isomorphic) if there is a uniform isomorphism between them (i.e., a uniformly continuous bijection with a uniformly continuous inverse).
They are called bilipschitz homeomorphic if there is a bilipschitz bijection between them (i.e., a Lipschitz bijection with a Lipschitz inverse).
They are called isometric if there is a (bijective) isometry between them. In this case, the two metric spaces are essentially identical.
They are called quasi-isometric if there is a quasi-isometry between them.
== Metric spaces with additional structure ==
=== Normed vector spaces ===
A normed vector space is a vector space equipped with a norm, which is a function that measures the length of vectors. The norm of a vector v is typically denoted by $\lVert v\rVert$. Any normed vector space can be equipped with a metric in which the distance between two vectors x and y is given by
$$d(x,y):=\lVert x-y\rVert.$$
The metric d is said to be induced by the norm $\lVert\cdot\rVert$. Conversely, if a metric d on a vector space X is
translation invariant: $d(x,y)=d(x+a,y+a)$ for every x, y, and a in X; and
absolutely homogeneous: $d(\alpha x,\alpha y)=|\alpha|d(x,y)$ for every x and y in X and real number α;
then it is the metric induced by the norm
$$\lVert x\rVert:=d(x,0).$$
A similar relationship holds between seminorms and pseudometrics.
Among examples of metrics induced by a norm are the metrics d1, d2, and d∞ on $\mathbb{R}^{2}$, which are induced by the Manhattan norm, the Euclidean norm, and the maximum norm, respectively. More generally, the Kuratowski embedding allows one to see any metric space as a subspace of a normed vector space.
Infinite-dimensional normed vector spaces, particularly spaces of functions, are studied in functional analysis. Completeness is particularly important in this context: a complete normed vector space is known as a Banach space. An unusual property of normed vector spaces is that linear transformations between them are continuous if and only if they are Lipschitz. Such transformations are known as bounded operators.
=== Length spaces ===
A curve in a metric space (M, d) is a continuous function $\gamma:[0,T]\to M$. The length of γ is measured by
$$L(\gamma)=\sup_{0=x_{0}<x_{1}<\cdots<x_{n}=T}\left\{\sum_{k=1}^{n}d(\gamma(x_{k-1}),\gamma(x_{k}))\right\}.$$
In general, this supremum may be infinite; a curve of finite length is called rectifiable. Suppose that the length of the curve γ is equal to the distance between its endpoints—that is, it is the shortest possible path between its endpoints. After reparametrization by arc length, γ becomes a geodesic: a curve which is a distance-preserving function. A geodesic is a shortest possible path between any two of its points.
A geodesic metric space is a metric space which admits a geodesic between any two of its points. The spaces $(\mathbb{R}^{2},d_{1})$ and $(\mathbb{R}^{2},d_{2})$ are both geodesic metric spaces. In $(\mathbb{R}^{2},d_{2})$, geodesics are unique, but in $(\mathbb{R}^{2},d_{1})$ there are often infinitely many geodesics between two points, as shown in the figure at the top of the article.
The space M is a length space (or the metric d is intrinsic) if the distance between any two points x and y is the infimum of lengths of paths between them. Unlike in a geodesic metric space, the infimum does not have to be attained. An example of a length space which is not geodesic is the Euclidean plane minus the origin: the points (1, 0) and (-1, 0) can be joined by paths of length arbitrarily close to 2, but not by a path of length 2. An example of a metric space which is not a length space is given by the straight-line metric on the sphere: the straight line between two points through the center of the Earth is shorter than any path along the surface.
Given any metric space (M, d), one can define a new, intrinsic distance function $d_{\text{intrinsic}}$ on M by setting the distance between points x and y to be the infimum of the d-lengths of paths between them. For instance, if d is the straight-line distance on the sphere, then $d_{\text{intrinsic}}$ is the great-circle distance. However, in some cases $d_{\text{intrinsic}}$ may have infinite values. For example, if M is the Koch snowflake with the subspace metric d induced from $\mathbb{R}^{2}$, then the resulting intrinsic distance is infinite for any pair of distinct points.
=== Riemannian manifolds ===
A Riemannian manifold is a space equipped with a Riemannian metric tensor, which determines lengths of tangent vectors at every point. This can be thought of as defining a notion of distance infinitesimally. In particular, a differentiable path $\gamma:[0,T]\to M$ in a Riemannian manifold M has length defined as the integral of the length of the tangent vector to the path:
$$L(\gamma)=\int_{0}^{T}|{\dot{\gamma}}(t)|\,dt.$$
On a connected Riemannian manifold, one then defines the distance between two points as the infimum of lengths of smooth paths between them. This construction generalizes to other kinds of infinitesimal metrics on manifolds, such as sub-Riemannian and Finsler metrics.
The Riemannian metric is uniquely determined by the distance function; this means that in principle, all information about a Riemannian manifold can be recovered from its distance function. One direction in metric geometry is finding purely metric ("synthetic") formulations of properties of Riemannian manifolds. For example, a Riemannian manifold is a CAT(k) space (a synthetic condition which depends purely on the metric) if and only if its sectional curvature is bounded above by k. Thus CAT(k) spaces generalize upper curvature bounds to general metric spaces.
=== Metric measure spaces ===
Real analysis makes use of both the metric on $\mathbb{R}^{n}$ and the Lebesgue measure. Therefore, generalizations of many ideas from analysis naturally reside in metric measure spaces: spaces that have both a measure and a metric which are compatible with each other. Formally, a metric measure space is a metric space equipped with a Borel regular measure such that every ball has positive measure. For example Euclidean spaces of dimension n, and more generally n-dimensional Riemannian manifolds, naturally have the structure of a metric measure space, equipped with the Lebesgue measure. Certain fractal metric spaces such as the Sierpiński gasket can be equipped with the α-dimensional Hausdorff measure where α is the Hausdorff dimension. In general, however, a metric space may not have an "obvious" choice of measure.
One application of metric measure spaces is generalizing the notion of Ricci curvature beyond Riemannian manifolds. Just as CAT(k) and Alexandrov spaces generalize sectional curvature bounds, RCD spaces are a class of metric measure spaces which generalize lower bounds on Ricci curvature.
== Further examples and applications ==
=== Graphs and finite metric spaces ===
A metric space is discrete if its induced topology is the discrete topology. Although many concepts, such as completeness and compactness, are not interesting for such spaces, they are nevertheless an object of study in several branches of mathematics. In particular, finite metric spaces (those having a finite number of points) are studied in combinatorics and theoretical computer science. Embeddings in other metric spaces are particularly well-studied. For example, not every finite metric space can be isometrically embedded in a Euclidean space or in Hilbert space. On the other hand, in the worst case the required distortion (bilipschitz constant) is only logarithmic in the number of points.
For any undirected connected graph G, the set V of vertices of G can be turned into a metric space by defining the distance between vertices x and y to be the length of the shortest edge path connecting them. This is also called shortest-path distance or geodesic distance. In geometric group theory this construction is applied to the Cayley graph of a (typically infinite) finitely-generated group, yielding the word metric. Up to a bilipschitz homeomorphism, the word metric depends only on the group and not on the chosen finite generating set.
=== Metric embeddings and approximations ===
An important area of study in finite metric spaces is the embedding of complex metric spaces into simpler ones while controlling the distortion of distances. This is particularly useful in computer science and discrete mathematics, where algorithms often perform more efficiently on simpler structures like tree metrics.
A significant result in this area is that any finite metric space can be probabilistically embedded into a tree metric with an expected distortion of $O(\log n)$, where $n$ is the number of points in the metric space.
This embedding is notable because it achieves the best possible asymptotic bound on distortion, matching the lower bound of $\Omega(\log n)$. The tree metrics produced in this embedding dominate the original metrics, meaning that distances in the tree are greater than or equal to those in the original space. This property is particularly useful for designing approximation algorithms, as it allows for the preservation of distance-related properties while simplifying the underlying structure.
The result has significant implications for various computational problems:
Network design: Improves approximation algorithms for problems like the Group Steiner tree problem (a generalization of the Steiner tree problem) and Buy-at-bulk network design (a problem in Network planning and design) by simplifying the metric space to a tree metric.
Clustering: Enhances algorithms for clustering problems where hierarchical clustering can be performed more efficiently on tree metrics.
Online algorithms: Benefits problems like the k-server problem and metrical task system by providing better competitive ratios through simplified metrics.
The technique involves constructing a hierarchical decomposition of the original metric space and converting it into a tree metric via a randomized algorithm. The $O(\log n)$ distortion bound has led to improved approximation ratios in several algorithmic problems, demonstrating the practical significance of this theoretical result.
=== Distances between mathematical objects ===
In modern mathematics, one often studies spaces whose points are themselves mathematical objects. A distance function on such a space generally aims to measure the dissimilarity between two objects. Here are some examples:
Functions to a metric space. If X is any set and M is a metric space, then the set of all bounded functions $f\colon X\to M$ (i.e. those functions whose image is a bounded subset of $M$) can be turned into a metric space by defining the distance between two bounded functions f and g to be
$$d(f,g)=\sup_{x\in X}d(f(x),g(x)).$$
This metric is called the uniform metric or supremum metric. If M is complete, then this function space is complete as well; moreover, if X is also a topological space, then the subspace consisting of all bounded continuous functions from X to M is also complete. When X is a subspace of $\mathbb{R}^{n}$, this function space is known as a classical Wiener space.
String metrics and edit distances. There are many ways of measuring distances between strings of characters, which may represent sentences in computational linguistics or code words in coding theory. Edit distances attempt to measure the number of changes necessary to get from one string to another. For example, the Hamming distance measures the minimal number of substitutions needed, while the Levenshtein distance measures the minimal number of deletions, insertions, and substitutions; both of these can be thought of as distances in an appropriate graph.
Graph edit distance is a measure of dissimilarity between two graphs, defined as the minimal number of graph edit operations required to transform one graph into another.
Wasserstein metrics measure the distance between two measures on the same metric space. The Wasserstein distance between two measures is, roughly speaking, the cost of transporting one to the other.
The set of all m by n matrices over some field is a metric space with respect to the rank distance $d(A,B)=\mathrm{rank}(B-A)$.
The Helly metric in game theory measures the difference between strategies in a game.
=== Hausdorff and Gromov–Hausdorff distance ===
The idea of spaces of mathematical objects can also be applied to subsets of a metric space, as well as metric spaces themselves. Hausdorff and Gromov–Hausdorff distance define metrics on the set of compact subsets of a metric space and the set of compact metric spaces, respectively.
Suppose (M, d) is a metric space, and let S be a subset of M. The distance from S to a point x of M is, informally, the distance from x to the closest point of S. However, since there may not be a single closest point, it is defined via an infimum:
$$d(x,S)=\inf\{d(x,s):s\in S\}.$$
In particular, $d(x,S)=0$ if and only if x belongs to the closure of S. Furthermore, distances between points and sets satisfy a version of the triangle inequality:
$$d(x,S)\leq d(x,y)+d(y,S),$$
and therefore the map $d_{S}:M\to\mathbb{R}$ defined by $d_{S}(x)=d(x,S)$ is continuous. Incidentally, this shows that metric spaces are completely regular.
Given two subsets S and T of M, their Hausdorff distance is
$$d_{H}(S,T)=\max\{\sup\{d(s,T):s\in S\},\sup\{d(t,S):t\in T\}\}.$$
Informally, two sets S and T are close to each other in the Hausdorff distance if no element of S is too far from T and vice versa. For example, if S is an open set in Euclidean space and T is an ε-net inside S, then $d_{H}(S,T)<\varepsilon$. In general, the Hausdorff distance $d_{H}(S,T)$ can be infinite or zero. However, the Hausdorff distance between two distinct compact sets is always positive and finite. Thus the Hausdorff distance defines a metric on the set of compact subsets of M.
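For finite point sets the definition can be transcribed directly; the following Python sketch (function names ours) computes the Hausdorff distance by brute force.

```python
import math

def hausdorff(S, T, d):
    """Hausdorff distance between finite point sets S and T under metric d."""
    def dist_to_set(x, A):
        # d(x, A): distance from a point to a set, a minimum since A is finite
        return min(d(x, a) for a in A)
    return max(max(dist_to_set(s, T) for s in S),
               max(dist_to_set(t, S) for t in T))

S = [(0, 0), (1, 0)]
T = [(0, 1), (1, 1), (3, 0)]
print(hausdorff(S, T, math.dist))  # 2.0, driven by the outlier (3, 0)
```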
The Gromov–Hausdorff metric defines a distance between (isometry classes of) compact metric spaces. The Gromov–Hausdorff distance between compact spaces X and Y is the infimum of the Hausdorff distance over all metric spaces Z that contain X and Y as subspaces. While the exact value of the Gromov–Hausdorff distance is rarely useful to know, the resulting topology has found many applications.
=== Miscellaneous examples ===
Given a metric space (X, d) and an increasing concave function $f\colon[0,\infty)\to[0,\infty)$ such that f(t) = 0 if and only if t = 0, then $d_{f}(x,y)=f(d(x,y))$ is also a metric on X. If $f(t)=t^{\alpha}$ for some real number α < 1, such a metric is known as a snowflake of d.
The tight span of a metric space is another metric space which can be thought of as an abstract version of the convex hull.
The knight's move metric, the minimal number of knight's moves to reach one point in $\mathbb{Z}^{2}$ from another, is a metric on $\mathbb{Z}^{2}$.
The British Rail metric (also called the "post office metric" or the "French railway metric") on a normed vector space is given by $d(x,y)=\lVert x\rVert+\lVert y\rVert$ for distinct points $x$ and $y$, and $d(x,x)=0$. More generally $\lVert\cdot\rVert$ can be replaced with a function $f$ taking an arbitrary set $S$ to non-negative reals and taking the value $0$ at most once: then the metric is defined on $S$ by $d(x,y)=f(x)+f(y)$ for distinct points $x$ and $y$, and $d(x,x)=0$. The name alludes to the tendency of railway journeys to proceed via London (or Paris) irrespective of their final destination.
The Robinson–Foulds metric is used for calculating distances between phylogenetic trees in phylogenetics.
== Constructions ==
=== Product metric spaces ===
If $(M_{1},d_{1}),\ldots,(M_{n},d_{n})$ are metric spaces, and N is the Euclidean norm on $\mathbb{R}^{n}$, then $\bigl(M_{1}\times\cdots\times M_{n},d_{\times}\bigr)$
is a metric space, where the product metric is defined by
$$d_{\times}\bigl((x_{1},\ldots,x_{n}),(y_{1},\ldots,y_{n})\bigr)=N\bigl(d_{1}(x_{1},y_{1}),\ldots,d_{n}(x_{n},y_{n})\bigr),$$
and the induced topology agrees with the product topology. By the equivalence of norms in finite dimensions, a topologically equivalent metric is obtained if N is the taxicab norm, a p-norm, the maximum norm, or any other norm which is non-decreasing as the coordinates of a positive n-tuple increase (yielding the triangle inequality).
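A minimal sketch of the construction (ours; it assumes Python 3.8+, where math.hypot accepts several arguments) combines per-factor distances with the Euclidean norm.

```python
import math

def product_metric(metrics, xs, ys):
    """d_x((x_1,...,x_n), (y_1,...,y_n)) = N(d_1(x_1,y_1), ..., d_n(x_n,y_n)),
    with N taken to be the Euclidean norm."""
    return math.hypot(*(d(x, y) for d, x, y in zip(metrics, xs, ys)))

d_abs = lambda x, y: abs(x - y)                            # metric on R
d_taxi = lambda p, q: abs(p[0]-q[0]) + abs(p[1]-q[1])      # taxicab on R^2

# Distance in the product R x R^2: hypot(3, 2) = sqrt(13)
print(product_metric([d_abs, d_taxi], (1.0, (0, 0)), (4.0, (1, 1))))
```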
Similarly, a metric on the topological product of countably many metric spaces can be obtained using the metric
$$d(x,y)=\sum_{i=1}^{\infty}{\frac{1}{2^{i}}}{\frac{d_{i}(x_{i},y_{i})}{1+d_{i}(x_{i},y_{i})}}.$$
The topological product of uncountably many metric spaces need not be metrizable. For example, an uncountable product of copies of $\mathbb{R}$ is not first-countable and thus is not metrizable.
=== Quotient metric spaces ===
If M is a metric space with metric d, and $\sim$ is an equivalence relation on M, then we can endow the quotient set $M/\!\sim$ with a pseudometric. The distance between two equivalence classes $[x]$ and $[y]$ is defined as
$$d'([x],[y])=\inf\{d(p_{1},q_{1})+d(p_{2},q_{2})+\dotsb+d(p_{n},q_{n})\},$$
where the infimum is taken over all finite sequences $(p_{1},p_{2},\dots,p_{n})$ and $(q_{1},q_{2},\dots,q_{n})$ with $p_{1}\sim x$, $q_{n}\sim y$, and $q_{i}\sim p_{i+1}$ for $i=1,2,\dots,n-1$. In general this will only define a pseudometric, i.e. $d'([x],[y])=0$ does not necessarily imply that $[x]=[y]$. However, for some equivalence relations (e.g., those given by gluing together polyhedra along faces), $d'$ is a metric.
The quotient metric $d'$ is characterized by the following universal property. If $f\colon(M,d)\to(X,\delta)$ is a metric (i.e. 1-Lipschitz) map between metric spaces satisfying f(x) = f(y) whenever $x\sim y$, then the induced function $\overline{f}\colon M/\!\sim\,\to X$, given by $\overline{f}([x])=f(x)$, is a metric map $\overline{f}\colon(M/\!\sim,d')\to(X,\delta)$.
The quotient metric does not always induce the quotient topology. For example, the topological quotient of the metric space $\mathbb{N}\times[0,1]$ identifying all points of the form $(n,0)$ is not metrizable since it is not first-countable, but the quotient metric is a well-defined metric on the same set which induces a coarser topology. Moreover, different metrics on the original topological space (a disjoint union of countably many intervals) lead to different topologies on the quotient.
A topological space is sequential if and only if it is a (topological) quotient of a metric space.
== Generalizations of metric spaces ==
There are several notions of spaces which have less structure than a metric space, but more than a topological space.
Uniform spaces are spaces in which distances are not defined, but uniform continuity is.
Approach spaces are spaces in which point-to-set distances are defined, instead of point-to-point distances. They have particularly good properties from the point of view of category theory.
Continuity spaces are a generalization of metric spaces and posets that can be used to unify the notions of metric spaces and domains.
There are also numerous ways of relaxing the axioms for a metric, giving rise to various notions of generalized metric spaces. These generalizations can also be combined. The terminology used to describe them is not completely standardized. Most notably, in functional analysis pseudometrics often come from seminorms on vector spaces, and so it is natural to call them "semimetrics". This conflicts with the use of the term in topology.
=== Extended metrics ===
Some authors define metrics so as to allow the distance function d to attain the value ∞, i.e. distances are non-negative numbers on the extended real number line. Such a function is also called an extended metric or "∞-metric". Every extended metric can be replaced by a real-valued metric that is topologically equivalent. This can be done using a subadditive monotonically increasing bounded function which is zero at zero, e.g. $d'(x,y)=d(x,y)/(1+d(x,y))$ or $d''(x,y)=\min(1,d(x,y))$.
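A one-line sketch (ours) of the first transformation: composing a metric with $t\mapsto t/(1+t)$ yields a topologically equivalent metric bounded by 1, with large distances compressed toward 1.

```python
def bounded(d):
    """Turn a metric into a topologically equivalent one bounded by 1."""
    return lambda x, y: d(x, y) / (1 + d(x, y))

d = lambda x, y: abs(x - y)              # ordinary metric on R
db = bounded(d)
print(db(0, 1), db(0, 10), db(0, 1e9))   # 0.5, ~0.909, ~1.0: bounded by 1
```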
=== Metrics valued in structures other than the real numbers ===
The requirement that the metric take values in $[0,\infty)$ can be relaxed to consider metrics with values in other structures, including:
Ordered fields, yielding the notion of a generalised metric.
More general directed sets. In the absence of an addition operation, the triangle inequality does not make sense and is replaced with an ultrametric inequality. This leads to the notion of a generalized ultrametric.
These generalizations still induce a uniform structure on the space.
=== Pseudometrics ===
A pseudometric on $X$ is a function $d:X\times X\to\mathbb{R}$ which satisfies the axioms for a metric, except that instead of the second (identity of indiscernibles) only $d(x,x)=0$ for all $x$ is required. In other words, the axioms for a pseudometric are:
$d(x,y)\geq 0$
$d(x,x)=0$
$d(x,y)=d(y,x)$
$d(x,z)\leq d(x,y)+d(y,z)$.
In some contexts, pseudometrics are referred to as semimetrics because of their relation to seminorms.
=== Quasimetrics ===
Occasionally, a quasimetric is defined as a function that satisfies all axioms for a metric with the possible exception of symmetry. The name of this generalisation is not entirely standardized.
$d(x,y)\geq 0$
$d(x,y)=0\iff x=y$
$d(x,z)\leq d(x,y)+d(y,z)$
Quasimetrics are common in real life. For example, given a set X of mountain villages, the typical walking times between elements of X form a quasimetric because travel uphill takes longer than travel downhill. Another example is the length of car rides in a city with one-way streets: here, a shortest path from point A to point B goes along a different set of streets than a shortest path from B to A and may have a different length.
A quasimetric on the reals can be defined by setting
$$d(x,y)={\begin{cases}x-y&{\text{if }}x\geq y,\\1&{\text{otherwise.}}\end{cases}}$$
The 1 may be replaced, for example, by infinity or by $1+\sqrt{y-x}$ or any other subadditive function of $y-x$. This quasimetric describes the cost of modifying a metal stick: it is easy to reduce its size by filing it down, but it is difficult or impossible to grow it.
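The stick-modification quasimetric is easy to experiment with; the sketch below (our own, with a hypothetical function name) implements it and spot-checks asymmetry and the triangle inequality on a small grid of points:

```python
import itertools

def stick_cost(x, y):
    """Cost of modifying a stick of length x into one of length y:
    filing down costs the removed length; growing costs a flat 1."""
    return x - y if x >= y else 1.0

assert stick_cost(5, 2) == 3      # shortening from 5 to 2
assert stick_cost(2, 5) == 1.0    # lengthening from 2 to 5

# Spot-check d(x, z) <= d(x, y) + d(y, z) on sample lengths.
points = [0.0, 0.5, 1.0, 2.0, 5.0]
for x, y, z in itertools.product(points, repeat=3):
    assert stick_cost(x, z) <= stick_cost(x, y) + stick_cost(y, z)
```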
Given a quasimetric on X, one can define an R-ball around x to be the set $\{y\in X \mid d(x,y)\leq R\}$. As in the case of a metric, such balls form a basis for a topology on X, but this topology need not be metrizable. For example, the topology induced by the quasimetric on the reals described above is the (reversed) Sorgenfrey line.
=== Metametrics or partial metrics ===
In a metametric, all the axioms of a metric are satisfied except that the distance between identical points is not necessarily zero. In other words, the axioms for a metametric are:
$d(x,y)\geq 0$
$d(x,y)=0\implies x=y$
$d(x,y)=d(y,x)$
$d(x,z)\leq d(x,y)+d(y,z).$
Metametrics appear in the study of Gromov hyperbolic metric spaces and their boundaries. The visual metametric on such a space satisfies $d(x,x)=0$ for points $x$ on the boundary, but otherwise $d(x,x)$ is approximately the distance from $x$ to the boundary. Metametrics were first defined by Jussi Väisälä. In other work, a function satisfying these axioms is called a partial metric or a dislocated metric.
=== Semimetrics ===
A semimetric on $X$ is a function $d\colon X\times X\to\mathbb{R}$ that satisfies the first three axioms, but not necessarily the triangle inequality:
$d(x,y)\geq 0$
$d(x,y)=0\iff x=y$
$d(x,y)=d(y,x)$
Some authors work with a weaker form of the triangle inequality, such as:
$d(x,z)\leq\rho\,(d(x,y)+d(y,z))$ (ρ-relaxed triangle inequality)
$d(x,z)\leq\rho\,\max(d(x,y),d(y,z))$ (ρ-inframetric inequality)
The ρ-inframetric inequality implies the ρ-relaxed triangle inequality (assuming the first axiom), and the ρ-relaxed triangle inequality implies the 2ρ-inframetric inequality. Semimetrics satisfying these equivalent conditions have sometimes been referred to as quasimetrics, nearmetrics or inframetrics.
The ρ-inframetric inequalities were introduced to model round-trip delay times in the internet. The triangle inequality implies the 2-inframetric inequality, and the ultrametric inequality is exactly the 1-inframetric inequality.
=== Premetrics ===
Relaxing the last three axioms leads to the notion of a premetric, i.e. a function satisfying the following conditions:
$d(x,y)\geq 0$
$d(x,x)=0$
This is not a standard term. Sometimes it is used to refer to other generalizations of metrics such as pseudosemimetrics or pseudometrics; in translations of Russian books it sometimes appears as "prametric". A premetric that satisfies symmetry, i.e. a pseudosemimetric, is also called a distance.
Any premetric gives rise to a topology as follows. For a positive real $r$, the $r$-ball centered at a point $p$ is defined as
$$B_{r}(p)=\{x \mid d(x,p)<r\}.$$
A set is called open if for any point $p$ in the set there is an $r$-ball centered at $p$ which is contained in the set. Every premetric space is a topological space, and in fact a sequential space. In general, the $r$-balls themselves need not be open sets with respect to this topology.
As for metrics, the distance between two sets $A$ and $B$ is defined as
$$d(A,B)=\inf_{x\in A,\,y\in B} d(x,y).$$
This defines a premetric on the power set of a premetric space. If we start with a (pseudosemi-)metric space, we get a pseudosemimetric, i.e. a symmetric premetric.
Any premetric gives rise to a preclosure operator $cl$ as follows:
$$cl(A)=\{x \mid d(x,A)=0\}.$$
=== Pseudoquasimetrics ===
The prefixes pseudo-, quasi- and semi- can also be combined, e.g., a pseudoquasimetric (sometimes called hemimetric) relaxes both the indiscernibility axiom and the symmetry axiom and is simply a premetric satisfying the triangle inequality. For pseudoquasimetric spaces the open $r$-balls form a basis of open sets. A very basic example of a pseudoquasimetric space is the set $\{0,1\}$ with the premetric given by $d(0,1)=1$ and $d(1,0)=0$. The associated topological space is the Sierpiński space.
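To make the example concrete, the following Python sketch (our own construction) enumerates the subsets of $\{0,1\}$ that are open with respect to the $r$-balls of this premetric; since the balls only change at $r=1$, sampling a few radii suffices, and the result is exactly the Sierpiński topology.

```python
from itertools import chain, combinations

X = [0, 1]
d = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 0}

def ball(p, r):
    """Open r-ball around p: all x with d(p, x) < r."""
    return frozenset(x for x in X if d[(p, x)] < r)

def is_open(S):
    """S is open iff every point of S has some ball inside S."""
    return all(any(ball(p, r) <= S for r in (0.5, 1.0, 1.5)) for p in S)

subsets = [frozenset(s) for s in
           chain.from_iterable(combinations(X, k) for k in range(3))]
print([set(s) for s in subsets if is_open(s)])
# [set(), {0}, {0, 1}] -- the Sierpinski topology: {1} is not open,
# because every ball around 1 also contains 0 (d(1, 0) = 0).
```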
Sets equipped with an extended pseudoquasimetric were studied by William Lawvere as "generalized metric spaces". From a categorical point of view, the extended pseudometric spaces and the extended pseudoquasimetric spaces, along with their corresponding nonexpansive maps, are the best behaved of the metric space categories. One can take arbitrary products and coproducts and form quotient objects within the given category. If one drops "extended", one can only take finite products and coproducts. If one drops "pseudo", one cannot take quotients.
Lawvere also gave an alternate definition of such spaces as enriched categories. The ordered set $(\mathbb{R},\geq)$ can be seen as a category with one morphism $a\to b$ if $a\geq b$ and none otherwise. Using + as the tensor product and 0 as the identity makes this category into a monoidal category $R^{*}$.
Every (extended pseudoquasi-)metric space $(M,d)$ can now be viewed as a category $M^{*}$ enriched over $R^{*}$:
The objects of the category are the points of M.
For every pair of points x and y such that $d(x,y)<\infty$, there is a single morphism, which is assigned the object $d(x,y)$ of $R^{*}$.
The triangle inequality and the fact that $d(x,x)=0$ for all points x derive from the properties of composition and identity in an enriched category.
Since $R^{*}$ is a poset, all diagrams that are required for an enriched category commute automatically.
=== Metrics on multisets ===
The notion of a metric can be generalized from a distance between two elements to a number assigned to a multiset of elements. A multiset is a generalization of the notion of a set in which an element can occur more than once. Define the multiset union $U=XY$ as follows: if an element x occurs m times in X and n times in Y then it occurs m + n times in U. A function d on the set of nonempty finite multisets of elements of a set M is a metric if
$d(X)=0$ if all elements of X are equal and $d(X)>0$ otherwise (positive definiteness)
$d(X)$ depends only on the (unordered) multiset X (symmetry)
$d(XY)\leq d(XZ)+d(ZY)$ (triangle inequality)
By considering the cases of axioms 1 and 2 in which the multiset X has two elements and the case of axiom 3 in which the multisets X, Y, and Z have one element each, one recovers the usual axioms for a metric. That is, every multiset metric yields an ordinary metric when restricted to sets of two elements.
A simple example is the set of all nonempty finite multisets $X$ of integers with $d(X)=\max(X)-\min(X)$. More complex examples are information distance in multisets, and normalized compression distance (NCD) in multisets.
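The following Python sketch (function name ours) implements this spread metric, representing multisets as lists so that multiset union is list concatenation, and spot-checks the triangle inequality $d(XY)\leq d(XZ)+d(ZY)$ on random samples:

```python
import random

def spread(ms):
    """d(X) = max(X) - min(X) for a nonempty multiset of integers."""
    return max(ms) - min(ms)

random.seed(0)
for _ in range(1000):
    X, Y, Z = ([random.randint(-10, 10)
                for _ in range(random.randint(1, 4))] for _ in range(3))
    assert spread(X + Y) <= spread(X + Z) + spread(Z + Y)
```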
== See also ==
Acoustic metric – Tensor characterizing signal-carrying properties in a medium
Complete metric space – Metric geometry
Diversity (mathematics) – Generalization of metric spaces
Generalized metric space
Hilbert's fourth problem – Construct all metric spaces where lines resemble those on a sphere
Metric tree
Minkowski distance – Vector distance using pth powers
Signed distance function – Distance from a point to the boundary of a set
Similarity measure – Real-valued function that quantifies similarity between two objects
Space (mathematics) – Mathematical set with some added structure
Ultrametric space – Type of metric space
== External links ==
"Metric space", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Far and near—several examples of distance functions at cut-the-knot. | Wikipedia/Metric_topology |
In mathematics, a real function $f$ of real numbers is said to be uniformly continuous if there is a positive real number $\delta$ such that function values over any function domain interval of the size $\delta$ are as close to each other as we want. In other words, for a uniformly continuous real function of real numbers, if we want function value differences to be less than any positive real number $\varepsilon$, then there is a positive real number $\delta$ such that $|f(x)-f(y)|<\varepsilon$ for any $x$ and $y$ in any interval of length $\delta$ within the domain of $f$.
The difference between uniform continuity and (ordinary) continuity is that in uniform continuity there is a globally applicable $\delta$ (the size of a function domain interval over which function value differences are less than $\varepsilon$) that depends only on $\varepsilon$, while in (ordinary) continuity there is a locally applicable $\delta$ that depends on both $\varepsilon$ and $x$. So uniform continuity is a stronger continuity condition than continuity: a uniformly continuous function is continuous, but a continuous function is not necessarily uniformly continuous. The concepts of uniform continuity and continuity can be extended to functions defined between metric spaces.
Continuous functions can fail to be uniformly continuous if they are unbounded on a bounded domain, such as $f(x)=\tfrac{1}{x}$ on $(0,1)$, or if their slopes become unbounded on an infinite domain, such as $f(x)=x^{2}$ on the real (number) line. However, any Lipschitz map between metric spaces is uniformly continuous, in particular any isometry (distance-preserving map).
Although continuity can be defined for functions between general topological spaces, defining uniform continuity requires more structure. The concept relies on comparing the sizes of neighbourhoods of distinct points, so it requires a metric space, or more generally a uniform space.
== Definition for functions on metric spaces ==
For a function $f\colon X\to Y$ between metric spaces $(X,d_{1})$ and $(Y,d_{2})$, the following definitions of uniform continuity and (ordinary) continuity hold.
=== Definition of uniform continuity ===
$f$ is called uniformly continuous if for every real number $\varepsilon>0$ there exists a real number $\delta>0$ such that for every $x,y\in X$ with $d_{1}(x,y)<\delta$, we have $d_{2}(f(x),f(y))<\varepsilon$. The set $\{y\in X : d_{1}(x,y)<\delta\}$ for each $x$ is a neighbourhood of $x$, and the set $\{x\in X : d_{1}(x,y)<\delta\}$ for each $y$ is a neighbourhood of $y$, by the definition of a neighbourhood in a metric space.
If $X$ and $Y$ are subsets of the real line, then $d_{1}$ and $d_{2}$ can be the standard one-dimensional Euclidean distance, yielding the following definition: for every real number $\varepsilon>0$ there exists a real number $\delta>0$ such that for every $x,y\in X$,
$$|x-y|<\delta \implies |f(x)-f(y)|<\varepsilon$$
(where $A\implies B$ is a material conditional statement saying "if $A$, then $B$").
Equivalently, $f$ is said to be uniformly continuous if
$$\forall\varepsilon>0\;\exists\delta>0\;\forall x\in X\;\forall y\in X:\,d_{1}(x,y)<\delta\,\Rightarrow\,d_{2}(f(x),f(y))<\varepsilon.$$
Here the quantifications $\forall\varepsilon>0$, $\exists\delta>0$, $\forall x\in X$, and $\forall y\in X$ are used.
Equivalently, $f$ is uniformly continuous if it admits a modulus of continuity.
=== Definition of (ordinary) continuity ===
$f$ is called continuous at $x$ if for every real number $\varepsilon>0$ there exists a real number $\delta>0$ such that for every $y\in X$ with $d_{1}(x,y)<\delta$, we have $d_{2}(f(x),f(y))<\varepsilon$. The set $\{y\in X : d_{1}(x,y)<\delta\}$ is a neighbourhood of $x$. Thus, (ordinary) continuity is a local property of the function at the point $x$.
Equivalently, a function $f$ is said to be continuous if
$$\forall x\in X\;\forall\varepsilon>0\;\exists\delta>0\;\forall y\in X:\,d_{1}(x,y)<\delta\,\Rightarrow\,d_{2}(f(x),f(y))<\varepsilon.$$
Alternatively, a function $f$ is said to be continuous if there is a function $\delta(\varepsilon,x)$ of all positive real numbers $\varepsilon$ and $x\in X$, representing the maximum positive real number such that, at each $x$, if $y\in X$ satisfies $d_{1}(x,y)<\delta(\varepsilon,x)$ then $d_{2}(f(x),f(y))<\varepsilon$. At every $x$, $\delta(\varepsilon,x)$ is a monotonically non-decreasing function of $\varepsilon$.
== Local continuity versus global uniform continuity ==
In the definitions, the difference between uniform continuity and continuity is that in uniform continuity there is a globally applicable $\delta$ (the size of a neighbourhood in $X$ over which values of the metric for function values in $Y$ are less than $\varepsilon$) that depends only on $\varepsilon$, while in continuity there is a locally applicable $\delta$ that depends on both $\varepsilon$ and $x$. Continuity is a local property of a function: a function $f$ is continuous, or not, at a particular point $x$ of the function domain $X$, and this can be determined by looking at only the values of the function in an arbitrarily small neighbourhood of that point. When we speak of a function being continuous on an interval, we mean that the function is continuous at every point of the interval. In contrast, uniform continuity is a global property of $f$, in the sense that the standard definition of uniform continuity refers to every point of $X$. On the other hand, it is possible to give a definition that is local in terms of the natural extension $f^{*}$ (the characteristics of which at nonstandard points are determined by the global properties of $f$), although it is not possible to give a local definition of uniform continuity for an arbitrary hyperreal-valued function; see below.
A mathematical definition that a function $f$ is continuous on an interval $I$ and a definition that $f$ is uniformly continuous on $I$ are structurally similar, as shown in the following.
Continuity of a function $f\colon X\to Y$ for metric spaces $(X,d_{1})$ and $(Y,d_{2})$ at every point $x$ of an interval $I\subseteq X$ (i.e., continuity of $f$ on the interval $I$) is expressed by a formula starting with quantifications:
$$\forall x\in I\;\forall\varepsilon>0\;\exists\delta>0\;\forall y\in I:\,d_{1}(x,y)<\delta\,\Rightarrow\,d_{2}(f(x),f(y))<\varepsilon,$$
(the metrics $d_{1}(x,y)$ and $d_{2}(f(x),f(y))$ are $|x-y|$ and $|f(x)-f(y)|$ for $f\colon\mathbb{R}\to\mathbb{R}$ on the set of real numbers $\mathbb{R}$).
For uniform continuity, the order of the first, second, and third quantifications ($\forall x\in I$, $\forall\varepsilon>0$, and $\exists\delta>0$) is rotated:
$$\forall\varepsilon>0\;\exists\delta>0\;\forall x\in I\;\forall y\in I:\,d_{1}(x,y)<\delta\,\Rightarrow\,d_{2}(f(x),f(y))<\varepsilon.$$
Thus for continuity on the interval, one takes an arbitrary point $x$ of the interval, and then there must exist a distance $\delta$,
$$\cdots\forall x\,\exists\delta\cdots,$$
while for uniform continuity a single $\delta$ must work uniformly for all points $x$ of the interval,
$$\cdots\exists\delta\,\forall x\cdots.$$
== Properties ==
Every uniformly continuous function is continuous, but the converse does not hold. Consider for instance the continuous function $f\colon\mathbb{R}\to\mathbb{R},\ x\mapsto x^{2}$, where $\mathbb{R}$ is the set of real numbers. Given a positive real number $\varepsilon$, uniform continuity requires the existence of a positive real number $\delta$ such that for all $x_{1},x_{2}\in\mathbb{R}$ with $|x_{1}-x_{2}|<\delta$, we have $|f(x_{1})-f(x_{2})|<\varepsilon$. But
$$f(x+\delta)-f(x)=2x\cdot\delta+\delta^{2},$$
and as $x$ takes higher and higher values, $\delta$ needs to be smaller and smaller to satisfy $|f(x+\beta)-f(x)|<\varepsilon$ for positive real numbers $\beta<\delta$ and the given $\varepsilon$. This means that there is no specifiable positive real number $\delta$, no matter how small, that satisfies the condition for $f$ to be uniformly continuous, so $f$ is not uniformly continuous.
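To see this numerically, the following Python sketch (our own illustration, not from the source) computes, for $f(x)=x^{2}$ and a fixed $\varepsilon$, the largest $\delta$ that works at each point $x$; the values shrink toward zero as $x$ grows, so no single $\delta$ serves every point.

```python
import math

def largest_delta(x, eps):
    """Largest delta such that |y**2 - x**2| < eps whenever |y - x| < delta:
    solving (|x| + delta)**2 - x**2 = eps gives the formula below."""
    return math.sqrt(x * x + eps) - abs(x)

for x in [0, 1, 10, 100, 1000]:
    print(x, largest_delta(x, eps=0.1))
# The admissible delta decays roughly like eps / (2 * x), so no single
# positive delta works at every x: f(x) = x**2 is not uniformly continuous.
```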
Any absolutely continuous function (over a compact interval) is uniformly continuous. On the other hand, the Cantor function is uniformly continuous but not absolutely continuous.
The image of a totally bounded subset under a uniformly continuous function is totally bounded. However, the image of a bounded subset of an arbitrary metric space under a uniformly continuous function need not be bounded: as a counterexample, consider the identity function from the integers endowed with the discrete metric to the integers endowed with the usual Euclidean metric.
The Heine–Cantor theorem asserts that every continuous function on a compact set is uniformly continuous. In particular, if a function is continuous on a closed bounded interval of the real line, it is uniformly continuous on that interval. The Darboux integrability of continuous functions follows almost immediately from this theorem.
If a real-valued function $f$ is continuous on $[0,\infty)$ and $\lim_{x\to\infty}f(x)$ exists (and is finite), then $f$ is uniformly continuous. In particular, every element of $C_{0}(\mathbb{R})$, the space of continuous functions on $\mathbb{R}$ that vanish at infinity, is uniformly continuous. This is a generalization of the Heine-Cantor theorem mentioned above, since $C_{c}(\mathbb{R})\subset C_{0}(\mathbb{R})$.
== Examples and nonexamples ==
=== Examples ===
Linear functions $x\mapsto ax+b$ are the simplest examples of uniformly continuous functions.
Any continuous function on the interval $[0,1]$ is also uniformly continuous, since $[0,1]$ is a compact set.
If a function is differentiable on an open interval and its derivative is bounded, then the function is uniformly continuous on that interval.
Every Lipschitz continuous map between two metric spaces is uniformly continuous. More generally, every Hölder continuous function is uniformly continuous.
The absolute value function is uniformly continuous, despite not being differentiable at $x=0$. This shows that uniformly continuous functions are not always differentiable.
Despite being nowhere differentiable, the Weierstrass function is uniformly continuous.
Every member of a uniformly equicontinuous set of functions is uniformly continuous.
=== Nonexamples ===
Functions that are unbounded on a bounded domain are not uniformly continuous. The tangent function is continuous on the interval $(-\pi/2,\pi/2)$ but is not uniformly continuous on that interval, as it goes to infinity as $x\to\pi/2$.
Functions whose derivative tends to infinity as $x$ grows large cannot be uniformly continuous. The exponential function $x\mapsto e^{x}$ is continuous everywhere on the real line but is not uniformly continuous on the line, since its derivative is $e^{x}$, and $e^{x}\to\infty$ as $x\to\infty$.
== Visualization ==
For a uniformly continuous function, for every positive real number $\varepsilon>0$ there is a positive real number $\delta>0$ such that two function values $f(x)$ and $f(y)$ are within the maximum distance $\varepsilon$ whenever $x$ and $y$ are within the maximum distance $\delta$. Thus at each point $(x,f(x))$ of the graph, if we draw a rectangle with height slightly less than $2\varepsilon$ and width slightly less than $2\delta$ around that point, then the graph lies completely within the height of the rectangle: the graph does not pass through the top or the bottom side of the rectangle. For functions that are not uniformly continuous, this is not possible; for these functions, the graph might lie inside the height of the rectangle at some points, but there is always a point on the graph where the graph penetrates the top or bottom side of the rectangle.
== History ==
The first published definition of uniform continuity was by Heine in 1870, and in 1872 he published a proof that a continuous function on an open interval need not be uniformly continuous. The proofs are almost verbatim given by Dirichlet in his lectures on definite integrals in 1854. The definition of uniform continuity appears earlier in the work of Bolzano where he also proved that continuous functions on an open interval do not need to be uniformly continuous. In addition he also states that a continuous function on a closed interval is uniformly continuous, but he does not give a complete proof.
== Other characterizations ==
=== Non-standard analysis ===
In non-standard analysis, a real-valued function $f$ of a real variable is microcontinuous at a point $a$ precisely if the difference $f^{*}(a+\delta)-f^{*}(a)$ is infinitesimal whenever $\delta$ is infinitesimal. Thus $f$ is continuous on a set $A$ in $\mathbb{R}$ precisely if $f^{*}$ is microcontinuous at every real point $a\in A$. Uniform continuity can be expressed as the condition that (the natural extension of) $f$ is microcontinuous not only at real points in $A$, but at all points in its non-standard counterpart (natural extension) $^{*}A$ in $^{*}\mathbb{R}$. Note that there exist hyperreal-valued functions which meet this criterion but are not uniformly continuous, as well as uniformly continuous hyperreal-valued functions which do not meet this criterion; however, such functions cannot be expressed in the form $f^{*}$ for any real-valued function $f$ (see non-standard calculus for more details and examples).
=== Cauchy continuity ===
For a function between metric spaces, uniform continuity implies Cauchy continuity (Fitzpatrick 2006). More specifically, let $A$ be a subset of $\mathbb{R}^{n}$. If a function $f\colon A\to\mathbb{R}^{n}$ is uniformly continuous, then for every pair of sequences $x_{n}$ and $y_{n}$ such that
$$\lim_{n\to\infty}|x_{n}-y_{n}|=0$$
we have
$$\lim_{n\to\infty}|f(x_{n})-f(y_{n})|=0.$$
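The following Python sketch (our own illustration) pairs the sequences $x_{n}=n$ and $y_{n}=n+1/n$, whose distance tends to 0, and contrasts a uniformly continuous function with $x\mapsto x^{2}$, which, by the contrapositive of this property, cannot be uniformly continuous:

```python
import math

n = 10_000
x_n, y_n = float(n), n + 1.0 / n   # |x_n - y_n| = 1/n -> 0

g = math.sqrt        # uniformly continuous on [0, inf)
h = lambda t: t * t  # continuous, but not uniformly continuous

print(abs(g(x_n) - g(y_n)))  # ~ 5e-07: tends to 0, as the theorem requires
print(abs(h(x_n) - h(y_n)))  # ~ 2.0: stays near 2, so h fails the property
```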
== Relations with the extension problem ==
Let $X$ be a metric space, $S$ a subset of $X$, $R$ a complete metric space, and $f\colon S\to R$ a continuous function. A question to answer is: when can $f$ be extended to a continuous function on all of $X$?
If $S$ is closed in $X$, the answer is given by the Tietze extension theorem. So it is necessary and sufficient to extend $f$ to the closure of $S$ in $X$; that is, we may assume without loss of generality that $S$ is dense in $X$, and this has the further pleasant consequence that if the extension exists, it is unique. A sufficient condition for $f$ to extend to a continuous function $f\colon X\to R$ is that it is Cauchy-continuous, i.e., the image under $f$ of a Cauchy sequence remains Cauchy. If $X$ is complete (and thus the completion of $S$), then every continuous function from $X$ to a metric space $Y$ is Cauchy-continuous. Therefore, when $X$ is complete, $f$ extends to a continuous function $f\colon X\to R$ if and only if $f$ is Cauchy-continuous.
It is easy to see that every uniformly continuous function is Cauchy-continuous and thus extends to $X$. The converse does not hold, since the function $f\colon\mathbb{R}\to\mathbb{R},\ x\mapsto x^{2}$ is, as seen above, not uniformly continuous, but it is continuous and thus Cauchy-continuous. In general, for functions defined on unbounded spaces like $\mathbb{R}$, uniform continuity is a rather strong condition. It is desirable to have a weaker condition from which to deduce extendability.
For example, suppose $a>1$ is a real number. At the precalculus level, the function $f\colon x\mapsto a^{x}$ can be given a precise definition only for rational values of $x$ (assuming the existence of $q$th roots of positive real numbers, an application of the intermediate value theorem). One would like to extend $f$ to a function defined on all of $\mathbb{R}$. The identity
$$f(x+\delta)-f(x)=a^{x}\left(a^{\delta}-1\right)$$
shows that $f$ is not uniformly continuous on the set $\mathbb{Q}$ of all rational numbers; however, for any bounded interval $I$ the restriction of $f$ to $\mathbb{Q}\cap I$ is uniformly continuous, hence Cauchy-continuous, hence $f$ extends to a continuous function on $I$. But since this holds for every $I$, there is then a unique extension of $f$ to a continuous function on all of $\mathbb{R}$.
More generally, a continuous function $f\colon S\to R$ whose restriction to every bounded subset of $S$ is uniformly continuous is extendable to $X$, and the converse holds if $X$ is locally compact.
A typical application of the extendability of a uniformly continuous function is the proof of the inverse Fourier transformation formula. We first prove that the formula holds for test functions, which are dense in the space. We then extend the inverse map to the whole space using the fact that a continuous linear map is uniformly continuous.
== Generalization to topological vector spaces ==
In the special case of two topological vector spaces $V$ and $W$, the notion of uniform continuity of a map $f\colon V\to W$ becomes: for any neighborhood $B$ of zero in $W$, there exists a neighborhood $A$ of zero in $V$ such that $v_{1}-v_{2}\in A$ implies $f(v_{1})-f(v_{2})\in B$.
For linear transformations $f\colon V\to W$, uniform continuity is equivalent to continuity. This fact is frequently used implicitly in functional analysis to extend a linear map off a dense subspace of a Banach space.
== Generalization to uniform spaces ==
Just as the most natural and general setting for continuity is topological spaces, the most natural and general setting for the study of uniform continuity is uniform spaces. A function $f\colon X\to Y$ between uniform spaces is called uniformly continuous if for every entourage $V$ in $Y$ there exists an entourage $U$ in $X$ such that for every $(x_{1},x_{2})$ in $U$ we have $(f(x_{1}),f(x_{2}))$ in $V$.
In this setting, it is also true that uniformly continuous maps transform Cauchy sequences into Cauchy sequences.
Each compact Hausdorff space possesses exactly one uniform structure compatible with the topology. A consequence is a generalization of the Heine-Cantor theorem: each continuous function from a compact Hausdorff space to a uniform space is uniformly continuous.
== See also ==
Contraction mapping – Function reducing distance between all points
Uniform convergence – Mode of convergence of a function sequence
Uniform isomorphism – Uniformly continuous homeomorphism
== Further reading ==
Bourbaki, Nicolas (1989). General Topology: Chapters 1–4 [Topologie Générale]. Springer. ISBN 0-387-19374-X. Chapter II is a comprehensive reference of uniform spaces.
Dieudonné, Jean (1960). Foundations of Modern Analysis. Academic Press.
Fitzpatrick, Patrick (2006). Advanced Calculus. Brooks/Cole. ISBN 0-534-92612-6.
Kelley, John L. (1955). General topology. Graduate Texts in Mathematics. Springer-Verlag. ISBN 0-387-90125-6.
Kudryavtsev, L.D. (2001) [1994], "Uniform continuity", Encyclopedia of Mathematics, EMS Press
Rudin, Walter (1976). Principles of Mathematical Analysis. New York: McGraw-Hill. ISBN 978-0-07-054235-8.
Rusnock, P.; Kerr-Lawson, A. (2005), "Bolzano and uniform continuity", Historia Mathematica, 32 (3): 303–311, doi:10.1016/j.hm.2004.11.003 | Wikipedia/Uniformly_continuous_function |
Thomae's function is a real-valued function of a real variable that can be defined as:
$$f(x)={\begin{cases}{\frac {1}{q}}&{\text{if }}x={\tfrac {p}{q}}\quad (x{\text{ is rational), with }}p\in \mathbb {Z} {\text{ and }}q\in \mathbb {N} {\text{ coprime}}\\0&{\text{if }}x{\text{ is irrational.}}\end{cases}}$$
It is named after Carl Johannes Thomae, but has many other names: the popcorn function, the raindrop function, the countable cloud function, the modified Dirichlet function, the ruler function (not to be confused with the integer ruler function), the Riemann function, or the Stars over Babylon (John Horton Conway's name). Thomae mentioned it as an example for an integrable function with infinitely many discontinuities in an early textbook on Riemann's notion of integration.
Since every rational number has a unique representation with coprime (also termed relatively prime) $p\in\mathbb{Z}$ and $q\in\mathbb{N}$, the function is well-defined. Note that $q=+1$ is the only number in $\mathbb{N}$ that is coprime to $p=0$. It is a modification of the Dirichlet function, which is 1 at rational numbers and 0 elsewhere.
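A direct transcription of the definition into Python (a sketch of ours; it uses the standard `fractions` module for exact rationals, so irrational inputs simply cannot be represented):

```python
from fractions import Fraction

def thomae(x):
    """Thomae's function on exact rationals: for x = p/q in lowest terms
    (q a positive integer), f(x) = 1/q. Irrationals, where f would be 0,
    cannot be represented exactly, so inputs are Fraction or int."""
    x = Fraction(x)
    return Fraction(1, x.denominator)

assert thomae(Fraction(3, 6)) == Fraction(1, 2)   # 3/6 reduces to 1/2
assert thomae(0) == 1                             # 0 = 0/1, so q = +1
assert thomae(Fraction(-2, 4)) == Fraction(1, 2)  # sign belongs to p
```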
== Properties ==
== Related probability distributions ==
Empirical probability distributions related to Thomae's function appear in DNA sequencing. The human genome is diploid, having two strands per chromosome. When sequenced, small pieces ("reads") are generated: for each spot on the genome, an integer number of reads overlap with it. Their ratio is a rational number, and typically distributed similarly to Thomae's function.
If pairs of positive integers $m,n$ are sampled from a distribution $f(n,m)$ and used to generate ratios $q=n/(n+m)$, this gives rise to a distribution $g(q)$ on the rational numbers. If the integers are independent, the distribution can be viewed as a convolution over the rational numbers, $g(a/(a+b))=\sum_{t=1}^{\infty}f(ta)f(tb)$. Closed-form solutions exist for power-law distributions with a cut-off: if $f(k)=k^{-\alpha}e^{-\beta k}/\mathrm{Li}_{\alpha}(e^{-\beta})$ (where $\mathrm{Li}_{\alpha}$ is the polylogarithm function), then $g(a/(a+b))=(ab)^{-\alpha}\,\mathrm{Li}_{2\alpha}(e^{-(a+b)\beta})/\mathrm{Li}_{\alpha}^{2}(e^{-\beta})$. In the case of uniform distributions on the set $\{1,2,\ldots,L\}$, $g(a/(a+b))=(1/L^{2})\lfloor L/\max(a,b)\rfloor$, which is very similar to Thomae's function.
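The uniform case is straightforward to check by simulation. The sketch below (our own, with arbitrary parameter choices) compares the empirical frequency of one ratio against the closed form $(1/L^{2})\lfloor L/\max(a,b)\rfloor$:

```python
import random
from fractions import Fraction
from collections import Counter

L, N = 20, 200_000
random.seed(1)
counts = Counter()
for _ in range(N):
    n, m = random.randint(1, L), random.randint(1, L)
    counts[Fraction(n, n + m)] += 1   # Fraction reduces n/(n+m) for us

q = Fraction(3, 5)                    # q = a/(a+b) with a = 3, b = 2
a, b = q.numerator, q.denominator - q.numerator
print(counts[q] / N, (L // max(a, b)) / L**2)  # both close to 0.015
```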
== The ruler function ==
For integers, the exponent of the highest power of 2 dividing $n$ gives the sequence 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, ... (sequence A007814 in the OEIS). If 1 is added, or if the 0s are removed, one obtains 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1, ... (sequence A001511 in the OEIS). The values resemble tick-marks on a 1/16th graduated ruler, hence the name. These values correspond to the restriction of the Thomae function to the dyadic rationals: those rational numbers whose denominators are powers of 2.
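A small Python sketch of the ruler function (the function name `ruler` is ours): the 2-adic valuation computed by stripping factors of 2.

```python
def ruler(n):
    """2-adic valuation of n >= 1: the exponent of the highest power
    of 2 dividing n (OEIS A007814)."""
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

print([ruler(n) for n in range(1, 16)])
# [0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0]; adding 1 to each term
# gives A001511. Thomae's function at the dyadic rational k / 2**m
# (in lowest terms) takes the value 1 / 2**m.
```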
== Related functions ==
A natural follow-up question one might ask is whether there is a function which is continuous on the rational numbers and discontinuous on the irrational numbers. This turns out to be impossible. The set of discontinuities of any function must be an Fσ set. If such a function existed, then the irrationals would be an Fσ set. The irrationals would then be the countable union of closed sets $\bigcup_{i=0}^{\infty}C_{i}$, but since the irrationals do not contain an interval, neither can any of the $C_{i}$. Therefore, each of the $C_{i}$ would be nowhere dense, and the irrationals would be a meager set. It would follow that the real numbers, being the union of the irrationals and the rationals (which, as a countable set, is evidently meager), would also be a meager set. This would contradict the Baire category theorem: because the reals form a complete metric space, they form a Baire space, which cannot be meager in itself.
A variant of Thomae's function can be used to show that any Fσ subset of the real numbers can be the set of discontinuities of a function. If $A=\bigcup_{n=1}^{\infty}F_{n}$ is a countable union of closed sets $F_{n}$, define
$$f_{A}(x)={\begin{cases}{\frac {1}{n}}&{\text{if }}x{\text{ is rational and }}n{\text{ is minimal so that }}x\in F_{n}\\-{\frac {1}{n}}&{\text{if }}x{\text{ is irrational and }}n{\text{ is minimal so that }}x\in F_{n}\\0&{\text{if }}x\notin A\end{cases}}$$
Then a similar argument as for Thomae's function shows that $f_{A}$ has A as its set of discontinuities.
== See also ==
Blumberg theorem
Cantor function
Dirichlet function
Euclid's orchard – Thomae's function can be interpreted as a perspective drawing of Euclid's orchard
Volterra's function
== External links ==
"Dirichlet-function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Dirichlet Function". MathWorld. | Wikipedia/Thomae's_function |
In mathematics, a real or complex-valued function f on d-dimensional Euclidean space satisfies a Hölder condition, or is Hölder continuous, when there are real constants C ≥ 0, α > 0, such that
$$|f(x)-f(y)|\leq C\|x-y\|^{\alpha}$$
for all x and y in the domain of f. More generally, the condition can be formulated for functions between any two metric spaces. The number $\alpha$ is called the exponent of the Hölder condition. A function on an interval satisfying the condition with α > 1 is constant (see proof below). If α = 1, then the function satisfies a Lipschitz condition. For any α > 0, the condition implies the function is uniformly continuous. The condition is named after Otto Hölder.
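For a concrete feel, the sketch below (our own illustration) estimates the Hölder ratio for $f(x)=\sqrt{x}$ on a grid in $[0,1]$; this function is in fact $\alpha$-Hölder with $\alpha=1/2$ and $C=1$, since $(\sqrt{x}-\sqrt{y})^{2}\leq|x-y|$.

```python
import itertools, math

def holder_ratio(f, x, y, alpha):
    """|f(x) - f(y)| / |x - y|**alpha, the quantity the Hölder condition
    bounds by the constant C."""
    return abs(f(x) - f(y)) / abs(x - y) ** alpha

xs = [i / 100 for i in range(101)]
worst = max(holder_ratio(math.sqrt, x, y, 0.5)
            for x, y in itertools.combinations(xs, 2))
print(worst)  # <= 1.0: sqrt satisfies the condition with alpha = 1/2, C = 1
# Repeating with alpha = 1.0 makes the supremum blow up near 0, reflecting
# that sqrt is not Lipschitz on [0, 1].
```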
If $\alpha=0$, the function is simply bounded (any two values $f$ takes are at most $C$ apart).
We have the following chain of inclusions for functions defined on a closed and bounded interval [a, b] of the real line with a < b:
Continuously differentiable ⊆ Lipschitz continuous ⊆ α-Hölder continuous ⊆ uniformly continuous ⊆ continuous,
where 0 < α ≤ 1.
== Hölder spaces ==
Hölder spaces consisting of functions satisfying a Hölder condition are basic in areas of functional analysis relevant to solving partial differential equations, and in dynamical systems. The Hölder space $C^{k,\alpha}(\Omega)$, where Ω is an open subset of some Euclidean space and k ≥ 0 an integer, consists of those functions on Ω having continuous derivatives up through order k and such that the k-th partial derivatives are Hölder continuous with exponent α, where 0 < α ≤ 1. This is a locally convex topological vector space. If the Hölder coefficient
$$\left|f\right|_{C^{0,\alpha}}=\sup_{x,y\in\Omega,\,x\neq y}\frac{|f(x)-f(y)|}{\|x-y\|^{\alpha}}$$
is finite, then the function f is said to be (uniformly) Hölder continuous with exponent α in Ω. In this case, the Hölder coefficient serves as a seminorm. If the Hölder coefficient is merely bounded on compact subsets of Ω, then the function f is said to be locally Hölder continuous with exponent α in Ω.
If the function f and its derivatives up to order k are bounded on the closure of Ω, then the Hölder space $C^{k,\alpha}(\overline{\Omega})$ can be assigned the norm
$$\|f\|_{C^{k,\alpha}}=\|f\|_{C^{k}}+\max_{|\beta|=k}\left|D^{\beta}f\right|_{C^{0,\alpha}}$$
where β ranges over multi-indices and
$$\|f\|_{C^{k}}=\max_{|\beta|\leq k}\sup_{x\in\Omega}\left|D^{\beta}f(x)\right|.$$
These seminorms and norms are often denoted simply $|f|_{0,\alpha}$ and $\|f\|_{k,\alpha}$, or also $|f|_{0,\alpha,\Omega}$ and $\|f\|_{k,\alpha,\Omega}$ in order to stress the dependence on the domain of f. If Ω is open and bounded, then $C^{k,\alpha}(\overline{\Omega})$ is a Banach space with respect to the norm $\|\cdot\|_{C^{k,\alpha}}$.
== Compact embedding of Hölder spaces ==
Let Ω be a bounded subset of some Euclidean space (or more generally, any totally bounded metric space) and let 0 < α < β ≤ 1 be two Hölder exponents. Then there is an obvious inclusion map of the corresponding Hölder spaces:
$$C^{0,\beta}(\Omega)\to C^{0,\alpha}(\Omega),$$
which is continuous since, by definition of the Hölder norms, we have
$$\forall f\in C^{0,\beta}(\Omega):\qquad |f|_{0,\alpha,\Omega}\leq\mathrm{diam}(\Omega)^{\beta-\alpha}|f|_{0,\beta,\Omega}.$$
Moreover, this inclusion is compact, meaning that bounded sets in the $\|\cdot\|_{0,\beta}$ norm are relatively compact in the $\|\cdot\|_{0,\alpha}$ norm. This is a direct consequence of the Ascoli-Arzelà theorem. Indeed, let $(u_{n})$ be a bounded sequence in $C^{0,\beta}(\Omega)$. Thanks to the Ascoli-Arzelà theorem we can assume without loss of generality that $u_{n}\to u$ uniformly, and we can also assume $u=0$. Then
$$|u_{n}-u|_{0,\alpha}=|u_{n}|_{0,\alpha}\to 0,$$
because
$$\frac{|u_{n}(x)-u_{n}(y)|}{|x-y|^{\alpha}}=\left(\frac{|u_{n}(x)-u_{n}(y)|}{|x-y|^{\beta}}\right)^{\frac{\alpha}{\beta}}\left|u_{n}(x)-u_{n}(y)\right|^{1-\frac{\alpha}{\beta}}\leq|u_{n}|_{0,\beta}^{\frac{\alpha}{\beta}}\left(2\|u_{n}\|_{\infty}\right)^{1-\frac{\alpha}{\beta}}=o(1).$$
== Examples ==
If 0 < α ≤ β ≤ 1, then all $C^{0,\beta}(\overline{\Omega})$ Hölder continuous functions on a bounded set Ω are also $C^{0,\alpha}(\overline{\Omega})$ Hölder continuous. This also includes β = 1 and therefore all Lipschitz continuous functions on a bounded set are also $C^{0,\alpha}$ Hölder continuous.
The function f(x) = xβ (with β ≤ 1) defined on [0, 1] serves as a prototypical example of a function that is $C^{0,\alpha}$ Hölder continuous for 0 < α ≤ β, but not for α > β. Further, if we defined f analogously on $[0,\infty)$, it would be $C^{0,\alpha}$ Hölder continuous only for α = β.
If a function $f$ is $\alpha$–Hölder continuous on an interval and $\alpha>1$, then $f$ is constant.
There are examples of uniformly continuous functions that are not α–Hölder continuous for any α. For instance, the function defined on [0, 1/2] by f(0) = 0 and by f(x) = 1/log(x) otherwise is continuous, and therefore uniformly continuous by the Heine-Cantor theorem. It does not satisfy a Hölder condition of any order, however.
The Weierstrass function defined by
$$f(x)=\sum_{n=0}^{\infty}a^{n}\cos\left(b^{n}\pi x\right),$$
where $0<a<1$, $b$ is an integer with $b\geq 2$, and $ab>1+\tfrac{3\pi}{2}$, is α-Hölder continuous with
$$\alpha=-\frac{\log(a)}{\log(b)}.$$
The Cantor function is Hölder continuous for any exponent $\alpha\leq\tfrac{\log 2}{\log 3}$, and for no larger one. (The number $\tfrac{\log 2}{\log 3}$ is the Hausdorff dimension of the standard Cantor set.) In the former case, the inequality of the definition holds with the constant C := 2.
Peano curves from [0, 1] onto the square [0, 1]² can be constructed to be 1/2–Hölder continuous. It can be proved that when $\alpha>\tfrac{1}{2}$, the image of an $\alpha$-Hölder continuous function from the unit interval to the square cannot fill the square.
Sample paths of Brownian motion are almost surely everywhere locally $\alpha$-Hölder for every $\alpha<\tfrac{1}{2}$.
Functions which are locally integrable and whose integrals satisfy an appropriate growth condition are also Hölder continuous. For example, if we let
$$u_{x,r}=\frac{1}{|B_{r}|}\int_{B_{r}(x)}u(y)\,dy$$
and u satisfies
$$\int_{B_{r}(x)}\left|u(y)-u_{x,r}\right|^{2}\,dy\leq Cr^{n+2\alpha},$$
then u is Hölder continuous with exponent α.
Functions whose oscillation decays at a fixed rate with respect to distance are Hölder continuous with an exponent that is determined by the rate of decay. For instance, if
$$w(u,x_{0},r)=\sup_{B_{r}(x_{0})}u-\inf_{B_{r}(x_{0})}u$$
for some function u(x) satisfies
$$w\left(u,x_{0},\tfrac{r}{2}\right)\leq\lambda w\left(u,x_{0},r\right)$$
for a fixed λ with 0 < λ < 1 and all sufficiently small values of r, then u is Hölder continuous.
Functions in a Sobolev space can be embedded into the appropriate Hölder space via Morrey's inequality if the spatial dimension is less than the exponent of the Sobolev space. To be precise, if $n<p\leq\infty$, then there exists a constant C, depending only on p and n, such that
$$\forall u\in C^{1}(\mathbf{R}^{n})\cap L^{p}(\mathbf{R}^{n}):\qquad\|u\|_{C^{0,\gamma}(\mathbf{R}^{n})}\leq C\|u\|_{W^{1,p}(\mathbf{R}^{n})},$$
where $\gamma=1-\tfrac{n}{p}$. Thus if u ∈ W1,p(Rn), then u is in fact Hölder continuous of exponent γ, after possibly being redefined on a set of measure 0.
== Properties ==
A closed additive subgroup of an infinite dimensional Hilbert space H, connected by α–Hölder continuous arcs with α > 1/2, is a linear subspace. There are closed additive subgroups of H, not linear subspaces, connected by 1/2–Hölder continuous arcs. An example is the additive subgroup L2(R, Z) of the Hilbert space L2(R, R).
Any α–Hölder continuous function f on a metric space X admits a Lipschitz approximation by means of a sequence of functions (fk) such that fk is k-Lipschitz and
$$\|f-f_{k}\|_{\infty,X}=O\left(k^{-\frac{\alpha}{1-\alpha}}\right).$$
Conversely, any such sequence (fk) of Lipschitz functions converges to an α–Hölder continuous uniform limit f.
Any α–Hölder function f on a subset X of a normed space E admits a uniformly continuous extension to the whole space, which is Hölder continuous with the same constant C and the same exponent α. The largest such extension is
$$f^{*}(x):=\inf_{y\in X}\left\{f(y)+C|x-y|^{\alpha}\right\}.$$
The image of any $U\subset\mathbb{R}^{n}$ under an α–Hölder function has Hausdorff dimension at most $\tfrac{\dim_{H}(U)}{\alpha}$, where $\dim_{H}(U)$ is the Hausdorff dimension of $U$.
The space $C^{0,\alpha}(\Omega)$, $0<\alpha\leq 1$, is not separable.
The embedding $C^{0,\beta}(\Omega)\subset C^{0,\alpha}(\Omega)$, $0<\alpha<\beta\leq 1$, is not dense.
If $f(t)$ and $g(t)$ satisfy the $H(\mu)$ and $H(\nu)$ conditions, respectively, on a smooth arc L, then the functions $f(t)+g(t)$ and $f(t)g(t)$ satisfy the $H(\alpha)$ condition on L, where $\alpha=\min\{\mu,\nu\}$.
== See also ==
p-variation
| Wikipedia/Hölder_continuous_function
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured. Time can be measured by integers, by real or complex numbers or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need of a smooth space-time structure defined on it.
At any given time, a dynamical system has a state representing a point in an appropriate state space. This state is often given by a tuple of real numbers or by a vector in a geometrical manifold. The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic, that is, for a given time interval only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables.
The study of dynamical systems is the focus of dynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics, biology, chemistry, engineering, economics, history, and medicine. Dynamical systems are a fundamental part of chaos theory, logistic map dynamics, bifurcation theory, the self-assembly and self-organization processes, and the edge of chaos concept.
== Overview ==
The concept of a dynamical system has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either a differential equation, difference equation or other time scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to as solving the system or integrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a trajectory or orbit.
Before the advent of computers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system.
For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because:
The systems studied may only be known approximately—the parameters of the system may not be known precisely or terms may be missing from the equations. The approximations used bring into question the validity or relevance of numerical solutions. To address these questions several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent. The operation for comparing orbits to establish their equivalence changes with the different notions of stability.
The type of trajectory may be more important than one particular trajectory. Some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, properties that do not change under coordinate changes. Linear dynamical systems and systems that have two numbers describing a state are examples of dynamical systems where the possible classes of orbits are understood.
The behavior of trajectories as a function of a parameter may be what is needed for an application. As a parameter is varied, the dynamical systems may have bifurcation points where the qualitative behavior of the dynamical system changes. For example, it may go from having only periodic motions to apparently erratic behavior, as in the transition to turbulence of a fluid.
The trajectories of the system may appear erratic, as if random. In these cases it may be necessary to compute averages using one very long trajectory or many different trajectories. The averages are well defined for ergodic systems and a more detailed understanding has been worked out for hyperbolic systems. Understanding the probabilistic aspects of dynamical systems has helped establish the foundations of statistical mechanics and of chaos.
== History ==
Many people regard French mathematician Henri Poincaré as the founder of dynamical systems. Poincaré published two now classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic behavior, and so on). These papers included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state.
Aleksandr Lyapunov developed many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system.
In 1913, George David Birkhoff proved Poincaré's "Last Geometric Theorem", a special case of the three-body problem, a result that made him world-famous. In 1927, he published his Dynamical Systems. Birkhoff's most durable result has been his 1931 discovery of what is now called the ergodic theorem. Combining insights from physics on the ergodic hypothesis with measure theory, this theorem solved, at least in principle, a fundamental problem of statistical mechanics. The ergodic theorem has also had repercussions for dynamics.
Stephen Smale made significant advances as well. His first contribution was the Smale horseshoe that jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others.
Oleksandr Mykolaiovych Sharkovsky developed Sharkovsky's theorem on the periods of discrete dynamical systems in 1964. One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period.
In the late 20th century the dynamical system perspective to partial differential equations started gaining popularity. Palestinian mechanical engineer Ali H. Nayfeh applied nonlinear dynamics in mechanical and engineering systems. His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance of machines and structures that are common in daily life, such as ships, cranes, bridges, buildings, skyscrapers, jet engines, rocket engines, aircraft and spacecraft.
== Formal definition ==
In the most general sense, a dynamical system is a tuple (T, X, Φ) where T is a monoid, written additively, X is a non-empty set and Φ is a function
{\displaystyle \Phi :U\subseteq (T\times X)\to X}
with
{\displaystyle \mathrm {proj} _{2}(U)=X}
(where {\displaystyle \mathrm {proj} _{2}} is the 2nd projection map) and for any x in X:
{\displaystyle \Phi (0,x)=x}
{\displaystyle \Phi (t_{2},\Phi (t_{1},x))=\Phi (t_{2}+t_{1},x),}
for {\displaystyle \,t_{1},\,t_{2}+t_{1}\in I(x)} and {\displaystyle \ t_{2}\in I(\Phi (t_{1},x))}, where we have defined the set {\displaystyle I(x):=\{t\in T:(t,x)\in U\}} for any x in X.
In particular, in the case that {\displaystyle U=T\times X} we have for every x in X that {\displaystyle I(x)=T} and thus that Φ defines a monoid action of T on X.
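As an illustrative sketch (not part of the formal definition above), the two axioms can be checked numerically for a simple discrete system, here the doubling map on [0, 1) with T the non-negative integers under addition:

# A minimal sketch: the doubling map on [0, 1) as a discrete dynamical system,
# with T the non-negative integers under addition (an illustrative choice).
def phi(t: int, x: float) -> float:
    """Evolution function Phi(t, x): t-fold application of the doubling map."""
    for _ in range(t):
        x = (2.0 * x) % 1.0
    return x

x0 = 0.1
assert phi(0, x0) == x0                  # identity property: Phi(0, x) = x
assert phi(3, phi(2, x0)) == phi(5, x0)  # monoid action: Phi(t2, Phi(t1, x)) = Phi(t2 + t1, x)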
The function Φ(t,x) is called the evolution function of the dynamical system: it associates to every point x in the set X a unique image, depending on the variable t, called the evolution parameter. X is called phase space or state space, while the variable x represents an initial state of the system.
We often write
{\displaystyle \Phi _{x}(t)\equiv \Phi (t,x)}
{\displaystyle \Phi ^{t}(x)\equiv \Phi (t,x)}
if we take one of the variables as constant. The function
{\displaystyle \Phi _{x}:I(x)\to X}
is called the flow through x and its graph is called the trajectory through x. The set
{\displaystyle \gamma _{x}\equiv \{\Phi (t,x):t\in I(x)\}}
is called the orbit through x.
The orbit through x is the image of the flow through x.
A subset S of the state space X is called Φ-invariant if for all x in S and all t in T
{\displaystyle \Phi (t,x)\in S.}
Thus, in particular, if S is Φ-invariant, {\displaystyle I(x)=T} for all x in S. That is, the flow through x must be defined for all time for every element of S.
More commonly there are two classes of definitions for a dynamical system: one is motivated by ordinary differential equations and is geometrical in flavor; and the other is motivated by ergodic theory and is measure theoretical in flavor.
=== Geometrical definition ===
In the geometrical definition, a dynamical system is the tuple {\displaystyle \langle {\mathcal {T}},{\mathcal {M}},f\rangle }. {\displaystyle {\mathcal {T}}} is the domain for time – there are many choices, usually the reals or the integers, possibly restricted to be non-negative. {\displaystyle {\mathcal {M}}} is a manifold, i.e. locally a Banach space or Euclidean space, or in the discrete case a graph. f is an evolution rule t → f^t (with {\displaystyle t\in {\mathcal {T}}}) such that f^t is a diffeomorphism of the manifold to itself. So, f is a "smooth" mapping of the time-domain {\displaystyle {\mathcal {T}}} into the space of diffeomorphisms of the manifold to itself. In other terms, f(t) is a diffeomorphism, for every time t in the domain {\displaystyle {\mathcal {T}}}.
==== Real dynamical system ====
A real dynamical system, real-time dynamical system, continuous time dynamical system, or flow is a tuple (T, M, Φ) with T an open interval in the real numbers R, M a manifold locally diffeomorphic to a Banach space, and Φ a continuous function. If Φ is continuously differentiable we say the system is a differentiable dynamical system. If the manifold M is locally diffeomorphic to Rn, the dynamical system is finite-dimensional; if not, the dynamical system is infinite-dimensional. This does not assume a symplectic structure. When T is taken to be the reals, the dynamical system is called global or a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow.
==== Discrete dynamical system ====
A discrete dynamical system, discrete-time dynamical system is a tuple (T, M, Φ), where M is a manifold locally diffeomorphic to a Banach space, and Φ is a function. When T is taken to be the integers, it is a cascade or a map. If T is restricted to the non-negative integers we call the system a semi-cascade.
==== Cellular automaton ====
A cellular automaton is a tuple (T, M, Φ), with T a lattice such as the integers or a higher-dimensional integer grid, M a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As such, cellular automata are dynamical systems. The lattice in M represents the "space" lattice, while the one in T represents the "time" lattice.
==== Multidimensional generalization ====
Dynamical systems are usually defined over a single independent variable, thought of as time. A more general class of systems are defined over multiple independent variables and are therefore called multidimensional systems. Such systems are useful for modeling, for example, image processing.
==== Compactification of a dynamical system ====
Given a global dynamical system (R, X, Φ) on a locally compact and Hausdorff topological space X, it is often useful to study the continuous extension Φ* of Φ to the one-point compactification X* of X. Although we lose the differential structure of the original system we can now use compactness arguments to analyze the new system (R, X*, Φ*).
In compact dynamical systems the limit set of any orbit is non-empty, compact and connected.
=== Measure theoretical definition ===
A dynamical system may be defined formally as a measure-preserving transformation of a measure space, the triplet (T, (X, Σ, μ), Φ). Here, T is a monoid (usually the non-negative integers), X is a set, and (X, Σ, μ) is a probability space, meaning that Σ is a sigma-algebra on X and μ is a finite measure on (X, Σ). A map Φ: X → X is said to be Σ-measurable if and only if, for every σ in Σ, one has {\displaystyle \Phi ^{-1}\sigma \in \Sigma }. A map Φ is said to preserve the measure if and only if, for every σ in Σ, one has {\displaystyle \mu (\Phi ^{-1}\sigma )=\mu (\sigma )}. Combining the above, a map Φ is said to be a measure-preserving transformation of X if it is a map from X to itself, it is Σ-measurable, and is measure-preserving. The triplet (T, (X, Σ, μ), Φ), for such a Φ, is then defined to be a dynamical system.
The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates
{\displaystyle \Phi ^{n}=\Phi \circ \Phi \circ \dots \circ \Phi }
for every integer n are studied. For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated.
==== Relation to geometric definition ====
The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called the Krylov–Bogolyubov theorem) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance.
Some systems have a natural measure, such as the Liouville measure in Hamiltonian systems, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaotic dissipative systems the choice of invariant measure is technically more challenging. The measure needs to be supported on the attractor, but attractors have zero Lebesgue measure and the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution.
For hyperbolic dynamical systems, the Sinai–Ruelle–Bowen measures appear to be the natural choice. They are constructed on the geometrical structure of stable and unstable manifolds of the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems.
== Construction of dynamical systems ==
The concept of evolution in time is central to the theory of dynamical systems as seen in the previous sections: the basic reason for this fact is that the starting motivation of the theory was the study of time behavior of classical mechanical systems. But a system of ordinary differential equations must be solved before it becomes a dynamical system. For example, consider an initial value problem such as the following:
{\displaystyle {\dot {\boldsymbol {x}}}={\boldsymbol {v}}(t,{\boldsymbol {x}})}
{\displaystyle {\boldsymbol {x}}|_{t=0}={\boldsymbol {x}}_{0}}
where
{\displaystyle {\dot {\boldsymbol {x}}}} represents the velocity of the material point x;
M is a finite dimensional manifold;
v: T × M → TM is a vector field in Rn or Cn and represents the change of velocity induced by the known forces acting on the given material point in the phase space M. The change is not a vector in the phase space M, but is instead in the tangent space TM.
There is no need for higher order derivatives in the equation, nor for the parameter t in v(t,x), because these can be eliminated by considering systems of higher dimensions.
Depending on the properties of this vector field, the mechanical system is called
autonomous, when v(t, x) = v(x)
homogeneous when v(t, 0) = 0 for all t
The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above
{\displaystyle {\boldsymbol {x}}(t)=\Phi (t,{\boldsymbol {x}}_{0})}
The dynamical system is then (T, M, Φ).
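As a minimal sketch of this construction, assuming NumPy and SciPy are available (the pendulum field v(t, x) = (x2, −sin x1) is an illustrative choice, not taken from the text), numerically integrating the initial value problem yields the evolution function Φ(t, x0):

# A minimal sketch, assuming SciPy: "integrating the system" x' = v(t, x)
# turns the vector field into an (approximate) evolution function Phi(t, x0).
import numpy as np
from scipy.integrate import solve_ivp

def v(t, x):
    # illustrative vector field: an undamped pendulum
    return [x[1], -np.sin(x[0])]

def Phi(t, x0):
    sol = solve_ivp(v, (0.0, t), x0, rtol=1e-9, atol=1e-12)
    return sol.y[:, -1]   # state on the trajectory through x0 after time t

x0 = [1.0, 0.0]
print(Phi(2.0, x0))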
Some formal manipulation of the system of differential equations shown above gives a more general form of equations a dynamical system must satisfy
{\displaystyle {\dot {\boldsymbol {x}}}-{\boldsymbol {v}}(t,{\boldsymbol {x}})=0\qquad \Leftrightarrow \qquad {\mathfrak {G}}\left(t,\Phi (t,{\boldsymbol {x}}_{0})\right)=0}
where {\displaystyle {\mathfrak {G}}:{{(T\times M)}^{M}}\to \mathbf {C} } is a functional from the set of evolution functions to the field of the complex numbers.
This equation is useful when modeling mechanical systems with complicated constraints.
Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally Banach spaces—in which case the differential equations are partial differential equations.
== Examples ==
== Linear dynamical systems ==
Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is the N-dimensional Euclidean space, so any point in phase space can be represented by a vector with N numbers. The analysis of linear systems is possible because they satisfy a superposition principle: if u(t) and w(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so will u(t) + w(t).
=== Flows ===
For a flow, the vector field v(x) is an affine function of the position in the phase space, that is,
{\displaystyle {\dot {x}}=v(x)=Ax+b,}
with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity).
The case b ≠ 0 with A = 0 is just a straight line in the direction of b:
{\displaystyle \Phi ^{t}(x_{1})=x_{1}+bt.}
When b is zero and A ≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, if x0 = 0, then the orbit remains there.
For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x0,
{\displaystyle \Phi ^{t}(x_{0})=e^{tA}x_{0}.}
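This closed-form flow can be evaluated directly with a matrix exponential routine. A brief sketch, assuming NumPy and SciPy (the matrix A below is an illustrative damped oscillator, not from the text):

# A sketch, assuming SciPy: for the linear flow x' = Ax, the evolution map is
# Phi^t(x0) = expm(tA) @ x0.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, -0.1]])   # illustrative damped oscillator
x0 = np.array([1.0, 0.0])

def Phi(t, x=x0):
    return expm(t * A) @ x

# The flow property Phi^{t+s} = Phi^t o Phi^s holds for the matrix exponential:
assert np.allclose(Phi(0.3 + 0.7), expm(0.3 * A) @ Phi(0.7))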
When b = 0, the eigenvalues of A determine the structure of the phase space. From the eigenvalues and the eigenvectors of A it is possible to determine if an initial point will converge or diverge to the equilibrium point at the origin.
The distance between two different initial conditions in the case A ≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions for chaotic behavior.
=== Maps ===
A discrete-time, affine dynamical system has the form of a matrix difference equation:
{\displaystyle x_{n+1}=Ax_{n}+b,}
with A a matrix and b a vector. As in the continuous case, the change of coordinates x → x + (1 − A)^−1 b removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system A^n x_0.
The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map.
As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if u1 is an eigenvector of A, with a real eigenvalue smaller than one, then the straight line given by the points along α u1, with α ∈ R, is an invariant curve of the map. Points on this line converge to the fixed point.
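A short sketch of both computations, assuming NumPy (the matrix A and vector b are illustrative choices): the fixed point x* = (1 − A)^−1 b, and the convergence of orbits toward it when all eigenvalues of A lie inside the unit circle:

# A sketch, assuming NumPy: fixed point and convergence of the affine map
# x_{n+1} = A x_n + b for an illustrative contraction A.
import numpy as np

A = np.array([[0.5, 0.1],
              [0.0, 0.8]])     # eigenvalues 0.5 and 0.8, both inside the unit circle
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(np.eye(2) - A, b)   # fixed point x* = (1 - A)^{-1} b

x = np.array([3.0, -2.0])
for _ in range(200):                          # iterate the map
    x = A @ x + b
assert np.allclose(x, x_star)                 # the orbit runs into the fixed point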
There are also many other discrete dynamical systems.
== Local dynamics ==
The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): a singular point of the vector field (a point where v(x) = 0) will remain a singular point under smooth transformations; a periodic orbit is a loop in phase space and smooth deformations of the phase space cannot alter it being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible.
=== Rectification ===
A flow in most small patches of the phase space can be made very simple. If y is a point where the vector field v(y) ≠ 0, then there is a change of coordinates for a region around y where the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem.
The rectification theorem says that away from singular points the dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase space M the dynamical system is integrable. In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (where v(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches.
=== Near periodic orbits ===
In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a point x0 in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular to v(x0). These points form a Poincaré section S(γ, x0) of the orbit. The flow now defines a map, the Poincaré map F : S → S, for points starting in S and returning to S. Not all these points will take the same amount of time to come back, but the times will be close to the return time of x0.
The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a translation, the point can be assumed to be at x = 0. The Taylor series of the map is F(x) = J · x + O(x^2), so a change of coordinates h can only be expected to simplify F to its linear part
{\displaystyle h^{-1}\circ F\circ h(x)=J\cdot x.}
This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If λ1, ..., λν are the eigenvalues of J, they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form λi − Σ (multiples of other eigenvalues) occur in the denominator of the terms for the function h, the non-resonant condition is also known as the small divisor problem.
=== Conjugation results ===
The results on the existence of a solution to the conjugation equation depend on the eigenvalues of J and the degree of smoothness required from h. As J does not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues of J are not in the unit circle, the dynamics near the fixed point x0 of F is called hyperbolic and when the eigenvalues are on the unit circle and complex, the dynamics is called elliptic.
In the hyperbolic case, the Hartman–Grobman theorem gives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear map J · x. The hyperbolic case is also structurally stable. Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues of J in the complex plane, implying that the map is still hyperbolic.
The Kolmogorov–Arnold–Moser (KAM) theorem gives the behavior near an elliptic point.
== Bifurcation theory ==
When the evolution map Φ^t (or the vector field it is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in the phase space until a special value μ0 is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation.
Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus) and studies its behavior as a function of the parameter μ. At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems.
The bifurcations of a hyperbolic fixed point x0 of a system family Fμ can be characterized by the eigenvalues of the first derivative of the system DFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of DFμ on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on Bifurcation theory.
Some bifurcations can lead to very complicated structures in phase space. For example, the Ruelle–Takens scenario describes how a periodic orbit bifurcates into a torus and the torus into a strange attractor. In another example, Feigenbaum period-doubling describes how a stable periodic orbit goes through a series of period-doubling bifurcations.
== Ergodic systems ==
In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subset A into the points Φ^t(A) and invariance of the phase space means that
{\displaystyle \mathrm {vol} (A)=\mathrm {vol} (\Phi ^{t}(A)).}
In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by the Liouville measure.
In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution.
For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: Assume the phase space has a finite Liouville volume and let F be a phase space volume-preserving map and A a subset of the phase space. Then almost every point of A returns to A infinitely often. The Poincaré recurrence theorem was used by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms.
One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a region A is vol(A)/vol(Ω).
The ergodic hypothesis turned out not to be the essential property needed for the development of statistical mechanics and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable a is a function that to each point of the phase space associates a number (say instantaneous pressure, or average height). The value of an observable can be computed at another time by using the evolution function φ^t. This introduces an operator U^t, the transfer operator,
{\displaystyle (U^{t}a)(x)=a(\Phi ^{-t}(x)).}
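A minimal sketch of this operator, using the illustrative rotation flow Φ^t(x) = x + t (mod 2π) and the observable a(x) = sin x (both hypothetical choices, assuming NumPy):

# A sketch, assuming NumPy: the transfer operator acts on observables by
# composition with the backward flow, (U^t a)(x) = a(Phi^{-t}(x)).
import numpy as np

def Phi(t, x):
    return (x + t) % (2.0 * np.pi)   # illustrative measure-preserving flow on the circle

def a(x):
    return np.sin(x)                  # an observable on phase space

def U(t, obs):
    """Transfer operator U^t: linear in the observable obs."""
    return lambda x: obs(Phi(-t, x))

x = 1.0
assert np.isclose(U(0.5, a)(x), a(Phi(-0.5, x)))  # identical by construction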
By studying the spectral properties of the linear operator U it becomes possible to classify the ergodic properties of Φ^t. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φ^t gets mapped into an infinite-dimensional linear problem involving U.
The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in equilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with the Boltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are defined on attractors of chaotic systems.
== Nonlinear dynamical systems and chaos ==
Simple nonlinear dynamical systems, including piecewise linear systems, can exhibit strongly unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This unpredictable behavior has been called chaos. Hyperbolic systems are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent spaces perpendicular to an orbit can be decomposed into a combination of two parts: one with the points that converge towards the orbit (the stable manifold) and another of the points that diverge from the orbit (the unstable manifold).
This branch of mathematics deals with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible attractors?" or "Does the long-term behavior of the system depend on its initial condition?"
The chaotic behavior of complex systems is not the issue. Meteorology has been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. The Pomeau–Manneville scenario of the logistic map and the Fermi–Pasta–Ulam–Tsingou problem arose with just second-degree polynomials; the horseshoe map is piecewise linear.
=== Solutions of finite duration ===
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that in these solutions the system reaches the value zero at some time, called an ending time, and then stays there forever after. This can occur only when system trajectories are not uniquely determined forwards and backwards in time by the dynamics; thus solutions of finite duration imply a form of "backwards-in-time unpredictability" closely related to the forwards-in-time unpredictability of chaos. This behavior cannot happen for Lipschitz continuous differential equations, according to the proof of the Picard–Lindelöf theorem. These solutions are non-Lipschitz functions at their ending times and cannot be analytic functions on the whole real line.
As an example, the equation
{\displaystyle y'=-{\text{sgn}}(y){\sqrt {|y|}},\,\,y(0)=1}
admits the finite-duration solution
{\displaystyle y(t)={\frac {1}{4}}\left(1-{\frac {t}{2}}+\left|1-{\frac {t}{2}}\right|\right)^{2}}
that is zero for {\displaystyle t\geq 2} and is not Lipschitz continuous at its ending time {\displaystyle t=2.}
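A brief numerical check of this closed-form solution, assuming NumPy (the grid size and tolerances are arbitrary choices):

# A sketch, assuming NumPy: checking the finite-duration solution against the
# ODE y' = -sgn(y) sqrt(|y|) on a grid.
import numpy as np

t = np.linspace(0.0, 3.0, 301)
y = 0.25 * (1.0 - t / 2.0 + np.abs(1.0 - t / 2.0)) ** 2

dydt = np.gradient(y, t)                            # numerical derivative
rhs = -np.sign(y) * np.sqrt(np.abs(y))
assert np.max(np.abs(dydt - rhs)[t < 1.9]) < 1e-2   # ODE holds away from the ending time
assert np.all(y[t >= 2.0] == 0.0)                   # reaches zero at t = 2 and stays there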
== External links ==
Arxiv preprint server has daily submissions of (non-refereed) manuscripts in dynamical systems.
Encyclopedia of dynamical systems A part of Scholarpedia — peer-reviewed and written by invited experts.
Nonlinear Dynamics. Models of bifurcation and chaos by Elmer G. Wiens
Sci.Nonlinear FAQ 2.0 (Sept 2003) provides definitions, explanations and resources related to nonlinear science
Online books or lecture notes
Geometrical theory of dynamical systems. Nils Berglund's lecture notes for a course at ETH at the advanced undergraduate level.
Dynamical systems. George D. Birkhoff's 1927 book already takes a modern approach to dynamical systems.
Chaos: classical and quantum. An introduction to dynamical systems from the periodic orbit point of view.
Learning Dynamical Systems. Tutorial on learning dynamical systems.
Ordinary Differential Equations and Dynamical Systems. Lecture notes by Gerald Teschl
Research groups
Dynamical Systems Group Groningen, IWI, University of Groningen.
Chaos @ UMD. Concentrates on the applications of dynamical systems.
[2], SUNY Stony Brook. Lists of conferences, researchers, and some open problems.
Center for Dynamics and Geometry, Penn State.
Control and Dynamical Systems, Caltech.
Laboratory of Nonlinear Systems, Ecole Polytechnique Fédérale de Lausanne (EPFL).
Center for Dynamical Systems, University of Bremen
Systems Analysis, Modelling and Prediction Group, University of Oxford
Non-Linear Dynamics Group, Instituto Superior Técnico, Technical University of Lisbon
Dynamical Systems Archived 2017-06-02 at the Wayback Machine, IMPA, Instituto Nacional de Matemática Pura e Applicada.
Nonlinear Dynamics Workgroup Archived 2015-01-21 at the Wayback Machine, Institute of Computer Science, Czech Academy of Sciences.
UPC Dynamical Systems Group Barcelona, Polytechnical University of Catalonia.
Center for Control, Dynamical Systems, and Computation, University of California, Santa Barbara.
In continuum mechanics, the finite strain theory—also called large strain theory, or large deformation theory—deals with deformations in which strains and/or rotations are large enough to invalidate assumptions inherent in infinitesimal strain theory. In this case, the undeformed and deformed configurations of the continuum are significantly different, requiring a clear distinction between them. This is commonly the case with elastomers, plastically deforming materials, fluids, and biological soft tissue.
== Displacement field ==
== Deformation gradient tensor ==
The deformation gradient tensor
{\displaystyle \mathbf {F} (\mathbf {X} ,t)=F_{jK}\mathbf {e} _{j}\otimes \mathbf {I} _{K}}
is related to both the reference and current configuration, as seen by the unit vectors {\displaystyle \mathbf {e} _{j}} and {\displaystyle \mathbf {I} _{K}\,\!}, therefore it is a two-point tensor.
Two types of deformation gradient tensor may be defined.
Due to the assumption of continuity of {\displaystyle \chi (\mathbf {X} ,t)\,\!}, {\displaystyle \mathbf {F} } has the inverse {\displaystyle \mathbf {H} =\mathbf {F} ^{-1}\,\!}, where {\displaystyle \mathbf {H} } is the spatial deformation gradient tensor. Then, by the implicit function theorem, the Jacobian determinant {\displaystyle J(\mathbf {X} ,t)} must be nonsingular, i.e.
{\displaystyle J(\mathbf {X} ,t)=\det \mathbf {F} (\mathbf {X} ,t)\neq 0}
The material deformation gradient tensor
{\displaystyle \mathbf {F} (\mathbf {X} ,t)=F_{jK}\mathbf {e} _{j}\otimes \mathbf {I} _{K}}
is a second-order tensor that represents the gradient of the mapping function or functional relation {\displaystyle \chi (\mathbf {X} ,t)\,\!}, which describes the motion of a continuum. The material deformation gradient tensor characterizes the local deformation at a material point with position vector {\displaystyle \mathbf {X} \,\!}, i.e., deformation at neighbouring points, by transforming (linear transformation) a material line element emanating from that point from the reference configuration to the current or deformed configuration, assuming continuity in the mapping function {\displaystyle \chi (\mathbf {X} ,t)\,\!}, i.e. a differentiable function of {\displaystyle \mathbf {X} } and time {\displaystyle t\,\!}, which implies that cracks and voids do not open or close during the deformation. Thus we have,
{\displaystyle {\begin{aligned}d\mathbf {x} &={\frac {\partial \mathbf {x} }{\partial \mathbf {X} }}\,d\mathbf {X} \qquad &{\text{or}}&\qquad dx_{j}={\frac {\partial x_{j}}{\partial X_{K}}}\,dX_{K}\\&=\nabla \chi (\mathbf {X} ,t)\,d\mathbf {X} \qquad &{\text{or}}&\qquad dx_{j}=F_{jK}\,dX_{K}\,.\\&=\mathbf {F} (\mathbf {X} ,t)\,d\mathbf {X} \end{aligned}}}
=== Relative displacement vector ===
Consider a particle or material point {\displaystyle P} with position vector {\displaystyle \mathbf {X} =X_{I}\mathbf {I} _{I}} in the undeformed configuration (Figure 2). After a displacement of the body, the new position of the particle indicated by {\displaystyle p} in the new configuration is given by the vector position {\displaystyle \mathbf {x} =x_{i}\mathbf {e} _{i}\,\!}. The coordinate systems for the undeformed and deformed configuration can be superimposed for convenience.
Consider now a material point {\displaystyle Q} neighboring {\displaystyle P\,\!}, with position vector {\displaystyle \mathbf {X} +\Delta \mathbf {X} =(X_{I}+\Delta X_{I})\mathbf {I} _{I}\,\!}. In the deformed configuration this particle has a new position {\displaystyle q} given by the position vector {\displaystyle \mathbf {x} +\Delta \mathbf {x} \,\!}. Assuming that the line segments {\displaystyle \Delta X} and {\displaystyle \Delta \mathbf {x} } joining the particles {\displaystyle P} and {\displaystyle Q} in the undeformed and deformed configurations, respectively, are very small, we can express them as {\displaystyle d\mathbf {X} } and {\displaystyle d\mathbf {x} \,\!}. Thus from Figure 2 we have
{\displaystyle {\begin{aligned}\mathbf {x} +d\mathbf {x} &=\mathbf {X} +d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )\\d\mathbf {x} &=\mathbf {X} -\mathbf {x} +d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )\\&=d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )-\mathbf {u} (\mathbf {X} )\\&=d\mathbf {X} +d\mathbf {u} \\\end{aligned}}}
where {\displaystyle \mathbf {du} } is the relative displacement vector, which represents the relative displacement of {\displaystyle Q} with respect to {\displaystyle P} in the deformed configuration.
==== Taylor approximation ====
For an infinitesimal element {\displaystyle d\mathbf {X} \,\!}, and assuming continuity on the displacement field, it is possible to use a Taylor series expansion around point {\displaystyle P\,\!}, neglecting higher-order terms, to approximate the components of the relative displacement vector for the neighboring particle {\displaystyle Q} as
{\displaystyle {\begin{aligned}\mathbf {u} (\mathbf {X} +d\mathbf {X} )&=\mathbf {u} (\mathbf {X} )+d\mathbf {u} \quad &{\text{or}}&\quad u_{i}^{*}=u_{i}+du_{i}\\&\approx \mathbf {u} (\mathbf {X} )+\nabla _{\mathbf {X} }\mathbf {u} \cdot d\mathbf {X} \quad &{\text{or}}&\quad u_{i}^{*}\approx u_{i}+{\frac {\partial u_{i}}{\partial X_{J}}}dX_{J}\,.\end{aligned}}}
Thus, the previous equation {\displaystyle d\mathbf {x} =d\mathbf {X} +d\mathbf {u} } can be written as
{\displaystyle {\begin{aligned}d\mathbf {x} &=d\mathbf {X} +d\mathbf {u} \\&=d\mathbf {X} +\nabla _{\mathbf {X} }\mathbf {u} \cdot d\mathbf {X} \\&=\left(\mathbf {I} +\nabla _{\mathbf {X} }\mathbf {u} \right)d\mathbf {X} \\&=\mathbf {F} d\mathbf {X} \end{aligned}}}
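A short sketch of this relation for a homogeneous motion, assuming NumPy (the shear value γ = 0.5 is an illustrative choice): F = I + ∇X u applied to a material line element dX:

# A sketch, assuming NumPy: F = I + grad_X(u) for an illustrative homogeneous
# simple-shear displacement field, where the displacement gradient is constant.
import numpy as np

gamma = 0.5
grad_u = np.array([[0.0, gamma, 0.0],
                   [0.0, 0.0,   0.0],
                   [0.0, 0.0,   0.0]])   # displacement gradient du_i/dX_J
F = np.eye(3) + grad_u                   # deformation gradient

dX = np.array([0.0, 1.0, 0.0])           # material line element
dx = F @ dX                              # its image in the deformed configuration
print(dx)                                # [0.5, 1.0, 0.0]: the element is sheared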
=== Time-derivative of the deformation gradient ===
Calculations that involve the time-dependent deformation of a body often require a time derivative of the deformation gradient to be calculated. A geometrically consistent definition of such a derivative requires an excursion into differential geometry but we avoid those issues in this article.
The time derivative of {\displaystyle \mathbf {F} } is
{\displaystyle {\dot {\mathbf {F} }}={\frac {\partial \mathbf {F} }{\partial t}}={\frac {\partial }{\partial t}}\left[{\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial \mathbf {X} }}\right]={\frac {\partial }{\partial \mathbf {X} }}\left[{\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial t}}\right]={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {V} (\mathbf {X} ,t)\right]}
where {\displaystyle \mathbf {V} } is the (material) velocity. The derivative on the right hand side represents a material velocity gradient. It is common to convert that into a spatial gradient by applying the chain rule for derivatives, i.e.,
{\displaystyle {\dot {\mathbf {F} }}={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {V} (\mathbf {X} ,t)\right]={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {v} (\mathbf {x} (\mathbf {X} ,t),t)\right]=\left.{\frac {\partial }{\partial \mathbf {x} }}\left[\mathbf {v} (\mathbf {x} ,t)\right]\right|_{\mathbf {x} =\mathbf {x} (\mathbf {X} ,t)}\cdot {\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial \mathbf {X} }}={\boldsymbol {l}}\cdot \mathbf {F} }
where {\displaystyle {\boldsymbol {l}}=(\nabla _{\mathbf {x} }\mathbf {v} )^{T}} is the spatial velocity gradient and where {\displaystyle \mathbf {v} (\mathbf {x} ,t)=\mathbf {V} (\mathbf {X} ,t)} is the spatial (Eulerian) velocity at {\displaystyle \mathbf {x} =\mathbf {x} (\mathbf {X} ,t)}. If the spatial velocity gradient is constant in time, the above equation can be solved exactly to give
{\displaystyle \mathbf {F} =e^{{\boldsymbol {l}}\,t}}
assuming {\displaystyle \mathbf {F} =\mathbf {1} } at {\displaystyle t=0}. There are several methods of computing the exponential above.
Related quantities often used in continuum mechanics are the rate of deformation tensor and the spin tensor defined, respectively, as:
{\displaystyle {\boldsymbol {d}}={\tfrac {1}{2}}\left({\boldsymbol {l}}+{\boldsymbol {l}}^{T}\right)\,,~~{\boldsymbol {w}}={\tfrac {1}{2}}\left({\boldsymbol {l}}-{\boldsymbol {l}}^{T}\right)\,.}
The rate of deformation tensor gives the rate of stretching of line elements while the spin tensor indicates the rate of rotation or vorticity of the motion.
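A minimal sketch of this decomposition, assuming NumPy (the velocity gradient l below is an arbitrary illustrative matrix):

# A sketch, assuming NumPy: splitting a spatial velocity gradient l into the
# rate-of-deformation tensor d (symmetric part) and spin tensor w
# (antisymmetric part).
import numpy as np

l = np.array([[0.1, 0.3, 0.0],
              [-0.1, 0.0, 0.2],
              [0.0, 0.0, -0.1]])   # illustrative velocity gradient

d = 0.5 * (l + l.T)   # rate of deformation: stretching of line elements
w = 0.5 * (l - l.T)   # spin: local rate of rotation (vorticity)
assert np.allclose(d + w, l)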
The material time derivative of the inverse of the deformation gradient (keeping the reference configuration fixed) is often required in analyses that involve finite strains. This derivative is
{\displaystyle {\frac {\partial }{\partial t}}\left(\mathbf {F} ^{-1}\right)=-\mathbf {F} ^{-1}\cdot {\dot {\mathbf {F} }}\cdot \mathbf {F} ^{-1}\,.}
The above relation can be verified by taking the material time derivative of {\displaystyle \mathbf {F} ^{-1}\cdot d\mathbf {x} =d\mathbf {X} } and noting that {\displaystyle {\dot {\mathbf {X} }}=0}.
=== Polar decomposition of the deformation gradient tensor ===
The deformation gradient {\displaystyle \mathbf {F} }, like any invertible second-order tensor, can be decomposed, using the polar decomposition theorem, into a product of two second-order tensors (Truesdell and Noll, 1965): an orthogonal tensor and a positive definite symmetric tensor, i.e.,
{\displaystyle \mathbf {F} =\mathbf {R} \mathbf {U} =\mathbf {V} \mathbf {R} }
where the tensor {\displaystyle \mathbf {R} } is a proper orthogonal tensor, i.e., {\displaystyle \mathbf {R} ^{-1}=\mathbf {R} ^{T}} and {\displaystyle \det \mathbf {R} =+1\,\!}, representing a rotation; the tensor {\displaystyle \mathbf {U} } is the right stretch tensor; and {\displaystyle \mathbf {V} } the left stretch tensor. The terms right and left mean that they are to the right and left of the rotation tensor {\displaystyle \mathbf {R} \,\!}, respectively. {\displaystyle \mathbf {U} } and {\displaystyle \mathbf {V} } are both positive definite, i.e. {\displaystyle \mathbf {x} \cdot \mathbf {U} \cdot \mathbf {x} >0} and {\displaystyle \mathbf {x} \cdot \mathbf {V} \cdot \mathbf {x} >0} for all non-zero {\displaystyle \mathbf {x} \in \mathbb {R} ^{3}}, and symmetric tensors, i.e. {\displaystyle \mathbf {U} =\mathbf {U} ^{T}} and {\displaystyle \mathbf {V} =\mathbf {V} ^{T}\,\!}, of second order.
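As a sketch, assuming SciPy is available, the polar decomposition of an illustrative simple-shear deformation gradient can be computed with scipy.linalg.polar and the stated properties verified numerically:

# A sketch, assuming SciPy: polar decomposition F = RU = VR of a deformation
# gradient (here a simple shear with gamma = 0.5, an illustrative choice).
import numpy as np
from scipy.linalg import polar

F = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

R, U = polar(F, side='right')          # F = R U (right stretch tensor U)
_, V = polar(F, side='left')           # F = V R (left stretch tensor V)

assert np.allclose(R @ U, F) and np.allclose(V @ R, F)
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
assert np.allclose(V, R @ U @ R.T)     # the two stretch tensors are related by R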
This decomposition implies that the deformation of a line element {\displaystyle d\mathbf {X} } in the undeformed configuration onto {\displaystyle d\mathbf {x} } in the deformed configuration, i.e., {\displaystyle d\mathbf {x} =\mathbf {F} \,d\mathbf {X} \,\!}, may be obtained either by first stretching the element by {\displaystyle \mathbf {U} \,\!}, i.e. {\displaystyle d\mathbf {x} '=\mathbf {U} \,d\mathbf {X} \,\!}, followed by a rotation {\displaystyle \mathbf {R} \,\!}, i.e., {\displaystyle d\mathbf {x} =\mathbf {R} \,d\mathbf {x} '\,\!}; or equivalently, by applying a rigid rotation {\displaystyle \mathbf {R} } first, i.e., {\displaystyle d\mathbf {x} '=\mathbf {R} \,d\mathbf {X} \,\!}, followed later by a stretching {\displaystyle \mathbf {V} \,\!}, i.e., {\displaystyle d\mathbf {x} =\mathbf {V} \,d\mathbf {x} '} (See Figure 3).
Due to the orthogonality of {\displaystyle \mathbf {R} },
{\displaystyle \mathbf {V} =\mathbf {R} \cdot \mathbf {U} \cdot \mathbf {R} ^{T}}
so that {\displaystyle \mathbf {U} } and {\displaystyle \mathbf {V} } have the same eigenvalues or principal stretches, but different eigenvectors or principal directions {\displaystyle \mathbf {N} _{i}} and {\displaystyle \mathbf {n} _{i}\,\!}, respectively. The principal directions are related by
{\displaystyle \mathbf {n} _{i}=\mathbf {R} \mathbf {N} _{i}.}
This polar decomposition, which is unique as {\displaystyle \mathbf {F} } is invertible with a positive determinant, is a corollary of the singular-value decomposition.
=== Transformation of a surface and volume element ===
To transform quantities that are defined with respect to areas in a deformed configuration to those relative to areas in a reference configuration, and vice versa, we use Nanson's relation, expressed as
{\displaystyle da~\mathbf {n} =J~dA~\mathbf {F} ^{-T}\cdot \mathbf {N} }
where {\displaystyle da} is an area of a region in the deformed configuration, {\displaystyle dA} is the same area in the reference configuration, and {\displaystyle \mathbf {n} } is the outward normal to the area element in the current configuration while {\displaystyle \mathbf {N} } is the outward normal in the reference configuration, {\displaystyle \mathbf {F} } is the deformation gradient, and {\displaystyle J=\det \mathbf {F} \,\!}.
The corresponding formula for the transformation of the volume element is
{\displaystyle dv=J~dV}
== Fundamental strain tensors ==
A strain tensor is defined by the IUPAC as:
"A symmetric tensor that results when a deformation gradient tensor is factorized into a rotation tensor followed or preceded by a symmetric tensor".
Since a pure rotation should not induce any strains in a deformable body, it is often convenient to use rotation-independent measures of deformation in continuum mechanics. As a rotation followed by its inverse rotation leads to no change ({\displaystyle \mathbf {R} \mathbf {R} ^{T}=\mathbf {R} ^{T}\mathbf {R} =\mathbf {I} \,\!}) we can exclude the rotation by multiplying the deformation gradient tensor {\displaystyle \mathbf {F} } by its transpose.
Several rotation-independent deformation gradient tensors (or "deformation tensors", for short) are used in mechanics. In solid mechanics, the most popular of these are the right and left Cauchy–Green deformation tensors.
=== Cauchy strain tensor (right Cauchy–Green deformation tensor) ===
In 1839, George Green introduced a deformation tensor known as the right Cauchy–Green deformation tensor or Green's deformation tensor (the IUPAC recommends that this tensor be called the Cauchy strain tensor), defined as:
{\displaystyle \mathbf {C} =\mathbf {F} ^{T}\mathbf {F} =\mathbf {U} ^{2}\qquad {\text{or}}\qquad C_{IJ}=F_{kI}~F_{kJ}={\frac {\partial x_{k}}{\partial X_{I}}}{\frac {\partial x_{k}}{\partial X_{J}}}.}
Physically, the Cauchy–Green tensor gives us the square of local change in distances due to deformation, i.e.
{\displaystyle d\mathbf {x} ^{2}=d\mathbf {X} \cdot \mathbf {C} \cdot d\mathbf {X} }
Invariants of {\displaystyle \mathbf {C} } are often used in the expressions for strain energy density functions. The most commonly used invariants are
{\displaystyle {\begin{aligned}I_{1}^{C}&:={\text{tr}}(\mathbf {C} )=C_{II}=\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}\\I_{2}^{C}&:={\tfrac {1}{2}}\left[({\text{tr}}~\mathbf {C} )^{2}-{\text{tr}}(\mathbf {C} ^{2})\right]={\tfrac {1}{2}}\left[(C_{JJ})^{2}-C_{IK}C_{KI}\right]=\lambda _{1}^{2}\lambda _{2}^{2}+\lambda _{2}^{2}\lambda _{3}^{2}+\lambda _{3}^{2}\lambda _{1}^{2}\\I_{3}^{C}&:=\det(\mathbf {C} )=J^{2}=\lambda _{1}^{2}\lambda _{2}^{2}\lambda _{3}^{2}.\end{aligned}}}
where {\displaystyle J:=\det \mathbf {F} } is the determinant of the deformation gradient {\displaystyle \mathbf {F} } and {\displaystyle \lambda _{i}} are stretch ratios for the unit fibers that are initially oriented along the eigenvector directions of the right (reference) stretch tensor (these are not generally aligned with the three axes of the coordinate system).
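A brief numerical check of these invariant formulas, assuming NumPy (the diagonal deformation gradient below is an illustrative choice):

# A sketch, assuming NumPy: the invariants of C = F^T F, checked against the
# squared principal stretches (eigenvalues of C).
import numpy as np

F = np.diag([1.2, 0.9, 1.1])             # illustrative deformation gradient
C = F.T @ F                               # right Cauchy-Green deformation tensor

I1 = np.trace(C)
I2 = 0.5 * (np.trace(C) ** 2 - np.trace(C @ C))
I3 = np.linalg.det(C)

lam2 = np.linalg.eigvalsh(C)              # squared principal stretches
assert np.isclose(I1, lam2.sum())
assert np.isclose(I2, lam2[0]*lam2[1] + lam2[1]*lam2[2] + lam2[2]*lam2[0])
assert np.isclose(I3, lam2.prod())
assert np.isclose(I3, np.linalg.det(F) ** 2)   # I3 = J^2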
=== Finger strain tensor ===
The IUPAC recommends that the inverse of the right Cauchy–Green deformation tensor (called the Cauchy strain tensor in that document), i.e., {\displaystyle \mathbf {C} ^{-1}}, be called the Finger strain tensor. However, that nomenclature is not universally accepted in applied mechanics.
{\displaystyle \mathbf {f} =\mathbf {C} ^{-1}=\mathbf {F} ^{-1}\mathbf {F} ^{-T}\qquad {\text{or}}\qquad f_{IJ}={\frac {\partial X_{I}}{\partial x_{k}}}{\frac {\partial X_{J}}{\partial x_{k}}}}
=== Green strain tensor (left Cauchy–Green deformation tensor) ===
Reversing the order of multiplication in the formula for the right Cauchy-Green deformation tensor leads to the left Cauchy–Green deformation tensor which is defined as:
{\displaystyle \mathbf {B} =\mathbf {F} \mathbf {F} ^{T}=\mathbf {V} ^{2}\qquad {\text{or}}\qquad B_{ij}={\frac {\partial x_{i}}{\partial X_{K}}}{\frac {\partial x_{j}}{\partial X_{K}}}}
The left Cauchy–Green deformation tensor is often called the Finger deformation tensor, named after Josef Finger (1894).
The IUPAC recommends that this tensor be called the Green strain tensor.
Invariants of {\displaystyle \mathbf {B} } are also used in the expressions for strain energy density functions. The conventional invariants are defined as
{\displaystyle {\begin{aligned}I_{1}&:={\text{tr}}(\mathbf {B} )=B_{ii}=\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}\\I_{2}&:={\tfrac {1}{2}}\left[({\text{tr}}~\mathbf {B} )^{2}-{\text{tr}}(\mathbf {B} ^{2})\right]={\tfrac {1}{2}}\left(B_{ii}^{2}-B_{jk}B_{kj}\right)=\lambda _{1}^{2}\lambda _{2}^{2}+\lambda _{2}^{2}\lambda _{3}^{2}+\lambda _{3}^{2}\lambda _{1}^{2}\\I_{3}&:=\det \mathbf {B} =J^{2}=\lambda _{1}^{2}\lambda _{2}^{2}\lambda _{3}^{2}\end{aligned}}}
where {\displaystyle J:=\det \mathbf {F} } is the determinant of the deformation gradient.
For compressible materials, a slightly different set of invariants is used:
{\displaystyle ({\bar {I}}_{1}:=J^{-2/3}I_{1}~;~~{\bar {I}}_{2}:=J^{-4/3}I_{2}~;~~J\neq 1)~.}
=== Piola strain tensor (Cauchy deformation tensor) ===
Earlier in 1828, Augustin-Louis Cauchy introduced a deformation tensor defined as the inverse of the left Cauchy–Green deformation tensor, {\displaystyle \mathbf {B} ^{-1}\,\!}. This tensor has also been called the Piola strain tensor by the IUPAC and the Finger tensor in the rheology and fluid dynamics literature.
{\displaystyle \mathbf {c} =\mathbf {B} ^{-1}=\mathbf {F} ^{-T}\mathbf {F} ^{-1}\qquad {\text{or}}\qquad c_{ij}={\frac {\partial X_{K}}{\partial x_{i}}}{\frac {\partial X_{K}}{\partial x_{j}}}}
=== Spectral representation ===
If there are three distinct principal stretches {\displaystyle \lambda _{i}\,\!}, the spectral decompositions of {\displaystyle \mathbf {C} } and {\displaystyle \mathbf {B} } are given by
{\displaystyle \mathbf {C} =\sum _{i=1}^{3}\lambda _{i}^{2}\mathbf {N} _{i}\otimes \mathbf {N} _{i}\qquad {\text{and}}\qquad \mathbf {B} =\sum _{i=1}^{3}\lambda _{i}^{2}\mathbf {n} _{i}\otimes \mathbf {n} _{i}}
Furthermore,
{\displaystyle \mathbf {U} =\sum _{i=1}^{3}\lambda _{i}\mathbf {N} _{i}\otimes \mathbf {N} _{i}~;~~\mathbf {V} =\sum _{i=1}^{3}\lambda _{i}\mathbf {n} _{i}\otimes \mathbf {n} _{i}}
{\displaystyle \mathbf {R} =\sum _{i=1}^{3}\mathbf {n} _{i}\otimes \mathbf {N} _{i}~;~~\mathbf {F} =\sum _{i=1}^{3}\lambda _{i}\mathbf {n} _{i}\otimes \mathbf {N} _{i}}
Observe that
{\displaystyle \mathbf {V} =\mathbf {R} ~\mathbf {U} ~\mathbf {R} ^{T}=\sum _{i=1}^{3}\lambda _{i}~\mathbf {R} ~(\mathbf {N} _{i}\otimes \mathbf {N} _{i})~\mathbf {R} ^{T}=\sum _{i=1}^{3}\lambda _{i}~(\mathbf {R} ~\mathbf {N} _{i})\otimes (\mathbf {R} ~\mathbf {N} _{i})}
Therefore, the uniqueness of the spectral decomposition also implies that {\displaystyle \mathbf {n} _{i}=\mathbf {R} ~\mathbf {N} _{i}\,\!}. The left stretch ({\displaystyle \mathbf {V} \,\!}) is also called the spatial stretch tensor while the right stretch ({\displaystyle \mathbf {U} \,\!}) is called the material stretch tensor.
The effect of {\displaystyle \mathbf {F} } acting on {\displaystyle \mathbf {N} _{i}} is to stretch the vector by {\displaystyle \lambda _{i}} and to rotate it to the new orientation {\displaystyle \mathbf {n} _{i}\,\!}, i.e.,
{\displaystyle \mathbf {F} ~\mathbf {N} _{i}=\lambda _{i}~(\mathbf {R} ~\mathbf {N} _{i})=\lambda _{i}~\mathbf {n} _{i}}
In a similar vein,
{\displaystyle \mathbf {F} ^{-T}~\mathbf {N} _{i}={\cfrac {1}{\lambda _{i}}}~\mathbf {n} _{i}~;~~\mathbf {F} ^{T}~\mathbf {n} _{i}=\lambda _{i}~\mathbf {N} _{i}~;~~\mathbf {F} ^{-1}~\mathbf {n} _{i}={\cfrac {1}{\lambda _{i}}}~\mathbf {N} _{i}~.}
==== Examples ====
Uniaxial extension of an incompressible material
This is the case where a specimen is stretched in the 1-direction with a stretch ratio of {\displaystyle \mathbf {\alpha =\alpha _{1}} \,\!}. If the volume remains constant, the contraction in the other two directions is such that {\displaystyle \mathbf {\alpha _{1}\alpha _{2}\alpha _{3}=1} } or {\displaystyle \mathbf {\alpha _{2}=\alpha _{3}=\alpha ^{-0.5}} \,\!}. Then:
{\displaystyle \mathbf {F} ={\begin{bmatrix}\alpha &0&0\\0&\alpha ^{-0.5}&0\\0&0&\alpha ^{-0.5}\end{bmatrix}}}
{\displaystyle \mathbf {B} =\mathbf {C} ={\begin{bmatrix}\alpha ^{2}&0&0\\0&\alpha ^{-1}&0\\0&0&\alpha ^{-1}\end{bmatrix}}}
Simple shear
{\displaystyle \mathbf {F} ={\begin{bmatrix}1&\gamma &0\\0&1&0\\0&0&1\end{bmatrix}}}
{\displaystyle \mathbf {B} ={\begin{bmatrix}1+\gamma ^{2}&\gamma &0\\\gamma &1&0\\0&0&1\end{bmatrix}}}
{\displaystyle \mathbf {C} ={\begin{bmatrix}1&\gamma &0\\\gamma &1+\gamma ^{2}&0\\0&0&1\end{bmatrix}}}
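A short sketch, assuming NumPy, verifying the simple-shear expressions for B and C above (γ = 0.3 is an arbitrary choice):

# A sketch, assuming NumPy: B = F F^T and C = F^T F for simple shear match the
# closed-form matrices given above.
import numpy as np

gamma = 0.3
F = np.array([[1.0, gamma, 0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.0,   1.0]])

B = F @ F.T
C = F.T @ F
assert np.allclose(B, [[1 + gamma**2, gamma, 0], [gamma, 1, 0], [0, 0, 1]])
assert np.allclose(C, [[1, gamma, 0], [gamma, 1 + gamma**2, 0], [0, 0, 1]])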
Rigid body rotation
{\displaystyle \mathbf {F} ={\begin{bmatrix}\cos \theta &\sin \theta &0\\-\sin \theta &\cos \theta &0\\0&0&1\end{bmatrix}}}
{\displaystyle \mathbf {B} =\mathbf {C} ={\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}=\mathbf {1} }
=== Derivatives of stretch ===
Derivatives of the stretch with respect to the right Cauchy–Green deformation tensor are used to derive the stress-strain relations of many solids, particularly hyperelastic materials. These derivatives are
{\displaystyle {\cfrac {\partial \lambda _{i}}{\partial \mathbf {C} }}={\cfrac {1}{2\lambda _{i}}}~\mathbf {N} _{i}\otimes \mathbf {N} _{i}={\cfrac {1}{2\lambda _{i}}}~\mathbf {R} ^{T}~(\mathbf {n} _{i}\otimes \mathbf {n} _{i})~\mathbf {R} ~;~~i=1,2,3}
and follow from the observations that
{\displaystyle \mathbf {C} :(\mathbf {N} _{i}\otimes \mathbf {N} _{i})=\lambda _{i}^{2}~;~~~~{\cfrac {\partial \mathbf {C} }{\partial \mathbf {C} }}={\mathsf {I}}^{(s)}~;~~~~{\mathsf {I}}^{(s)}:(\mathbf {N} _{i}\otimes \mathbf {N} _{i})=\mathbf {N} _{i}\otimes \mathbf {N} _{i}.}
=== Physical interpretation of deformation tensors ===
Let {\displaystyle \mathbf {X} =X^{i}~{\boldsymbol {E}}_{i}} be a Cartesian coordinate system defined on the undeformed body and let {\displaystyle \mathbf {x} =x^{i}~{\boldsymbol {E}}_{i}} be another system defined on the deformed body. Let a curve {\displaystyle \mathbf {X} (s)} in the undeformed body be parametrized using {\displaystyle s\in [0,1]}. Its image in the deformed body is {\displaystyle \mathbf {x} (\mathbf {X} (s))}.
The undeformed length of the curve is given by
$$l_X = \int_0^1 \left|\frac{d\mathbf{X}}{ds}\right| ds = \int_0^1 \sqrt{\frac{d\mathbf{X}}{ds} \cdot \frac{d\mathbf{X}}{ds}}\; ds = \int_0^1 \sqrt{\frac{d\mathbf{X}}{ds} \cdot \boldsymbol{I} \cdot \frac{d\mathbf{X}}{ds}}\; ds$$
After deformation, the length becomes
$$\begin{aligned} l_x &= \int_0^1 \left|\frac{d\mathbf{x}}{ds}\right| ds = \int_0^1 \sqrt{\frac{d\mathbf{x}}{ds} \cdot \frac{d\mathbf{x}}{ds}}\; ds = \int_0^1 \sqrt{\left(\frac{d\mathbf{x}}{d\mathbf{X}} \cdot \frac{d\mathbf{X}}{ds}\right) \cdot \left(\frac{d\mathbf{x}}{d\mathbf{X}} \cdot \frac{d\mathbf{X}}{ds}\right)}\; ds \\ &= \int_0^1 \sqrt{\frac{d\mathbf{X}}{ds} \cdot \left[\left(\frac{d\mathbf{x}}{d\mathbf{X}}\right)^T \cdot \frac{d\mathbf{x}}{d\mathbf{X}}\right] \cdot \frac{d\mathbf{X}}{ds}}\; ds \end{aligned}$$
Note that the right Cauchy–Green deformation tensor is defined as
$$\boldsymbol{C} := \boldsymbol{F}^T \cdot \boldsymbol{F} = \left(\frac{d\mathbf{x}}{d\mathbf{X}}\right)^T \cdot \frac{d\mathbf{x}}{d\mathbf{X}}$$
Hence,
$$l_x = \int_0^1 \sqrt{\frac{d\mathbf{X}}{ds} \cdot \boldsymbol{C} \cdot \frac{d\mathbf{X}}{ds}}\; ds$$
which indicates that changes in length are characterized by $\boldsymbol{C}$.
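This characterization can be checked numerically: the sketch below (NumPy; the shear value and the sample curve are assumptions) integrates the deformed length of a curve using only $\boldsymbol{C}$ and compares it with the length of the explicitly mapped curve.

```python
import numpy as np

# Assumed homogeneous deformation: simple shear with gamma = 0.5
F = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
C = F.T @ F

# Reference curve X(s): a quarter circle in the 1-2 plane, s in [0, 1]
s = np.linspace(0.0, 1.0, 2001)
X = np.stack([np.cos(0.5 * np.pi * s), np.sin(0.5 * np.pi * s), 0.0 * s])
dXds = np.gradient(X, s, axis=1)

# Deformed length from C alone: l_x = integral of sqrt(X' . C . X') ds
integrand = np.sqrt(np.einsum('is,ij,js->s', dXds, C, dXds))
l_x = np.trapz(integrand, s)

# Cross-check: map the curve, x = F X, and measure its length directly
x = F @ X
l_x_direct = np.trapz(np.linalg.norm(np.gradient(x, s, axis=1), axis=0), s)
assert abs(l_x - l_x_direct) < 1e-9
```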
== Finite strain tensors ==
The concept of strain is used to evaluate how much a given displacement differs locally from a rigid body displacement. One such strain for large deformations is the Lagrangian finite strain tensor, also called the Green–Lagrangian strain tensor or Green–St-Venant strain tensor, defined as
$$\mathbf{E} = \frac{1}{2}(\mathbf{C} - \mathbf{I}) \qquad \text{or} \qquad E_{KL} = \frac{1}{2}\left(\frac{\partial x_j}{\partial X_K}\frac{\partial x_j}{\partial X_L} - \delta_{KL}\right)$$
or as a function of the displacement gradient tensor
$$\mathbf{E} = \frac{1}{2}\left[(\nabla_{\mathbf{X}}\mathbf{u})^T + \nabla_{\mathbf{X}}\mathbf{u} + (\nabla_{\mathbf{X}}\mathbf{u})^T \cdot \nabla_{\mathbf{X}}\mathbf{u}\right]$$
or
$$E_{KL} = \frac{1}{2}\left(\frac{\partial u_K}{\partial X_L} + \frac{\partial u_L}{\partial X_K} + \frac{\partial u_M}{\partial X_K}\frac{\partial u_M}{\partial X_L}\right)$$
The Green–Lagrangian strain tensor is a measure of how much $\mathbf{C}$ differs from $\mathbf{I}$.
The Eulerian finite strain tensor, or Eulerian-Almansi finite strain tensor, referenced to the deformed configuration (i.e. Eulerian description) is defined as
$$\mathbf{e} = \frac{1}{2}(\mathbf{I} - \mathbf{c}) = \frac{1}{2}(\mathbf{I} - \mathbf{B}^{-1}) \qquad \text{or} \qquad e_{rs} = \frac{1}{2}\left(\delta_{rs} - \frac{\partial X_M}{\partial x_r}\frac{\partial X_M}{\partial x_s}\right)$$
or as a function of the displacement gradients we have
$$e_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{\partial u_k}{\partial x_i}\frac{\partial u_k}{\partial x_j}\right)$$
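Both tensors follow directly from the deformation gradient. A minimal sketch (NumPy; the 20% uniaxial stretch is an assumed example):

```python
import numpy as np

# Assumed deformation gradient: uniaxial stretch of 20% in the 1-direction
F = np.diag([1.2, 1.0, 1.0])
I = np.eye(3)

C = F.T @ F   # right Cauchy-Green tensor
B = F @ F.T   # left Cauchy-Green tensor

E = 0.5 * (C - I)                 # Green-Lagrangian strain
e = 0.5 * (I - np.linalg.inv(B))  # Eulerian-Almansi strain

print(E[0, 0])  # 0.5*(1.2**2 - 1) = 0.22
print(e[0, 0])  # 0.5*(1 - 1.2**-2) ~ 0.1528
```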
=== Seth–Hill family of generalized strain tensors ===
B. R. Seth from the Indian Institute of Technology Kharagpur was the first to show that the Green and Almansi strain tensors are special cases of a more general strain measure. The idea was further expanded upon by Rodney Hill in 1968. The Seth–Hill family of strain measures (also called Doyle-Ericksen tensors) can be expressed as
$$\mathbf{E}_{(m)} = \frac{1}{2m}(\mathbf{U}^{2m} - \mathbf{I}) = \frac{1}{2m}\left[\mathbf{C}^m - \mathbf{I}\right]$$
For different values of $m$ we have:
Green-Lagrangian strain tensor
$$\mathbf{E}_{(1)} = \frac{1}{2}(\mathbf{U}^2 - \mathbf{I}) = \frac{1}{2}(\mathbf{C} - \mathbf{I})$$
Biot strain tensor
$$\mathbf{E}_{(1/2)} = (\mathbf{U} - \mathbf{I}) = \mathbf{C}^{1/2} - \mathbf{I}$$
Logarithmic strain, Natural strain, True strain, or Hencky strain
$$\mathbf{E}_{(0)} = \ln \mathbf{U} = \frac{1}{2}\,\ln \mathbf{C}$$
Almansi strain
$$\mathbf{E}_{(-1)} = \frac{1}{2}\left[\mathbf{I} - \mathbf{U}^{-2}\right]$$
The second-order approximation of these tensors is
$$\mathbf{E}_{(m)} = \boldsymbol{\varepsilon} + \tfrac{1}{2}(\nabla\mathbf{u})^T \cdot \nabla\mathbf{u} - (1-m)\,\boldsymbol{\varepsilon}^T \cdot \boldsymbol{\varepsilon}$$
where $\boldsymbol{\varepsilon}$ is the infinitesimal strain tensor.
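Because all Seth–Hill measures are isotropic functions of $\mathbf{U}$, they can be evaluated through the eigendecomposition of $\mathbf{C} = \mathbf{F}^T\mathbf{F}$. A sketch of this (NumPy; the example $\mathbf{F}$ is an assumption, and the $m \to 0$ limit is handled explicitly):

```python
import numpy as np

def seth_hill(F, m):
    """Seth-Hill strain E_(m) = (U^{2m} - I)/(2m); for m = 0 the Hencky
    (logarithmic) strain ln(U) is returned as the limiting case."""
    C = F.T @ F
    w, V = np.linalg.eigh(C)   # eigenvalues of C are lambda_i^2
    lam = np.sqrt(w)           # principal stretches
    if m == 0:
        f = np.log(lam)
    else:
        f = (lam ** (2 * m) - 1.0) / (2.0 * m)
    return (V * f) @ V.T       # spectral reassembly sum f_i N_i (x) N_i

# Assumed example: 30% uniaxial stretch
F = np.diag([1.3, 1.0, 1.0])
for m in (1, 0.5, 0, -1):
    print(m, seth_hill(F, m)[0, 0])
# m=1: Green-Lagrange 0.345; m=1/2: Biot 0.3;
# m=0: ln(1.3) ~ 0.262; m=-1: Almansi ~ 0.204
```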
Many other different definitions of tensors $\mathbf{E}$ are admissible, provided that they all satisfy the conditions that:
$\mathbf{E}$ vanishes for all rigid-body motions
the dependence of $\mathbf{E}$ on the displacement gradient tensor $\nabla\mathbf{u}$ is continuous, continuously differentiable and monotonic
it is also desired that $\mathbf{E}$ reduces to the infinitesimal strain tensor $\boldsymbol{\varepsilon}$ as the norm $|\nabla\mathbf{u}| \to 0$
An example is the set of tensors
$$\mathbf{E}^{(n)} = \left(\mathbf{U}^n - \mathbf{U}^{-n}\right)/2n$$
which do not belong to the Seth–Hill class, but have the same 2nd-order approximation as the Seth–Hill measures at $m = 0$ for any value of $n$.
=== Physical interpretation of the finite strain tensor ===
The diagonal components $E_{KL}$ of the Lagrangian finite strain tensor are related to the normal strain, e.g.
$$E_{11} = e_{(\mathbf{I}_1)} + \frac{1}{2}e_{(\mathbf{I}_1)}^2$$
where $e_{(\mathbf{I}_1)}$ is the normal strain or engineering strain in the direction $\mathbf{I}_1$.
The off-diagonal components $E_{KL}$ of the Lagrangian finite strain tensor are related to shear strain, e.g.
$$E_{12} = \frac{1}{2}\sqrt{2E_{11}+1}\,\sqrt{2E_{22}+1}\,\sin\phi_{12}$$
where $\phi_{12}$ is the change in the angle between two line elements that were originally perpendicular with directions $\mathbf{I}_1$ and $\mathbf{I}_2$, respectively.
Under certain circumstances, i.e. small displacements and small displacement rates, the components of the Lagrangian finite strain tensor may be approximated by the components of the infinitesimal strain tensor.
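Inverting the two relations above gives the engineering quantities from the tensor components. A short sketch with assumed strain components:

```python
import numpy as np

# Assumed components of the Lagrangian finite strain tensor
E11, E22, E12 = 0.22, 0.0, 0.05

# Engineering strain from E11 = e + e^2/2, i.e. e = sqrt(1 + 2 E11) - 1
e1 = np.sqrt(1.0 + 2.0 * E11) - 1.0
print(e1)  # 0.2 for E11 = 0.22

# Change of angle between the originally perpendicular 1- and 2-directions
phi12 = np.arcsin(2.0 * E12 / (np.sqrt(2*E11 + 1) * np.sqrt(2*E22 + 1)))
print(np.degrees(phi12))
```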
== Compatibility conditions ==
The problem of compatibility in continuum mechanics involves the determination of allowable single-valued continuous fields on bodies. These allowable conditions leave the body without unphysical gaps or overlaps after a deformation. Most such conditions apply to simply-connected bodies. Additional conditions are required for the internal boundaries of multiply connected bodies.
=== Compatibility of the deformation gradient ===
The necessary and sufficient conditions for the existence of a compatible $\boldsymbol{F}$ field over a simply connected body are
$$\boldsymbol{\nabla} \times \boldsymbol{F} = \boldsymbol{0}$$
=== Compatibility of the right Cauchy–Green deformation tensor ===
The necessary and sufficient conditions for the existence of a compatible $\boldsymbol{C}$ field over a simply connected body are
$$R^{\gamma}_{\alpha\beta\rho} := \frac{\partial}{\partial X^{\rho}}\left[\,_{(X)}\Gamma^{\gamma}_{\alpha\beta}\right] - \frac{\partial}{\partial X^{\beta}}\left[\,_{(X)}\Gamma^{\gamma}_{\alpha\rho}\right] + \,_{(X)}\Gamma^{\gamma}_{\mu\rho}\,_{(X)}\Gamma^{\mu}_{\alpha\beta} - \,_{(X)}\Gamma^{\gamma}_{\mu\beta}\,_{(X)}\Gamma^{\mu}_{\alpha\rho} = 0$$
We can show these are the mixed components of the Riemann–Christoffel curvature tensor. Therefore, the necessary conditions for $\boldsymbol{C}$-compatibility are that the Riemann–Christoffel curvature of the deformation is zero.
=== Compatibility of the left Cauchy–Green deformation tensor ===
General sufficiency conditions for the left Cauchy–Green deformation tensor in three dimensions were derived by Amit Acharya. Compatibility conditions for two-dimensional $\boldsymbol{B}$ fields were found by Janet Blume.
== See also ==
Infinitesimal strain
Compatibility (mechanics)
Curvilinear coordinates
Piola–Kirchhoff stress tensor, the stress tensor for finite deformations.
Stress measures
Strain partitioning
== References ==
== Further reading ==
Dill, Ellis Harold (2006). Continuum Mechanics: Elasticity, Plasticity, Viscoelasticity. Germany: CRC Press. ISBN 0-8493-9779-0.
Dimitrienko, Yuriy (2011). Nonlinear Continuum Mechanics and Large Inelastic Deformations. Germany: Springer. ISBN 978-94-007-0033-8.
Hutter, Kolumban; Klaus Jöhnk (2004). Continuum Methods of Physical Modeling. Germany: Springer. ISBN 3-540-20619-1.
Lubarda, Vlado A. (2001). Elastoplasticity Theory. CRC Press. ISBN 0-8493-1138-1.
Macosko, C. W. (1994). Rheology: principles, measurement and applications. VCH Publishers. ISBN 1-56081-579-5.
Mase, George E. (1970). Continuum Mechanics. McGraw-Hill Professional. ISBN 0-07-040663-4.
Mase, G. Thomas; George E. Mase (1999). Continuum Mechanics for Engineers (Second ed.). CRC Press. ISBN 0-8493-1855-6.
Nemat-Nasser, Sia (2006). Plasticity: A Treatise on Finite Deformation of Heterogeneous Inelastic Materials. Cambridge: Cambridge University Press. ISBN 0-521-83979-3.
Rees, David (2006). Basic Engineering Plasticity – An Introduction with Engineering and Manufacturing Applications. Butterworth-Heinemann. ISBN 0-7506-8025-3.
== External links ==
Prof. Amit Acharya's notes on compatibility on iMechanica
In physics and continuum mechanics, deformation is the change in the shape or size of an object. It has dimension of length with SI unit of metre (m). It is quantified as the residual displacement of particles in a non-rigid body, from an initial configuration to a final configuration, excluding the body's average translation and rotation (its rigid transformation). A configuration is a set containing the positions of all particles of the body.
A deformation can occur because of external loads, intrinsic activity (e.g. muscle contraction), body forces (such as gravity or electromagnetic forces), or changes in temperature, moisture content, or chemical reactions, etc.
In a continuous body, a deformation field results from a stress field due to applied forces or because of some changes in the conditions of the body. The relation between stress and strain (relative deformation) is expressed by constitutive equations, e.g., Hooke's law for linear elastic materials.
Deformations which cease to exist after the stress field is removed are termed elastic deformation. In this case, the continuum completely recovers its original configuration. On the other hand, irreversible deformations may remain even after stresses have been removed. One type of irreversible deformation is plastic deformation, which occurs in material bodies after stresses have attained a certain threshold value known as the elastic limit or yield stress, and is the result of slip or dislocation mechanisms at the atomic level. Another type of irreversible deformation is viscous deformation, which is the irreversible part of viscoelastic deformation.
In the case of elastic deformations, the response function linking strain to the deforming stress is the compliance tensor of the material.
== Definition and formulation ==
Deformation is the change in the metric properties of a continuous body, meaning that a curve drawn in the initial body placement changes its length when displaced to a curve in the final placement. If none of the curves changes length, it is said that a rigid body displacement occurred.
It is convenient to identify a reference configuration or initial geometric state of the continuum body which all subsequent configurations are referenced from. The reference configuration need not be one the body actually will ever occupy. Often, the configuration at t = 0 is considered the reference configuration, κ0(B). The configuration at the current time t is the current configuration.
For deformation analysis, the reference configuration is identified as the undeformed configuration, and the current configuration as the deformed configuration. Additionally, time is not considered when analyzing deformation, so the sequence of configurations between the undeformed and deformed configurations is of no interest.
The components Xi of the position vector X of a particle in the reference configuration, taken with respect to the reference coordinate system, are called the material or reference coordinates. On the other hand, the components xi of the position vector x of a particle in the deformed configuration, taken with respect to the spatial coordinate system of reference, are called the spatial coordinates.
There are two methods for analysing the deformation of a continuum. One description is made in terms of the material or referential coordinates, called the material description or Lagrangian description. A second description is made in terms of the spatial coordinates; it is called the spatial description or Eulerian description.
There is continuity during deformation of a continuum body in the sense that:
The material points forming a closed curve at any instant will always form a closed curve at any subsequent time.
The material points forming a closed surface at any instant will always form a closed surface at any subsequent time and the matter within the closed surface will always remain within.
=== Affine deformation ===
An affine deformation is a deformation that can be completely described by an affine transformation. Such a transformation is composed of a linear transformation (such as rotation, shear, extension and compression) and a rigid body translation. Affine deformations are also called homogeneous deformations.
Therefore, an affine deformation has the form
$$\mathbf{x}(\mathbf{X}, t) = \boldsymbol{F}(t) \cdot \mathbf{X} + \mathbf{c}(t)$$
where x is the position of a point in the deformed configuration, X is the position in a reference configuration, t is a time-like parameter, F is the linear transformation and c is the translation. In matrix form, where the components are with respect to an orthonormal basis,
$$\begin{bmatrix} x_1(X_1,X_2,X_3,t) \\ x_2(X_1,X_2,X_3,t) \\ x_3(X_1,X_2,X_3,t) \end{bmatrix} = \begin{bmatrix} F_{11}(t) & F_{12}(t) & F_{13}(t) \\ F_{21}(t) & F_{22}(t) & F_{23}(t) \\ F_{31}(t) & F_{32}(t) & F_{33}(t) \end{bmatrix} \begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix} + \begin{bmatrix} c_1(t) \\ c_2(t) \\ c_3(t) \end{bmatrix}$$
The above deformation becomes non-affine or inhomogeneous if F = F(X,t) or c = c(X,t).
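A minimal sketch of such an affine map (NumPy; F, c and the sample points are arbitrary assumptions), including a check that midpoints map to midpoints, as expected of an affine transformation:

```python
import numpy as np

# A minimal affine deformation x = F X + c (values assumed for illustration)
F = np.array([[1.1, 0.2, 0.0],
              [0.0, 0.9, 0.0],
              [0.0, 0.0, 1.0]])   # homogeneous stretch plus shear
c = np.array([0.5, -0.1, 0.0])    # rigid-body translation

X = np.array([1.0, 2.0, 3.0])     # a material point
x = F @ X + c
print(x)

# Homogeneity: the same F and c map every material point, so line
# segments map to line segments (midpoints map to midpoints)
Y = np.array([2.0, 0.0, 1.0])
mid = 0.5 * (X + Y)
assert np.allclose(F @ mid + c, 0.5 * ((F @ X + c) + (F @ Y + c)))
```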
=== Rigid body motion ===
A rigid body motion is a special affine deformation that does not involve any shear, extension or compression. The transformation matrix F is proper orthogonal in order to allow rotations but no reflections.
A rigid body motion can be described by
x
(
X
,
t
)
=
Q
(
t
)
⋅
X
+
c
(
t
)
{\displaystyle \mathbf {x} (\mathbf {X} ,t)={\boldsymbol {Q}}(t)\cdot \mathbf {X} +\mathbf {c} (t)}
where
$$\boldsymbol{Q} \cdot \boldsymbol{Q}^T = \boldsymbol{Q}^T \cdot \boldsymbol{Q} = \boldsymbol{\mathit{1}}$$
In matrix form,
$$\begin{bmatrix} x_1(X_1,X_2,X_3,t) \\ x_2(X_1,X_2,X_3,t) \\ x_3(X_1,X_2,X_3,t) \end{bmatrix} = \begin{bmatrix} Q_{11}(t) & Q_{12}(t) & Q_{13}(t) \\ Q_{21}(t) & Q_{22}(t) & Q_{23}(t) \\ Q_{31}(t) & Q_{32}(t) & Q_{33}(t) \end{bmatrix} \begin{bmatrix} X_1 \\ X_2 \\ X_3 \end{bmatrix} + \begin{bmatrix} c_1(t) \\ c_2(t) \\ c_3(t) \end{bmatrix}$$
== Background: displacement ==
A change in the configuration of a continuum body results in a displacement. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration κ0(B) to a current or deformed configuration κt(B) (Figure 1).
If after a displacement of the continuum there is a relative displacement between particles, a deformation has occurred. On the other hand, if after displacement of the continuum the relative displacement between particles in the current configuration is zero, then there is no deformation and a rigid-body displacement is said to have occurred.
The vector joining the positions of a particle P in the undeformed configuration and deformed configuration is called the displacement vector u(X,t) = uiei in the Lagrangian description, or U(x,t) = UJEJ in the Eulerian description.
A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. It is convenient to do the analysis of deformation or motion of a continuum body in terms of the displacement field. In general, the displacement field is expressed in terms of the material coordinates as
$$\mathbf{u}(\mathbf{X}, t) = \mathbf{b}(\mathbf{X}, t) + \mathbf{x}(\mathbf{X}, t) - \mathbf{X} \qquad \text{or} \qquad u_i = \alpha_{iJ} b_J + x_i - \alpha_{iJ} X_J$$
or in terms of the spatial coordinates as
$$\mathbf{U}(\mathbf{x}, t) = \mathbf{b}(\mathbf{x}, t) + \mathbf{x} - \mathbf{X}(\mathbf{x}, t) \qquad \text{or} \qquad U_J = b_J + \alpha_{Ji} x_i - X_J$$
where αJi are the direction cosines between the material and spatial coordinate systems with unit vectors EJ and ei, respectively. Thus
$$\mathbf{E}_J \cdot \mathbf{e}_i = \alpha_{Ji} = \alpha_{iJ}$$
and the relationship between ui and UJ is then given by
$$u_i = \alpha_{iJ} U_J \qquad \text{or} \qquad U_J = \alpha_{Ji} u_i$$
Knowing that
$$\mathbf{e}_i = \alpha_{iJ} \mathbf{E}_J$$
then
$$\mathbf{u}(\mathbf{X}, t) = u_i \mathbf{e}_i = u_i(\alpha_{iJ} \mathbf{E}_J) = U_J \mathbf{E}_J = \mathbf{U}(\mathbf{x}, t)$$
It is common to superimpose the coordinate systems for the undeformed and deformed configurations, which results in b = 0, and the direction cosines become Kronecker deltas:
$$\mathbf{E}_J \cdot \mathbf{e}_i = \delta_{Ji} = \delta_{iJ}$$
Thus, we have
$$\mathbf{u}(\mathbf{X}, t) = \mathbf{x}(\mathbf{X}, t) - \mathbf{X} \qquad \text{or} \qquad u_i = x_i - \delta_{iJ} X_J = x_i - X_i$$
or in terms of the spatial coordinates as
$$\mathbf{U}(\mathbf{x}, t) = \mathbf{x} - \mathbf{X}(\mathbf{x}, t) \qquad \text{or} \qquad U_J = \delta_{Ji} x_i - X_J = x_J - X_J$$
=== Displacement gradient tensor ===
The partial differentiation of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor ∇Xu. Thus we have:
$$\begin{aligned} \mathbf{u}(\mathbf{X}, t) &= \mathbf{x}(\mathbf{X}, t) - \mathbf{X} \\ \nabla_{\mathbf{X}}\mathbf{u} &= \nabla_{\mathbf{X}}\mathbf{x} - \mathbf{I} \\ \nabla_{\mathbf{X}}\mathbf{u} &= \mathbf{F} - \mathbf{I} \end{aligned}$$
or
$$\begin{aligned} u_i &= x_i - \delta_{iJ} X_J = x_i - X_i \\ \frac{\partial u_i}{\partial X_K} &= \frac{\partial x_i}{\partial X_K} - \delta_{iK} \end{aligned}$$
where F is the deformation gradient tensor.
Similarly, the partial differentiation of the displacement vector with respect to the spatial coordinates yields the spatial displacement gradient tensor ∇xU. Thus we have,
$$\begin{aligned} \mathbf{U}(\mathbf{x}, t) &= \mathbf{x} - \mathbf{X}(\mathbf{x}, t) \\ \nabla_{\mathbf{x}}\mathbf{U} &= \mathbf{I} - \nabla_{\mathbf{x}}\mathbf{X} \\ \nabla_{\mathbf{x}}\mathbf{U} &= \mathbf{I} - \mathbf{F}^{-1} \end{aligned}$$
or
$$\begin{aligned} U_J &= \delta_{Ji} x_i - X_J = x_J - X_J \\ \frac{\partial U_J}{\partial x_k} &= \delta_{Jk} - \frac{\partial X_J}{\partial x_k} \end{aligned}$$
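For a homogeneous deformation both gradient tensors follow directly from F. A sketch with an assumed F:

```python
import numpy as np

# Assumed homogeneous deformation gradient
F = np.array([[1.2, 0.1, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.95]])
I = np.eye(3)

grad_X_u = F - I                  # material displacement gradient
grad_x_U = I - np.linalg.inv(F)   # spatial displacement gradient

print(grad_X_u)
print(grad_x_U)

# Consistency check at a sample material point: u(X) = F X - X
X = np.array([1.0, 1.0, 1.0])
x = F @ X
assert np.allclose(x - X, grad_X_u @ X)
```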
== Examples ==
Homogeneous (or affine) deformations are useful in elucidating the behavior of materials. Some homogeneous deformations of interest are
uniform extension
pure dilation
equibiaxial tension
simple shear
pure shear
Linear or longitudinal deformations of long objects, such as beams and fibers, are called elongation or shortening; derived quantities are the relative elongation and the stretch ratio.
Plane deformations are also of interest, particularly in the experimental context.
Volume deformation is a uniform scaling due to isotropic compression; the relative volume deformation is called volumetric strain.
=== Plane deformation ===
A plane deformation, also called plane strain, is one where the deformation is restricted to one of the planes in the reference configuration. If the deformation is restricted to the plane described by the basis vectors e1, e2, the deformation gradient has the form
$$\boldsymbol{F} = F_{11}\mathbf{e}_1 \otimes \mathbf{e}_1 + F_{12}\mathbf{e}_1 \otimes \mathbf{e}_2 + F_{21}\mathbf{e}_2 \otimes \mathbf{e}_1 + F_{22}\mathbf{e}_2 \otimes \mathbf{e}_2 + \mathbf{e}_3 \otimes \mathbf{e}_3$$
In matrix form,
$$\boldsymbol{F} = \begin{bmatrix} F_{11} & F_{12} & 0 \\ F_{21} & F_{22} & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
From the polar decomposition theorem, the deformation gradient, up to a change of coordinates, can be decomposed into a stretch and a rotation. Since all the deformation is in a plane, we can write
$$\boldsymbol{F} = \boldsymbol{R} \cdot \boldsymbol{U} = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
where θ is the angle of rotation and λ1, λ2 are the principal stretches.
==== Isochoric plane deformation ====
If the deformation is isochoric (volume preserving) then det(F) = 1 and we have
$$F_{11}F_{22} - F_{12}F_{21} = 1$$
Alternatively,
$$\lambda_1 \lambda_2 = 1$$
==== Simple shear ====
A simple shear deformation is defined as an isochoric plane deformation in which there is a set of line elements with a given reference orientation that do not change length and orientation during the deformation.
If e1 is the fixed reference orientation in which line elements do not deform during the deformation then λ1 = 1 and F·e1 = e1.
Therefore,
$$F_{11}\mathbf{e}_1 + F_{21}\mathbf{e}_2 = \mathbf{e}_1 \quad \implies \quad F_{11} = 1~;~~F_{21} = 0$$
Since the deformation is isochoric,
$$F_{11}F_{22} - F_{12}F_{21} = 1 \quad \implies \quad F_{22} = 1$$
Define
$$\gamma := F_{12}$$
Then, the deformation gradient in simple shear can be expressed as
$$\boldsymbol{F} = \begin{bmatrix} 1 & \gamma & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
Now,
$$\boldsymbol{F} \cdot \mathbf{e}_2 = F_{12}\mathbf{e}_1 + F_{22}\mathbf{e}_2 = \gamma\,\mathbf{e}_1 + \mathbf{e}_2 \quad \implies \quad \boldsymbol{F} \cdot (\mathbf{e}_2 \otimes \mathbf{e}_2) = \gamma\,\mathbf{e}_1 \otimes \mathbf{e}_2 + \mathbf{e}_2 \otimes \mathbf{e}_2$$
Since
$$\mathbf{e}_i \otimes \mathbf{e}_i = \boldsymbol{\mathit{1}}$$
we can also write the deformation gradient as
$$\boldsymbol{F} = \boldsymbol{\mathit{1}} + \gamma\,\mathbf{e}_1 \otimes \mathbf{e}_2$$
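A short numerical check (NumPy; the value of γ is an assumption) that this F leaves e1 unchanged, is isochoric, and tilts e2 by the shear:

```python
import numpy as np

gamma = 0.3                      # assumed shear parameter
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

# F = 1 + gamma e1 (x) e2
F = np.eye(3) + gamma * np.outer(e1, e2)

# Line elements along e1 keep their length and orientation
assert np.allclose(F @ e1, e1)

# The deformation is isochoric (volume preserving)
assert np.isclose(np.linalg.det(F), 1.0)

# e2 is tilted towards e1 by the shear
print(F @ e2)   # [gamma, 1, 0]
```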
== See also ==
The deformation of long elements such as beams or studs due to bending forces is known as deflection.
Euler–Bernoulli beam theory
Deformation (engineering)
Finite strain theory
Infinitesimal strain theory
Moiré pattern
Shear modulus
Shear stress
Shear strength
Strain (mechanics)
Stress (mechanics)
Stress measures
== References ==
== Further reading ==
Bazant, Zdenek P.; Cedolin, Luigi (2010). Three-Dimensional Continuum Instabilities and Effects of Finite Strain Tensor, chapter 11 in "Stability of Structures", 3rd ed. Singapore, New Jersey, London: World Scientific Publishing. ISBN 978-9814317030.
Dill, Ellis Harold (2006). Continuum Mechanics: Elasticity, Plasticity, Viscoelasticity. Germany: CRC Press. ISBN 0-8493-9779-0.
Hutter, Kolumban; Jöhnk, Klaus (2004). Continuum Methods of Physical Modeling. Germany: Springer. ISBN 3-540-20619-1.
Jirasek, M; Bazant, Z.P. (2002). Inelastic Analysis of Structures. London and New York: J. Wiley & Sons. ISBN 0471987166.
Lubarda, Vlado A. (2001). Elastoplasticity Theory. CRC Press. ISBN 0-8493-1138-1.
Macosko, C. W. (1994). Rheology: principles, measurement and applications. VCH Publishers. ISBN 1-56081-579-5.
Mase, George E. (1970). Continuum Mechanics. McGraw-Hill Professional. ISBN 0-07-040663-4.
Mase, G. Thomas; Mase, George E. (1999). Continuum Mechanics for Engineers (2nd ed.). CRC Press. ISBN 0-8493-1855-6.
Nemat-Nasser, Sia (2006). Plasticity: A Treatise on Finite Deformation of Heterogeneous Inelastic Materials. Cambridge: Cambridge University Press. ISBN 0-521-83979-3.
Prager, William (1961). Introduction to Mechanics of Continua. Boston: Ginn and Co. ISBN 0486438090.
Contact mechanics is the study of the deformation of solids that touch each other at one or more points. This can be divided into compressive and adhesive forces in the direction perpendicular to the interface, and frictional forces in the tangential direction. Frictional contact mechanics is the study of the deformation of bodies in the presence of frictional effects, whereas frictionless contact mechanics assumes the absence of such effects.
Frictional contact mechanics is concerned with a large range of different scales.
At the macroscopic scale, it is applied for the investigation of the motion of contacting bodies (see Contact dynamics). For instance the bouncing of a rubber ball on a surface depends on the frictional interaction at the contact interface. Here the total force versus indentation and lateral displacement are of main concern.
At the intermediate scale, one is interested in the local stresses, strains and deformations of the contacting bodies in and near the contact area. For instance to derive or validate contact models at the macroscopic scale, or to investigate wear and damage of the contacting bodies' surfaces. Application areas of this scale are tire-pavement interaction, railway wheel-rail interaction, roller bearing analysis, etc.
Finally, at the microscopic and nano-scales, contact mechanics is used to increase our understanding of tribological systems (e.g., investigate the origin of friction) and for the engineering of advanced devices like atomic force microscopes and MEMS devices.
This page is mainly concerned with the second scale: getting basic insight in the stresses and deformations in and near the contact patch, without paying too much attention to the detailed mechanisms by which they come about.
== History ==
Several famous scientists, engineers and mathematicians contributed to our understanding of friction.
They include Leonardo da Vinci, Guillaume Amontons, John Theophilus Desaguliers, Leonhard Euler, and Charles-Augustin de Coulomb. Later, Nikolai Pavlovich Petrov, Osborne Reynolds and Richard Stribeck supplemented this understanding with theories of lubrication.
Deformation of solid materials was investigated in the 17th and 18th centuries by Robert Hooke, Joseph Louis Lagrange, and in the 19th and 20th centuries by d’Alembert and Timoshenko. With respect to contact mechanics the classical contribution by Heinrich Hertz stands out. Further the fundamental solutions by Boussinesq and Cerruti are of primary importance for the investigation of frictional contact problems in the (linearly) elastic regime.
Classical results for a true frictional contact problem concern the papers by F.W. Carter (1926) and H. Fromm (1927). They independently presented the creep versus creep force relation for a cylinder on a plane or for two cylinders in steady rolling contact using Coulomb’s dry friction law (see below). These are applied to railway locomotive traction, and for understanding the hunting oscillation of railway vehicles. With respect to sliding, the classical solutions are due to C. Cattaneo (1938) and R.D. Mindlin (1949), who considered the tangential shifting of a sphere on a plane (see below).
In the 1950s, interest in the rolling contact of railway wheels grew. In 1958, Kenneth L. Johnson presented an approximate approach for the 3D frictional problem with Hertzian geometry, with either lateral or spin creepage. Among others he found that spin creepage, which is symmetric about the center of the contact patch, leads to a net lateral force in rolling conditions. This is due to the fore-aft differences in the distribution of tractions in the contact patch.
In 1967, Joost Jacques Kalker published his milestone PhD thesis on the linear theory for rolling contact. This theory is exact for the situation of an infinite friction coefficient in which case the slip area vanishes, and is approximative for non-vanishing creepages. It does assume Coulomb's friction law, which more or less requires (scrupulously) clean surfaces. This theory is for massive bodies such as the railway wheel-rail contact. With respect to road-tire interaction, an important contribution concerns the so-called magic tire formula by Hans Pacejka.
In the 1970s, many numerical models were devised, particularly variational approaches such as those relying on Duvaut and Lions' existence and uniqueness theories. Over time, these grew into finite element approaches for contact problems with general material models and geometries, and into half-space based approaches for so-called smooth-edged contact problems for linearly elastic materials. Models of the first category were presented by Laursen and by Wriggers. An example of the latter category is Kalker's CONTACT model.
A drawback of the well-founded variational approaches is their large computation times. Therefore, many different approximate approaches were devised as well. Several well-known approximate theories for the rolling contact problem are Kalker’s FASTSIM approach, the Shen-Hedrick-Elkins formula, and Polach’s approach.
More information on the history of the wheel/rail contact problem is provided in Knothe's paper. Further Johnson collected in his book a tremendous amount of information on contact mechanics and related subjects. With respect to rolling contact mechanics an overview of various theories is presented by Kalker as well. Finally the proceedings of a CISM course are of interest, which provide an introduction to more advanced aspects of rolling contact theory.
== Problem formulation ==
Central in the analysis of frictional contact problems is the understanding that the stresses at the surface of each body are spatially varying. Consequently, the strains and deformations of the bodies are varying with position too. And the motion of particles of the contacting bodies can be different at different locations: in part of the contact patch particles of the opposing bodies may adhere (stick) to each other, whereas in other parts of the contact patch relative movement occurs. This local relative sliding is called micro-slip.
This subdivision of the contact area into stick (adhesion) and slip areas manifests itself, among other things, in fretting wear. Note that wear occurs only where power is dissipated, which requires stress and local relative displacement (slip) between the two surfaces.
The size and shape of the contact patch itself and of its adhesion and slip areas are generally unknown in advance. If these were known, then the elastic fields in the two bodies could be solved independently from each other and the problem would not be a contact problem anymore.
Three different components can be distinguished in a contact problem.
First of all, there is the deformation of the separate bodies in reaction to loads applied on their surfaces. This is the subject of general continuum mechanics. It depends largely on the geometry of the bodies and on their (constitutive) material behavior (e.g. elastic vs. plastic response, homogeneous vs. layered structure etc.).
Secondly, there is the overall motion of the bodies relative to each other. For instance the bodies can be at rest (statics) or approaching each other quickly (impact), and can be shifted (sliding) or rotated (rolling) over each other. These overall motions are generally studied in classical mechanics, see for instance multibody dynamics.
Finally there are the processes at the contact interface: compression and adhesion in the direction perpendicular to the interface, and friction and micro-slip in the tangential directions.
The last aspect is the primary concern of contact mechanics. It is described in terms of so-called contact conditions.
For the direction perpendicular to the interface, the normal contact problem, adhesion effects are usually small (at larger spatial scales) and the following conditions are typically employed:
The gap $e_n$ between the two surfaces must be zero (contact) or strictly positive (separation, $e_n > 0$);
The normal stress $p_n$ acting on each body is zero (separation) or compressive ($p_n > 0$ in contact).
Mathematically: $e_n \ge 0,\; p_n \ge 0,\; e_n \cdot p_n = 0$. Here $e_n, p_n$ are functions that vary with the position along the bodies' surfaces.
In the tangential directions the following conditions are often used:
The local (tangential) shear stress $\vec{p} = (p_x, p_y)^{\mathsf{T}}$ (assuming the normal direction parallel to the $z$-axis) cannot exceed a certain position-dependent maximum, the so-called traction bound $g$;
Where the magnitude of the tangential traction falls below the traction bound, $\|\vec{p}\| < g$, the opposing surfaces adhere together and micro-slip vanishes, $\vec{s} = (s_x, s_y)^{\mathsf{T}} = \vec{0}$;
Micro-slip occurs where the tangential tractions are at the traction bound; the direction of the tangential traction is then opposite to the direction of micro-slip, $\vec{p} = -g\,\vec{s}/\|\vec{s}\|$.
The precise form of the traction bound is the so-called local friction law. For this Coulomb's (global) friction law is often applied locally: $\|\vec{p}\| \le g = \mu p_n$, with $\mu$ the friction coefficient. More detailed formulae are also possible, for instance with $\mu$ depending on temperature $T$, local sliding velocity $\|\vec{s}\|$, etc.
== Solutions for static cases ==
=== Rope on a bollard, the capstan equation ===
Consider a rope where equal forces (e.g., $F_{\text{hold}} = 400\,\mathrm{N}$) are exerted on both sides. By this the rope is stretched a bit and an internal tension $T$ is induced ($T = 400\,\mathrm{N}$ at every position along the rope). The rope is wrapped around a fixed item such as a bollard; it is bent and makes contact with the item's surface over a contact angle (e.g., $180^{\circ}$). Normal pressure comes into being between the rope and bollard, but no friction occurs yet. Next the force on one side of the bollard is increased to a higher value (e.g., $F_{\text{load}} = 600\,\mathrm{N}$). This does cause frictional shear stresses in the contact area. In the final situation the bollard exerts a friction force on the rope such that a static situation occurs.
The tension distribution in the rope in this final situation is described by the capstan equation, with solution:
$$\begin{aligned} T(\phi) &= T_{\text{hold}}, & \phi &\in \left[\phi_{\text{hold}}, \phi_{\text{intf}}\right] \\ T(\phi) &= T_{\text{load}}\, e^{-\mu\phi}, & \phi &\in \left[\phi_{\text{intf}}, \phi_{\text{load}}\right] \\ \phi_{\text{intf}} &= \frac{1}{\mu}\log\left(\frac{T_{\text{load}}}{T_{\text{hold}}}\right) \end{aligned}$$
The tension increases from $T_{\text{hold}}$ on the slack side ($\phi = \phi_{\text{hold}}$) to $T_{\text{load}}$ on the high side ($\phi = \phi_{\text{load}}$). When viewed from the high side, the tension drops exponentially until it reaches the lower load at $\phi = \phi_{\text{intf}}$. From there on it is constant at this value. The transition point $\phi_{\text{intf}}$ is determined by the ratio of the two loads and the friction coefficient. Here the tensions $T$ are in newtons and the angles $\phi$ in radians.
The tension $T$ in the rope in the final situation is increased with respect to the initial state. Therefore, the rope is elongated a bit. This means that not all surface particles of the rope can have held their initial position on the bollard surface. During the loading process, the rope slipped a little bit along the bollard surface in the slip area $\phi \in [\phi_{\text{intf}}, \phi_{\text{load}}]$. This slip is precisely large enough to produce the elongation that occurs in the final state. Note that there is no slipping going on in the final state; the term slip area refers to the slippage that occurred during the loading process. Note further that the location of the slip area depends on the initial state and the loading process. If the initial tension is $600\,\mathrm{N}$ and the tension is reduced to $400\,\mathrm{N}$ at the slack side, then the slip area occurs at the slack side of the contact area. For initial tensions between $400$ and $600\,\mathrm{N}$, there can be slip areas on both sides with a stick area in between.
=== Generalization for a rope lying on an arbitrary orthotropic surface ===
If a rope lies in equilibrium under tangential forces on a rough orthotropic surface, then the following three conditions (all of them) are satisfied:
This generalization was obtained by A. Konyukhov.
=== Sphere on a plane, the (3D) Cattaneo problem ===
Consider a sphere that is pressed onto a plane (half space) and then shifted over the plane's surface. If the sphere and plane are idealised as rigid bodies, then contact would occur in just a single point, and the sphere would not move until the tangential force that is applied reaches the maximum friction force. Then it starts sliding over the surface until the applied force is reduced again.
In reality, with elastic effects taken into consideration, the situation is much different. If an elastic sphere is pressed onto an elastic plane of the same material then both bodies deform, a circular contact area comes into being, and a (Hertzian) normal pressure distribution arises. The center of the sphere is moved down by a distance $\delta_n$ called the approach, which is equivalent to the maximum penetration of the undeformed surfaces. For a sphere of radius $R$ and elastic constants $E, \nu$ this Hertzian solution reads:
$$\begin{aligned} p_n(x,y) &= p_0\sqrt{1 - \frac{r^2}{a^2}} & r &= \sqrt{x^2 + y^2} \le a & a &= \sqrt{R\,\delta_n} \\ p_0 &= \frac{2}{\pi}E^*\sqrt{\frac{\delta_n}{R}} & F_n &= \frac{4}{3}E^*\sqrt{R}\,\delta_n^{3/2} & E^* &= \frac{E}{2\left(1-\nu^2\right)} \end{aligned}$$
Now consider that a tangential force $F_x$ is applied that is lower than the Coulomb friction bound $\mu F_n$. The center of the sphere will then be moved sideways by a small distance $\delta_x$ that is called the shift. A static equilibrium is obtained in which elastic deformations occur as well as frictional shear stresses in the contact interface. In this case, if the tangential force is reduced then the elastic deformations and shear stresses reduce as well. The sphere largely shifts back to its original position, except for frictional losses that arise due to local slip in the contact patch.
This contact problem was solved approximately by Cattaneo using an analytical approach. The stress distribution in the equilibrium state consists of two parts:
$$\begin{aligned} p_x(x,y) &= \mu p_0\left(\sqrt{1 - \frac{r^2}{a^2}} - \frac{c}{a}\sqrt{1 - \frac{r^2}{c^2}}\right) & 0 \le{} & r \le c \\ p_x(x,y) &= \mu p_n(x,y) & c \le{} & r \le a \\ p_x(x,y) &= 0 & a \le{} & r \end{aligned}$$
In the central, sticking region $0 \le r \le c$, the surface particles of the plane displace over $u_x = \delta_x/2$ to the right whereas the surface particles of the sphere displace over $u_x = -\delta_x/2$ to the left. Even though the sphere as a whole moves over $\delta_x$ relative to the plane, these surface particles did not move relative to each other. In the outer annulus $c \le r \le a$, the surface particles did move relative to each other. Their local shift is obtained as
$$s_x(x,y) = \delta_x + u_x^{\text{sphere}}(x,y) - u_x^{\text{plane}}(x,y)$$
This shift $s_x(x,y)$ is precisely large enough that a static equilibrium is obtained with shear stresses at the traction bound in this so-called slip area.
So, during the tangential loading of the sphere, partial sliding occurs. The contact area is thus divided into a slip area where the surfaces move relative to each other and a stick area where they do not. In the equilibrium state no more sliding is going on.
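The size of the stick region can be made concrete with a small sketch. For this sphere-on-plane (Cattaneo–Mindlin) problem, the stick-zone radius is classically $c = a\,(1 - F_x/(\mu F_n))^{1/3}$; all numerical values below are assumptions for illustration.

```python
import numpy as np

# Assumed loading for the Cattaneo problem
mu = 0.3        # friction coefficient
Fn = 100.0      # normal force, N
Fx = 20.0       # tangential force, N (must stay below mu*Fn)
a = 1.0e-3      # Hertzian contact radius, m (assumed)

assert Fx < mu * Fn

# Radius of the central stick region: c/a = (1 - Fx/(mu*Fn))^(1/3)
c = a * (1.0 - Fx / (mu * Fn)) ** (1.0 / 3.0)
print(c / a)    # ~ 0.87: most of the contact sticks, an outer annulus slips

# As Fx approaches mu*Fn the stick region shrinks to a point
# and full sliding begins
print((1.0 - 0.999) ** (1.0 / 3.0))   # c/a ~ 0.1 at Fx = 0.999*mu*Fn
```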
== Solutions for dynamic sliding problems ==
The solution of a contact problem consists of the state at the interface (where the contact is, division of the contact area into stick and slip zones, and the normal and shear stress distributions) plus the elastic field in the bodies' interiors. This solution depends on the history of the contact. This can be seen by extension of the Cattaneo problem described above.
In the Cattaneo problem, the sphere is first pressed onto the plane and then shifted tangentially. This yields partial slip as described above.
If the sphere is first shifted tangentially and then pressed onto the plane, then there is no tangential displacement difference between the opposing surfaces and consequently there is no tangential stress in the contact interface.
If the approach in normal direction and tangential shift are increased simultaneously ("oblique compression") then a situation can be achieved with tangential stress but without local slip.
This demonstrates that the state in the contact interface is not only dependent on the relative positions of the two bodies, but also on their motion history. Another example of this occurs if the sphere is shifted back to its original position. Initially there was no tangential stress in the contact interface. After the initial shift micro-slip has occurred. This micro-slip is not entirely undone by shifting back. So in the final situation tangential stresses remain in the interface, in what looks like an identical configuration as the original one.
The influence of friction on dynamic contacts (impacts) is considered in detail in the literature.
== Solution of rolling contact problems ==
Rolling contact problems are dynamic problems in which the contacting bodies are continuously moving with respect to each other. A difference to dynamic sliding contact problems is that there is more variety in the state of different surface particles. Whereas the contact patch in a sliding problem continuously consists of more or less the same particles, in a rolling contact problem particles enter and leave the contact patch incessantly. Moreover, in a sliding problem the surface particles in the contact patch are all subjected to more or less the same tangential shift everywhere, whereas in a rolling problem the surface particles are stressed in rather different ways. They are free of stress when entering the contact patch, then stick to a particle of the opposing surface, are strained by the overall motion difference between the two bodies, until the local traction bound is exceeded and local slip sets in. This process is in different stages for different parts of the contact area.
If the overall motion of the bodies is constant, then an overall steady state may be attained. Here the state of each surface particle is varying in time, but the overall distribution can be constant. This is formalised by using a coordinate system that is moving along with the contact patch.
=== Cylinder rolling on a plane, the (2D) Carter-Fromm solution ===
Consider a cylinder that is rolling over a plane (half-space) under steady conditions, with a time-independent longitudinal creepage $\xi$. (Relatively) far away from the ends of the cylinders a situation of plane strain occurs and the problem is 2-dimensional.
If the cylinder and plane consist of the same materials then the normal contact problem is unaffected by the shear stress. The contact area is a strip $x \in [-a, a]$, and the pressure is described by the (2D) Hertz solution.
$$\begin{aligned} p_n(x) &= \frac{p_0}{a}\sqrt{a^2 - x^2} & |x| &\le a & a^2 &= \frac{4 F_n R}{\pi E^*} \\ p_0 &= \frac{2 F_n}{\pi a} & & & E^* &= \frac{E}{2\left(1-\nu^2\right)} \end{aligned}$$
The distribution of the shear stress is described by the Carter-Fromm solution. It consists of an adhesion area at the leading edge of the contact area and a slip area at the trailing edge. The length of the adhesion area is denoted $2a'$. Further the adhesion coordinate is introduced by $x' = x + a - a'$. In case of a positive force $F_x > 0$ (negative creepage $\xi < 0$) it is:
$$\begin{aligned} p_x(x) &= 0 & |x| &\ge a \\ p_x(x) &= \frac{\mu p_0}{a}\left(\sqrt{a^2 - x^2} - \sqrt{a'^2 - x'^2}\right) & a - 2a' \le{} & x \le a \\ p_x(x) &= \mu p_n(x) & & x \le a - 2a' \end{aligned}$$
The size of the adhesion area depends on the creepage, the wheel radius and the friction coefficient.
$$\begin{aligned} a' &= a\sqrt{1 - \frac{|F_x|}{\mu F_n}}, & &\text{for } |F_x| \le \mu F_n \\ \xi &= -\operatorname{sign}(F_x)\,\frac{\mu (a - a')}{R}, & &\text{i.e. } |\xi| \le \frac{\mu a}{R} \\ F_x &= -\operatorname{sign}(\xi)\,\mu F_n\left(1 - \left(1 - \frac{R|\xi|}{\mu a}\right)^2\right) \end{aligned}$$
For larger creepages, $a' = 0$, such that full sliding occurs.
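A sketch of the resulting creep-force curve (Python; all parameter values are assumptions, and the sign inside the square follows the $a'$-$\xi$ relations above so that the force saturates at the friction bound):

```python
import numpy as np

# Assumed parameters for a 2D Carter rolling contact
mu, Fn = 0.3, 1.0e5       # friction coefficient, normal force per unit width
a, R = 6.0e-3, 0.46       # contact half-width and wheel radius, m

def carter_force(xi):
    """Tangential force versus longitudinal creepage xi, following the
    creep-force relation above; saturates at mu*Fn for |xi| >= mu*a/R."""
    xi_sat = mu * a / R
    frac = min(abs(xi) / xi_sat, 1.0)
    return -np.sign(xi) * mu * Fn * (1.0 - (1.0 - frac) ** 2)

for xi in (-4e-3, -2e-3, -1e-3, 1e-3):
    print(xi, carter_force(xi))
# Small creepages give a nearly linear relation; beyond mu*a/R the
# force is saturated at the friction bound mu*Fn (full sliding).
```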
== Half-space based approaches ==
When considering contact problems at the intermediate spatial scales, the small-scale material inhomogeneities and surface roughness are ignored. The bodies are considered as consisting of smooth surfaces and homogeneous materials. A continuum approach is taken where the stresses, strains and displacements are described by (piecewise) continuous functions.
The half-space approach is an elegant solution strategy for so-called "smooth-edged" or "concentrated" contact problems.
If a massive elastic body is loaded on a small section of its surface, then the elastic stresses attenuate proportional to $1/\mathit{distance}^2$ and the elastic displacements by $1/\mathit{distance}$ when one moves away from this surface area.
If a body has no sharp corners in or near the contact region, then its response to a surface load may be approximated well by the response of an elastic half-space (e.g. all points $(x,y,z)^{\mathsf{T}} \in \mathbb{R}^3$ with $z > 0$).
The elastic half-space problem is solved analytically, see the Boussinesq-Cerruti solution.
Due to the linearity of this approach, multiple partial solutions may be super-imposed.
Using the fundamental solution for the half-space, the full 3D contact problem is reduced to a 2D problem for the bodies' bounding surfaces.
A further simplification occurs if the two bodies are “geometrically and elastically alike”. In general, stress inside a body in one direction induces displacements in perpendicular directions too. Consequently, there is an interaction between the normal stress and tangential displacements in the contact problem, and an interaction between the tangential stress and normal displacements. But if the normal stress in the contact interface induces the same tangential displacements in both contacting bodies, then there is no relative tangential displacement of the two surfaces. In that case, the normal and tangential contact problems are decoupled. If this is the case then the two bodies are called quasi-identical. This happens for instance if the bodies are mirror-symmetric with respect to the contact plane and have the same elastic constants.
Classical solutions based on the half-space approach are:
Hertz solved the contact problem in the absence of friction, for a simple geometry (curved surfaces with constant radii of curvature).
Carter considered the rolling contact between a cylinder and a plane, as described above. A complete analytical solution is provided for the tangential traction.
Cattaneo considered the compression and shifting of two spheres, as described above. Note that this analytical solution is approximate. In reality small tangential tractions $p_y$ occur which are ignored.
== See also ==
== References ==
== External links ==
[1] Biography of Prof.dr.ir. J.J. Kalker (Delft University of Technology).
[2] Kalker's Hertzian/non-Hertzian CONTACT software.
In mechanics, a displacement field is the assignment of displacement vectors for all points in a region or body that are displaced from one state to another. A displacement vector specifies the position of a point or a particle in reference to an origin or to a previous position. For example, a displacement field may be used to describe the effects of deformation on a solid body.
== Formulation ==
Before considering displacement, the state before deformation must be defined. It is a state in which the coordinates of all points are known and described by the function:
$$\vec{R}_0 : \Omega \to P$$
where
$\vec{R}_0$ is a placement vector
$\Omega$ are all the points of the body
$P$ are all the points in the space in which the body is present
Most often it is a state of the body in which no forces are applied.
Then given any other state of this body in which the coordinates of all its points are described as $\vec{R}_1$, the displacement field is the difference between two body states:
$$\vec{u} = \vec{R}_1 - \vec{R}_0$$
where $\vec{u}$ is a displacement field, which for each point of the body specifies a displacement vector.
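A minimal sketch (NumPy; both placements are assumed sample data) of a displacement field as the pointwise difference of two body states:

```python
import numpy as np

# Two assumed body states, sampled at a few material points:
# R0 = reference placement, R1 = displaced placement
R0 = np.array([[0.0, 0.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])
R1 = np.array([[0.1, 0.0, 0.0],
               [1.2, 0.0, 0.0],
               [0.1, 1.1, 0.0]])

# The displacement field is the pointwise difference of the two states
u = R1 - R0
print(u)
# A uniform u would indicate a rigid translation; here u varies from
# point to point, so the motion is not a pure translation.
```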
== Decomposition ==
The displacement of a body has two components: a rigid-body displacement and a deformation.
A rigid-body displacement consists of a translation and rotation of the body without changing its shape or size.
Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration
κ
0
(
B
)
{\displaystyle \kappa _{0}({\mathcal {B}})}
to a current or deformed configuration
κ
t
(
B
)
{\displaystyle \kappa _{t}({\mathcal {B}})}
(Figure 1).
A change in the configuration of a continuum body can be described by a displacement field. A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. The distance between any two particles changes if and only if deformation has occurred. If displacement occurs without deformation, then it is a rigid-body displacement.
== Displacement gradient tensor ==
Two types of displacement gradient tensor may be defined, following the Lagrangian and Eulerian specifications.
The displacement of particles indexed by variable i may be expressed as follows. The vector joining the positions of a particle in the undeformed configuration $P_i$ and deformed configuration $p_i$ is called the displacement vector, $p_i - P_i$, denoted $u_i$ or $U_i$ below.
=== Material coordinates (Lagrangian description) ===
Using $\mathbf{X}$ in place of $P_i$ and $\mathbf{x}$ in place of $p_i$, both of which are vectors from the origin of the coordinate system to each respective point, we have the Lagrangian description of the displacement vector:
$$\mathbf{u}(\mathbf{X}, t) = u_i \mathbf{e}_i$$
where $\mathbf{e}_i$ are the unit vectors that define the basis of the material (body-frame) coordinate system.
Expressed in terms of the material coordinates, i.e. $\mathbf{u}$ as a function of $\mathbf{X}$, the displacement field is:
$$\mathbf{u}(\mathbf{X}, t) = \mathbf{b}(t) + \mathbf{x}(\mathbf{X}, t) - \mathbf{X} \qquad\text{or}\qquad u_i = \alpha_{iJ} b_J + x_i - \alpha_{iJ} X_J$$
where $\mathbf{b}(t)$ is the displacement vector representing rigid-body translation.
The partial derivative of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor $\nabla_{\mathbf{X}}\mathbf{u}$. Thus we have
$$\nabla_{\mathbf{X}}\mathbf{u} = \nabla_{\mathbf{X}}\mathbf{x} - \mathbf{R} = \mathbf{F} - \mathbf{R} \qquad\text{or}\qquad \frac{\partial u_i}{\partial X_K} = \frac{\partial x_i}{\partial X_K} - \alpha_{iK} = F_{iK} - \alpha_{iK}$$
where $\mathbf{F}$ is the material deformation gradient tensor and $\mathbf{R}$ is a rotation.
=== Spatial coordinates (Eulerian description) ===
In the Eulerian description, the vector extending from a particle $P$ in the undeformed configuration to its location in the deformed configuration is called the displacement vector:
$$\mathbf{U}(\mathbf{x}, t) = U_J \mathbf{E}_J$$
where $\mathbf{E}_J$ are the orthonormal unit vectors that define the basis of the spatial (lab-frame) coordinate system.
Expressed in terms of spatial coordinates, i.e. $\mathbf{U}$ as a function of $\mathbf{x}$, the displacement field is:
$$\mathbf{U}(\mathbf{x}, t) = \mathbf{b}(t) + \mathbf{x} - \mathbf{X}(\mathbf{x}, t) \qquad\text{or}\qquad U_J = b_J + \alpha_{Ji} x_i - X_J$$
The spatial derivative, i.e., the partial derivative of the displacement vector with respect to the spatial coordinates, yields the spatial displacement gradient tensor $\nabla_{\mathbf{x}}\mathbf{U}$. Thus we have
$$\nabla_{\mathbf{x}}\mathbf{U} = \mathbf{R}^{\mathsf T} - \nabla_{\mathbf{x}}\mathbf{X} = \mathbf{R}^{\mathsf T} - \mathbf{F}^{-1} \qquad\text{or}\qquad \frac{\partial U_J}{\partial x_k} = \alpha_{Jk} - \frac{\partial X_J}{\partial x_k} = \alpha_{Jk} - F^{-1}_{Jk}$$
where $\mathbf{F}^{-1} = \mathbf{H}$ is the spatial deformation gradient tensor.
=== Relationship between the material and spatial coordinate systems ===
The $\alpha_{Ji}$ are the direction cosines between the material and spatial coordinate systems with unit vectors $\mathbf{E}_J$ and $\mathbf{e}_i$, respectively. Thus
$$\mathbf{E}_J \cdot \mathbf{e}_i = \alpha_{Ji} = \alpha_{iJ}$$
The relationship between $u_i$ and $U_J$ is then given by
$$u_i = \alpha_{iJ} U_J \qquad\text{or}\qquad U_J = \alpha_{Ji} u_i$$
Knowing that $\mathbf{e}_i = \alpha_{iJ} \mathbf{E}_J$, we then have
$$\mathbf{u}(\mathbf{X}, t) = u_i \mathbf{e}_i = u_i(\alpha_{iJ} \mathbf{E}_J) = U_J \mathbf{E}_J = \mathbf{U}(\mathbf{x}, t)$$
=== Combining the coordinate systems of deformed and undeformed configurations ===
It is common to superimpose the coordinate systems for the deformed and undeformed configurations, which results in $\mathbf{b} = 0$, and the direction cosines become Kronecker deltas, i.e.,
$$\mathbf{E}_J \cdot \mathbf{e}_i = \delta_{Ji} = \delta_{iJ}$$
Thus in material (undeformed) coordinates, the displacement may be expressed as:
$$\mathbf{u}(\mathbf{X}, t) = \mathbf{x}(\mathbf{X}, t) - \mathbf{X} \qquad\text{or}\qquad u_i = x_i - \delta_{iJ} X_J$$
And in spatial (deformed) coordinates, the displacement may be expressed as:
$$\mathbf{U}(\mathbf{x}, t) = \mathbf{x} - \mathbf{X}(\mathbf{x}, t) \qquad\text{or}\qquad U_J = \delta_{Ji} x_i - X_J$$
== See also ==
Stress
Strain
== References == | Wikipedia/Spatial_displacement_gradient_tensor |
The derivatives of scalars, vectors, and second-order tensors with respect to second-order tensors are of considerable use in continuum mechanics. These derivatives are used in the theories of nonlinear elasticity and plasticity, particularly in the design of algorithms for numerical simulations.
The directional derivative provides a systematic way of finding these derivatives.
== Derivatives with respect to vectors and second-order tensors ==
The definitions of directional derivatives for various situations are given below. It is assumed that the functions are sufficiently smooth that derivatives can be taken.
=== Derivatives of scalar valued functions of vectors ===
Let f(v) be a real valued function of the vector v. Then the derivative of f(v) with respect to v (or at v) is the vector defined through its dot product with any vector u being
$$\frac{\partial f}{\partial\mathbf{v}}\cdot\mathbf{u} = Df(\mathbf{v})[\mathbf{u}] = \left[\frac{d}{d\alpha}\,f(\mathbf{v} + \alpha\,\mathbf{u})\right]_{\alpha=0}$$
for all vectors u. The above dot product yields a scalar, and if u is a unit vector gives the directional derivative of f at v, in the u direction.
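The bracketed definition above is directly checkable numerically. The following sketch (an illustration, not from the article) compares the analytic derivative of the assumed example function f(v) = v · v, whose derivative is 2v, against a central finite difference in α:

```python
import numpy as np

def f(v):                       # assumed example: f(v) = v . v
    return v @ v

v = np.array([1.0, 2.0, 3.0])
u = np.array([0.5, -1.0, 2.0])

grad_f = 2.0 * v                # analytic derivative of v . v
h = 1e-6                        # central difference in alpha at alpha = 0
directional = (f(v + h * u) - f(v - h * u)) / (2.0 * h)

print(grad_f @ u, directional)  # the two numbers agree to ~1e-9
```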
Properties:
If $f(\mathbf{v}) = f_1(\mathbf{v}) + f_2(\mathbf{v})$ then
$$\frac{\partial f}{\partial\mathbf{v}}\cdot\mathbf{u} = \left(\frac{\partial f_1}{\partial\mathbf{v}} + \frac{\partial f_2}{\partial\mathbf{v}}\right)\cdot\mathbf{u}$$
If $f(\mathbf{v}) = f_1(\mathbf{v})\,f_2(\mathbf{v})$ then
$$\frac{\partial f}{\partial\mathbf{v}}\cdot\mathbf{u} = \left(\frac{\partial f_1}{\partial\mathbf{v}}\cdot\mathbf{u}\right) f_2(\mathbf{v}) + f_1(\mathbf{v})\left(\frac{\partial f_2}{\partial\mathbf{v}}\cdot\mathbf{u}\right)$$
If $f(\mathbf{v}) = f_1(f_2(\mathbf{v}))$ then
$$\frac{\partial f}{\partial\mathbf{v}}\cdot\mathbf{u} = \frac{\partial f_1}{\partial f_2}\,\frac{\partial f_2}{\partial\mathbf{v}}\cdot\mathbf{u}$$
=== Derivatives of vector valued functions of vectors ===
Let f(v) be a vector valued function of the vector v. Then the derivative of f(v) with respect to v (or at v) is the second order tensor defined through its dot product with any vector u being
$$\frac{\partial\mathbf{f}}{\partial\mathbf{v}}\cdot\mathbf{u} = D\mathbf{f}(\mathbf{v})[\mathbf{u}] = \left[\frac{d}{d\alpha}\,\mathbf{f}(\mathbf{v} + \alpha\,\mathbf{u})\right]_{\alpha=0}$$
for all vectors u. The above dot product yields a vector, and if u is a unit vector it gives the directional derivative of f at v in the direction u.
Properties:
If $\mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{v}) + \mathbf{f}_2(\mathbf{v})$ then
$$\frac{\partial\mathbf{f}}{\partial\mathbf{v}}\cdot\mathbf{u} = \left(\frac{\partial\mathbf{f}_1}{\partial\mathbf{v}} + \frac{\partial\mathbf{f}_2}{\partial\mathbf{v}}\right)\cdot\mathbf{u}$$
If $\mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{v})\times\mathbf{f}_2(\mathbf{v})$ then
$$\frac{\partial\mathbf{f}}{\partial\mathbf{v}}\cdot\mathbf{u} = \left(\frac{\partial\mathbf{f}_1}{\partial\mathbf{v}}\cdot\mathbf{u}\right)\times\mathbf{f}_2(\mathbf{v}) + \mathbf{f}_1(\mathbf{v})\times\left(\frac{\partial\mathbf{f}_2}{\partial\mathbf{v}}\cdot\mathbf{u}\right)$$
If $\mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{f}_2(\mathbf{v}))$ then
$$\frac{\partial\mathbf{f}}{\partial\mathbf{v}}\cdot\mathbf{u} = \frac{\partial\mathbf{f}_1}{\partial\mathbf{f}_2}\cdot\left(\frac{\partial\mathbf{f}_2}{\partial\mathbf{v}}\cdot\mathbf{u}\right)$$
=== Derivatives of scalar valued functions of second-order tensors ===
Let $f(\boldsymbol{S})$ be a real valued function of the second order tensor $\boldsymbol{S}$. Then the derivative of $f(\boldsymbol{S})$ with respect to $\boldsymbol{S}$ (or at $\boldsymbol{S}$) in the direction $\boldsymbol{T}$ is the second order tensor defined as
$$\frac{\partial f}{\partial\boldsymbol{S}} : \boldsymbol{T} = Df(\boldsymbol{S})[\boldsymbol{T}] = \left[\frac{d}{d\alpha}\,f(\boldsymbol{S} + \alpha\,\boldsymbol{T})\right]_{\alpha=0}$$
for all second order tensors $\boldsymbol{T}$.
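The same finite-difference check works for scalar functions of tensors. Here is a small sketch (illustrative, with an assumed function f(S) = tr(S²), whose derivative is 2Sᵀ) verifying the double contraction against the α-derivative:

```python
import numpy as np

def f(S):                        # assumed example: f(S) = tr(S S)
    return np.trace(S @ S)

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

df_dS = 2.0 * S.T                # analytic derivative of tr(S^2)
h = 1e-6
directional = (f(S + h * T) - f(S - h * T)) / (2.0 * h)

# df/dS : T is the full double contraction sum_ij (df/dS)_ij T_ij
print(np.tensordot(df_dS, T), directional)
```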
Properties:
If $f(\boldsymbol{S}) = f_1(\boldsymbol{S}) + f_2(\boldsymbol{S})$ then
$$\frac{\partial f}{\partial\boldsymbol{S}} : \boldsymbol{T} = \left(\frac{\partial f_1}{\partial\boldsymbol{S}} + \frac{\partial f_2}{\partial\boldsymbol{S}}\right) : \boldsymbol{T}$$
If $f(\boldsymbol{S}) = f_1(\boldsymbol{S})\,f_2(\boldsymbol{S})$ then
$$\frac{\partial f}{\partial\boldsymbol{S}} : \boldsymbol{T} = \left(\frac{\partial f_1}{\partial\boldsymbol{S}} : \boldsymbol{T}\right) f_2(\boldsymbol{S}) + f_1(\boldsymbol{S})\left(\frac{\partial f_2}{\partial\boldsymbol{S}} : \boldsymbol{T}\right)$$
If $f(\boldsymbol{S}) = f_1(f_2(\boldsymbol{S}))$ then
$$\frac{\partial f}{\partial\boldsymbol{S}} : \boldsymbol{T} = \frac{\partial f_1}{\partial f_2}\left(\frac{\partial f_2}{\partial\boldsymbol{S}} : \boldsymbol{T}\right)$$
=== Derivatives of tensor valued functions of second-order tensors ===
Let $\boldsymbol{F}(\boldsymbol{S})$ be a second order tensor valued function of the second order tensor $\boldsymbol{S}$. Then the derivative of $\boldsymbol{F}(\boldsymbol{S})$ with respect to $\boldsymbol{S}$ (or at $\boldsymbol{S}$) in the direction $\boldsymbol{T}$ is the fourth order tensor defined as
$$\frac{\partial\boldsymbol{F}}{\partial\boldsymbol{S}} : \boldsymbol{T} = D\boldsymbol{F}(\boldsymbol{S})[\boldsymbol{T}] = \left[\frac{d}{d\alpha}\,\boldsymbol{F}(\boldsymbol{S} + \alpha\,\boldsymbol{T})\right]_{\alpha=0}$$
for all second order tensors $\boldsymbol{T}$.
Properties:
If $\boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{S}) + \boldsymbol{F}_2(\boldsymbol{S})$ then
$$\frac{\partial\boldsymbol{F}}{\partial\boldsymbol{S}} : \boldsymbol{T} = \left(\frac{\partial\boldsymbol{F}_1}{\partial\boldsymbol{S}} + \frac{\partial\boldsymbol{F}_2}{\partial\boldsymbol{S}}\right) : \boldsymbol{T}$$
If $\boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{S})\cdot\boldsymbol{F}_2(\boldsymbol{S})$ then
$$\frac{\partial\boldsymbol{F}}{\partial\boldsymbol{S}} : \boldsymbol{T} = \left(\frac{\partial\boldsymbol{F}_1}{\partial\boldsymbol{S}} : \boldsymbol{T}\right)\cdot\boldsymbol{F}_2(\boldsymbol{S}) + \boldsymbol{F}_1(\boldsymbol{S})\cdot\left(\frac{\partial\boldsymbol{F}_2}{\partial\boldsymbol{S}} : \boldsymbol{T}\right)$$
If $\boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{F}_2(\boldsymbol{S}))$ then
$$\frac{\partial\boldsymbol{F}}{\partial\boldsymbol{S}} : \boldsymbol{T} = \frac{\partial\boldsymbol{F}_1}{\partial\boldsymbol{F}_2} : \left(\frac{\partial\boldsymbol{F}_2}{\partial\boldsymbol{S}} : \boldsymbol{T}\right)$$
If $f(\boldsymbol{S}) = f_1(\boldsymbol{F}_2(\boldsymbol{S}))$ then
$$\frac{\partial f}{\partial\boldsymbol{S}} : \boldsymbol{T} = \frac{\partial f_1}{\partial\boldsymbol{F}_2} : \left(\frac{\partial\boldsymbol{F}_2}{\partial\boldsymbol{S}} : \boldsymbol{T}\right)$$
== Gradient of a tensor field ==
The gradient, $\boldsymbol{\nabla}\boldsymbol{T}$, of a tensor field $\boldsymbol{T}(\mathbf{x})$ in the direction of an arbitrary constant vector $\mathbf{c}$ is defined as:
$$\boldsymbol{\nabla}\boldsymbol{T}\cdot\mathbf{c} = \lim_{\alpha\to 0}\frac{d}{d\alpha}\,\boldsymbol{T}(\mathbf{x} + \alpha\,\mathbf{c})$$
The gradient of a tensor field of order n is a tensor field of order n+1.
=== Cartesian coordinates ===
If $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ are the basis vectors in a Cartesian coordinate system, with coordinates of points denoted by ($x_1, x_2, x_3$), then the gradient of the tensor field $\boldsymbol{T}$ is given by
$$\boldsymbol{\nabla}\boldsymbol{T} = \frac{\partial\boldsymbol{T}}{\partial x_i}\otimes\mathbf{e}_i$$
Since the basis vectors do not vary in a Cartesian coordinate system, we have the following relations for the gradients of a scalar field $\phi$, a vector field $\mathbf{v}$, and a second-order tensor field $\boldsymbol{S}$:
$$\begin{aligned}
\boldsymbol{\nabla}\phi &= \frac{\partial\phi}{\partial x_i}\,\mathbf{e}_i = \phi_{,i}\,\mathbf{e}_i \\
\boldsymbol{\nabla}\mathbf{v} &= \frac{\partial(v_j\mathbf{e}_j)}{\partial x_i}\otimes\mathbf{e}_i = \frac{\partial v_j}{\partial x_i}\,\mathbf{e}_j\otimes\mathbf{e}_i = v_{j,i}\,\mathbf{e}_j\otimes\mathbf{e}_i \\
\boldsymbol{\nabla}\boldsymbol{S} &= \frac{\partial(S_{jk}\,\mathbf{e}_j\otimes\mathbf{e}_k)}{\partial x_i}\otimes\mathbf{e}_i = \frac{\partial S_{jk}}{\partial x_i}\,\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_i = S_{jk,i}\,\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_i
\end{aligned}$$
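These component formulas can be reproduced numerically. The sketch below (illustrative; the vector field is an assumption for the example) builds the matrix of components $v_{j,i}$ of $\boldsymbol{\nabla}\mathbf{v}$ by central differences:

```python
import numpy as np

def v(x):                        # assumed vector field v = (x1^2, x1*x2, x3)
    return np.array([x[0]**2, x[0]*x[1], x[2]])

x = np.array([1.0, 2.0, 3.0])
h = 1e-6

grad_v = np.zeros((3, 3))        # grad_v[j, i] = dv_j / dx_i
for i in range(3):
    e = np.zeros(3); e[i] = 1.0
    grad_v[:, i] = (v(x + h * e) - v(x - h * e)) / (2.0 * h)

print(grad_v)                    # components v_{j,i} of the gradient tensor
```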
=== Curvilinear coordinates ===
If $\mathbf{g}^1, \mathbf{g}^2, \mathbf{g}^3$ are the contravariant basis vectors in a curvilinear coordinate system, with coordinates of points denoted by ($\xi^1, \xi^2, \xi^3$), then the gradient of the tensor field $\boldsymbol{T}$ is given by
$$\boldsymbol{\nabla}\boldsymbol{T} = \frac{\partial\boldsymbol{T}}{\partial\xi^i}\otimes\mathbf{g}^i$$
From this definition we have the following relations for the gradients of a scalar field $\phi$, a vector field $\mathbf{v}$, and a second-order tensor field $\boldsymbol{S}$:
$$\begin{aligned}
\boldsymbol{\nabla}\phi &= \frac{\partial\phi}{\partial\xi^i}\,\mathbf{g}^i \\
\boldsymbol{\nabla}\mathbf{v} &= \frac{\partial(v^j\mathbf{g}_j)}{\partial\xi^i}\otimes\mathbf{g}^i = \left(\frac{\partial v^j}{\partial\xi^i} + v^k\,\Gamma_{ik}^j\right)\mathbf{g}_j\otimes\mathbf{g}^i = \left(\frac{\partial v_j}{\partial\xi^i} - v_k\,\Gamma_{ij}^k\right)\mathbf{g}^j\otimes\mathbf{g}^i \\
\boldsymbol{\nabla}\boldsymbol{S} &= \frac{\partial(S_{jk}\,\mathbf{g}^j\otimes\mathbf{g}^k)}{\partial\xi^i}\otimes\mathbf{g}^i = \left(\frac{\partial S_{jk}}{\partial\xi^i} - S_{lk}\,\Gamma_{ij}^l - S_{jl}\,\Gamma_{ik}^l\right)\mathbf{g}^j\otimes\mathbf{g}^k\otimes\mathbf{g}^i
\end{aligned}$$
where the Christoffel symbol $\Gamma_{ij}^k$ is defined using
$$\Gamma_{ij}^k\,\mathbf{g}_k = \frac{\partial\mathbf{g}_i}{\partial\xi^j} \quad\implies\quad \Gamma_{ij}^k = \frac{\partial\mathbf{g}_i}{\partial\xi^j}\cdot\mathbf{g}^k = -\mathbf{g}_i\cdot\frac{\partial\mathbf{g}^k}{\partial\xi^j}$$
==== Cylindrical polar coordinates ====
In cylindrical coordinates, the gradient is given by
$$\boldsymbol{\nabla}\phi = \frac{\partial\phi}{\partial r}\,\mathbf{e}_r + \frac{1}{r}\frac{\partial\phi}{\partial\theta}\,\mathbf{e}_\theta + \frac{\partial\phi}{\partial z}\,\mathbf{e}_z$$
$$\begin{aligned}
\boldsymbol{\nabla}\mathbf{v} ={}& \frac{\partial v_r}{\partial r}\,\mathbf{e}_r\otimes\mathbf{e}_r + \frac{1}{r}\left(\frac{\partial v_r}{\partial\theta} - v_\theta\right)\mathbf{e}_r\otimes\mathbf{e}_\theta + \frac{\partial v_r}{\partial z}\,\mathbf{e}_r\otimes\mathbf{e}_z \\
&+ \frac{\partial v_\theta}{\partial r}\,\mathbf{e}_\theta\otimes\mathbf{e}_r + \frac{1}{r}\left(\frac{\partial v_\theta}{\partial\theta} + v_r\right)\mathbf{e}_\theta\otimes\mathbf{e}_\theta + \frac{\partial v_\theta}{\partial z}\,\mathbf{e}_\theta\otimes\mathbf{e}_z \\
&+ \frac{\partial v_z}{\partial r}\,\mathbf{e}_z\otimes\mathbf{e}_r + \frac{1}{r}\frac{\partial v_z}{\partial\theta}\,\mathbf{e}_z\otimes\mathbf{e}_\theta + \frac{\partial v_z}{\partial z}\,\mathbf{e}_z\otimes\mathbf{e}_z
\end{aligned}$$
$$\begin{aligned}
\boldsymbol{\nabla}\boldsymbol{S} ={}& \frac{\partial S_{rr}}{\partial r}\,\mathbf{e}_r\otimes\mathbf{e}_r\otimes\mathbf{e}_r + \frac{\partial S_{rr}}{\partial z}\,\mathbf{e}_r\otimes\mathbf{e}_r\otimes\mathbf{e}_z + \frac{1}{r}\left[\frac{\partial S_{rr}}{\partial\theta} - (S_{\theta r} + S_{r\theta})\right]\mathbf{e}_r\otimes\mathbf{e}_r\otimes\mathbf{e}_\theta \\
&+ \frac{\partial S_{r\theta}}{\partial r}\,\mathbf{e}_r\otimes\mathbf{e}_\theta\otimes\mathbf{e}_r + \frac{\partial S_{r\theta}}{\partial z}\,\mathbf{e}_r\otimes\mathbf{e}_\theta\otimes\mathbf{e}_z + \frac{1}{r}\left[\frac{\partial S_{r\theta}}{\partial\theta} + (S_{rr} - S_{\theta\theta})\right]\mathbf{e}_r\otimes\mathbf{e}_\theta\otimes\mathbf{e}_\theta \\
&+ \frac{\partial S_{rz}}{\partial r}\,\mathbf{e}_r\otimes\mathbf{e}_z\otimes\mathbf{e}_r + \frac{\partial S_{rz}}{\partial z}\,\mathbf{e}_r\otimes\mathbf{e}_z\otimes\mathbf{e}_z + \frac{1}{r}\left[\frac{\partial S_{rz}}{\partial\theta} - S_{\theta z}\right]\mathbf{e}_r\otimes\mathbf{e}_z\otimes\mathbf{e}_\theta \\
&+ \frac{\partial S_{\theta r}}{\partial r}\,\mathbf{e}_\theta\otimes\mathbf{e}_r\otimes\mathbf{e}_r + \frac{\partial S_{\theta r}}{\partial z}\,\mathbf{e}_\theta\otimes\mathbf{e}_r\otimes\mathbf{e}_z + \frac{1}{r}\left[\frac{\partial S_{\theta r}}{\partial\theta} + (S_{rr} - S_{\theta\theta})\right]\mathbf{e}_\theta\otimes\mathbf{e}_r\otimes\mathbf{e}_\theta \\
&+ \frac{\partial S_{\theta\theta}}{\partial r}\,\mathbf{e}_\theta\otimes\mathbf{e}_\theta\otimes\mathbf{e}_r + \frac{\partial S_{\theta\theta}}{\partial z}\,\mathbf{e}_\theta\otimes\mathbf{e}_\theta\otimes\mathbf{e}_z + \frac{1}{r}\left[\frac{\partial S_{\theta\theta}}{\partial\theta} + (S_{r\theta} + S_{\theta r})\right]\mathbf{e}_\theta\otimes\mathbf{e}_\theta\otimes\mathbf{e}_\theta \\
&+ \frac{\partial S_{\theta z}}{\partial r}\,\mathbf{e}_\theta\otimes\mathbf{e}_z\otimes\mathbf{e}_r + \frac{\partial S_{\theta z}}{\partial z}\,\mathbf{e}_\theta\otimes\mathbf{e}_z\otimes\mathbf{e}_z + \frac{1}{r}\left[\frac{\partial S_{\theta z}}{\partial\theta} + S_{rz}\right]\mathbf{e}_\theta\otimes\mathbf{e}_z\otimes\mathbf{e}_\theta \\
&+ \frac{\partial S_{zr}}{\partial r}\,\mathbf{e}_z\otimes\mathbf{e}_r\otimes\mathbf{e}_r + \frac{\partial S_{zr}}{\partial z}\,\mathbf{e}_z\otimes\mathbf{e}_r\otimes\mathbf{e}_z + \frac{1}{r}\left[\frac{\partial S_{zr}}{\partial\theta} - S_{z\theta}\right]\mathbf{e}_z\otimes\mathbf{e}_r\otimes\mathbf{e}_\theta \\
&+ \frac{\partial S_{z\theta}}{\partial r}\,\mathbf{e}_z\otimes\mathbf{e}_\theta\otimes\mathbf{e}_r + \frac{\partial S_{z\theta}}{\partial z}\,\mathbf{e}_z\otimes\mathbf{e}_\theta\otimes\mathbf{e}_z + \frac{1}{r}\left[\frac{\partial S_{z\theta}}{\partial\theta} + S_{zr}\right]\mathbf{e}_z\otimes\mathbf{e}_\theta\otimes\mathbf{e}_\theta \\
&+ \frac{\partial S_{zz}}{\partial r}\,\mathbf{e}_z\otimes\mathbf{e}_z\otimes\mathbf{e}_r + \frac{\partial S_{zz}}{\partial z}\,\mathbf{e}_z\otimes\mathbf{e}_z\otimes\mathbf{e}_z + \frac{1}{r}\frac{\partial S_{zz}}{\partial\theta}\,\mathbf{e}_z\otimes\mathbf{e}_z\otimes\mathbf{e}_\theta
\end{aligned}$$
== Divergence of a tensor field ==
The divergence of a tensor field $\boldsymbol{T}(\mathbf{x})$ is defined using the recursive relation
$$(\boldsymbol{\nabla}\cdot\boldsymbol{T})\cdot\mathbf{c} = \boldsymbol{\nabla}\cdot\left(\mathbf{c}\cdot\boldsymbol{T}^{\mathsf T}\right)~;\qquad \boldsymbol{\nabla}\cdot\mathbf{v} = \operatorname{tr}(\boldsymbol{\nabla}\mathbf{v})$$
where c is an arbitrary constant vector and v is a vector field. If $\boldsymbol{T}$ is a tensor field of order n > 1 then the divergence of the field is a tensor of order n − 1.
=== Cartesian coordinates ===
In a Cartesian coordinate system we have the following relations for a vector field $\mathbf{v}$ and a second-order tensor field $\boldsymbol{S}$:
$$\begin{aligned}
\boldsymbol{\nabla}\cdot\mathbf{v} &= \frac{\partial v_i}{\partial x_i} = v_{i,i} \\
\boldsymbol{\nabla}\cdot\boldsymbol{S} &= \frac{\partial S_{ik}}{\partial x_i}\,\mathbf{e}_k = S_{ik,i}\,\mathbf{e}_k
\end{aligned}$$
where tensor index notation for partial derivatives is used in the rightmost expressions. Note that
$$\boldsymbol{\nabla}\cdot\boldsymbol{S} \neq \boldsymbol{\nabla}\cdot\boldsymbol{S}^{\mathsf T}.$$
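The inequality is easy to see numerically: contracting the derivative index against the first or the second index of $\boldsymbol{S}$ gives different vectors. A sketch (with an assumed tensor field chosen to be non-symmetric):

```python
import numpy as np

def S(x):                        # assumed non-symmetric tensor field
    return np.array([[x[0]*x[1], x[1]**2,  0.0],
                     [x[0]**2,   x[2],     x[1]],
                     [0.0,       x[0],     x[2]**2]])

x = np.array([1.0, 2.0, 3.0])
h = 1e-6

dS = np.zeros((3, 3, 3))         # dS[i, j, k] = d S_ij / d x_k
for k in range(3):
    e = np.zeros(3); e[k] = 1.0
    dS[:, :, k] = (S(x + h * e) - S(x - h * e)) / (2.0 * h)

print(np.einsum('iki->k', dS))   # (div S)_k   = S_{ik,i}
print(np.einsum('kii->k', dS))   # (div S^T)_k = S_{ki,i}; generally different
```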
For a symmetric second-order tensor, the divergence is also often written as
$$\boldsymbol{\nabla}\cdot\boldsymbol{S} = \frac{\partial S_{ki}}{\partial x_i}\,\mathbf{e}_k = S_{ki,i}\,\mathbf{e}_k$$
The above expression is sometimes used as the definition of $\boldsymbol{\nabla}\cdot\boldsymbol{S}$ in Cartesian component form (often also written as $\operatorname{div}\boldsymbol{S}$). Note that such a definition is not consistent with the rest of this article (see the section on curvilinear coordinates). The difference stems from whether the differentiation is performed with respect to the rows or columns of $\boldsymbol{S}$, and is a matter of convention. This is demonstrated by an example. In a Cartesian coordinate system the second order tensor (matrix) $\boldsymbol{S}$ is the gradient of a vector function $\mathbf{v}$:
$$\begin{aligned}
\boldsymbol{\nabla}\cdot(\boldsymbol{\nabla}\mathbf{v}) &= \boldsymbol{\nabla}\cdot\left(v_{i,j}\,\mathbf{e}_i\otimes\mathbf{e}_j\right) = v_{i,ji}\,\mathbf{e}_i\cdot\mathbf{e}_i\otimes\mathbf{e}_j = (\boldsymbol{\nabla}\cdot\mathbf{v})_{,j}\,\mathbf{e}_j = \boldsymbol{\nabla}(\boldsymbol{\nabla}\cdot\mathbf{v}) \\
\boldsymbol{\nabla}\cdot\left[(\boldsymbol{\nabla}\mathbf{v})^{\mathsf T}\right] &= \boldsymbol{\nabla}\cdot\left(v_{j,i}\,\mathbf{e}_i\otimes\mathbf{e}_j\right) = v_{j,ii}\,\mathbf{e}_i\cdot\mathbf{e}_i\otimes\mathbf{e}_j = \boldsymbol{\nabla}^2 v_j\,\mathbf{e}_j = \boldsymbol{\nabla}^2\mathbf{v}
\end{aligned}$$
The last equation is equivalent to the alternative definition / interpretation
$$(\boldsymbol{\nabla}\cdot)_{\text{alt}}(\boldsymbol{\nabla}\mathbf{v}) = (\boldsymbol{\nabla}\cdot)_{\text{alt}}\left(v_{i,j}\,\mathbf{e}_i\otimes\mathbf{e}_j\right) = v_{i,jj}\,\mathbf{e}_i\otimes\mathbf{e}_j\cdot\mathbf{e}_j = \boldsymbol{\nabla}^2 v_i\,\mathbf{e}_i = \boldsymbol{\nabla}^2\mathbf{v}$$
=== Curvilinear coordinates ===
In curvilinear coordinates, the divergences of a vector field $\mathbf{v}$ and a second-order tensor field $\boldsymbol{S}$ are
$$\begin{aligned}
\boldsymbol{\nabla}\cdot\mathbf{v} &= \frac{\partial v^i}{\partial\xi^i} + v^k\,\Gamma_{ik}^i \\
\boldsymbol{\nabla}\cdot\boldsymbol{S} &= \left(\frac{\partial S_{ik}}{\partial\xi^i} - S_{lk}\,\Gamma_{ii}^l - S_{il}\,\Gamma_{ik}^l\right)\mathbf{g}^k
\end{aligned}$$
More generally,
$$\begin{aligned}
\boldsymbol{\nabla}\cdot\boldsymbol{S} &= \left[\frac{\partial S_{ij}}{\partial q^k} - \Gamma_{ki}^l\,S_{lj} - \Gamma_{kj}^l\,S_{il}\right] g^{ik}\,\mathbf{b}^j \\
&= \left[\frac{\partial S^{ij}}{\partial q^i} + \Gamma_{il}^i\,S^{lj} + \Gamma_{il}^j\,S^{il}\right]\mathbf{b}_j \\
&= \left[\frac{\partial S^i_{~j}}{\partial q^i} + \Gamma_{il}^i\,S^l_{~j} - \Gamma_{ij}^l\,S^i_{~l}\right]\mathbf{b}^j \\
&= \left[\frac{\partial S_i^{~j}}{\partial q^k} - \Gamma_{ik}^l\,S_l^{~j} + \Gamma_{kl}^j\,S_i^{~l}\right] g^{ik}\,\mathbf{b}_j
\end{aligned}$$
==== Cylindrical polar coordinates ====
In cylindrical polar coordinates
$$\boldsymbol{\nabla}\cdot\mathbf{v} = \frac{\partial v_r}{\partial r} + \frac{1}{r}\left(\frac{\partial v_\theta}{\partial\theta} + v_r\right) + \frac{\partial v_z}{\partial z}$$
$$\begin{aligned}
\boldsymbol{\nabla}\cdot\boldsymbol{S} ={}& \frac{\partial S_{rr}}{\partial r}\,\mathbf{e}_r + \frac{\partial S_{r\theta}}{\partial r}\,\mathbf{e}_\theta + \frac{\partial S_{rz}}{\partial r}\,\mathbf{e}_z \\
&+ \frac{1}{r}\left[\frac{\partial S_{\theta r}}{\partial\theta} + (S_{rr} - S_{\theta\theta})\right]\mathbf{e}_r + \frac{1}{r}\left[\frac{\partial S_{\theta\theta}}{\partial\theta} + (S_{r\theta} + S_{\theta r})\right]\mathbf{e}_\theta + \frac{1}{r}\left[\frac{\partial S_{\theta z}}{\partial\theta} + S_{rz}\right]\mathbf{e}_z \\
&+ \frac{\partial S_{zr}}{\partial z}\,\mathbf{e}_r + \frac{\partial S_{z\theta}}{\partial z}\,\mathbf{e}_\theta + \frac{\partial S_{zz}}{\partial z}\,\mathbf{e}_z
\end{aligned}$$
== Curl of a tensor field ==
The curl of an order-n > 1 tensor field $\boldsymbol{T}(\mathbf{x})$ is also defined using the recursive relation
$$(\boldsymbol{\nabla}\times\boldsymbol{T})\cdot\mathbf{c} = \boldsymbol{\nabla}\times(\mathbf{c}\cdot\boldsymbol{T})~;\qquad (\boldsymbol{\nabla}\times\mathbf{v})\cdot\mathbf{c} = \boldsymbol{\nabla}\cdot(\mathbf{v}\times\mathbf{c})$$
where c is an arbitrary constant vector and v is a vector field.
=== Curl of a first-order tensor (vector) field ===
Consider a vector field v and an arbitrary constant vector c. In index notation, the cross product is given by
$$\mathbf{v}\times\mathbf{c} = \varepsilon_{ijk}\,v_j\,c_k\,\mathbf{e}_i$$
where $\varepsilon_{ijk}$ is the permutation symbol, otherwise known as the Levi-Civita symbol. Then,
$$\boldsymbol{\nabla}\cdot(\mathbf{v}\times\mathbf{c}) = \varepsilon_{ijk}\,v_{j,i}\,c_k = (\varepsilon_{ijk}\,v_{j,i}\,\mathbf{e}_k)\cdot\mathbf{c} = (\boldsymbol{\nabla}\times\mathbf{v})\cdot\mathbf{c}$$
Therefore,
$$\boldsymbol{\nabla}\times\mathbf{v} = \varepsilon_{ijk}\,v_{j,i}\,\mathbf{e}_k$$
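As a sanity check (an illustrative sketch; the field below is an assumption), the index formula reproduces the familiar curl. For v = (−x₂, x₁, 0) the curl should be (0, 0, 2):

```python
import numpy as np

def v(x):                        # assumed field with curl (0, 0, 2)
    return np.array([-x[1], x[0], 0.0])

x = np.array([0.3, -0.7, 1.2])
h = 1e-6

grad = np.zeros((3, 3))          # grad[j, i] = v_{j,i}
for i in range(3):
    e = np.zeros(3); e[i] = 1.0
    grad[:, i] = (v(x + h * e) - v(x - h * e)) / (2.0 * h)

eps = np.zeros((3, 3, 3))        # Levi-Civita permutation symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

print(np.einsum('ijk,ji->k', eps, grad))   # (curl v)_k = eps_ijk v_{j,i}
```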
=== Curl of a second-order tensor field ===
For a second-order tensor $\boldsymbol{S}$,
$$\mathbf{c}\cdot\boldsymbol{S} = c_m\,S_{mj}\,\mathbf{e}_j$$
Hence, using the definition of the curl of a first-order tensor field,
$$\boldsymbol{\nabla}\times(\mathbf{c}\cdot\boldsymbol{S}) = \varepsilon_{ijk}\,c_m\,S_{mj,i}\,\mathbf{e}_k = (\varepsilon_{ijk}\,S_{mj,i}\,\mathbf{e}_k\otimes\mathbf{e}_m)\cdot\mathbf{c} = (\boldsymbol{\nabla}\times\boldsymbol{S})\cdot\mathbf{c}$$
Therefore, we have
$$\boldsymbol{\nabla}\times\boldsymbol{S} = \varepsilon_{ijk}\,S_{mj,i}\,\mathbf{e}_k\otimes\mathbf{e}_m$$
=== Identities involving the curl of a tensor field ===
The most commonly used identity involving the curl of a tensor field, $\boldsymbol{T}$, is
$$\boldsymbol{\nabla}\times(\boldsymbol{\nabla}\boldsymbol{T}) = \boldsymbol{0}$$
This identity holds for tensor fields of all orders. For the important case of a second-order tensor, $\boldsymbol{S}$, this identity implies that
$$\boldsymbol{\nabla}\times(\boldsymbol{\nabla}\boldsymbol{S}) = \boldsymbol{0} \quad\implies\quad S_{mi,j} - S_{mj,i} = 0$$
== Derivative of the determinant of a second-order tensor ==
The derivative of the determinant of a second order tensor $\boldsymbol{A}$ is given by
$$\frac{\partial}{\partial\boldsymbol{A}}\det(\boldsymbol{A}) = \det(\boldsymbol{A})\left[\boldsymbol{A}^{-1}\right]^{\mathsf T}.$$
In an orthonormal basis, the components of $\boldsymbol{A}$ can be written as a matrix $A$. In that case, the right-hand side corresponds to the cofactor matrix of $A$.
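This formula (Jacobi's formula) lends itself to a direct numerical check; the sketch below (illustrative) compares the analytic expression det(A) A⁻ᵀ, contracted with a direction T, against a finite difference of the determinant:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

analytic = np.linalg.det(A) * np.linalg.inv(A).T   # det(A) A^{-T}

h = 1e-6
numeric = (np.linalg.det(A + h * T)
           - np.linalg.det(A - h * T)) / (2.0 * h)

print(np.tensordot(analytic, T), numeric)          # agree to ~1e-8
```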
== Derivatives of the invariants of a second-order tensor ==
The principal invariants of a second order tensor are
$$\begin{aligned}
I_1(\boldsymbol{A}) &= \operatorname{tr}\boldsymbol{A} \\
I_2(\boldsymbol{A}) &= \tfrac{1}{2}\left[(\operatorname{tr}\boldsymbol{A})^2 - \operatorname{tr}\boldsymbol{A}^2\right] \\
I_3(\boldsymbol{A}) &= \det(\boldsymbol{A})
\end{aligned}$$
The derivatives of these three invariants with respect to $\boldsymbol{A}$ are
$$\begin{aligned}
\frac{\partial I_1}{\partial\boldsymbol{A}} &= \boldsymbol{\mathit{1}} \\
\frac{\partial I_2}{\partial\boldsymbol{A}} &= I_1\,\boldsymbol{\mathit{1}} - \boldsymbol{A}^{\mathsf T} \\
\frac{\partial I_3}{\partial\boldsymbol{A}} &= \det(\boldsymbol{A})\left[\boldsymbol{A}^{-1}\right]^{\mathsf T} = I_2\,\boldsymbol{\mathit{1}} - \boldsymbol{A}^{\mathsf T}\left(I_1\,\boldsymbol{\mathit{1}} - \boldsymbol{A}^{\mathsf T}\right) = \left(\boldsymbol{A}^2 - I_1\,\boldsymbol{A} + I_2\,\boldsymbol{\mathit{1}}\right)^{\mathsf T}
\end{aligned}$$
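For instance, the formula for ∂I₂/∂A can be verified numerically (an illustrative sketch, not part of the original article):

```python
import numpy as np

def I2(A):                       # second principal invariant
    return 0.5 * (np.trace(A)**2 - np.trace(A @ A))

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

analytic = np.trace(A) * np.eye(3) - A.T           # I1 1 - A^T

h = 1e-6
numeric = (I2(A + h * T) - I2(A - h * T)) / (2.0 * h)
print(np.tensordot(analytic, T), numeric)          # agree closely
```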
== Derivative of the second-order identity tensor ==
Let $\boldsymbol{\mathit{1}}$ be the second order identity tensor. Then the derivative of this tensor with respect to a second order tensor $\boldsymbol{A}$ is given by
$$\frac{\partial\boldsymbol{\mathit{1}}}{\partial\boldsymbol{A}} : \boldsymbol{T} = \boldsymbol{\mathsf{0}} : \boldsymbol{T} = \boldsymbol{\mathit{0}}$$
This is because $\boldsymbol{\mathit{1}}$ is independent of $\boldsymbol{A}$.
== Derivative of a second-order tensor with respect to itself ==
Let $\boldsymbol{A}$ be a second order tensor. Then
$$\frac{\partial\boldsymbol{A}}{\partial\boldsymbol{A}} : \boldsymbol{T} = \left[\frac{\partial}{\partial\alpha}(\boldsymbol{A} + \alpha\,\boldsymbol{T})\right]_{\alpha=0} = \boldsymbol{T} = \boldsymbol{\mathsf{I}} : \boldsymbol{T}$$
Therefore,
$$\frac{\partial\boldsymbol{A}}{\partial\boldsymbol{A}} = \boldsymbol{\mathsf{I}}$$
Here $\boldsymbol{\mathsf{I}}$ is the fourth order identity tensor. In index notation with respect to an orthonormal basis,
$$\boldsymbol{\mathsf{I}} = \delta_{ik}\,\delta_{jl}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_l$$
This result implies that
$$\frac{\partial\boldsymbol{A}^{\mathsf T}}{\partial\boldsymbol{A}} : \boldsymbol{T} = \boldsymbol{\mathsf{I}}^{\mathsf T} : \boldsymbol{T} = \boldsymbol{T}^{\mathsf T}$$
where
$$\boldsymbol{\mathsf{I}}^{\mathsf T} = \delta_{jk}\,\delta_{il}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_l$$
Therefore, if the tensor $\boldsymbol{A}$ is symmetric, then the derivative is also symmetric and we get
$$\frac{\partial\boldsymbol{A}}{\partial\boldsymbol{A}} = \boldsymbol{\mathsf{I}}^{(s)} = \tfrac{1}{2}\left(\boldsymbol{\mathsf{I}} + \boldsymbol{\mathsf{I}}^{\mathsf T}\right)$$
where the symmetric fourth order identity tensor is
$$\boldsymbol{\mathsf{I}}^{(s)} = \tfrac{1}{2}\left(\delta_{ik}\,\delta_{jl} + \delta_{il}\,\delta_{jk}\right)\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_l$$
== Derivative of the inverse of a second-order tensor ==
Let $\boldsymbol{A}$ and $\boldsymbol{T}$ be two second order tensors. Then
$$\frac{\partial}{\partial\boldsymbol{A}}\left(\boldsymbol{A}^{-1}\right) : \boldsymbol{T} = -\boldsymbol{A}^{-1}\cdot\boldsymbol{T}\cdot\boldsymbol{A}^{-1}$$
In index notation with respect to an orthonormal basis
$$\frac{\partial A_{ij}^{-1}}{\partial A_{kl}}\,T_{kl} = -A_{ik}^{-1}\,T_{kl}\,A_{lj}^{-1} \quad\implies\quad \frac{\partial A_{ij}^{-1}}{\partial A_{kl}} = -A_{ik}^{-1}\,A_{lj}^{-1}$$
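The directional form −A⁻¹·T·A⁻¹ can again be confirmed by differencing the inverse directly (illustrative sketch; the shift by 3I merely keeps the random matrix well conditioned):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # well-conditioned A
T = rng.standard_normal((3, 3))

Ainv = np.linalg.inv(A)
analytic = -Ainv @ T @ Ainv

h = 1e-6
numeric = (np.linalg.inv(A + h * T)
           - np.linalg.inv(A - h * T)) / (2.0 * h)

print(np.max(np.abs(analytic - numeric)))          # ~1e-9
```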
We also have
$$\frac{\partial}{\partial\boldsymbol{A}}\left(\boldsymbol{A}^{-\mathsf T}\right) : \boldsymbol{T} = -\boldsymbol{A}^{-\mathsf T}\cdot\boldsymbol{T}^{\mathsf T}\cdot\boldsymbol{A}^{-\mathsf T}$$
In index notation
$$\frac{\partial A_{ji}^{-1}}{\partial A_{kl}}\,T_{kl} = -A_{jk}^{-1}\,T_{lk}\,A_{li}^{-1} \quad\implies\quad \frac{\partial A_{ji}^{-1}}{\partial A_{kl}} = -A_{li}^{-1}\,A_{jk}^{-1}$$
If the tensor $\boldsymbol{A}$ is symmetric then
$$\frac{\partial A_{ij}^{-1}}{\partial A_{kl}} = -\tfrac{1}{2}\left(A_{ik}^{-1}\,A_{jl}^{-1} + A_{il}^{-1}\,A_{jk}^{-1}\right)$$
== Integration by parts ==
Another important operation related to tensor derivatives in continuum mechanics is integration by parts. The formula for integration by parts can be written as
$$\int_\Omega \boldsymbol{F}\otimes\boldsymbol{\nabla}\boldsymbol{G}\,d\Omega = \int_\Gamma \mathbf{n}\otimes(\boldsymbol{F}\otimes\boldsymbol{G})\,d\Gamma - \int_\Omega \boldsymbol{G}\otimes\boldsymbol{\nabla}\boldsymbol{F}\,d\Omega$$
where $\boldsymbol{F}$ and $\boldsymbol{G}$ are differentiable tensor fields of arbitrary order, $\mathbf{n}$ is the unit outward normal to the domain over which the tensor fields are defined, $\otimes$ represents a generalized tensor product operator, and $\boldsymbol{\nabla}$ is a generalized gradient operator. When $\boldsymbol{F}$ is equal to the identity tensor, we get the divergence theorem
$$\int_\Omega \boldsymbol{\nabla}\boldsymbol{G}\,d\Omega = \int_\Gamma \mathbf{n}\otimes\boldsymbol{G}\,d\Gamma\,.$$
We can express the formula for integration by parts in Cartesian index notation as
$$\int_\Omega F_{ijk\ldots}\,G_{lmn\ldots,p}\,d\Omega = \int_\Gamma n_p\,F_{ijk\ldots}\,G_{lmn\ldots}\,d\Gamma - \int_\Omega G_{lmn\ldots}\,F_{ijk\ldots,p}\,d\Omega\,.$$
For the special case where the tensor product operation is a contraction of one index and the gradient operation is a divergence, and both $\boldsymbol{F}$ and $\boldsymbol{G}$ are second order tensors, we have
$$\int_\Omega \boldsymbol{F}\cdot(\boldsymbol{\nabla}\cdot\boldsymbol{G})\,d\Omega = \int_\Gamma \mathbf{n}\cdot\left(\boldsymbol{G}\cdot\boldsymbol{F}^{\mathsf T}\right)\,d\Gamma - \int_\Omega (\boldsymbol{\nabla}\boldsymbol{F}) : \boldsymbol{G}^{\mathsf T}\,d\Omega\,.$$
In index notation,
$$\int_\Omega F_{ij}\,G_{pj,p}\,d\Omega = \int_\Gamma n_p\,F_{ij}\,G_{pj}\,d\Gamma - \int_\Omega G_{pj}\,F_{ij,p}\,d\Omega\,.$$
== See also ==
Covariant derivative
Ricci calculus
== References == | Wikipedia/Tensor_derivative_(continuum_mechanics) |
Material failure theory is an interdisciplinary field of materials science and solid mechanics which attempts to predict the conditions under which solid materials fail under the action of external loads. The failure of a material is usually classified into brittle failure (fracture) or ductile failure (yield). Depending on the conditions (such as temperature, state of stress, loading rate) most materials can fail in a brittle or ductile manner or both. However, for most practical situations, a material may be classified as either brittle or ductile.
In mathematical terms, failure theory is expressed in the form of various failure criteria which are valid for specific materials. Failure criteria are functions in stress or strain space which separate "failed" states from "unfailed" states. A precise physical definition of a "failed" state is not easily quantified and several working definitions are in use in the engineering community. Quite often, phenomenological failure criteria of the same form are used to predict brittle failure and ductile yield.
== Material failure ==
In materials science, material failure is the loss of load-carrying capacity of a material unit. This definition implies that material failure can be examined on different scales, from microscopic to macroscopic. In structural problems, where the structural response may be beyond the initiation of nonlinear material behaviour, material failure is of profound importance for the determination of the integrity of the structure. On the other hand, due to the lack of globally accepted fracture criteria, the determination of the structure's damage due to material failure is still under intensive research.
== Types of material failure ==
Material failure can be distinguished in two broader categories depending on the scale in which the material is examined:
=== Microscopic failure ===
Microscopic material failure is defined in terms of crack initiation and propagation. Such methodologies are useful for gaining insight in the cracking of specimens and simple structures under well defined global load distributions. Microscopic failure considers the initiation and propagation of a crack. Failure criteria in this case are related to microscopic fracture. Some of the most popular failure models in this area are the micromechanical failure models, which combine the advantages of continuum mechanics and classical fracture mechanics. Such models are based on the concept that during plastic deformation, microvoids nucleate and grow until a local plastic neck or fracture of the intervoid matrix occurs, which causes the coalescence of neighbouring voids. Such a model, proposed by Gurson and extended by Tvergaard and Needleman, is known as GTN. Another approach, proposed by Rousselier, is based on continuum damage mechanics (CDM) and thermodynamics. Both models form a modification of the von Mises yield potential by introducing a scalar damage quantity, which represents the void volume fraction of cavities, the porosity f.
=== Macroscopic failure ===
Macroscopic material failure is defined in terms of load carrying capacity or energy storage capacity, equivalently. Li presents a classification of macroscopic failure criteria in four categories:
Stress or strain failure
Energy type failure (S-criterion, T-criterion)
Damage failure
Empirical failure
Five general levels are considered, at which the meaning of deformation and failure is interpreted differently: the structural element scale, the macroscopic scale where macroscopic stress and strain are defined, the mesoscale which is represented by a typical void, the microscale and the atomic scale. The material behavior at one level is considered as a collective of its behavior at a sub-level. An efficient deformation and failure model should be consistent at every level.
== Brittle material failure criteria ==
Failure of brittle materials can be determined using several approaches:
Phenomenological failure criteria
Linear elastic fracture mechanics
Elastic-plastic fracture mechanics
Energy-based methods
Cohesive zone methods
=== Phenomenological failure criteria ===
The failure criteria that were developed for brittle solids were the maximum stress/strain criteria. The maximum stress criterion assumes that a material fails when the maximum principal stress {\displaystyle \sigma _{1}} in a material element exceeds the uniaxial tensile strength of the material. Alternatively, the material will fail if the minimum principal stress {\displaystyle \sigma _{3}} is less than the uniaxial compressive strength of the material. If the uniaxial tensile strength of the material is {\displaystyle \sigma _{t}} and the uniaxial compressive strength is {\displaystyle \sigma _{c}}, then the safe region for the material is assumed to be
{\displaystyle \sigma _{c}<\sigma _{3}<\sigma _{1}<\sigma _{t}\,}
Note that the convention that tension is positive has been used in the above expression.
The maximum strain criterion has a similar form except that the principal strains are compared with experimentally determined uniaxial strains at failure, i.e.,
{\displaystyle \varepsilon _{c}<\varepsilon _{3}<\varepsilon _{1}<\varepsilon _{t}\,}
The maximum principal stress and strain criteria continue to be widely used in spite of severe shortcomings.
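As an illustration, here is a minimal Python sketch of the maximum stress criterion for a single stress state; the stress tensor and the strength values are hypothetical, chosen only for the example.

```python
# Minimal sketch of the maximum principal stress criterion.
# Stress state and strengths are hypothetical, for illustration only.
import numpy as np

sigma = np.array([[80.0,  20.0,  0.0],   # Cauchy stress tensor, MPa
                  [20.0, -30.0,  0.0],   # (tension positive)
                  [ 0.0,   0.0, 10.0]])

sigma_t, sigma_c = 100.0, -250.0         # uniaxial tensile / compressive strengths, MPa

principal = np.sort(np.linalg.eigvalsh(sigma))   # sigma_3 <= sigma_2 <= sigma_1
sigma_3, sigma_1 = principal[0], principal[-1]

safe = (sigma_c < sigma_3) and (sigma_1 < sigma_t)
print(f"sigma_1 = {sigma_1:.1f} MPa, sigma_3 = {sigma_3:.1f} MPa, safe = {safe}")
```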
Numerous other phenomenological failure criteria can be found in the engineering literature. The degree of success of these criteria in predicting failure has been limited. Some popular failure criteria for various types of materials are:
criteria based on invariants of the Cauchy stress tensor
the Tresca or maximum shear stress failure criterion
the von Mises or maximum elastic distortional energy criterion
the Mohr-Coulomb failure criterion for cohesive-frictional solids
the Drucker-Prager failure criterion for pressure-dependent solids
the Bresler-Pister failure criterion for concrete
the Willam-Warnke failure criterion for concrete
the Hankinson criterion, an empirical failure criterion that is used for orthotropic materials such as wood
the Hill yield criteria for anisotropic solids
the Tsai-Wu failure criterion for anisotropic composites
the Johnson–Holmquist damage model for high-rate deformations of isotropic solids
the Hoek-Brown failure criterion for rock masses
the Cam-Clay failure theory for soil
=== Linear elastic fracture mechanics ===
The approach taken in linear elastic fracture mechanics is to estimate the amount of energy needed to grow a preexisting crack in a brittle material. The earliest fracture mechanics approach for unstable crack growth is Griffith's theory. When applied to the mode I opening of a crack, Griffith's theory predicts that the critical stress ({\displaystyle \sigma }) needed to propagate the crack is given by
{\displaystyle \sigma ={\sqrt {\cfrac {2E\gamma }{\pi a}}}}
where {\displaystyle E} is the Young's modulus of the material, {\displaystyle \gamma } is the surface energy per unit area of the crack, and {\displaystyle a} is the crack length for edge cracks or {\displaystyle 2a} is the crack length for plane cracks. The quantity {\displaystyle \sigma {\sqrt {\pi a}}} is postulated as a material parameter called the fracture toughness. The mode I fracture toughness for plane strain is defined as
{\displaystyle K_{\rm {Ic}}=Y\sigma _{c}{\sqrt {\pi a}}}
where {\displaystyle \sigma _{c}} is a critical value of the far field stress and {\displaystyle Y} is a dimensionless factor that depends on the geometry, material properties, and loading condition. The quantity {\displaystyle K_{\rm {Ic}}} is related to the stress intensity factor and is determined experimentally. Similar quantities {\displaystyle K_{\rm {IIc}}} and {\displaystyle K_{\rm {IIIc}}} can be determined for mode II and mode III loading conditions.
The state of stress around cracks of various shapes can be expressed in terms of their stress intensity factors. Linear elastic fracture mechanics predicts that a crack will extend when the stress intensity factor at the crack tip is greater than the fracture toughness of the material. Therefore, the critical applied stress can also be determined once the stress intensity factor at a crack tip is known.
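As a numerical sketch of Griffith's relation (the material values below are hypothetical, chosen only for illustration):

```python
# Griffith critical stress for a preexisting crack, with illustrative values.
import math

E = 70e9      # Young's modulus, Pa
gamma = 1.0   # surface energy per unit area, J/m^2
a = 1e-3      # crack length (edge crack), m

sigma_cr = math.sqrt(2.0 * E * gamma / (math.pi * a))  # critical stress, Pa
K = sigma_cr * math.sqrt(math.pi * a)                  # stress intensity at that stress
print(f"critical stress = {sigma_cr / 1e6:.1f} MPa, K = {K / 1e6:.2f} MPa*sqrt(m)")
```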
=== Energy-based methods ===
The linear elastic fracture mechanics method is difficult to apply to anisotropic materials (such as composites) or to situations where the loading or the geometry are complex. The strain energy release rate approach has proved quite useful for such situations. The strain energy release rate for a mode I crack which runs through the thickness of a plate is defined as
{\displaystyle G_{I}={\cfrac {P}{2t}}~{\cfrac {du}{da}}}
where {\displaystyle P} is the applied load, {\displaystyle t} is the thickness of the plate, {\displaystyle u} is the displacement at the point of application of the load due to crack growth, and {\displaystyle a} is the crack length for edge cracks or {\displaystyle 2a} is the crack length for plane cracks. The crack is expected to propagate when the strain energy release rate exceeds a critical value {\displaystyle G_{\rm {Ic}}}, called the critical strain energy release rate.
The fracture toughness and the critical strain energy release rate for plane stress are related by
{\displaystyle G_{\rm {Ic}}={\cfrac {1}{E}}~K_{\rm {Ic}}^{2}}
where {\displaystyle E} is the Young's modulus. If an initial crack size is known, then a critical stress can be determined using the strain energy release rate criterion.
== Ductile material failure (yield) criteria ==
A yield criterion, often expressed as a yield surface or yield locus, is a hypothesis concerning the limit of elasticity under any combination of stresses. There are two interpretations of a yield criterion: one is purely mathematical, taking a statistical approach, while other models attempt to provide a justification based on established physical principles. Since stress and strain are tensor quantities they can be described on the basis of three principal directions; in the case of stress these are denoted by {\displaystyle \sigma _{1}}, {\displaystyle \sigma _{2}}, and {\displaystyle \sigma _{3}}.
The following represent the most common yield criteria as applied to an isotropic material (uniform properties in all directions). Other equations have been proposed or are used in specialist situations.
=== Isotropic yield criteria ===
Maximum principal stress theory – by William Rankine (1850). Yield occurs when the largest principal stress exceeds the uniaxial tensile yield strength. Although this criterion allows for a quick and easy comparison with experimental data it is rarely suitable for design purposes. This theory gives good predictions for brittle materials.
Maximum principal strain theory – by St. Venant. Yield occurs when the maximum principal strain reaches the strain corresponding to the yield point during a simple tensile test. In terms of the principal stresses this is determined by the equation:
{\displaystyle \sigma _{1}-\nu (\sigma _{2}+\sigma _{3})=\sigma _{y}}
Maximum shear stress theory – also known as the Tresca yield criterion, after the French scientist Henri Tresca. This assumes that yield occurs when the shear stress {\displaystyle \tau } exceeds the shear yield strength {\displaystyle \tau _{y}}:
{\displaystyle \tau ={\frac {\sigma _{1}-\sigma _{3}}{2}}\leq \tau _{y}}
Total strain energy theory – this theory assumes that the stored energy associated with elastic deformation at the point of yield is independent of the specific stress tensor. Thus yield occurs when the strain energy per unit volume is greater than the strain energy at the elastic limit in simple tension. For a 3-dimensional stress state this is given by:
{\displaystyle \sigma _{1}^{2}+\sigma _{2}^{2}+\sigma _{3}^{2}-2\nu (\sigma _{1}\sigma _{2}+\sigma _{2}\sigma _{3}+\sigma _{1}\sigma _{3})\leq \sigma _{y}^{2}}
Maximum distortion energy theory (von Mises yield criterion), also referred to as octahedral shear stress theory – this theory proposes that the total strain energy can be separated into two components: the volumetric (hydrostatic) strain energy and the shape (distortion or shear) strain energy. It is proposed that yield occurs when the distortion component exceeds that at the yield point for a simple tensile test:
{\displaystyle (\sigma _{1}-\sigma _{2})^{2}+(\sigma _{2}-\sigma _{3})^{2}+(\sigma _{3}-\sigma _{1})^{2}\leq 2\sigma _{y}^{2}}
The yield surfaces corresponding to these criteria have a range of forms. However, most isotropic yield criteria correspond to convex yield surfaces.
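To make the comparison concrete, the sketch below checks one hypothetical principal stress state against the Tresca and von Mises conditions stated above.

```python
# Tresca and von Mises yield checks for one stress state.
# Principal stresses and yield strength are hypothetical, for illustration.
import math

s1, s2, s3 = 180.0, 60.0, -40.0   # principal stresses, MPa
sigma_y = 250.0                   # uniaxial yield strength, MPa

tresca = max(abs(s1 - s2), abs(s2 - s3), abs(s3 - s1))                 # 2 * tau_max
mises = math.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2))  # equivalent stress

print(f"Tresca:    {tresca:.1f} MPa vs {sigma_y} MPa -> yields = {tresca >= sigma_y}")
print(f"von Mises: {mises:.1f} MPa vs {sigma_y} MPa -> yields = {mises >= sigma_y}")
```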
=== Anisotropic yield criteria ===
When a metal is subjected to large plastic deformations the grain sizes and orientations change in the direction of deformation. As a result, the plastic yield behavior of the material shows directional dependency. Under such circumstances, the isotropic yield criteria such as the von Mises yield criterion are unable to predict the yield behavior accurately. Several anisotropic yield criteria have been developed to deal with such situations.
Some of the more popular anisotropic yield criteria are:
Hill's quadratic yield criterion
Generalized Hill yield criterion
Hosford yield criterion
=== Yield surface ===
The yield surface of a ductile material usually changes as the material experiences increased deformation. Models for the evolution of the yield surface with increasing strain, temperature, and strain rate are used in conjunction with the above failure criteria for isotropic hardening, kinematic hardening, and viscoplasticity. Some such models are:
the Johnson-Cook model
the Steinberg-Guinan model
the Zerilli-Armstrong model
the Mechanical threshold stress model
the Preston-Tonks-Wallace model
There is another important aspect to ductile materials: the prediction of the ultimate failure strength of a ductile material. Several models for predicting the ultimate strength have been used by the engineering community with varying levels of success. For metals, such failure criteria are usually expressed in terms of a combination of porosity and strain to failure, or in terms of a damage parameter.
== See also ==
Fracture mechanics
Fracture
Stress intensity factor
Yield (engineering)
Yield surface
Plasticity (physics)
Structural failure
Strength of materials
Ultimate failure
Damage mechanics
Size effect on structural strength
Concrete fracture analysis
== References ==
In mechanics, strain is defined as relative deformation, compared to a reference position configuration. Different equivalent choices may be made for the expression of a strain field depending on whether it is defined with respect to the initial or the final configuration of the body and on whether the metric tensor or its dual is considered.
Strain has dimension of a length ratio, with SI base units of meter per meter (m/m).
Hence strains are dimensionless and are usually expressed as a decimal fraction or a percentage.
Parts-per notation is also used, e.g., parts per million or parts per billion (sometimes called "microstrains" and "nanostrains", respectively), corresponding to μm/m and nm/m.
Strain can be formulated as the spatial derivative of displacement:
{\displaystyle {\boldsymbol {\varepsilon }}\doteq {\cfrac {\partial }{\partial \mathbf {X} }}\left(\mathbf {x} -\mathbf {X} \right)={\boldsymbol {F}}'-{\boldsymbol {I}},}
where I is the identity tensor.
The displacement of a body may be expressed in the form x = F(X), where X is the reference position of material points of the body;
displacement has units of length and does not distinguish between rigid body motions (translations and rotations) and deformations (changes in shape and size) of the body.
The spatial derivative of a uniform translation is zero, thus strains measure how much a given displacement differs locally from a rigid-body motion.
A strain is in general a tensor quantity. Physical insight into strains can be gained by observing that a given strain can be decomposed into normal and shear components. The amount of stretch or compression along material line elements or fibers is the normal strain, and the amount of distortion associated with the sliding of plane layers over each other is the shear strain, within a deforming body. Strain may be produced by elongation, shortening, volume change, or angular distortion.
The state of strain at a material point of a continuum body is defined as the totality of all the changes in length of material lines or fibers, the normal strain, which pass through that point and also the totality of all the changes in the angle between pairs of lines initially perpendicular to each other, the shear strain, radiating from this point. However, it is sufficient to know the normal and shear components of strain on a set of three mutually perpendicular directions.
If there is an increase in length of the material line, the normal strain is called tensile strain; otherwise, if there is reduction or compression in the length of the material line, it is called compressive strain.
== Strain regimes ==
Depending on the amount of strain, or local deformation, the analysis of deformation is subdivided into three deformation theories:
Finite strain theory, also called large strain theory, large deformation theory, deals with deformations in which both rotations and strains are arbitrarily large. In this case, the undeformed and deformed configurations of the continuum are significantly different and a clear distinction has to be made between them. This is commonly the case with elastomers, plastically-deforming materials and other fluids and biological soft tissue.
Infinitesimal strain theory, also called small strain theory, small deformation theory, small displacement theory, or small displacement-gradient theory where strains and rotations are both small. In this case, the undeformed and deformed configurations of the body can be assumed identical. The infinitesimal strain theory is used in the analysis of deformations of materials exhibiting elastic behavior, such as materials found in mechanical and civil engineering applications, e.g. concrete and steel.
Large-displacement or large-rotation theory, which assumes small strains but large rotations and displacements.
== Strain measures ==
In each of these theories the strain is then defined differently. The engineering strain is the most common definition applied to materials used in mechanical and structural engineering, which are subjected to very small deformations. On the other hand, for some materials, e.g., elastomers and polymers, subjected to large deformations, the engineering definition of strain is not applicable, e.g. typical engineering strains greater than 1%; thus other more complex definitions of strain are required, such as stretch, logarithmic strain, Green strain, and Almansi strain.
=== Engineering strain ===
Engineering strain, also known as Cauchy strain, is expressed as the ratio of total deformation to the initial dimension of the material body on which forces are applied. In the case of a material line element or fiber axially loaded, its elongation gives rise to an engineering normal strain or engineering extensional strain e, which equals the relative elongation or the change in length ΔL per unit of the original length L of the line element or fibers (in meters per meter). The normal strain is positive if the material fibers are stretched and negative if they are compressed. Thus, we have
{\displaystyle e={\frac {\Delta L}{L}}={\frac {l-L}{L}}}
where e is the engineering normal strain, L is the original length of the fiber and l is the final length of the fiber.
The true shear strain is defined as the change in the angle (in radians) between two material line elements initially perpendicular to each other in the undeformed or initial configuration. The engineering shear strain is defined as the tangent of that angle, and is equal to the length of deformation at its maximum divided by the perpendicular length in the plane of force application, which sometimes makes it easier to calculate.
=== Stretch ratio ===
The stretch ratio or extension ratio (symbol λ) is an alternative measure related to the extensional or normal strain of an axially loaded differential line element. It is defined as the ratio between the final length l and the initial length L of the material line.
{\displaystyle \lambda ={\frac {l}{L}}}
The extension ratio λ is related to the engineering strain e by
{\displaystyle e=\lambda -1}
This equation implies that when the normal strain is zero, so that there is no deformation, the stretch ratio is equal to unity.
The stretch ratio is used in the analysis of materials that exhibit large deformations, such as elastomers, which can sustain stretch ratios of 3 or 4 before they fail. On the other hand, traditional engineering materials, such as concrete or steel, fail at much lower stretch ratios.
=== Logarithmic strain ===
The logarithmic strain ε, also called true strain or Hencky strain, is defined incrementally. Considering an incremental strain (Ludwik)
{\displaystyle \delta \varepsilon ={\frac {\delta l}{l}}}
the logarithmic strain is obtained by integrating this incremental strain:
{\displaystyle {\begin{aligned}\int \delta \varepsilon &=\int _{L}^{l}{\frac {\delta l}{l}}\\\varepsilon &=\ln \left({\frac {l}{L}}\right)=\ln(\lambda )\\&=\ln(1+e)\\&=e-{\frac {e^{2}}{2}}+{\frac {e^{3}}{3}}-\cdots \end{aligned}}}
where e is the engineering strain. The logarithmic strain provides the correct measure of the final strain when deformation takes place in a series of increments, taking into account the influence of the strain path.
=== Green strain ===
The Green strain is defined as:
{\displaystyle \varepsilon _{G}={\tfrac {1}{2}}\left({\frac {l^{2}-L^{2}}{L^{2}}}\right)={\tfrac {1}{2}}(\lambda ^{2}-1)}
=== Almansi strain ===
The Euler-Almansi strain is defined as
{\displaystyle \varepsilon _{E}={\tfrac {1}{2}}\left({\frac {l^{2}-L^{2}}{l^{2}}}\right)={\tfrac {1}{2}}\left(1-{\frac {1}{\lambda ^{2}}}\right)}
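Since each of these one-dimensional measures is a simple function of the stretch ratio λ, they are easy to compare numerically; the fiber lengths below are illustrative.

```python
# Comparison of the strain measures defined above for a fiber stretched L -> l.
import math

L, l = 2.0, 2.5
lam = l / L                         # stretch ratio

e     = lam - 1.0                   # engineering (Cauchy) strain
eps   = math.log(lam)               # logarithmic (true / Hencky) strain
eps_G = 0.5 * (lam**2 - 1.0)        # Green strain
eps_E = 0.5 * (1.0 - 1.0 / lam**2)  # Euler-Almansi strain

# For small stretches all four measures agree to first order in e.
print(f"lambda={lam:.3f}  e={e:.4f}  log={eps:.4f}  Green={eps_G:.4f}  Almansi={eps_E:.4f}")
```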
== Strain tensor ==
The (infinitesimal) strain tensor (symbol {\displaystyle {\boldsymbol {\varepsilon }}}) is defined in the International System of Quantities (ISQ), more specifically in ISO 80000-4 (Mechanics), as a "tensor quantity representing the deformation of matter caused by stress. Strain tensor is symmetric and has three linear strain and three shear strain (Cartesian) components."
ISO 80000-4 further defines linear strain as the "quotient of change in length of an object and its length" and shear strain as the "quotient of parallel displacement of two surfaces of a layer and the thickness of the layer".
Thus, strains are classified as either normal or shear. A normal strain is perpendicular to the face of an element, and a shear strain is parallel to it. These definitions are consistent with those of normal stress and shear stress.
The strain tensor can then be expressed in terms of normal and shear components as:
{\displaystyle {\underline {\underline {\boldsymbol {\varepsilon }}}}={\begin{bmatrix}\varepsilon _{xx}&\varepsilon _{xy}&\varepsilon _{xz}\\\varepsilon _{yx}&\varepsilon _{yy}&\varepsilon _{yz}\\\varepsilon _{zx}&\varepsilon _{zy}&\varepsilon _{zz}\\\end{bmatrix}}={\begin{bmatrix}\varepsilon _{xx}&{\tfrac {1}{2}}\gamma _{xy}&{\tfrac {1}{2}}\gamma _{xz}\\{\tfrac {1}{2}}\gamma _{yx}&\varepsilon _{yy}&{\tfrac {1}{2}}\gamma _{yz}\\{\tfrac {1}{2}}\gamma _{zx}&{\tfrac {1}{2}}\gamma _{zy}&\varepsilon _{zz}\\\end{bmatrix}}}
=== Geometric setting ===
Consider a two-dimensional, infinitesimal, rectangular material element with dimensions dx × dy, which, after deformation, takes the form of a rhombus. The deformation is described by the displacement field u. From the geometry of the adjacent figure we have
{\displaystyle \mathrm {length} (AB)=dx}
and
{\displaystyle {\begin{aligned}\mathrm {length} (ab)&={\sqrt {\left(dx+{\frac {\partial u_{x}}{\partial x}}dx\right)^{2}+\left({\frac {\partial u_{y}}{\partial x}}dx\right)^{2}}}\\&={\sqrt {dx^{2}\left(1+{\frac {\partial u_{x}}{\partial x}}\right)^{2}+dx^{2}\left({\frac {\partial u_{y}}{\partial x}}\right)^{2}}}\\&=dx~{\sqrt {\left(1+{\frac {\partial u_{x}}{\partial x}}\right)^{2}+\left({\frac {\partial u_{y}}{\partial x}}\right)^{2}}}\end{aligned}}}
For very small displacement gradients the squares of the derivatives of {\displaystyle u_{y}} and {\displaystyle u_{x}} are negligible and we have
{\displaystyle \mathrm {length} (ab)\approx dx\left(1+{\frac {\partial u_{x}}{\partial x}}\right)=dx+{\frac {\partial u_{x}}{\partial x}}dx}
=== Normal strain ===
For an isotropic material that obeys Hooke's law, a normal stress will cause a normal strain. Normal strains produce dilations.
The normal strain in the x-direction of the rectangular element is defined by
{\displaystyle \varepsilon _{x}={\frac {\text{extension}}{\text{original length}}}={\frac {\mathrm {length} (ab)-\mathrm {length} (AB)}{\mathrm {length} (AB)}}={\frac {\partial u_{x}}{\partial x}}}
Similarly, the normal strain in the y- and z-directions becomes
{\displaystyle \varepsilon _{y}={\frac {\partial u_{y}}{\partial y}}\quad ,\qquad \varepsilon _{z}={\frac {\partial u_{z}}{\partial z}}}
=== Shear strain ===
The engineering shear strain (γxy) is defined as the change in angle between lines AC and AB. Therefore,
{\displaystyle \gamma _{xy}=\alpha +\beta }
From the geometry of the figure, we have
{\displaystyle {\begin{aligned}\tan \alpha &={\frac {{\tfrac {\partial u_{y}}{\partial x}}dx}{dx+{\tfrac {\partial u_{x}}{\partial x}}dx}}={\frac {\tfrac {\partial u_{y}}{\partial x}}{1+{\tfrac {\partial u_{x}}{\partial x}}}}\\\tan \beta &={\frac {{\tfrac {\partial u_{x}}{\partial y}}dy}{dy+{\tfrac {\partial u_{y}}{\partial y}}dy}}={\frac {\tfrac {\partial u_{x}}{\partial y}}{1+{\tfrac {\partial u_{y}}{\partial y}}}}\end{aligned}}}
For small displacement gradients we have
{\displaystyle {\frac {\partial u_{x}}{\partial x}}\ll 1~;~~{\frac {\partial u_{y}}{\partial y}}\ll 1}
For small rotations, i.e. α and β are ≪ 1 we have tan α ≈ α, tan β ≈ β. Therefore,
{\displaystyle \alpha \approx {\frac {\partial u_{y}}{\partial x}}~;~~\beta \approx {\frac {\partial u_{x}}{\partial y}}}
thus
{\displaystyle \gamma _{xy}=\alpha +\beta ={\frac {\partial u_{y}}{\partial x}}+{\frac {\partial u_{x}}{\partial y}}}
By interchanging x and y and ux and uy, it can be shown that γxy = γyx.
Similarly, for the yz- and xz-planes, we have
{\displaystyle \gamma _{yz}=\gamma _{zy}={\frac {\partial u_{y}}{\partial z}}+{\frac {\partial u_{z}}{\partial y}}\quad ,\qquad \gamma _{zx}=\gamma _{xz}={\frac {\partial u_{z}}{\partial x}}+{\frac {\partial u_{x}}{\partial z}}}
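A minimal symbolic sketch of these definitions for a hypothetical linear displacement field (using the sympy library):

```python
# Normal and engineering shear strains of a 2-D displacement field,
# computed symbolically. The displacement field is hypothetical.
import sympy as sp

x, y = sp.symbols('x y')
ux = 0.001 * x + 0.002 * y   # u_x(x, y)
uy = 0.003 * x - 0.001 * y   # u_y(x, y)

eps_x  = sp.diff(ux, x)                    # normal strain in x
eps_y  = sp.diff(uy, y)                    # normal strain in y
gam_xy = sp.diff(uy, x) + sp.diff(ux, y)   # engineering shear strain

print(eps_x, eps_y, gam_xy)   # -> 0.001, -0.001, 0.005
```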
=== Volume strain ===
== Metric tensor ==
A strain field associated with a displacement is defined, at any point, by the change in length of the tangent vectors representing the speeds of arbitrarily parametrized curves passing through that point. A basic geometric result, due to Fréchet, von Neumann and Jordan, states that, if the lengths of the tangent vectors fulfil the axioms of a norm and the parallelogram law, then the length of a vector is the square root of the value of the quadratic form associated, by the polarization formula, with a positive definite bilinear map called the metric tensor.
== See also ==
Stress measures
Strain rate
Strain tensor
== References ==
In continuum mechanics, the finite strain theory—also called large strain theory, or large deformation theory—deals with deformations in which strains and/or rotations are large enough to invalidate assumptions inherent in infinitesimal strain theory. In this case, the undeformed and deformed configurations of the continuum are significantly different, requiring a clear distinction between them. This is commonly the case with elastomers, plastically deforming materials and other fluids and biological soft tissue.
== Displacement field ==
== Deformation gradient tensor ==
The deformation gradient tensor {\displaystyle \mathbf {F} (\mathbf {X} ,t)=F_{jK}\mathbf {e} _{j}\otimes \mathbf {I} _{K}} is related to both the reference and current configuration, as seen by the unit vectors {\displaystyle \mathbf {e} _{j}} and {\displaystyle \mathbf {I} _{K}}; therefore it is a two-point tensor.
Two types of deformation gradient tensor may be defined.
Due to the assumption of continuity of {\displaystyle \chi (\mathbf {X} ,t)}, {\displaystyle \mathbf {F} } has the inverse {\displaystyle \mathbf {H} =\mathbf {F} ^{-1}}, where {\displaystyle \mathbf {H} } is the spatial deformation gradient tensor. Then, by the implicit function theorem, the Jacobian determinant {\displaystyle J(\mathbf {X} ,t)} must be nonsingular, i.e.
{\displaystyle J(\mathbf {X} ,t)=\det \mathbf {F} (\mathbf {X} ,t)\neq 0}
The material deformation gradient tensor {\displaystyle \mathbf {F} (\mathbf {X} ,t)=F_{jK}\mathbf {e} _{j}\otimes \mathbf {I} _{K}} is a second-order tensor that represents the gradient of the mapping function or functional relation {\displaystyle \chi (\mathbf {X} ,t)}, which describes the motion of a continuum. The material deformation gradient tensor characterizes the local deformation at a material point with position vector {\displaystyle \mathbf {X} }, i.e., deformation at neighbouring points, by transforming (linear transformation) a material line element emanating from that point from the reference configuration to the current or deformed configuration, assuming continuity in the mapping function {\displaystyle \chi (\mathbf {X} ,t)}, i.e., a differentiable function of {\displaystyle \mathbf {X} } and time {\displaystyle t}, which implies that cracks and voids do not open or close during the deformation. Thus we have,
{\displaystyle {\begin{aligned}d\mathbf {x} &={\frac {\partial \mathbf {x} }{\partial \mathbf {X} }}\,d\mathbf {X} \qquad &{\text{or}}&\qquad dx_{j}={\frac {\partial x_{j}}{\partial X_{K}}}\,dX_{K}\\&=\nabla \chi (\mathbf {X} ,t)\,d\mathbf {X} \qquad &{\text{or}}&\qquad dx_{j}=F_{jK}\,dX_{K}\,.\\&=\mathbf {F} (\mathbf {X} ,t)\,d\mathbf {X} \end{aligned}}}
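A minimal numerical sketch of this relation for the homogeneous motion x1 = X1 + k*X2, x2 = X2, x3 = X3 (simple shear); the shear parameter is illustrative.

```python
# Deformation gradient of simple shear and its action dx = F dX.
import numpy as np

k = 0.5
F = np.array([[1.0, k,   0.0],    # F_jK = d x_j / d X_K
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

dX = np.array([0.0, 1.0, 0.0])    # material line element along X_2
dx = F @ dX                       # its image in the current configuration
print(dx, np.linalg.det(F))       # [0.5 1. 0.], J = 1 (isochoric)
```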
=== Relative displacement vector ===
Consider a particle or material point {\displaystyle P} with position vector {\displaystyle \mathbf {X} =X_{I}\mathbf {I} _{I}} in the undeformed configuration (Figure 2). After a displacement of the body, the new position of the particle indicated by {\displaystyle p} in the new configuration is given by the vector position {\displaystyle \mathbf {x} =x_{i}\mathbf {e} _{i}}. The coordinate systems for the undeformed and deformed configuration can be superimposed for convenience.
Consider now a material point {\displaystyle Q} neighboring {\displaystyle P}, with position vector {\displaystyle \mathbf {X} +\Delta \mathbf {X} =(X_{I}+\Delta X_{I})\mathbf {I} _{I}}. In the deformed configuration this particle has a new position {\displaystyle q} given by the position vector {\displaystyle \mathbf {x} +\Delta \mathbf {x} }. Assuming that the line segments {\displaystyle \Delta \mathbf {X} } and {\displaystyle \Delta \mathbf {x} } joining the particles {\displaystyle P} and {\displaystyle Q} in the undeformed and deformed configurations, respectively, are very small, we can express them as {\displaystyle d\mathbf {X} } and {\displaystyle d\mathbf {x} }. Thus from Figure 2 we have
{\displaystyle {\begin{aligned}\mathbf {x} +d\mathbf {x} &=\mathbf {X} +d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )\\d\mathbf {x} &=\mathbf {X} -\mathbf {x} +d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )\\&=d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )-\mathbf {u} (\mathbf {X} )\\&=d\mathbf {X} +d\mathbf {u} \\\end{aligned}}}
where {\displaystyle d\mathbf {u} } is the relative displacement vector, which represents the relative displacement of {\displaystyle Q} with respect to {\displaystyle P} in the deformed configuration.
==== Taylor approximation ====
For an infinitesimal element {\displaystyle d\mathbf {X} }, and assuming continuity on the displacement field, it is possible to use a Taylor series expansion around point {\displaystyle P}, neglecting higher-order terms, to approximate the components of the relative displacement vector for the neighboring particle {\displaystyle Q} as
{\displaystyle {\begin{aligned}\mathbf {u} (\mathbf {X} +d\mathbf {X} )&=\mathbf {u} (\mathbf {X} )+d\mathbf {u} \quad &{\text{or}}&\quad u_{i}^{*}=u_{i}+du_{i}\\&\approx \mathbf {u} (\mathbf {X} )+\nabla _{\mathbf {X} }\mathbf {u} \cdot d\mathbf {X} \quad &{\text{or}}&\quad u_{i}^{*}\approx u_{i}+{\frac {\partial u_{i}}{\partial X_{J}}}dX_{J}\,.\end{aligned}}}
Thus, the previous equation {\displaystyle d\mathbf {x} =d\mathbf {X} +d\mathbf {u} } can be written as
{\displaystyle {\begin{aligned}d\mathbf {x} &=d\mathbf {X} +d\mathbf {u} \\&=d\mathbf {X} +\nabla _{\mathbf {X} }\mathbf {u} \cdot d\mathbf {X} \\&=\left(\mathbf {I} +\nabla _{\mathbf {X} }\mathbf {u} \right)d\mathbf {X} \\&=\mathbf {F} d\mathbf {X} \end{aligned}}}
=== Time-derivative of the deformation gradient ===
Calculations that involve the time-dependent deformation of a body often require a time derivative of the deformation gradient to be calculated. A geometrically consistent definition of such a derivative requires an excursion into differential geometry but we avoid those issues in this article.
The time derivative of {\displaystyle \mathbf {F} } is
{\displaystyle {\dot {\mathbf {F} }}={\frac {\partial \mathbf {F} }{\partial t}}={\frac {\partial }{\partial t}}\left[{\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial \mathbf {X} }}\right]={\frac {\partial }{\partial \mathbf {X} }}\left[{\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial t}}\right]={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {V} (\mathbf {X} ,t)\right]}
where {\displaystyle \mathbf {V} } is the (material) velocity. The derivative on the right hand side represents a material velocity gradient. It is common to convert that into a spatial gradient by applying the chain rule for derivatives, i.e.,
{\displaystyle {\dot {\mathbf {F} }}={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {V} (\mathbf {X} ,t)\right]={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {v} (\mathbf {x} (\mathbf {X} ,t),t)\right]=\left.{\frac {\partial }{\partial \mathbf {x} }}\left[\mathbf {v} (\mathbf {x} ,t)\right]\right|_{\mathbf {x} =\mathbf {x} (\mathbf {X} ,t)}\cdot {\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial \mathbf {X} }}={\boldsymbol {l}}\cdot \mathbf {F} }
where {\displaystyle {\boldsymbol {l}}=(\nabla _{\mathbf {x} }\mathbf {v} )^{T}} is the spatial velocity gradient and where {\displaystyle \mathbf {v} (\mathbf {x} ,t)=\mathbf {V} (\mathbf {X} ,t)} is the spatial (Eulerian) velocity at {\displaystyle \mathbf {x} =\mathbf {x} (\mathbf {X} ,t)}. If the spatial velocity gradient is constant in time, the above equation can be solved exactly to give
{\displaystyle \mathbf {F} =e^{{\boldsymbol {l}}\,t}}
assuming {\displaystyle \mathbf {F} =\mathbf {1} } at {\displaystyle t=0}. There are several methods of computing the exponential above.
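One standard route is the matrix exponential; a minimal sketch for an illustrative, constant spatial velocity gradient:

```python
# F(t) = exp(l * t) for a constant spatial velocity gradient l.
import numpy as np
from scipy.linalg import expm

l = np.array([[0.0, 0.1, 0.0],   # steady simple-shearing flow (illustrative)
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
t = 2.0

F = expm(l * t)   # solves dF/dt = l . F with F(0) = I
print(F)          # here l is nilpotent, so F = I + l*t exactly
```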
Related quantities often used in continuum mechanics are the rate of deformation tensor and the spin tensor defined, respectively, as:
{\displaystyle {\boldsymbol {d}}={\tfrac {1}{2}}\left({\boldsymbol {l}}+{\boldsymbol {l}}^{T}\right)\,,~~{\boldsymbol {w}}={\tfrac {1}{2}}\left({\boldsymbol {l}}-{\boldsymbol {l}}^{T}\right)\,.}
The rate of deformation tensor gives the rate of stretching of line elements while the spin tensor indicates the rate of rotation or vorticity of the motion.
The material time derivative of the inverse of the deformation gradient (keeping the reference configuration fixed) is often required in analyses that involve finite strains. This derivative is
{\displaystyle {\frac {\partial }{\partial t}}\left(\mathbf {F} ^{-1}\right)=-\mathbf {F} ^{-1}\cdot {\dot {\mathbf {F} }}\cdot \mathbf {F} ^{-1}\,.}
The above relation can be verified by taking the material time derivative of {\displaystyle \mathbf {F} ^{-1}\cdot d\mathbf {x} =d\mathbf {X} } and noting that {\displaystyle {\dot {\mathbf {X} }}=0}.
=== Polar decomposition of the deformation gradient tensor ===
The deformation gradient {\displaystyle \mathbf {F} }, like any invertible second-order tensor, can be decomposed, using the polar decomposition theorem, into a product of two second-order tensors (Truesdell and Noll, 1965): an orthogonal tensor and a positive definite symmetric tensor, i.e.,
{\displaystyle \mathbf {F} =\mathbf {R} \mathbf {U} =\mathbf {V} \mathbf {R} }
where the tensor {\displaystyle \mathbf {R} } is a proper orthogonal tensor, i.e., {\displaystyle \mathbf {R} ^{-1}=\mathbf {R} ^{T}} and {\displaystyle \det \mathbf {R} =+1}, representing a rotation; the tensor {\displaystyle \mathbf {U} } is the right stretch tensor; and {\displaystyle \mathbf {V} } the left stretch tensor. The terms right and left mean that they are to the right and left of the rotation tensor {\displaystyle \mathbf {R} }, respectively. {\displaystyle \mathbf {U} } and {\displaystyle \mathbf {V} } are both positive definite, i.e. {\displaystyle \mathbf {x} \cdot \mathbf {U} \cdot \mathbf {x} >0} and {\displaystyle \mathbf {x} \cdot \mathbf {V} \cdot \mathbf {x} >0} for all non-zero {\displaystyle \mathbf {x} \in \mathbb {R} ^{3}}, and symmetric, i.e. {\displaystyle \mathbf {U} =\mathbf {U} ^{T}} and {\displaystyle \mathbf {V} =\mathbf {V} ^{T}}, second-order tensors.
This decomposition implies that the deformation of a line element {\displaystyle d\mathbf {X} } in the undeformed configuration onto {\displaystyle d\mathbf {x} } in the deformed configuration, i.e., {\displaystyle d\mathbf {x} =\mathbf {F} \,d\mathbf {X} }, may be obtained either by first stretching the element by {\displaystyle \mathbf {U} }, i.e. {\displaystyle d\mathbf {x} '=\mathbf {U} \,d\mathbf {X} }, followed by a rotation {\displaystyle \mathbf {R} }, i.e., {\displaystyle d\mathbf {x} =\mathbf {R} \,d\mathbf {x} '}; or equivalently, by applying a rigid rotation {\displaystyle \mathbf {R} } first, i.e., {\displaystyle d\mathbf {x} '=\mathbf {R} \,d\mathbf {X} }, followed later by a stretching {\displaystyle \mathbf {V} }, i.e., {\displaystyle d\mathbf {x} =\mathbf {V} \,d\mathbf {x} '} (see Figure 3).
Due to the orthogonality of {\displaystyle \mathbf {R} },
{\displaystyle \mathbf {V} =\mathbf {R} \cdot \mathbf {U} \cdot \mathbf {R} ^{T}}
so that {\displaystyle \mathbf {U} } and {\displaystyle \mathbf {V} } have the same eigenvalues or principal stretches, but different eigenvectors or principal directions {\displaystyle \mathbf {N} _{i}} and {\displaystyle \mathbf {n} _{i}}, respectively. The principal directions are related by
{\displaystyle \mathbf {n} _{i}=\mathbf {R} \mathbf {N} _{i}.}
This polar decomposition, which is unique as {\displaystyle \mathbf {F} } is invertible with a positive determinant, is a corollary of the singular-value decomposition.
=== Transformation of a surface and volume element ===
To transform quantities that are defined with respect to areas in a deformed configuration to those relative to areas in a reference configuration, and vice versa, we use Nanson's relation, expressed as
{\displaystyle da~\mathbf {n} =J~dA~\mathbf {F} ^{-T}\cdot \mathbf {N} }
where {\displaystyle da} is an area of a region in the deformed configuration, {\displaystyle dA} is the same area in the reference configuration, {\displaystyle \mathbf {n} } is the outward normal to the area element in the current configuration, {\displaystyle \mathbf {N} } is the outward normal in the reference configuration, {\displaystyle \mathbf {F} } is the deformation gradient, and {\displaystyle J=\det \mathbf {F} }.
The corresponding formula for the transformation of the volume element is
{\displaystyle dv=J~dV}
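A quick numerical check of both transformation rules, with an illustrative deformation gradient and reference area element:

```python
# Nanson's relation da*n = J * dA * F^{-T} . N and the volume rule dv = J * dV.
import numpy as np

F = np.array([[1.2, 0.3, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.1]])
J = np.linalg.det(F)

N = np.array([0.0, 0.0, 1.0])   # reference unit normal
dA = 2.0                        # reference area (illustrative)

da_n = J * dA * np.linalg.inv(F).T @ N   # vector da * n
da = np.linalg.norm(da_n)                # deformed area
n = da_n / da                            # deformed unit normal
print(f"J = {J:.4f}, dv/dV = {J:.4f}, da = {da:.4f}, n = {np.round(n, 4)}")
```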
== Fundamental strain tensors ==
A strain tensor is defined by the IUPAC as:
"A symmetric tensor that results when a deformation gradient tensor is factorized into a rotation tensor followed or preceded by a symmetric tensor".
Since a pure rotation should not induce any strains in a deformable body, it is often convenient to use rotation-independent measures of deformation in continuum mechanics. As a rotation followed by its inverse rotation leads to no change ({\displaystyle \mathbf {R} \mathbf {R} ^{T}=\mathbf {R} ^{T}\mathbf {R} =\mathbf {I} }) we can exclude the rotation by multiplying the deformation gradient tensor {\displaystyle \mathbf {F} } by its transpose.
Several rotation-independent deformation gradient tensors (or "deformation tensors", for short) are used in mechanics. In solid mechanics, the most popular of these are the right and left Cauchy–Green deformation tensors.
=== Cauchy strain tensor (right Cauchy–Green deformation tensor) ===
In 1839, George Green introduced a deformation tensor known as the right Cauchy–Green deformation tensor or Green's deformation tensor (the IUPAC recommends that this tensor be called the Cauchy strain tensor), defined as:
{\displaystyle \mathbf {C} =\mathbf {F} ^{T}\mathbf {F} =\mathbf {U} ^{2}\qquad {\text{or}}\qquad C_{IJ}=F_{kI}~F_{kJ}={\frac {\partial x_{k}}{\partial X_{I}}}{\frac {\partial x_{k}}{\partial X_{J}}}.}
Physically, the Cauchy–Green tensor gives us the square of local change in distances due to deformation, i.e.
{\displaystyle d\mathbf {x} ^{2}=d\mathbf {X} \cdot \mathbf {C} \cdot d\mathbf {X} }
Invariants of {\displaystyle \mathbf {C} } are often used in the expressions for strain energy density functions. The most commonly used invariants are
{\displaystyle {\begin{aligned}I_{1}^{C}&:={\text{tr}}(\mathbf {C} )=C_{II}=\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}\\I_{2}^{C}&:={\tfrac {1}{2}}\left[({\text{tr}}~\mathbf {C} )^{2}-{\text{tr}}(\mathbf {C} ^{2})\right]={\tfrac {1}{2}}\left[(C_{JJ})^{2}-C_{IK}C_{KI}\right]=\lambda _{1}^{2}\lambda _{2}^{2}+\lambda _{2}^{2}\lambda _{3}^{2}+\lambda _{3}^{2}\lambda _{1}^{2}\\I_{3}^{C}&:=\det(\mathbf {C} )=J^{2}=\lambda _{1}^{2}\lambda _{2}^{2}\lambda _{3}^{2}.\end{aligned}}}
where {\displaystyle J:=\det \mathbf {F} } is the determinant of the deformation gradient {\displaystyle \mathbf {F} } and {\displaystyle \lambda _{i}} are stretch ratios for the unit fibers that are initially oriented along the eigenvector directions of the right (reference) stretch tensor (these are not generally aligned with the three axes of the coordinate systems).
=== Finger strain tensor ===
The IUPAC recommends that the inverse of the right Cauchy–Green deformation tensor (called the Cauchy strain tensor in that document), i.e., {\displaystyle \mathbf {C} ^{-1}}, be called the Finger strain tensor. However, that nomenclature is not universally accepted in applied mechanics.
{\displaystyle \mathbf {f} =\mathbf {C} ^{-1}=\mathbf {F} ^{-1}\mathbf {F} ^{-T}\qquad {\text{or}}\qquad f_{IJ}={\frac {\partial X_{I}}{\partial x_{k}}}{\frac {\partial X_{J}}{\partial x_{k}}}}
=== Green strain tensor (left Cauchy–Green deformation tensor) ===
Reversing the order of multiplication in the formula for the right Cauchy-Green deformation tensor leads to the left Cauchy–Green deformation tensor which is defined as:
{\displaystyle \mathbf {B} =\mathbf {F} \mathbf {F} ^{T}=\mathbf {V} ^{2}\qquad {\text{or}}\qquad B_{ij}={\frac {\partial x_{i}}{\partial X_{K}}}{\frac {\partial x_{j}}{\partial X_{K}}}}
The left Cauchy–Green deformation tensor is often called the Finger deformation tensor, named after Josef Finger (1894).
The IUPAC recommends that this tensor be called the Green strain tensor.
Invariants of {\displaystyle \mathbf {B} } are also used in the expressions for strain energy density functions. The conventional invariants are defined as
{\displaystyle {\begin{aligned}I_{1}&:={\text{tr}}(\mathbf {B} )=B_{ii}=\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}\\I_{2}&:={\tfrac {1}{2}}\left[({\text{tr}}~\mathbf {B} )^{2}-{\text{tr}}(\mathbf {B} ^{2})\right]={\tfrac {1}{2}}\left(B_{ii}^{2}-B_{jk}B_{kj}\right)=\lambda _{1}^{2}\lambda _{2}^{2}+\lambda _{2}^{2}\lambda _{3}^{2}+\lambda _{3}^{2}\lambda _{1}^{2}\\I_{3}&:=\det \mathbf {B} =J^{2}=\lambda _{1}^{2}\lambda _{2}^{2}\lambda _{3}^{2}\end{aligned}}}
where {\displaystyle J:=\det \mathbf {F} } is the determinant of the deformation gradient.
For compressible materials, a slightly different set of invariants is used:
{\displaystyle ({\bar {I}}_{1}:=J^{-2/3}I_{1}~;~~{\bar {I}}_{2}:=J^{-4/3}I_{2}~;~~J\neq 1)~.}
=== Piola strain tensor (Cauchy deformation tensor) ===
Earlier, in 1828, Augustin-Louis Cauchy introduced a deformation tensor defined as the inverse of the left Cauchy–Green deformation tensor, {\displaystyle \mathbf {B} ^{-1}}. This tensor has also been called the Piola strain tensor by the IUPAC and the Finger tensor in the rheology and fluid dynamics literature.
{\displaystyle \mathbf {c} =\mathbf {B} ^{-1}=\mathbf {F} ^{-T}\mathbf {F} ^{-1}\qquad {\text{or}}\qquad c_{ij}={\frac {\partial X_{K}}{\partial x_{i}}}{\frac {\partial X_{K}}{\partial x_{j}}}}
=== Spectral representation ===
If there are three distinct principal stretches {\displaystyle \lambda _{i}}, the spectral decompositions of {\displaystyle \mathbf {C} } and {\displaystyle \mathbf {B} } are given by
{\displaystyle \mathbf {C} =\sum _{i=1}^{3}\lambda _{i}^{2}\mathbf {N} _{i}\otimes \mathbf {N} _{i}\qquad {\text{and}}\qquad \mathbf {B} =\sum _{i=1}^{3}\lambda _{i}^{2}\mathbf {n} _{i}\otimes \mathbf {n} _{i}}
Furthermore,
{\displaystyle \mathbf {U} =\sum _{i=1}^{3}\lambda _{i}\mathbf {N} _{i}\otimes \mathbf {N} _{i}~;~~\mathbf {V} =\sum _{i=1}^{3}\lambda _{i}\mathbf {n} _{i}\otimes \mathbf {n} _{i}}
{\displaystyle \mathbf {R} =\sum _{i=1}^{3}\mathbf {n} _{i}\otimes \mathbf {N} _{i}~;~~\mathbf {F} =\sum _{i=1}^{3}\lambda _{i}\mathbf {n} _{i}\otimes \mathbf {N} _{i}}
Observe that
{\displaystyle \mathbf {V} =\mathbf {R} ~\mathbf {U} ~\mathbf {R} ^{T}=\sum _{i=1}^{3}\lambda _{i}~\mathbf {R} ~(\mathbf {N} _{i}\otimes \mathbf {N} _{i})~\mathbf {R} ^{T}=\sum _{i=1}^{3}\lambda _{i}~(\mathbf {R} ~\mathbf {N} _{i})\otimes (\mathbf {R} ~\mathbf {N} _{i})}
Therefore, the uniqueness of the spectral decomposition also implies that {\displaystyle \mathbf {n} _{i}=\mathbf {R} ~\mathbf {N} _{i}}. The left stretch ({\displaystyle \mathbf {V} }) is also called the spatial stretch tensor while the right stretch ({\displaystyle \mathbf {U} }) is called the material stretch tensor.
The effect of {\displaystyle \mathbf {F} } acting on {\displaystyle \mathbf {N} _{i}} is to stretch the vector by {\displaystyle \lambda _{i}} and to rotate it to the new orientation {\displaystyle \mathbf {n} _{i}}, i.e.,
{\displaystyle \mathbf {F} ~\mathbf {N} _{i}=\lambda _{i}~(\mathbf {R} ~\mathbf {N} _{i})=\lambda _{i}~\mathbf {n} _{i}}
In a similar vein,
{\displaystyle \mathbf {F} ^{-T}~\mathbf {N} _{i}={\cfrac {1}{\lambda _{i}}}~\mathbf {n} _{i}~;~~\mathbf {F} ^{T}~\mathbf {n} _{i}=\lambda _{i}~\mathbf {N} _{i}~;~~\mathbf {F} ^{-1}~\mathbf {n} _{i}={\cfrac {1}{\lambda _{i}}}~\mathbf {N} _{i}~.}
==== Examples ====
Uniaxial extension of an incompressible material
This is the case where a specimen is stretched in the 1-direction with a stretch ratio of {\displaystyle \alpha =\alpha _{1}}. If the volume remains constant, the contraction in the other two directions is such that {\displaystyle \alpha _{1}\alpha _{2}\alpha _{3}=1} or {\displaystyle \alpha _{2}=\alpha _{3}=\alpha ^{-0.5}}. Then:
{\displaystyle \mathbf {F} ={\begin{bmatrix}\alpha &0&0\\0&\alpha ^{-0.5}&0\\0&0&\alpha ^{-0.5}\end{bmatrix}}}
{\displaystyle \mathbf {B} =\mathbf {C} ={\begin{bmatrix}\alpha ^{2}&0&0\\0&\alpha ^{-1}&0\\0&0&\alpha ^{-1}\end{bmatrix}}}
Simple shear
{\displaystyle \mathbf {F} ={\begin{bmatrix}1&\gamma &0\\0&1&0\\0&0&1\end{bmatrix}}}
{\displaystyle \mathbf {B} ={\begin{bmatrix}1+\gamma ^{2}&\gamma &0\\\gamma &1&0\\0&0&1\end{bmatrix}}}
{\displaystyle \mathbf {C} ={\begin{bmatrix}1&\gamma &0\\\gamma &1+\gamma ^{2}&0\\0&0&1\end{bmatrix}}}
Rigid body rotation
{\displaystyle \mathbf {F} ={\begin{bmatrix}\cos \theta &\sin \theta &0\\-\sin \theta &\cos \theta &0\\0&0&1\end{bmatrix}}}
{\displaystyle \mathbf {B} =\mathbf {C} ={\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}=\mathbf {1} }
=== Derivatives of stretch ===
Derivatives of the stretch with respect to the right Cauchy–Green deformation tensor are used to derive the stress-strain relations of many solids, particularly hyperelastic materials. These derivatives are
{\displaystyle {\cfrac {\partial \lambda _{i}}{\partial \mathbf {C} }}={\cfrac {1}{2\lambda _{i}}}~\mathbf {N} _{i}\otimes \mathbf {N} _{i}={\cfrac {1}{2\lambda _{i}}}~\mathbf {R} ^{T}~(\mathbf {n} _{i}\otimes \mathbf {n} _{i})~\mathbf {R} ~;~~i=1,2,3}
and follow from the observations that
{\displaystyle \mathbf {C} :(\mathbf {N} _{i}\otimes \mathbf {N} _{i})=\lambda _{i}^{2}~;~~~~{\cfrac {\partial \mathbf {C} }{\partial \mathbf {C} }}={\mathsf {I}}^{(s)}~;~~~~{\mathsf {I}}^{(s)}:(\mathbf {N} _{i}\otimes \mathbf {N} _{i})=\mathbf {N} _{i}\otimes \mathbf {N} _{i}.}
=== Physical interpretation of deformation tensors ===
Let {\displaystyle \mathbf {X} =X^{i}~{\boldsymbol {E}}_{i}} be a Cartesian coordinate system defined on the undeformed body and let {\displaystyle \mathbf {x} =x^{i}~{\boldsymbol {E}}_{i}} be another system defined on the deformed body. Let a curve {\displaystyle \mathbf {X} (s)} in the undeformed body be parametrized using {\displaystyle s\in [0,1]}. Its image in the deformed body is {\displaystyle \mathbf {x} (\mathbf {X} (s))}.
The undeformed length of the curve is given by
{\displaystyle l_{X}=\int _{0}^{1}\left|{\cfrac {d\mathbf {X} }{ds}}\right|~ds=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot {\boldsymbol {I}}\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds}
After deformation, the length becomes
{\displaystyle {\begin{aligned}l_{x}&=\int _{0}^{1}\left|{\cfrac {d\mathbf {x} }{ds}}\right|~ds=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {x} }{ds}}\cdot {\cfrac {d\mathbf {x} }{ds}}}}~ds=\int _{0}^{1}{\sqrt {\left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\cdot {\cfrac {d\mathbf {X} }{ds}}\right)\cdot \left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\cdot {\cfrac {d\mathbf {X} }{ds}}\right)}}~ds\\&=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot \left[\left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\right)^{T}\cdot {\cfrac {d\mathbf {x} }{d\mathbf {X} }}\right]\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds\end{aligned}}}
Note that the right Cauchy–Green deformation tensor is defined as
{\displaystyle {\boldsymbol {C}}:={\boldsymbol {F}}^{T}\cdot {\boldsymbol {F}}=\left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\right)^{T}\cdot {\cfrac {d\mathbf {x} }{d\mathbf {X} }}}
Hence,
{\displaystyle l_{x}=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot {\boldsymbol {C}}\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds}
which indicates that changes in length are characterized by {\displaystyle {\boldsymbol {C}}}.
== Finite strain tensors ==
The concept of strain is used to evaluate how much a given displacement differs locally from a rigid body displacement. One such strain measure for large deformations is the Lagrangian finite strain tensor, also called the Green–Lagrangian strain tensor or Green–St-Venant strain tensor, defined as
{\displaystyle \mathbf {E} ={\frac {1}{2}}(\mathbf {C} -\mathbf {I} )\qquad {\text{or}}\qquad E_{KL}={\frac {1}{2}}\left({\frac {\partial x_{j}}{\partial X_{K}}}{\frac {\partial x_{j}}{\partial X_{L}}}-\delta _{KL}\right)}
or as a function of the displacement gradient tensor
{\displaystyle \mathbf {E} ={\frac {1}{2}}\left[(\nabla _{\mathbf {X} }\mathbf {u} )^{T}+\nabla _{\mathbf {X} }\mathbf {u} +(\nabla _{\mathbf {X} }\mathbf {u} )^{T}\cdot \nabla _{\mathbf {X} }\mathbf {u} \right]}
or
{\displaystyle E_{KL}={\frac {1}{2}}\left({\frac {\partial u_{K}}{\partial X_{L}}}+{\frac {\partial u_{L}}{\partial X_{K}}}+{\frac {\partial u_{M}}{\partial X_{K}}}{\frac {\partial u_{M}}{\partial X_{L}}}\right)}
The Green–Lagrangian strain tensor is a measure of how much {\displaystyle \mathbf {C} } differs from {\displaystyle \mathbf {I} }.
The Eulerian finite strain tensor, or Eulerian-Almansi finite strain tensor, referenced to the deformed configuration (i.e. Eulerian description) is defined as
{\displaystyle \mathbf {e} ={\frac {1}{2}}(\mathbf {I} -\mathbf {c} )={\frac {1}{2}}(\mathbf {I} -\mathbf {B} ^{-1})\qquad {\text{or}}\qquad e_{rs}={\frac {1}{2}}\left(\delta _{rs}-{\frac {\partial X_{M}}{\partial x_{r}}}{\frac {\partial X_{M}}{\partial x_{s}}}\right)}
or as a function of the displacement gradients we have
{\displaystyle e_{ij}={\frac {1}{2}}\left({\frac {\partial u_{i}}{\partial x_{j}}}+{\frac {\partial u_{j}}{\partial x_{i}}}-{\frac {\partial u_{k}}{\partial x_{i}}}{\frac {\partial u_{k}}{\partial x_{j}}}\right)}
=== Seth–Hill family of generalized strain tensors ===
B. R. Seth from the Indian Institute of Technology Kharagpur was the first to show that the Green and Almansi strain tensors are special cases of a more general strain measure. The idea was further expanded upon by Rodney Hill in 1968. The Seth–Hill family of strain measures (also called Doyle-Ericksen tensors) can be expressed as
{\displaystyle \mathbf {E} _{(m)}={\frac {1}{2m}}(\mathbf {U} ^{2m}-\mathbf {I} )={\frac {1}{2m}}\left[\mathbf {C} ^{m}-\mathbf {I} \right]}
For different values of {\displaystyle m} we have:
Green-Lagrangian strain tensor
{\displaystyle \mathbf {E} _{(1)}={\frac {1}{2}}(\mathbf {U} ^{2}-\mathbf {I} )={\frac {1}{2}}(\mathbf {C} -\mathbf {I} )}
Biot strain tensor
{\displaystyle \mathbf {E} _{(1/2)}=(\mathbf {U} -\mathbf {I} )=\mathbf {C} ^{1/2}-\mathbf {I} }
Logarithmic strain, Natural strain, True strain, or Hencky strain
{\displaystyle \mathbf {E} _{(0)}=\ln \mathbf {U} ={\frac {1}{2}}\,\ln \mathbf {C} }
Almansi strain
{\displaystyle \mathbf {E} _{(-1)}={\frac {1}{2}}\left[\mathbf {I} -\mathbf {U} ^{-2}\right]}
The second-order approximation of these tensors is
{\displaystyle \mathbf {E} _{(m)}={\boldsymbol {\varepsilon }}+{\tfrac {1}{2}}(\nabla \mathbf {u} )^{T}\cdot \nabla \mathbf {u} -(1-m){\boldsymbol {\varepsilon }}^{T}\cdot {\boldsymbol {\varepsilon }}}
where {\displaystyle {\boldsymbol {\varepsilon }}} is the infinitesimal strain tensor.
Many other different definitions of tensors {\displaystyle \mathbf {E} } are admissible, provided that they all satisfy the conditions that:
{\displaystyle \mathbf {E} } vanishes for all rigid-body motions
the dependence of {\displaystyle \mathbf {E} } on the displacement gradient tensor {\displaystyle \nabla \mathbf {u} } is continuous, continuously differentiable and monotonic
it is also desired that {\displaystyle \mathbf {E} } reduces to the infinitesimal strain tensor {\displaystyle {\boldsymbol {\varepsilon }}} as the norm {\displaystyle |\nabla \mathbf {u} |\to 0}
An example is the set of tensors
{\displaystyle \mathbf {E} ^{(n)}=\left({\mathbf {U} }^{n}-{\mathbf {U} }^{-n}\right)/2n}
which do not belong to the Seth–Hill class, but have the same 2nd-order approximation as the Seth–Hill measures at {\displaystyle m=0} for any value of {\displaystyle n}.
=== Physical interpretation of the finite strain tensor ===
The diagonal components {\displaystyle E_{KL}} of the Lagrangian finite strain tensor are related to the normal strain, e.g.
{\displaystyle E_{11}=e_{(\mathbf {I} _{1})}+{\frac {1}{2}}e_{(\mathbf {I} _{1})}^{2}}
where
e
(
I
1
)
{\displaystyle e_{(\mathbf {I} _{1})}}
is the normal strain or engineering strain in the direction
I
1
{\displaystyle \mathbf {I} _{1}\,\!}
.
The off-diagonal components E_KL of the Lagrangian finite strain tensor are related to shear strain, e.g.
{\displaystyle E_{12}={\frac {1}{2}}{\sqrt {2E_{11}+1}}{\sqrt {2E_{22}+1}}\sin \phi _{12}}
where ϕ12 is the change in the angle between two line elements that were originally perpendicular with directions I1 and I2, respectively.
Under certain circumstances, i.e. small displacements and small displacement rates, the components of the Lagrangian finite strain tensor may be approximated by the components of the infinitesimal strain tensor.
== Compatibility conditions ==
The problem of compatibility in continuum mechanics involves the determination of allowable single-valued continuous fields on bodies. These allowable conditions leave the body without unphysical gaps or overlaps after a deformation. Most such conditions apply to simply-connected bodies. Additional conditions are required for the internal boundaries of multiply connected bodies.
=== Compatibility of the deformation gradient ===
The necessary and sufficient conditions for the existence of a compatible F field over a simply connected body are
{\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {F}}={\boldsymbol {0}}}
=== Compatibility of the right Cauchy–Green deformation tensor ===
The necessary and sufficient conditions for the existence of a compatible C field over a simply connected body are
{\displaystyle R_{\alpha \beta \rho }^{\gamma }:={\frac {\partial }{\partial X^{\rho }}}[\,_{(X)}\Gamma _{\alpha \beta }^{\gamma }]-{\frac {\partial }{\partial X^{\beta }}}[\,_{(X)}\Gamma _{\alpha \rho }^{\gamma }]+\,_{(X)}\Gamma _{\mu \rho }^{\gamma }\,_{(X)}\Gamma _{\alpha \beta }^{\mu }-\,_{(X)}\Gamma _{\mu \beta }^{\gamma }\,_{(X)}\Gamma _{\alpha \rho }^{\mu }=0}
We can show these are the mixed components of the Riemann–Christoffel curvature tensor. Therefore, the necessary conditions for C-compatibility are that the Riemann–Christoffel curvature of the deformation is zero.
=== Compatibility of the left Cauchy–Green deformation tensor ===
General sufficiency conditions for the left Cauchy–Green deformation tensor in three dimensions were derived by Amit Acharya. Compatibility conditions for two-dimensional B fields were found by Janet Blume.
== See also ==
Infinitesimal strain
Compatibility (mechanics)
Curvilinear coordinates
Piola–Kirchhoff stress tensor, the stress tensor for finite deformations.
Stress measures
Strain partitioning
== References ==
== Further reading ==
Dill, Ellis Harold (2006). Continuum Mechanics: Elasticity, Plasticity, Viscoelasticity. Germany: CRC Press. ISBN 0-8493-9779-0.
Dimitrienko, Yuriy (2011). Nonlinear Continuum Mechanics and Large Inelastic Deformations. Germany: Springer. ISBN 978-94-007-0033-8.
Hutter, Kolumban; Klaus Jöhnk (2004). Continuum Methods of Physical Modeling. Germany: Springer. ISBN 3-540-20619-1.
Lubarda, Vlado A. (2001). Elastoplasticity Theory. CRC Press. ISBN 0-8493-1138-1.
Macosko, C. W. (1994). Rheology: principles, measurement and applications. VCH Publishers. ISBN 1-56081-579-5.
Mase, George E. (1970). Continuum Mechanics. McGraw-Hill Professional. ISBN 0-07-040663-4.
Mase, G. Thomas; George E. Mase (1999). Continuum Mechanics for Engineers (Second ed.). CRC Press. ISBN 0-8493-1855-6.
Nemat-Nasser, Sia (2006). Plasticity: A Treatise on Finite Deformation of Heterogeneous Inelastic Materials. Cambridge: Cambridge University Press. ISBN 0-521-83979-3.
Rees, David (2006). Basic Engineering Plasticity – An Introduction with Engineering and Manufacturing Applications. Butterworth-Heinemann. ISBN 0-7506-8025-3.
== External links ==
Prof. Amit Acharya's notes on compatibility on iMechanica | Wikipedia/Finite_strain_theory |
In continuum mechanics, the finite strain theory—also called large strain theory, or large deformation theory—deals with deformations in which strains and/or rotations are large enough to invalidate assumptions inherent in infinitesimal strain theory. In this case, the undeformed and deformed configurations of the continuum are significantly different, requiring a clear distinction between them. This is commonly the case with elastomers, plastically deforming materials, fluids, and biological soft tissue.
== Displacement field ==
== Deformation gradient tensor ==
The deformation gradient tensor
{\displaystyle \mathbf {F} (\mathbf {X} ,t)=F_{jK}\mathbf {e} _{j}\otimes \mathbf {I} _{K}}
is related to both the reference and current configuration, as seen by the unit vectors e_j and I_K, therefore it is a two-point tensor.
Two types of deformation gradient tensor may be defined.
Due to the assumption of continuity of χ(X, t), F has the inverse H = F⁻¹, where H is the spatial deformation gradient tensor. Then, by the implicit function theorem, the Jacobian determinant J(X, t) must be nonsingular, i.e.
{\displaystyle J(\mathbf {X} ,t)=\det \mathbf {F} (\mathbf {X} ,t)\neq 0}
The material deformation gradient tensor
{\displaystyle \mathbf {F} (\mathbf {X} ,t)=F_{jK}\mathbf {e} _{j}\otimes \mathbf {I} _{K}}
is a second-order tensor that represents the gradient of the mapping function or functional relation χ(X, t), which describes the motion of a continuum. The material deformation gradient tensor characterizes the local deformation at a material point with position vector X, i.e., deformation at neighbouring points, by transforming (linear transformation) a material line element emanating from that point from the reference configuration to the current or deformed configuration, assuming continuity in the mapping function χ(X, t), i.e. a differentiable function of X and time t, which implies that cracks and voids do not open or close during the deformation. Thus we have,
{\displaystyle {\begin{aligned}d\mathbf {x} &={\frac {\partial \mathbf {x} }{\partial \mathbf {X} }}\,d\mathbf {X} \qquad &{\text{or}}&\qquad dx_{j}={\frac {\partial x_{j}}{\partial X_{K}}}\,dX_{K}\\&=\nabla \chi (\mathbf {X} ,t)\,d\mathbf {X} \qquad &{\text{or}}&\qquad dx_{j}=F_{jK}\,dX_{K}\,.\\&=\mathbf {F} (\mathbf {X} ,t)\,d\mathbf {X} \end{aligned}}}
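To make the linear mapping dx = F dX concrete, here is a minimal NumPy sketch (the motion and numerical values are assumed for illustration, not taken from the article) that builds F for a homogeneous simple-shear motion and applies it to a material line element.

```python
import numpy as np

# Homogeneous motion x = chi(X): simple shear x1 = X1 + g*X2, x2 = X2, x3 = X3
g = 0.3                              # assumed shear amount
F = np.array([[1.0, g,   0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])      # deformation gradient dx/dX

dX = np.array([0.0, 1.0, 0.0])       # material line element in the reference body
dx = F @ dX                          # its image in the deformed body
print(dx)                            # -> [0.3, 1.0, 0.0]
```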
=== Relative displacement vector ===
Consider a particle or material point P with position vector X = X_I I_I in the undeformed configuration (Figure 2). After a displacement of the body, the new position of the particle indicated by p in the new configuration is given by the vector position x = x_i e_i. The coordinate systems for the undeformed and deformed configuration can be superimposed for convenience.
Consider now a material point Q neighboring P, with position vector X + ΔX = (X_I + ΔX_I) I_I. In the deformed configuration this particle has a new position q given by the position vector x + Δx. Assuming that the line segments ΔX and Δx joining the particles P and Q in the undeformed and deformed configuration, respectively, are very small, we can express them as dX and dx. Thus from Figure 2 we have
{\displaystyle {\begin{aligned}\mathbf {x} +d\mathbf {x} &=\mathbf {X} +d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )\\d\mathbf {x} &=\mathbf {X} -\mathbf {x} +d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )\\&=d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )-\mathbf {u} (\mathbf {X} )\\&=d\mathbf {X} +d\mathbf {u} \\\end{aligned}}}
where du is the relative displacement vector, which represents the relative displacement of Q with respect to P in the deformed configuration.
==== Taylor approximation ====
For an infinitesimal element dX, and assuming continuity on the displacement field, it is possible to use a Taylor series expansion around point P, neglecting higher-order terms, to approximate the components of the relative displacement vector for the neighboring particle Q as
{\displaystyle {\begin{aligned}\mathbf {u} (\mathbf {X} +d\mathbf {X} )&=\mathbf {u} (\mathbf {X} )+d\mathbf {u} \quad &{\text{or}}&\quad u_{i}^{*}=u_{i}+du_{i}\\&\approx \mathbf {u} (\mathbf {X} )+\nabla _{\mathbf {X} }\mathbf {u} \cdot d\mathbf {X} \quad &{\text{or}}&\quad u_{i}^{*}\approx u_{i}+{\frac {\partial u_{i}}{\partial X_{J}}}dX_{J}\,.\end{aligned}}}
Thus, the previous equation dx = dX + du can be written as
{\displaystyle {\begin{aligned}d\mathbf {x} &=d\mathbf {X} +d\mathbf {u} \\&=d\mathbf {X} +\nabla _{\mathbf {X} }\mathbf {u} \cdot d\mathbf {X} \\&=\left(\mathbf {I} +\nabla _{\mathbf {X} }\mathbf {u} \right)d\mathbf {X} \\&=\mathbf {F} d\mathbf {X} \end{aligned}}}
=== Time-derivative of the deformation gradient ===
Calculations that involve the time-dependent deformation of a body often require a time derivative of the deformation gradient to be calculated. A geometrically consistent definition of such a derivative requires an excursion into differential geometry but we avoid those issues in this article.
The time derivative of F is
{\displaystyle {\dot {\mathbf {F} }}={\frac {\partial \mathbf {F} }{\partial t}}={\frac {\partial }{\partial t}}\left[{\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial \mathbf {X} }}\right]={\frac {\partial }{\partial \mathbf {X} }}\left[{\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial t}}\right]={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {V} (\mathbf {X} ,t)\right]}
where V is the (material) velocity. The derivative on the right hand side represents a material velocity gradient. It is common to convert that into a spatial gradient by applying the chain rule for derivatives, i.e.,
{\displaystyle {\dot {\mathbf {F} }}={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {V} (\mathbf {X} ,t)\right]={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {v} (\mathbf {x} (\mathbf {X} ,t),t)\right]=\left.{\frac {\partial }{\partial \mathbf {x} }}\left[\mathbf {v} (\mathbf {x} ,t)\right]\right|_{\mathbf {x} =\mathbf {x} (\mathbf {X} ,t)}\cdot {\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial \mathbf {X} }}={\boldsymbol {l}}\cdot \mathbf {F} }
where l = (∇_x v)^T is the spatial velocity gradient and where v(x, t) = V(X, t) is the spatial (Eulerian) velocity at x = x(X, t). If the spatial velocity gradient is constant in time, the above equation can be solved exactly to give
{\displaystyle \mathbf {F} =e^{{\boldsymbol {l}}\,t}}
assuming F = 1 at t = 0. There are several methods of computing the exponential above.
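One such method is a library matrix exponential; the sketch below (a simple assumed velocity gradient, using SciPy's scipy.linalg.expm) evaluates F(t) = exp(l t) for a steady simple-shear flow.

```python
import numpy as np
from scipy.linalg import expm

# Constant spatial velocity gradient for a steady simple-shear flow (assumed)
gamma_dot = 0.5
l = np.array([[0.0, gamma_dot, 0.0],
              [0.0, 0.0,       0.0],
              [0.0, 0.0,       0.0]])

t = 2.0
F = expm(l * t)      # solves F-dot = l . F with F(0) = I
print(F)             # here l is nilpotent, so F = I + l*t exactly
```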
Related quantities often used in continuum mechanics are the rate of deformation tensor and the spin tensor defined, respectively, as:
{\displaystyle {\boldsymbol {d}}={\tfrac {1}{2}}\left({\boldsymbol {l}}+{\boldsymbol {l}}^{T}\right)\,,~~{\boldsymbol {w}}={\tfrac {1}{2}}\left({\boldsymbol {l}}-{\boldsymbol {l}}^{T}\right)\,.}
The rate of deformation tensor gives the rate of stretching of line elements while the spin tensor indicates the rate of rotation or vorticity of the motion.
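A self-contained sketch of this decomposition (the velocity gradient below is an assumed example value):

```python
import numpy as np

l = np.array([[0.0, 0.5, 0.0],    # assumed spatial velocity gradient
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

d = 0.5 * (l + l.T)               # rate-of-deformation tensor (symmetric part)
w = 0.5 * (l - l.T)               # spin tensor (skew-symmetric part)
assert np.allclose(d + w, l)      # the decomposition is exact
print(d, w, sep="\n")
```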
The material time derivative of the inverse of the deformation gradient (keeping the reference configuration fixed) is often required in analyses that involve finite strains. This derivative is
{\displaystyle {\frac {\partial }{\partial t}}\left(\mathbf {F} ^{-1}\right)=-\mathbf {F} ^{-1}\cdot {\dot {\mathbf {F} }}\cdot \mathbf {F} ^{-1}\,.}
The above relation can be verified by taking the material time derivative of F⁻¹ · dx = dX and noting that Ẋ = 0.
=== Polar decomposition of the deformation gradient tensor ===
The deformation gradient F, like any invertible second-order tensor, can be decomposed, using the polar decomposition theorem, into a product of two second-order tensors (Truesdell and Noll, 1965): an orthogonal tensor and a positive definite symmetric tensor, i.e.,
{\displaystyle \mathbf {F} =\mathbf {R} \mathbf {U} =\mathbf {V} \mathbf {R} }
where the tensor R is a proper orthogonal tensor, i.e., R⁻¹ = Rᵀ and det R = +1, representing a rotation; the tensor U is the right stretch tensor; and V the left stretch tensor. The terms right and left mean that they are to the right and left of the rotation tensor R, respectively. U and V are both positive definite, i.e. x · U · x > 0 and x · V · x > 0 for all non-zero x ∈ ℝ³, and symmetric tensors, i.e. U = Uᵀ and V = Vᵀ, of second order.
This decomposition implies that the deformation of a line element dX in the undeformed configuration onto dx in the deformed configuration, i.e., dx = F dX, may be obtained either by first stretching the element by U, i.e. dx′ = U dX, followed by a rotation R, i.e., dx = R dx′; or equivalently, by applying a rigid rotation R first, i.e., dx′ = R dX, followed later by a stretching V, i.e., dx = V dx′ (see Figure 3).
Due to the orthogonality of R,
{\displaystyle \mathbf {V} =\mathbf {R} \cdot \mathbf {U} \cdot \mathbf {R} ^{T}}
so that U and V have the same eigenvalues or principal stretches, but different eigenvectors or principal directions N_i and n_i, respectively. The principal directions are related by
{\displaystyle \mathbf {n} _{i}=\mathbf {R} \mathbf {N} _{i}.}
This polar decomposition, which is unique as F is invertible with a positive determinant, is a corollary of the singular-value decomposition.
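A hedged numerical sketch of this decomposition (the F below is an arbitrary assumed example; scipy.linalg.polar is used as the solver):

```python
import numpy as np
from scipy.linalg import polar

F = np.array([[1.2, 0.3, 0.0],   # assumed invertible deformation gradient
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.9]])

R, U = polar(F, side='right')    # F = R @ U, R orthogonal, U symmetric
_, V = polar(F, side='left')     # F = V @ R with the same rotation R

assert np.allclose(F, R @ U) and np.allclose(F, V @ R)
assert np.allclose(V, R @ U @ R.T)   # left and right stretches are similar
print(np.linalg.det(R))              # -> 1.0, a proper rotation (det F > 0)
```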
=== Transformation of a surface and volume element ===
To transform quantities that are defined with respect to areas in a deformed configuration to those relative to areas in a reference configuration, and vice versa, we use Nanson's relation, expressed as
{\displaystyle da~\mathbf {n} =J~dA~\mathbf {F} ^{-T}\cdot \mathbf {N} }
where da is an area of a region in the deformed configuration, dA is the same area in the reference configuration, n is the outward normal to the area element in the current configuration while N is the outward normal in the reference configuration, F is the deformation gradient, and J = det F.
The corresponding formula for the transformation of the volume element is
{\displaystyle dv=J~dV}
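The following sketch (assumed example values, not from the article) verifies Nanson's relation numerically by comparing it against the cross product of the deformed edge vectors of a unit reference square, and prints J, the volume ratio dv/dV.

```python
import numpy as np

F = np.array([[1.1, 0.2, 0.0],    # assumed deformation gradient
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.8]])
J = np.linalg.det(F)

# Reference area element: unit square spanned by I1, I2, with normal N = I3
N, dA = np.array([0.0, 0.0, 1.0]), 1.0
da_n = J * dA * np.linalg.inv(F).T @ N    # Nanson: da n = J dA F^{-T} N

# Cross product of the deformed edge vectors gives the same oriented area
a1, a2 = F @ np.array([1.0, 0, 0]), F @ np.array([0, 1.0, 0])
assert np.allclose(da_n, np.cross(a1, a2))
print(J)   # dv = J dV: volumes scale by det F
```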
== Fundamental strain tensors ==
A strain tensor is defined by the IUPAC as:
"A symmetric tensor that results when a deformation gradient tensor is factorized into a rotation tensor followed or preceded by a symmetric tensor".
Since a pure rotation should not induce any strains in a deformable body, it is often convenient to use rotation-independent measures of deformation in continuum mechanics. As a rotation followed by its inverse rotation leads to no change (R Rᵀ = Rᵀ R = I), we can exclude the rotation by multiplying the deformation gradient tensor F by its transpose.
Several rotation-independent deformation gradient tensors (or "deformation tensors", for short) are used in mechanics. In solid mechanics, the most popular of these are the right and left Cauchy–Green deformation tensors.
=== Cauchy strain tensor (right Cauchy–Green deformation tensor) ===
In 1839, George Green introduced a deformation tensor known as the right Cauchy–Green deformation tensor or Green's deformation tensor (the IUPAC recommends that this tensor be called the Cauchy strain tensor), defined as:
{\displaystyle \mathbf {C} =\mathbf {F} ^{T}\mathbf {F} =\mathbf {U} ^{2}\qquad {\text{or}}\qquad C_{IJ}=F_{kI}~F_{kJ}={\frac {\partial x_{k}}{\partial X_{I}}}{\frac {\partial x_{k}}{\partial X_{J}}}.}
Physically, the Cauchy–Green tensor gives us the square of local change in distances due to deformation, i.e.
{\displaystyle d\mathbf {x} ^{2}=d\mathbf {X} \cdot \mathbf {C} \cdot d\mathbf {X} }
Invariants of C are often used in the expressions for strain energy density functions. The most commonly used invariants are
{\displaystyle {\begin{aligned}I_{1}^{C}&:={\text{tr}}(\mathbf {C} )=C_{II}=\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}\\I_{2}^{C}&:={\tfrac {1}{2}}\left[({\text{tr}}~\mathbf {C} )^{2}-{\text{tr}}(\mathbf {C} ^{2})\right]={\tfrac {1}{2}}\left[(C_{JJ})^{2}-C_{IK}C_{KI}\right]=\lambda _{1}^{2}\lambda _{2}^{2}+\lambda _{2}^{2}\lambda _{3}^{2}+\lambda _{3}^{2}\lambda _{1}^{2}\\I_{3}^{C}&:=\det(\mathbf {C} )=J^{2}=\lambda _{1}^{2}\lambda _{2}^{2}\lambda _{3}^{2}.\end{aligned}}}
where J := det F is the determinant of the deformation gradient F and λ_i are the stretch ratios for the unit fibers that are initially oriented along the eigenvector directions of the right (reference) stretch tensor (these are not generally aligned with the three axes of the coordinate system).
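A brief numerical sketch (assumed diagonal F, chosen so the stretches are read off directly) showing that the invariants computed from C agree with the principal-stretch formulas:

```python
import numpy as np

F = np.diag([1.3, 1.1, 0.7])          # assumed deformation gradient
C = F.T @ F                           # right Cauchy-Green tensor
lam2 = np.linalg.eigvalsh(C)          # squared principal stretches

I1 = np.trace(C)
I2 = 0.5 * (np.trace(C)**2 - np.trace(C @ C))
I3 = np.linalg.det(C)                 # equals J^2 = (det F)^2

assert np.isclose(I1, lam2.sum())
assert np.isclose(I3, np.linalg.det(F)**2)
print(I1, I2, I3)
```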
=== Finger strain tensor ===
The IUPAC recommends that the inverse of the right Cauchy–Green deformation tensor (called the Cauchy strain tensor in that document), i.e., C⁻¹, be called the Finger strain tensor. However, that nomenclature is not universally accepted in applied mechanics.
{\displaystyle \mathbf {f} =\mathbf {C} ^{-1}=\mathbf {F} ^{-1}\mathbf {F} ^{-T}\qquad {\text{or}}\qquad f_{IJ}={\frac {\partial X_{I}}{\partial x_{k}}}{\frac {\partial X_{J}}{\partial x_{k}}}}
=== Green strain tensor (left Cauchy–Green deformation tensor) ===
Reversing the order of multiplication in the formula for the right Cauchy–Green deformation tensor leads to the left Cauchy–Green deformation tensor, which is defined as:
{\displaystyle \mathbf {B} =\mathbf {F} \mathbf {F} ^{T}=\mathbf {V} ^{2}\qquad {\text{or}}\qquad B_{ij}={\frac {\partial x_{i}}{\partial X_{K}}}{\frac {\partial x_{j}}{\partial X_{K}}}}
The left Cauchy–Green deformation tensor is often called the Finger deformation tensor, named after Josef Finger (1894).
The IUPAC recommends that this tensor be called the Green strain tensor.
Invariants of B are also used in the expressions for strain energy density functions. The conventional invariants are defined as
{\displaystyle {\begin{aligned}I_{1}&:={\text{tr}}(\mathbf {B} )=B_{ii}=\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}\\I_{2}&:={\tfrac {1}{2}}\left[({\text{tr}}~\mathbf {B} )^{2}-{\text{tr}}(\mathbf {B} ^{2})\right]={\tfrac {1}{2}}\left(B_{ii}^{2}-B_{jk}B_{kj}\right)=\lambda _{1}^{2}\lambda _{2}^{2}+\lambda _{2}^{2}\lambda _{3}^{2}+\lambda _{3}^{2}\lambda _{1}^{2}\\I_{3}&:=\det \mathbf {B} =J^{2}=\lambda _{1}^{2}\lambda _{2}^{2}\lambda _{3}^{2}\end{aligned}}}
where J := det F is the determinant of the deformation gradient.
For compressible materials, a slightly different set of invariants is used:
{\displaystyle ({\bar {I}}_{1}:=J^{-2/3}I_{1}~;~~{\bar {I}}_{2}:=J^{-4/3}I_{2}~;~~J\neq 1)~.}
=== Piola strain tensor (Cauchy deformation tensor) ===
Earlier in 1828, Augustin-Louis Cauchy introduced a deformation tensor defined as the inverse of the left Cauchy–Green deformation tensor, B⁻¹. This tensor has also been called the Piola strain tensor by the IUPAC and the Finger tensor in the rheology and fluid dynamics literature.
{\displaystyle \mathbf {c} =\mathbf {B} ^{-1}=\mathbf {F} ^{-T}\mathbf {F} ^{-1}\qquad {\text{or}}\qquad c_{ij}={\frac {\partial X_{K}}{\partial x_{i}}}{\frac {\partial X_{K}}{\partial x_{j}}}}
=== Spectral representation ===
If there are three distinct principal stretches λ_i, the spectral decompositions of C and B are given by
{\displaystyle \mathbf {C} =\sum _{i=1}^{3}\lambda _{i}^{2}\mathbf {N} _{i}\otimes \mathbf {N} _{i}\qquad {\text{and}}\qquad \mathbf {B} =\sum _{i=1}^{3}\lambda _{i}^{2}\mathbf {n} _{i}\otimes \mathbf {n} _{i}}
Furthermore,
{\displaystyle \mathbf {U} =\sum _{i=1}^{3}\lambda _{i}\mathbf {N} _{i}\otimes \mathbf {N} _{i}~;~~\mathbf {V} =\sum _{i=1}^{3}\lambda _{i}\mathbf {n} _{i}\otimes \mathbf {n} _{i}}
{\displaystyle \mathbf {R} =\sum _{i=1}^{3}\mathbf {n} _{i}\otimes \mathbf {N} _{i}~;~~\mathbf {F} =\sum _{i=1}^{3}\lambda _{i}\mathbf {n} _{i}\otimes \mathbf {N} _{i}}
Observe that
{\displaystyle \mathbf {V} =\mathbf {R} ~\mathbf {U} ~\mathbf {R} ^{T}=\sum _{i=1}^{3}\lambda _{i}~\mathbf {R} ~(\mathbf {N} _{i}\otimes \mathbf {N} _{i})~\mathbf {R} ^{T}=\sum _{i=1}^{3}\lambda _{i}~(\mathbf {R} ~\mathbf {N} _{i})\otimes (\mathbf {R} ~\mathbf {N} _{i})}
Therefore, the uniqueness of the spectral decomposition also implies that n_i = R N_i. The left stretch (V) is also called the spatial stretch tensor, while the right stretch (U) is called the material stretch tensor.
The effect of F acting on N_i is to stretch the vector by λ_i and to rotate it to the new orientation n_i, i.e.,
{\displaystyle \mathbf {F} ~\mathbf {N} _{i}=\lambda _{i}~(\mathbf {R} ~\mathbf {N} _{i})=\lambda _{i}~\mathbf {n} _{i}}
In a similar vein,
{\displaystyle \mathbf {F} ^{-T}~\mathbf {N} _{i}={\cfrac {1}{\lambda _{i}}}~\mathbf {n} _{i}~;~~\mathbf {F} ^{T}~\mathbf {n} _{i}=\lambda _{i}~\mathbf {N} _{i}~;~~\mathbf {F} ^{-1}~\mathbf {n} _{i}={\cfrac {1}{\lambda _{i}}}~\mathbf {N} _{i}~.}
==== Examples ====
Uniaxial extension of an incompressible material
This is the case where a specimen is stretched in the 1-direction with a stretch ratio of α = α₁. If the volume remains constant, the contraction in the other two directions is such that α₁α₂α₃ = 1, or α₂ = α₃ = α^(−0.5). Then:
{\displaystyle \mathbf {F} ={\begin{bmatrix}\alpha &0&0\\0&\alpha ^{-0.5}&0\\0&0&\alpha ^{-0.5}\end{bmatrix}}}
{\displaystyle \mathbf {B} =\mathbf {C} ={\begin{bmatrix}\alpha ^{2}&0&0\\0&\alpha ^{-1}&0\\0&0&\alpha ^{-1}\end{bmatrix}}}
Simple shear
{\displaystyle \mathbf {F} ={\begin{bmatrix}1&\gamma &0\\0&1&0\\0&0&1\end{bmatrix}}}
{\displaystyle \mathbf {B} ={\begin{bmatrix}1+\gamma ^{2}&\gamma &0\\\gamma &1&0\\0&0&1\end{bmatrix}}}
{\displaystyle \mathbf {C} ={\begin{bmatrix}1&\gamma &0\\\gamma &1+\gamma ^{2}&0\\0&0&1\end{bmatrix}}}
Rigid body rotation
{\displaystyle \mathbf {F} ={\begin{bmatrix}\cos \theta &\sin \theta &0\\-\sin \theta &\cos \theta &0\\0&0&1\end{bmatrix}}}
{\displaystyle \mathbf {B} =\mathbf {C} ={\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}=\mathbf {1} }
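These worked cases are easy to confirm numerically; the sketch below (γ and θ values assumed for the example) recomputes B and C for simple shear and checks that a rigid rotation gives B = C = I.

```python
import numpy as np

gamma, theta = 0.4, np.deg2rad(30)   # assumed shear amount and rotation angle

F_shear = np.array([[1, gamma, 0], [0, 1, 0], [0, 0, 1.0]])
B = F_shear @ F_shear.T              # left Cauchy-Green: 1 + gamma^2 appears in B11
C = F_shear.T @ F_shear              # right Cauchy-Green: 1 + gamma^2 moves to C22

c, s = np.cos(theta), np.sin(theta)
F_rot = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1.0]])
assert np.allclose(F_rot @ F_rot.T, np.eye(3))   # rigid rotation: B = C = I
print(B, C, sep="\n")
```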
=== Derivatives of stretch ===
Derivatives of the stretch with respect to the right Cauchy–Green deformation tensor are used to derive the stress-strain relations of many solids, particularly hyperelastic materials. These derivatives are
{\displaystyle {\cfrac {\partial \lambda _{i}}{\partial \mathbf {C} }}={\cfrac {1}{2\lambda _{i}}}~\mathbf {N} _{i}\otimes \mathbf {N} _{i}={\cfrac {1}{2\lambda _{i}}}~\mathbf {R} ^{T}~(\mathbf {n} _{i}\otimes \mathbf {n} _{i})~\mathbf {R} ~;~~i=1,2,3}
and follow from the observations that
{\displaystyle \mathbf {C} :(\mathbf {N} _{i}\otimes \mathbf {N} _{i})=\lambda _{i}^{2}~;~~~~{\cfrac {\partial \mathbf {C} }{\partial \mathbf {C} }}={\mathsf {I}}^{(s)}~;~~~~{\mathsf {I}}^{(s)}:(\mathbf {N} _{i}\otimes \mathbf {N} _{i})=\mathbf {N} _{i}\otimes \mathbf {N} _{i}.}
=== Physical interpretation of deformation tensors ===
Let X = X^i E_i be a Cartesian coordinate system defined on the undeformed body and let x = x^i E_i be another system defined on the deformed body. Let a curve X(s) in the undeformed body be parametrized using s ∈ [0, 1]. Its image in the deformed body is x(X(s)).
The undeformed length of the curve is given by
{\displaystyle l_{X}=\int _{0}^{1}\left|{\cfrac {d\mathbf {X} }{ds}}\right|~ds=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot {\boldsymbol {I}}\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds}
After deformation, the length becomes
{\displaystyle {\begin{aligned}l_{x}&=\int _{0}^{1}\left|{\cfrac {d\mathbf {x} }{ds}}\right|~ds=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {x} }{ds}}\cdot {\cfrac {d\mathbf {x} }{ds}}}}~ds=\int _{0}^{1}{\sqrt {\left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\cdot {\cfrac {d\mathbf {X} }{ds}}\right)\cdot \left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\cdot {\cfrac {d\mathbf {X} }{ds}}\right)}}~ds\\&=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot \left[\left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\right)^{T}\cdot {\cfrac {d\mathbf {x} }{d\mathbf {X} }}\right]\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds\end{aligned}}}
Note that the right Cauchy–Green deformation tensor is defined as
{\displaystyle {\boldsymbol {C}}:={\boldsymbol {F}}^{T}\cdot {\boldsymbol {F}}=\left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\right)^{T}\cdot {\cfrac {d\mathbf {x} }{d\mathbf {X} }}}
Hence,
{\displaystyle l_{x}=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot {\boldsymbol {C}}\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds}
which indicates that changes in length are characterized by C.
== Finite strain tensors ==
The concept of strain is used to evaluate how much a given displacement differs locally from a rigid body displacement. One such strain for large deformations is the Lagrangian finite strain tensor, also called the Green-Lagrangian strain tensor or Green–St-Venant strain tensor, defined as
{\displaystyle \mathbf {E} ={\frac {1}{2}}(\mathbf {C} -\mathbf {I} )\qquad {\text{or}}\qquad E_{KL}={\frac {1}{2}}\left({\frac {\partial x_{j}}{\partial X_{K}}}{\frac {\partial x_{j}}{\partial X_{L}}}-\delta _{KL}\right)}
or as a function of the displacement gradient tensor
{\displaystyle \mathbf {E} ={\frac {1}{2}}\left[(\nabla _{\mathbf {X} }\mathbf {u} )^{T}+\nabla _{\mathbf {X} }\mathbf {u} +(\nabla _{\mathbf {X} }\mathbf {u} )^{T}\cdot \nabla _{\mathbf {X} }\mathbf {u} \right]}
or
{\displaystyle E_{KL}={\frac {1}{2}}\left({\frac {\partial u_{K}}{\partial X_{L}}}+{\frac {\partial u_{L}}{\partial X_{K}}}+{\frac {\partial u_{M}}{\partial X_{K}}}{\frac {\partial u_{M}}{\partial X_{L}}}\right)}
The Green-Lagrangian strain tensor is a measure of how much C differs from I.
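A short sketch (with an assumed displacement gradient, chosen only for illustration) comparing the two equivalent expressions for E, via C and via ∇u:

```python
import numpy as np

gradu = np.array([[0.10, 0.05, 0.0],   # assumed displacement gradient (w.r.t. X)
                  [0.02, -0.03, 0.0],
                  [0.0,  0.0,  0.04]])

F = np.eye(3) + gradu
E_from_C = 0.5 * (F.T @ F - np.eye(3))
E_from_u = 0.5 * (gradu + gradu.T + gradu.T @ gradu)
assert np.allclose(E_from_C, E_from_u)   # the two definitions coincide
print(np.round(E_from_C, 4))
```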
== External links ==
Prof. Amit Acharya's notes on compatibility on iMechanica | Wikipedia/Eulerian_finite_strain_tensor |
In fluid dynamics, the Hagen–Poiseuille equation, also known as the Hagen–Poiseuille law, Poiseuille law or Poiseuille equation, is a physical law that gives the pressure drop in an incompressible and Newtonian fluid in laminar flow flowing through a long cylindrical pipe of constant cross section.
It can be successfully applied to air flow in lung alveoli, or the flow through a drinking straw or through a hypodermic needle. It was experimentally derived independently by Jean Léonard Marie Poiseuille in 1838 and Gotthilf Heinrich Ludwig Hagen, and published by Hagen in 1839 and then by Poiseuille in 1840–41 and 1846. The theoretical justification of the Poiseuille law was given by George Stokes in 1845.
The assumptions of the equation are that the fluid is incompressible and Newtonian; the flow is laminar through a pipe of constant circular cross-section that is substantially longer than its diameter; and there is no acceleration of fluid in the pipe. For velocities and pipe diameters above a threshold, actual fluid flow is not laminar but turbulent, leading to larger pressure drops than calculated by the Hagen–Poiseuille equation.
Poiseuille's equation describes the pressure drop due to the viscosity of the fluid; other types of pressure drops may still occur in a fluid. For example, the pressure needed to drive a viscous fluid up against gravity would contain both that as needed in Poiseuille's law plus that as needed in Bernoulli's equation, such that any point in the flow would have a pressure greater than zero (otherwise no flow would happen).
Another example is when blood flows into a narrower constriction, its speed will be greater than in a larger diameter (due to continuity of volumetric flow rate), and its pressure will be lower than in a larger diameter (due to Bernoulli's equation). However, the viscosity of blood will cause additional pressure drop along the direction of flow, which is proportional to length traveled (as per Poiseuille's law). Both effects contribute to the actual pressure drop.
== Equation ==
In standard fluid-kinetics notation:
{\displaystyle \Delta p={\frac {8\mu LQ}{\pi R^{4}}}={\frac {8\pi \mu LQ}{A^{2}}},}
where
Δp is the pressure difference between the two ends,
L is the length of pipe,
μ is the dynamic viscosity,
Q is the volumetric flow rate,
R is the pipe radius,
A is the cross-sectional area of pipe.
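As a quick numerical illustration of the equation above, a minimal Python sketch (all numerical values below are assumed for the example, roughly water in a thin tube):

```python
import math

def hagen_poiseuille_dp(mu, L, Q, R):
    """Pressure drop for laminar flow of a Newtonian fluid in a round pipe."""
    return 8.0 * mu * L * Q / (math.pi * R**4)

mu = 1.0e-3        # dynamic viscosity of water, Pa*s (approximate)
L  = 0.1           # pipe length, m (assumed)
Q  = 1.0e-8        # volumetric flow rate, m^3/s (assumed)
R  = 0.5e-3        # pipe radius, m (assumed)

print(hagen_poiseuille_dp(mu, L, Q, R))   # pressure drop in Pa
```

Note the strong R⁻⁴ dependence: halving the radius raises the required pressure drop sixteenfold.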
The equation does not hold close to the pipe entrance.
The equation fails in the limit of low viscosity and of a wide and/or short pipe. Low viscosity or a wide pipe may result in turbulent flow, making it necessary to use more complex models, such as the Darcy–Weisbach equation. The ratio of length to radius of a pipe should be greater than 1/48 of the Reynolds number for the Hagen–Poiseuille law to be valid. If the pipe is too short, the Hagen–Poiseuille equation may result in unphysically high flow rates; the flow is then bounded, by Bernoulli's principle under less restrictive conditions, by
{\displaystyle {\begin{aligned}\Delta p={\frac {1}{2}}\rho {\overline {v}}_{\text{max}}^{2}&={\frac {1}{2}}\rho \left({\frac {Q_{\text{max}}}{\pi R^{2}}}\right)^{2}\\\Rightarrow \quad Q_{\max }{}&=\pi R^{2}{\sqrt {\frac {2\Delta p}{\rho }}},\end{aligned}}}
because it is impossible to have negative (absolute) pressure (not to be confused with gauge pressure) in an incompressible flow.
== Relation to the Darcy–Weisbach equation ==
Normally, Hagen–Poiseuille flow implies not just the relation for the pressure drop, above, but also the full solution for the laminar flow profile, which is parabolic. However, the result for the pressure drop can be extended to turbulent flow by inferring an effective turbulent viscosity in the case of turbulent flow, even though the flow profile in turbulent flow is strictly speaking not actually parabolic. In both cases, laminar or turbulent, the pressure drop is related to the stress at the wall, which determines the so-called friction factor. The wall stress can be determined phenomenologically by the Darcy–Weisbach equation in the field of hydraulics, given a relationship for the friction factor in terms of the Reynolds number. In the case of laminar flow, for a circular cross section:
{\displaystyle \Lambda ={\frac {64}{\mathrm {Re} }},\quad \mathrm {Re} ={\frac {\rho vd}{\mu }},}
where Re is the Reynolds number, ρ is the fluid density, and v is the mean flow velocity, which is half the maximal flow velocity in the case of laminar flow. It proves more useful to define the Reynolds number in terms of the mean flow velocity because this quantity remains well defined even in the case of turbulent flow, whereas the maximal flow velocity may not be, or in any case, it may be difficult to infer. In this form the law approximates the Darcy friction factor, the energy (head) loss factor, friction loss factor or Darcy (friction) factor Λ in the laminar flow at very low velocities in cylindrical tube. The theoretical derivation of a slightly different form of the law was made independently by Wiedman in 1856 and Neumann and E. Hagenbach in 1858 (1859, 1860). Hagenbach was the first who called this law Poiseuille's law.
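A sketch of this laminar friction-factor relation (fluid properties assumed; the Re < 2000 cutoff used below is a rough, assumed laminar limit, not a value from the article):

```python
def darcy_friction_laminar(rho, v, d, mu):
    """Laminar Darcy friction factor from the Poiseuille result: Lambda = 64/Re."""
    Re = rho * v * d / mu          # Reynolds number on the mean velocity
    if Re > 2000.0:                # rough laminar threshold (assumed)
        raise ValueError("flow is likely turbulent; 64/Re does not apply")
    return 64.0 / Re

# Water-like example values (assumed): Re = 500, Lambda = 0.128
print(darcy_friction_laminar(rho=1000.0, v=0.05, d=0.01, mu=1.0e-3))
```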
The law is also very important in hemorheology and hemodynamics, both fields of physiology.
Poiseuille's law was later in 1891 extended to turbulent flow by L. R. Wilberforce, based on Hagenbach's work.
== Derivation ==
The Hagen–Poiseuille equation can be derived from the Navier–Stokes equations. The laminar flow through a pipe of uniform (circular) cross-section is known as Hagen–Poiseuille flow. The equations governing the Hagen–Poiseuille flow can be derived directly from the Navier–Stokes momentum equations in 3D cylindrical coordinates (r,θ,x) by making the following set of assumptions:
The flow is steady ( ∂.../∂t = 0 ).
The radial and azimuthal components of the fluid velocity are zero ( ur = uθ = 0 ).
The flow is axisymmetric ( ∂.../∂θ = 0 ).
The flow is fully developed ( ∂ux/∂x = 0 ). Here however, this can be proved via mass conservation, and the above assumptions.
Then the angular equation in the momentum equations and the continuity equation are identically satisfied. The radial momentum equation reduces to ∂p/∂r = 0, i.e., the pressure p is a function of the axial coordinate x only. For brevity, use u instead of u_x. The axial momentum equation reduces to
{\displaystyle {\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial u}{\partial r}}\right)={\frac {1}{\mu }}{\frac {\mathrm {d} p}{\mathrm {d} x}}}
where μ is the dynamic viscosity of the fluid. In the above equation, the left-hand side is only a function of r and the right-hand side term is only a function of x, implying that both terms must be the same constant. Evaluating this constant is straightforward. If we take the length of the pipe to be L and denote the pressure difference between the two ends of the pipe by Δp (high pressure minus low pressure), then the constant is simply
{\displaystyle -{\frac {\mathrm {d} p}{\mathrm {d} x}}={\frac {\Delta p}{L}}=G}
defined such that G is positive. The solution is
{\displaystyle u=-{\frac {Gr^{2}}{4\mu }}+c_{1}\ln r+c_{2}}
Since u needs to be finite at r = 0, c1 = 0. The no slip boundary condition at the pipe wall requires that u = 0 at r = R (radius of the pipe), which yields c2 = GR2/4μ. Thus we have finally the following parabolic velocity profile:
{\displaystyle u={\frac {G}{4\mu }}\left(R^{2}-r^{2}\right).}
The maximum velocity occurs at the pipe centerline (r = 0), umax = GR2/4μ. The average velocity can be obtained by integrating over the pipe cross section,
{\displaystyle {u}_{\mathrm {avg} }={\frac {1}{\pi R^{2}}}\int _{0}^{R}2\pi ru\,\mathrm {d} r={\tfrac {1}{2}}{u}_{\mathrm {max} }.}
The easily measurable quantity in experiments is the volumetric flow rate Q = πR2 uavg. Rearrangement of this gives the Hagen–Poiseuille equation
{\displaystyle \Delta p={\frac {8\mu QL}{\pi R^{4}}}.}
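The derivation lends itself to a direct numerical check; the sketch below (assumed water-like values) evaluates the parabolic profile and confirms that integrating it over the cross-section recovers Q = πR⁴Δp/(8μL).

```python
import numpy as np

mu, L, R, dp = 1.0e-3, 0.1, 0.5e-3, 40.0    # assumed example values (SI units)
G = dp / L                                   # constant pressure gradient

r = np.linspace(0.0, R, 2001)
u = G / (4.0 * mu) * (R**2 - r**2)           # parabolic velocity profile

Q_numeric = np.trapz(2.0 * np.pi * r * u, r)       # integrate u over the cross-section
Q_formula = np.pi * R**4 * dp / (8.0 * mu * L)     # Hagen-Poiseuille result
print(Q_numeric, Q_formula)                        # agree to integration accuracy
```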
=== Startup of Poiseuille flow in a pipe ===
When a constant pressure gradient G = −dp/dx is applied between two ends of a long pipe, the flow will not immediately obtain Poiseuille profile, rather it develops through time and reaches the Poiseuille profile at steady state. The Navier–Stokes equations reduce to
{\displaystyle {\frac {\partial u}{\partial t}}={\frac {G}{\rho }}+\nu \left({\frac {\partial ^{2}u}{\partial r^{2}}}+{\frac {1}{r}}{\frac {\partial u}{\partial r}}\right)}
with initial and boundary conditions,
{\displaystyle u(r,0)=0,\quad u(R,t)=0.}
The velocity distribution is given by
{\displaystyle u(r,t)={\frac {G}{4\mu }}\left(R^{2}-r^{2}\right)-{\frac {2GR^{2}}{\mu }}\sum _{n=1}^{\infty }{\frac {1}{\lambda _{n}^{3}}}{\frac {J_{0}(\lambda _{n}r/R)}{J_{1}(\lambda _{n})}}e^{-\lambda _{n}^{2}\nu t/R^{2}},\quad J_{0}\left(\lambda _{n}\right)=0}
where J0(λnr/R) is the Bessel function of the first kind of order zero and λn are the positive roots of this function and J1(λn) is the Bessel function of the first kind of order one. As t → ∞, Poiseuille solution is recovered.
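The series converges quickly and is straightforward to evaluate with SciPy's Bessel routines; in the sketch below all flow parameters are assumed example values.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

def startup_velocity(r, t, G, mu, nu, R, nterms=30):
    """Transient profile developing toward the steady parabolic Poiseuille flow."""
    lam = jn_zeros(0, nterms)                 # positive roots of J0
    u = G / (4.0 * mu) * (R**2 - r**2)        # steady Poiseuille part
    for ln in lam:
        u -= (2.0 * G * R**2 / mu) / ln**3 \
             * j0(ln * r / R) / j1(ln) * np.exp(-ln**2 * nu * t / R**2)
    return u

# Assumed water-like example: centerline velocity at an early time
print(startup_velocity(r=0.0, t=0.01, G=400.0, mu=1e-3, nu=1e-6, R=5e-4))
```

At t = 0 the series cancels the steady part exactly, and for t much larger than R²/ν the exponentials die out and the parabolic profile is recovered.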
== Poiseuille flow in an annular section ==
If R1 is the inner cylinder radius and R2 is the outer cylinder radius, with a constant applied pressure gradient between the two ends G = −dp/dx, the velocity distribution and the volume flux through the annular pipe are
{\displaystyle {\begin{aligned}u(r)&={\frac {G}{4\mu }}\left(R_{1}^{2}-r^{2}\right)+{\frac {G}{4\mu }}\left(R_{2}^{2}-R_{1}^{2}\right){\frac {\ln r/R_{1}}{\ln R_{2}/R_{1}}},\\[6pt]Q&={\frac {G\pi }{8\mu }}\left[R_{2}^{4}-R_{1}^{4}-{\frac {\left(R_{2}^{2}-R_{1}^{2}\right)^{2}}{\ln R_{2}/R_{1}}}\right].\end{aligned}}}
When R2 = R, R1 = 0, the original problem is recovered.
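A numeric sketch of the annular flux formula (assumed geometry and fluid), together with a check that the flux tends toward the full-pipe value as R1 → 0 (the approach is only logarithmic, so a very small R1 is used):

```python
import numpy as np

def annular_flux(G, mu, R1, R2):
    """Volume flux for pressure-driven laminar flow between concentric cylinders."""
    return (G * np.pi / (8.0 * mu)) * (
        R2**4 - R1**4 - (R2**2 - R1**2)**2 / np.log(R2 / R1)
    )

G, mu, R2 = 400.0, 1.0e-3, 1.0e-3            # assumed example values (SI units)
print(annular_flux(G, mu, R1=0.5e-3, R2=R2))
# As R1 -> 0 the result approaches the circular-pipe value G*pi*R2^4/(8*mu)
print(annular_flux(G, mu, R1=1e-9, R2=R2), G * np.pi * R2**4 / (8.0 * mu))
```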
== Poiseuille flow in a pipe with an oscillating pressure gradient ==
Flow through pipes with an oscillating pressure gradient finds applications in blood flow through large arteries. The imposed pressure gradient is given by
{\displaystyle {\frac {\partial p}{\partial x}}=-G-\alpha \cos \omega t-\beta \sin \omega t}
where G, α and β are constants and ω is the frequency. The velocity field is given by
{\displaystyle u(r,t)={\frac {G}{4\mu }}\left(R^{2}-r^{2}\right)+[\alpha F_{2}+\beta (F_{1}-1)]{\frac {\cos \omega t}{\rho \omega }}+[\beta F_{2}-\alpha (F_{1}-1)]{\frac {\sin \omega t}{\rho \omega }}}
where
{\displaystyle {\begin{aligned}F_{1}(kr)&={\frac {\mathrm {ber} (kr)\mathrm {ber} (kR)+\mathrm {bei} (kr)\mathrm {bei} (kR)}{\mathrm {ber} ^{2}(kR)+\mathrm {bei} ^{2}(kR)}},\\[6pt]F_{2}(kr)&={\frac {\mathrm {ber} (kr)\mathrm {bei} (kR)-\mathrm {bei} (kr)\mathrm {ber} (kR)}{\mathrm {ber} ^{2}(kR)+\mathrm {bei} ^{2}(kR)}},\end{aligned}}}
where ber and bei are the Kelvin functions and k2 = ρω/μ.
== Plane Poiseuille flow ==
Plane Poiseuille flow is flow created between two infinitely long parallel plates, separated by a distance h, with a constant pressure gradient G = −dp/dx applied in the direction of flow. The flow is essentially unidirectional because of the infinite length. The Navier–Stokes equations reduce to
{\displaystyle {\frac {\mathrm {d} ^{2}u}{\mathrm {d} y^{2}}}=-{\frac {G}{\mu }}}
with no-slip condition on both walls
{\displaystyle u(0)=0,\quad u(h)=0}
Therefore, the velocity distribution and the volume flow rate per unit length are
{\displaystyle u(y)={\frac {G}{2\mu }}y(h-y),\quad Q={\frac {Gh^{3}}{12\mu }}.}
== Poiseuille flow through some non-circular cross-sections ==
Joseph Boussinesq derived the velocity profile and volume flow rate in 1868 for a rectangular channel and for tubes of equilateral triangular and elliptical cross-section. Joseph Proudman derived the same for isosceles triangles in 1914. Let G = −dp/dx be the constant pressure gradient acting in the direction parallel to the motion.
The velocity and the volume flow rate in a rectangular channel of height 0 ≤ y ≤ h and width 0 ≤ z ≤ l are
{\displaystyle {\begin{aligned}u(y,z)&={\frac {G}{2\mu }}y(h-y)-{\frac {4Gh^{2}}{\mu \pi ^{3}}}\sum _{n=1}^{\infty }{\frac {1}{(2n-1)^{3}}}{\frac {\sinh(\beta _{n}z)+\sinh[\beta _{n}(l-z)]}{\sinh(\beta _{n}l)}}\sin(\beta _{n}y),\quad \beta _{n}={\frac {(2n-1)\pi }{h}},\\[6pt]Q&={\frac {Gh^{3}l}{12\mu }}-{\frac {16Gh^{4}}{\pi ^{5}\mu }}\sum _{n=1}^{\infty }{\frac {1}{(2n-1)^{5}}}{\frac {\cosh(\beta _{n}l)-1}{\sinh(\beta _{n}l)}}.\end{aligned}}}
The velocity and the volume flow rate of tube with equilateral triangular cross-section of side length 2h/√3 are
{\displaystyle {\begin{aligned}u(y,z)&=-{\frac {G}{4\mu h}}(y-h)\left(y^{2}-3z^{2}\right),\\[6pt]Q&={\frac {Gh^{4}}{60{\sqrt {3}}\mu }}.\end{aligned}}}
The velocity and the volume flow rate in the right-angled isosceles triangle y = π, y ± z = 0 are
{\displaystyle {\begin{aligned}u(y,z)&={\frac {G}{2\mu }}(y+z)(\pi -y)-{\frac {G}{\pi \mu }}\sum _{n=1}^{\infty }{\frac {1}{\beta _{n}^{3}\sinh(2\pi \beta _{n})}}\left\{\sinh[\beta _{n}(2\pi -y+z)]\sin[\beta _{n}(y+z)]-\sinh[\beta _{n}(y+z)]\sin[\beta _{n}(y-z)]\right\},\quad \beta _{n}=n+{\tfrac {1}{2}},\\[6pt]Q&={\frac {G\pi ^{4}}{12\mu }}-{\frac {G}{2\pi \mu }}\sum _{n=1}^{\infty }{\frac {1}{\beta _{n}^{5}}}\left[\coth(2\pi \beta _{n})+\csc(2\pi \beta _{n})\right].\end{aligned}}}
The velocity distribution for tubes of elliptical cross-section with semiaxes a and b is
{\displaystyle {\begin{aligned}u(y,z)&={\frac {G}{2\mu \left({\frac {1}{a^{2}}}+{\frac {1}{b^{2}}}\right)}}\left(1-{\frac {y^{2}}{a^{2}}}-{\frac {z^{2}}{b^{2}}}\right),\\[6pt]Q&={\frac {\pi Ga^{3}b^{3}}{4\mu \left(a^{2}+b^{2}\right)}}.\end{aligned}}}
Here, when a = b, Poiseuille flow for circular pipe is recovered and when a → ∞, plane Poiseuille flow is recovered. More explicit solutions with cross-sections such as snail-shaped sections, sections having the shape of a notch circle following a semicircle, annular sections between homofocal ellipses, annular sections between non-concentric circles are also available, as reviewed by Ratip Berker.
== Poiseuille flow through arbitrary cross-section ==
The flow u(y,z) through a tube of arbitrary cross-section satisfies the condition that u = 0 on the walls. The governing equation reduces to
{\displaystyle {\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}=-{\frac {G}{\mu }}.}
If we introduce a new dependent variable as
{\displaystyle U=u+{\frac {G}{4\mu }}\left(y^{2}+z^{2}\right),}
then it is easy to see that the problem reduces to that of integrating Laplace's equation
{\displaystyle {\frac {\partial ^{2}U}{\partial y^{2}}}+{\frac {\partial ^{2}U}{\partial z^{2}}}=0}
satisfying the condition
{\displaystyle U={\frac {G}{4\mu }}\left(y^{2}+z^{2}\right)}
on the wall.
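For cross-sections with no closed-form solution, the Poisson equation above can be integrated numerically. A minimal finite-difference sketch (Jacobi iteration; the function name and parameter values are illustrative choices, not from the original analysis) for an arbitrary cross-section described by a boolean mask:

```python
import numpy as np

def poiseuille_arbitrary(mask, G, mu, dx, iters=5000):
    """Jacobi iteration for u_yy + u_zz = -G/mu with u = 0 on the walls.
    mask is True on interior (fluid) grid points; dx is the grid spacing."""
    u = np.zeros(mask.shape)
    src = (G / mu) * dx**2 / 4.0
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, avg + src, 0.0)  # enforce u = 0 outside the fluid
    return u

# Illustrative: a 1 mm x 1 mm square duct filled with water
n = 41
mask = np.zeros((n, n), dtype=bool)
mask[1:-1, 1:-1] = True
u = poiseuille_arbitrary(mask, G=100.0, mu=1e-3, dx=1e-3 / (n - 1))
print(u.max())  # peak velocity at the duct centre
```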
== Poiseuille's equation for an ideal isothermal gas ==
For a compressible fluid in a tube the volumetric flow rate Q(x) and the axial velocity are not constant along the tube, but the mass flow rate is constant along the tube length. The volumetric flow rate is usually expressed at the outlet pressure. As fluid is compressed or expanded, work is done and the fluid is heated or cooled. This means that the flow rate depends on the heat transfer to and from the fluid. For an ideal gas in the isothermal case, where the temperature of the fluid is permitted to equilibrate with its surroundings, an approximate relation for the pressure drop can be derived. Using the ideal gas equation of state for a constant-temperature process (i.e., {\displaystyle p/\rho } is constant) and the conservation of mass flow rate (i.e., {\displaystyle {\dot {m}}=\rho Q} is constant), the relation Qp = Q1p1 = Q2p2 can be obtained. Over a short section of the pipe, the gas flowing through the pipe can be assumed to be incompressible, so that Poiseuille's law can be applied locally,
{\displaystyle -{\frac {\mathrm {d} p}{\mathrm {d} x}}={\frac {8\mu Q}{\pi R^{4}}}={\frac {8\mu Q_{2}p_{2}}{\pi pR^{4}}}\quad \Rightarrow \quad -p{\frac {\mathrm {d} p}{\mathrm {d} x}}={\frac {8\mu Q_{2}p_{2}}{\pi R^{4}}}.}
Here we assume that the local pressure gradient is not so great that compressibility effects become significant locally. Although the effects of pressure variation due to density variation are ignored locally, over long distances these effects are taken into account. Since μ is independent of pressure, the above equation can be integrated over the length L to give
{\displaystyle p_{1}^{2}-p_{2}^{2}={\frac {16\mu LQ_{2}p_{2}}{\pi R^{4}}}.}
Hence the volumetric flow rate at the pipe outlet is given by
{\displaystyle Q_{2}={\frac {\pi R^{4}}{16\mu L}}\left({\frac {p_{1}^{2}-p_{2}^{2}}{p_{2}}}\right)={\frac {\pi R^{4}\left(p_{1}-p_{2}\right)}{8\mu L}}{\frac {\left(p_{1}+p_{2}\right)}{2p_{2}}}.}
This equation can be seen as Poiseuille's law with an extra correction factor (p1 + p2)/(2p2) expressing the average pressure relative to the outlet pressure.
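A short sketch (illustrative values for air; the function name is an assumption for this example) that evaluates the outlet flow rate and confirms the correction factor relative to the incompressible law:

```python
import math

def isothermal_outlet_Q(p1, p2, R, L, mu):
    """Outlet volumetric flow rate Q2 for an isothermal ideal gas."""
    return math.pi * R**4 * (p1**2 - p2**2) / (16 * mu * L * p2)

# Illustrative: air (mu ~ 1.8e-5 Pa.s), 1 mm radius, 1 m long tube, 2 bar -> 1 bar
p1, p2, R, L, mu = 2e5, 1e5, 1e-3, 1.0, 1.8e-5
Q2 = isothermal_outlet_Q(p1, p2, R, L, mu)
Q_incompressible = math.pi * R**4 * (p1 - p2) / (8 * mu * L)
print(Q2 / Q_incompressible)  # (p1 + p2) / (2 p2) = 1.5
```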
== Electrical circuits analogy ==
Electricity was originally understood to be a kind of fluid. This hydraulic analogy is still conceptually useful for understanding circuits. This analogy is also used to study the frequency response of fluid-mechanical networks using circuit tools, in which case the fluid network is termed a hydraulic circuit. Poiseuille's law corresponds to Ohm's law for electrical circuits, V = IR. Since the net force acting on the fluid is equal to ΔF = SΔp, where S = πr², i.e. ΔF = πr²Δp, it follows from Poiseuille's law that
{\displaystyle \Delta F={\frac {8\mu LQ}{r^{2}}}}.
For electrical circuits, let n be the concentration of free charged particles (in m⁻³) and let q* be the charge of each particle (in coulombs). (For electrons, q* = e = 1.6×10⁻¹⁹ C.) Then nQ is the number of particles in the volume Q, and nQq* is their total charge. This is the charge that flows through the cross section per unit time, i.e. the current I. Therefore, I = nQq*. Consequently, Q = I/nq*, and
{\displaystyle \Delta F={\frac {8\mu LI}{nr^{2}q^{*}}}.}
But ΔF = Eq, where q is the total charge in the volume of the tube. The volume of the tube is equal to πr²L, so the number of charged particles in this volume is equal to nπr²L, and their total charge is q = nπr²Lq*. Since the voltage V = EL, it then follows that
{\displaystyle V={\frac {8\mu LI}{n^{2}\pi r^{4}\left(q^{*}\right)^{2}}}.}
This is exactly Ohm's law, where the resistance R = V/I is described by the formula
{\displaystyle R={\frac {8\mu L}{n^{2}\pi r^{4}\left(q^{*}\right)^{2}}}}.
It follows that the resistance R is proportional to the length L of the resistor, which is true. However, it also follows that the resistance R is inversely proportional to the fourth power of the radius r, i.e. inversely proportional to the second power of the cross-sectional area S = πr² of the resistor, which differs from the electrical formula. The electrical relation for the resistance is
{\displaystyle R={\frac {\rho L}{S}},}
where ρ is the resistivity; i.e. the resistance R is inversely proportional to the cross section area S of the resistor. The reason why Poiseuille's law leads to a different formula for the resistance R is the difference between the fluid flow and the electric current. Electron gas is inviscid, so its velocity does not depend on the distance to the walls of the conductor. The resistance is due to the interaction between the flowing electrons and the atoms of the conductor. Therefore, Poiseuille's law and the hydraulic analogy are useful only within certain limits when applied to electricity. Both Ohm's law and Poiseuille's law illustrate transport phenomena.
== Medical applications – intravenous access and fluid delivery ==
The Hagen–Poiseuille equation is useful in determining the vascular resistance and hence flow rate of intravenous (IV) fluids that may be achieved using various sizes of peripheral and central cannulas. The equation states that flow rate is proportional to the radius to the fourth power, meaning that a small increase in the internal diameter of the cannula yields a significant increase in flow rate of IV fluids. The radius of IV cannulas is typically measured in "gauge", which is inversely related to the radius. Peripheral IV cannulas are typically available as (from large to small) 14G, 16G, 18G, 20G, 22G, 26G. As an example, assuming cannula lengths are equal, the flow of a 14G cannula is 1.73 times that of a 16G cannula, and 4.16 times that of a 20G cannula. The equation also states that flow is inversely proportional to length, meaning that longer lines have lower flow rates. This is important to remember, as in an emergency many clinicians favor shorter, larger catheters over longer, narrower ones. While of less clinical importance, an increased change in pressure (∆p), such as by pressurizing the bag of fluid, squeezing the bag, or hanging the bag higher (relative to the level of the cannula), can be used to speed up flow rate. It is also useful to understand that more viscous fluids flow more slowly (e.g. in blood transfusion).
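Because flow scales with the fourth power of the radius, a small radius difference is strongly amplified. A minimal sketch using hypothetical internal radii (real cannula bores vary by manufacturer):

```python
# Hagen-Poiseuille: at fixed length and pressure drop, Q scales as r^4,
# so a modest radius difference is strongly amplified.
def relative_flow(r1, r2):
    return (r1 / r2) ** 4

# Hypothetical internal radii in mm (illustrative only)
print(relative_flow(1.0, 0.87))   # ~1.75x, comparable to the 14G vs 16G figure
print(relative_flow(1.0, 0.70))   # ~4.2x, comparable to the 14G vs 20G figure
```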
== See also ==
Couette flow
Darcy's law
Pulse
Wave
Hydraulic circuit
== References ==
Sutera, S. P.; Skalak, R. (1993). "The history of Poiseuille's law". Annual Review of Fluid Mechanics. 25: 1–19. Bibcode:1993AnRFM..25....1S. doi:10.1146/annurev.fl.25.010193.000245.
Pfitzner, J. (1976). "Poiseuille and his law". Anaesthesia. 31 (2): 273–275. doi:10.1111/j.1365-2044.1976.tb11804.x. PMID 779509.
Bennett, C. O.; Myers, J. E. (1962). Momentum, Heat, and Mass Transfer. McGraw-Hill.
== External links ==
Poiseuille's law for power-law non-Newtonian fluid
Poiseuille's law in a slightly tapered tube
Hagen–Poiseuille equation calculator
In continuum mechanics, a compatible deformation (or strain) tensor field in a body is that unique tensor field that is obtained when the body is subjected to a continuous, single-valued displacement field. Compatibility is the study of the conditions under which such a displacement field can be guaranteed. Compatibility conditions are particular cases of integrability conditions and were first derived for linear elasticity by Barré de Saint-Venant in 1864 and proved rigorously by Beltrami in 1886.
In the continuum description of a solid body we imagine the body to be composed of a set of infinitesimal volumes or material points. Each volume is assumed to be connected to its neighbors without any gaps or overlaps. Certain mathematical conditions have to be satisfied to ensure that gaps/overlaps do not develop when a continuum body is deformed. A body that deforms without developing any gaps/overlaps is called a compatible body. Compatibility conditions are mathematical conditions that determine whether a particular deformation will leave a body in a compatible state.
In the context of infinitesimal strain theory, these conditions are equivalent to stating that the displacements in a body can be obtained by integrating the strains. Such an integration is possible if the Saint-Venant tensor (or incompatibility tensor) {\displaystyle {\boldsymbol {R}}({\boldsymbol {\varepsilon }})} vanishes in a simply-connected body, where {\displaystyle {\boldsymbol {\varepsilon }}} is the infinitesimal strain tensor and
{\displaystyle {\boldsymbol {R}}:={\boldsymbol {\nabla }}\times ({\boldsymbol {\nabla }}\times {\boldsymbol {\varepsilon }})^{T}={\boldsymbol {0}}~.}
For finite deformations the compatibility conditions take the form
{\displaystyle {\boldsymbol {R}}:={\boldsymbol {\nabla }}\times {\boldsymbol {F}}={\boldsymbol {0}}}
where {\displaystyle {\boldsymbol {F}}} is the deformation gradient.
== Compatibility conditions for infinitesimal strains ==
The compatibility conditions in linear elasticity are obtained by observing that there are six strain-displacement relations that are functions of only three unknown displacements. This suggests that the three displacements may be removed from the system of equations without loss of information. The resulting expressions in terms of only the strains provide constraints on the possible forms of a strain field.
=== 2-dimensions ===
For two-dimensional, plane strain problems the strain-displacement relations are
{\displaystyle \varepsilon _{11}={\cfrac {\partial u_{1}}{\partial x_{1}}}~;~~\varepsilon _{12}={\cfrac {1}{2}}\left[{\cfrac {\partial u_{1}}{\partial x_{2}}}+{\cfrac {\partial u_{2}}{\partial x_{1}}}\right]~;~~\varepsilon _{22}={\cfrac {\partial u_{2}}{\partial x_{2}}}}
Repeated differentiation of these relations, in order to remove the displacements {\displaystyle u_{1}} and {\displaystyle u_{2}}, gives us the two-dimensional compatibility condition for strains
{\displaystyle {\cfrac {\partial ^{2}\varepsilon _{11}}{\partial x_{2}^{2}}}-2{\cfrac {\partial ^{2}\varepsilon _{12}}{\partial x_{1}\partial x_{2}}}+{\cfrac {\partial ^{2}\varepsilon _{22}}{\partial x_{1}^{2}}}=0}
The only displacement field that is allowed by a compatible plane strain field is a plane displacement field, i.e., {\displaystyle \mathbf {u} =\mathbf {u} (x_{1},x_{2})}.
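A small symbolic check (using SymPy; the displacement field chosen is an arbitrary smooth example) that strains derived from a displacement field satisfy the condition, while a generic prescribed strain field does not:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Strains built from a smooth displacement field are compatible by construction
u1 = x1**2 * x2
u2 = sp.sin(x1) + x2**3
e11 = sp.diff(u1, x1)
e22 = sp.diff(u2, x2)
e12 = sp.Rational(1, 2) * (sp.diff(u1, x2) + sp.diff(u2, x1))
lhs = sp.diff(e11, x2, 2) - 2 * sp.diff(e12, x1, x2) + sp.diff(e22, x1, 2)
print(sp.simplify(lhs))  # 0

# An arbitrarily prescribed strain field generally is not compatible:
# e11 = x2**2, e12 = e22 = 0 gives lhs = 2 != 0
print(sp.diff(x2**2, x2, 2))  # 2
```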
=== 3-dimensions ===
In three dimensions, in addition to two more equations of the form seen for two dimensions, there are three more equations of the form
{\displaystyle {\cfrac {\partial ^{2}\varepsilon _{33}}{\partial x_{1}\partial x_{2}}}={\cfrac {\partial }{\partial x_{3}}}\left[{\cfrac {\partial \varepsilon _{23}}{\partial x_{1}}}+{\cfrac {\partial \varepsilon _{31}}{\partial x_{2}}}-{\cfrac {\partial \varepsilon _{12}}{\partial x_{3}}}\right]}
Therefore, there are 3⁴ = 81 partial differential equations; however, due to symmetry conditions, this number reduces to six distinct compatibility conditions. We can write these conditions in index notation as
{\displaystyle e_{ikr}~e_{jls}~\varepsilon _{ij,kl}=0}
where {\displaystyle e_{ijk}} is the permutation symbol. In direct tensor notation
{\displaystyle {\boldsymbol {\nabla }}\times ({\boldsymbol {\nabla }}\times {\boldsymbol {\varepsilon }})^{T}={\boldsymbol {0}}}
where the curl operator can be expressed in an orthonormal coordinate system as
{\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {\varepsilon }}=e_{ijk}\varepsilon _{rj,i}\mathbf {e} _{k}\otimes \mathbf {e} _{r}}.
The second-order tensor
{\displaystyle {\boldsymbol {R}}:={\boldsymbol {\nabla }}\times ({\boldsymbol {\nabla }}\times {\boldsymbol {\varepsilon }})^{T}~;~~R_{rs}:=e_{ikr}~e_{jls}~\varepsilon _{ij,kl}}
is known as the incompatibility tensor (or specifically the Kröner tensor) and is a reduced form of the rank-4 Saint-Venant compatibility tensor.
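The index-notation condition can be verified symbolically. A sketch (arbitrary smooth displacement field; uses SymPy's Levi-Civita symbol) that builds the incompatibility tensor and confirms it vanishes for a compatible strain field:

```python
import sympy as sp
from sympy import LeviCivita

X = sp.symbols('x1:4')
# An arbitrary smooth displacement field; its strain must be compatible
u = [X[0] * X[1], X[1]**2 * X[2], sp.sin(X[0])]
eps = [[sp.Rational(1, 2) * (sp.diff(u[i], X[j]) + sp.diff(u[j], X[i]))
        for j in range(3)] for i in range(3)]

# R_rs = e_ikr e_jls eps_ij,kl  (should vanish identically)
R = [[sp.simplify(sum(LeviCivita(i, k, r) * LeviCivita(j, l, s)
                      * sp.diff(eps[i][j], X[k], X[l])
                      for i in range(3) for j in range(3)
                      for k in range(3) for l in range(3)))
      for s in range(3)] for r in range(3)]
print(sp.Matrix(R))  # zero matrix
```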
== Compatibility conditions for finite strains ==
For solids in which the deformations are not required to be small, the compatibility conditions take the form
{\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {F}}={\boldsymbol {0}}}
where {\displaystyle {\boldsymbol {F}}} is the deformation gradient. In terms of components with respect to a Cartesian coordinate system we can write these compatibility relations as
{\displaystyle e_{ABC}~{\cfrac {\partial F_{iB}}{\partial X_{A}}}=0}
This condition is necessary if the deformation is to be continuous and derived from the mapping {\displaystyle \mathbf {x} ={\boldsymbol {\chi }}(\mathbf {X} ,t)} (see Finite strain theory). The same condition is also sufficient to ensure compatibility in a simply connected body.
=== Compatibility condition for the right Cauchy-Green deformation tensor ===
The compatibility condition for the right Cauchy-Green deformation tensor can be expressed as
{\displaystyle R_{\alpha \beta \rho }^{\gamma }:={\frac {\partial }{\partial X^{\rho }}}[\Gamma _{\alpha \beta }^{\gamma }]-{\frac {\partial }{\partial X^{\beta }}}[\Gamma _{\alpha \rho }^{\gamma }]+\Gamma _{\mu \rho }^{\gamma }~\Gamma _{\alpha \beta }^{\mu }-\Gamma _{\mu \beta }^{\gamma }~\Gamma _{\alpha \rho }^{\mu }=0}
where {\displaystyle \Gamma _{ij}^{k}} is the Christoffel symbol of the second kind. The quantity {\displaystyle R_{ijk}^{m}} represents the mixed components of the Riemann-Christoffel curvature tensor.
== The general compatibility problem ==
The problem of compatibility in continuum mechanics involves the determination of allowable single-valued continuous fields on simply connected bodies. More precisely, the problem may be stated in the following manner.
Consider the deformation of a body shown in Figure 1. If we express all vectors in terms of the reference coordinate system {\displaystyle \{(\mathbf {E} _{1},\mathbf {E} _{2},\mathbf {E} _{3}),O\}}, the displacement of a point in the body is given by
{\displaystyle \mathbf {u} =\mathbf {x} -\mathbf {X} ~;~~u_{i}=x_{i}-X_{i}}
Also
{\displaystyle {\boldsymbol {\nabla }}\mathbf {u} ={\frac {\partial \mathbf {u} }{\partial \mathbf {X} }}~;~~{\boldsymbol {\nabla }}\mathbf {x} ={\frac {\partial \mathbf {x} }{\partial \mathbf {X} }}}
What conditions on a given second-order tensor field {\displaystyle {\boldsymbol {A}}(\mathbf {X} )} on a body are necessary and sufficient so that there exists a unique vector field {\displaystyle \mathbf {v} (\mathbf {X} )} that satisfies
{\displaystyle {\boldsymbol {\nabla }}\mathbf {v} ={\boldsymbol {A}}\quad \equiv \quad v_{i,j}=A_{ij}}?
=== Necessary conditions ===
For the necessary conditions we assume that the field {\displaystyle \mathbf {v} } exists and satisfies {\displaystyle v_{i,j}=A_{ij}}. Then
{\displaystyle v_{i,jk}=A_{ij,k}~;~~v_{i,kj}=A_{ik,j}}
Since changing the order of differentiation does not affect the result we have
{\displaystyle v_{i,jk}=v_{i,kj}}
Hence
{\displaystyle A_{ij,k}=A_{ik,j}}
From the well known identity for the curl of a tensor we get the necessary condition
{\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {A}}={\boldsymbol {0}}}
=== Sufficient conditions ===
To prove that this condition is sufficient to guarantee the existence of a compatible second-order tensor field, we start with the assumption that a field {\displaystyle {\boldsymbol {A}}} exists such that {\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {A}}={\boldsymbol {0}}}. We will integrate this field to find the vector field {\displaystyle \mathbf {v} } along a line between points {\displaystyle A} and {\displaystyle B} (see Figure 2), i.e.,
{\displaystyle {\begin{aligned}\mathbf {v} (\mathbf {X} _{B})-\mathbf {v} (\mathbf {X} _{A})&=\int _{\mathbf {X} _{A}}^{\mathbf {X} _{B}}{\boldsymbol {\nabla }}\mathbf {v} \cdot ~d\mathbf {X} \\[1ex]&=\int _{\mathbf {X} _{A}}^{\mathbf {X} _{B}}{\boldsymbol {A}}(\mathbf {X} )\cdot d\mathbf {X} \end{aligned}}}
If the vector field {\displaystyle \mathbf {v} } is to be single-valued then the value of the integral should be independent of the path taken to go from {\displaystyle A} to {\displaystyle B}.
From Stokes' theorem, the integral of a second order tensor along a closed path is given by
{\displaystyle \oint _{\partial \Omega }{\boldsymbol {A}}\cdot d\mathbf {s} =\int _{\Omega }\mathbf {n} \cdot ({\boldsymbol {\nabla }}\times {\boldsymbol {A}})~da}
Using the assumption that the curl of {\displaystyle {\boldsymbol {A}}} is zero, we get
{\displaystyle {\begin{aligned}&\oint _{\partial \Omega }{\boldsymbol {A}}\cdot d\mathbf {s} =0\\[1ex]\implies \quad &\int _{AB}{\boldsymbol {A}}\cdot d\mathbf {X} +\int _{BA}{\boldsymbol {A}}\cdot d\mathbf {X} =0\end{aligned}}}
Hence the integral is path independent and the compatibility condition is sufficient to ensure a unique {\displaystyle \mathbf {v} } field, provided that the body is simply connected.
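A symbolic spot-check of the necessary condition (the vector field v here is an arbitrary smooth choice): when A is a gradient, the relation A_ij,k = A_ik,j holds identically:

```python
import sympy as sp

X = sp.symbols('x1:4')
# Take A to be the gradient of an arbitrary smooth vector field v
v = [X[0] * X[1] * X[2], sp.exp(X[0]) + X[1]**2, sp.cos(X[2])]
A = [[sp.diff(v[i], X[j]) for j in range(3)] for i in range(3)]

# Necessary condition A_ij,k = A_ik,j (equivalent to curl A = 0)
ok = all(sp.simplify(sp.diff(A[i][j], X[k]) - sp.diff(A[i][k], X[j])) == 0
         for i in range(3) for j in range(3) for k in range(3))
print(ok)  # True
```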
== Compatibility of the deformation gradient ==
The compatibility condition for the deformation gradient is obtained directly from the above proof by observing that
{\displaystyle {\boldsymbol {F}}={\cfrac {\partial \mathbf {x} }{\partial \mathbf {X} }}={\boldsymbol {\nabla }}\mathbf {x} }
Then the necessary and sufficient conditions for the existence of a compatible {\displaystyle {\boldsymbol {F}}} field over a simply connected body are
{\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {F}}={\boldsymbol {0}}}
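As with the general problem above, this can be spot-checked symbolically for a deformation gradient computed from a smooth (arbitrarily chosen) mapping:

```python
import sympy as sp
from sympy import LeviCivita

X = sp.symbols('X1:4')
# Deformation gradient from an arbitrary smooth mapping x(X)
x = [X[0] + X[1]**2 / 10, X[1] + sp.sin(X[2]), X[2] * sp.exp(X[0])]
F = [[sp.diff(x[i], X[B]) for B in range(3)] for i in range(3)]

# e_ABC dF_iB/dX_A = 0, since mixed partials of x(X) commute
curlF = [[sp.simplify(sum(LeviCivita(A, B, C) * sp.diff(F[i][B], X[A])
                          for A in range(3) for B in range(3)))
          for C in range(3)] for i in range(3)]
print(sp.Matrix(curlF))  # zero matrix
```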
== Compatibility of infinitesimal strains ==
The compatibility problem for small strains can be stated as follows.
Given a symmetric second order tensor field {\displaystyle {\boldsymbol {\epsilon }}}, when is it possible to construct a vector field {\displaystyle \mathbf {u} } such that
{\displaystyle {\boldsymbol {\epsilon }}={\tfrac {1}{2}}[{\boldsymbol {\nabla }}\mathbf {u} +({\boldsymbol {\nabla }}\mathbf {u} )^{T}]}?
=== Necessary conditions ===
Suppose that there exists {\displaystyle \mathbf {u} } such that the expression for {\displaystyle {\boldsymbol {\epsilon }}} holds. Now
{\displaystyle {\boldsymbol {\nabla }}\mathbf {u} ={\boldsymbol {\epsilon }}+{\boldsymbol {\omega }}}
where
{\displaystyle {\boldsymbol {\omega }}:={\tfrac {1}{2}}[{\boldsymbol {\nabla }}\mathbf {u} -({\boldsymbol {\nabla }}\mathbf {u} )^{T}]}
Therefore, in index notation,
{\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}{\boldsymbol {\omega }}\equiv \omega _{ij,k}&={\tfrac {1}{2}}(u_{i,jk}-u_{j,ik})\\[2pt]&={\tfrac {1}{2}}(u_{i,jk}+u_{k,ji}-u_{j,ik}-u_{k,ji})\\[2pt]&=\varepsilon _{ik,j}-\varepsilon _{jk,i}\end{aligned}}}
If {\displaystyle {\boldsymbol {\omega }}} is continuously differentiable we have {\displaystyle \omega _{ij,kl}=\omega _{ij,lk}}. Hence,
{\displaystyle \varepsilon _{ik,jl}-\varepsilon _{jk,il}-\varepsilon _{il,jk}+\varepsilon _{jl,ik}=0}
In direct tensor notation
{\displaystyle {\boldsymbol {\nabla }}\times ({\boldsymbol {\nabla }}\times {\boldsymbol {\epsilon }})^{T}={\boldsymbol {0}}}
The above are necessary conditions. If {\displaystyle \mathbf {w} } is the infinitesimal rotation vector then {\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {\epsilon }}={\boldsymbol {\nabla }}\mathbf {w} +{\boldsymbol {\nabla }}\mathbf {w} ^{T}}. Hence the necessary condition may also be written as
{\displaystyle {\boldsymbol {\nabla }}\times ({\boldsymbol {\nabla }}\mathbf {w} +{\boldsymbol {\nabla }}\mathbf {w} ^{T})^{T}={\boldsymbol {0}}}.
=== Sufficient conditions ===
Let us now assume that the condition {\displaystyle {\boldsymbol {\nabla }}\times ({\boldsymbol {\nabla }}\times {\boldsymbol {\epsilon }})^{T}={\boldsymbol {0}}} is satisfied in a portion of a body. Is this condition sufficient to guarantee the existence of a continuous, single-valued displacement field {\displaystyle \mathbf {u} }?
The first step in the process is to show that this condition implies that the infinitesimal rotation tensor {\displaystyle {\boldsymbol {\omega }}} is uniquely defined. To do that we integrate {\displaystyle {\boldsymbol {\nabla }}\mathbf {w} } along the path {\displaystyle \mathbf {X} _{A}} to {\displaystyle \mathbf {X} _{B}}, i.e.,
{\displaystyle \mathbf {w} (\mathbf {X} _{B})-\mathbf {w} (\mathbf {X} _{A})=\int _{\mathbf {X} _{A}}^{\mathbf {X} _{B}}{\boldsymbol {\nabla }}\mathbf {w} \cdot d\mathbf {X} =\int _{\mathbf {X} _{A}}^{\mathbf {X} _{B}}({\boldsymbol {\nabla }}\times {\boldsymbol {\epsilon }})\cdot d\mathbf {X} }
Note that we need to know a reference {\displaystyle \mathbf {w} (\mathbf {X} _{A})} to fix the rigid body rotation. The field {\displaystyle \mathbf {w} (\mathbf {X} )} is uniquely determined only if the contour integral along a closed contour between {\displaystyle \mathbf {X} _{A}} and {\displaystyle \mathbf {X} _{B}} is zero, i.e.,
{\displaystyle \oint _{\mathbf {X} _{A}}^{\mathbf {X} _{B}}({\boldsymbol {\nabla }}\times {\boldsymbol {\epsilon }})\cdot d\mathbf {X} ={\boldsymbol {0}}}
But from Stokes' theorem for a simply-connected body and the necessary condition for compatibility
{\displaystyle \oint _{\mathbf {X} _{A}}^{\mathbf {X} _{B}}({\boldsymbol {\nabla }}\times {\boldsymbol {\epsilon }})\cdot d\mathbf {X} =\int _{\Omega _{AB}}\mathbf {n} \cdot ({\boldsymbol {\nabla }}\times {\boldsymbol {\nabla }}\times {\boldsymbol {\epsilon }})~da={\boldsymbol {0}}}
Therefore, the field {\displaystyle \mathbf {w} } is uniquely defined, which implies that the infinitesimal rotation tensor {\displaystyle {\boldsymbol {\omega }}} is also uniquely defined, provided the body is simply connected.
In the next step of the process we will consider the uniqueness of the displacement field {\displaystyle \mathbf {u} }. As before we integrate the displacement gradient
{\displaystyle \mathbf {u} (\mathbf {X} _{B})-\mathbf {u} (\mathbf {X} _{A})=\int _{\mathbf {X} _{A}}^{\mathbf {X} _{B}}{\boldsymbol {\nabla }}\mathbf {u} \cdot d\mathbf {X} =\int _{\mathbf {X} _{A}}^{\mathbf {X} _{B}}({\boldsymbol {\epsilon }}+{\boldsymbol {\omega }})\cdot d\mathbf {X} }
From Stokes' theorem and using the relations {\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {\epsilon }}={\boldsymbol {\nabla }}\mathbf {w} =-{\boldsymbol {\nabla }}\times \omega } we have
{\displaystyle \oint _{\mathbf {X} _{A}}^{\mathbf {X} _{B}}({\boldsymbol {\epsilon }}+{\boldsymbol {\omega }})\cdot d\mathbf {X} =\int _{\Omega _{AB}}\mathbf {n} \cdot ({\boldsymbol {\nabla }}\times {\boldsymbol {\epsilon }}+{\boldsymbol {\nabla }}\times {\boldsymbol {\omega }})~da={\boldsymbol {0}}}
Hence the displacement field {\displaystyle \mathbf {u} } is also determined uniquely, and the compatibility conditions are therefore sufficient to guarantee the existence of a unique displacement field {\displaystyle \mathbf {u} } in a simply-connected body.
== Compatibility for the right Cauchy-Green deformation field ==
The compatibility problem for the right Cauchy-Green deformation field can be posed as follows.
Problem: Let {\displaystyle {\boldsymbol {C}}(\mathbf {X} )} be a positive definite symmetric tensor field defined on the reference configuration. Under what conditions on {\displaystyle {\boldsymbol {C}}} does there exist a deformed configuration marked by the position field {\displaystyle \mathbf {x} (\mathbf {X} )} such that
{\displaystyle (1)\quad \left({\frac {\partial \mathbf {x} }{\partial \mathbf {X} }}\right)^{T}\left({\frac {\partial \mathbf {x} }{\partial \mathbf {X} }}\right)={\boldsymbol {C}}}
=== Necessary conditions ===
Suppose that a field {\displaystyle \mathbf {x} (\mathbf {X} )} exists that satisfies condition (1). In terms of components with respect to a rectangular Cartesian basis
{\displaystyle {\frac {\partial x^{i}}{\partial X^{\alpha }}}{\frac {\partial x^{i}}{\partial X^{\beta }}}=C_{\alpha \beta }}
From finite strain theory we know that {\displaystyle C_{\alpha \beta }=g_{\alpha \beta }}. Hence we can write
{\displaystyle \delta _{ij}~{\frac {\partial x^{i}}{\partial X^{\alpha }}}~{\frac {\partial x^{j}}{\partial X^{\beta }}}=g_{\alpha \beta }}
For two symmetric second-order tensor fields that are mapped one-to-one we also have the relation
{\displaystyle G_{ij}={\frac {\partial X^{\alpha }}{\partial x^{i}}}~{\frac {\partial X^{\beta }}{\partial x^{j}}}~g_{\alpha \beta }}
From the relation between {\displaystyle G_{ij}} and {\displaystyle g_{\alpha \beta }}, namely {\displaystyle \delta _{ij}=G_{ij}}, we have
{\displaystyle _{(x)}\Gamma _{ij}^{k}=0}
Then, from the relation
{\displaystyle {\frac {\partial ^{2}x^{m}}{\partial X^{\alpha }\partial X^{\beta }}}={\frac {\partial x^{m}}{\partial X^{\mu }}}\,_{(X)}\Gamma _{\alpha \beta }^{\mu }-{\frac {\partial x^{i}}{\partial X^{\alpha }}}~{\frac {\partial x^{j}}{\partial X^{\beta }}}\,_{(x)}\Gamma _{ij}^{m}}
we have
{\displaystyle {\frac {\partial F_{~\alpha }^{m}}{\partial X^{\beta }}}=F_{~\mu }^{m}\,_{(X)}\Gamma _{\alpha \beta }^{\mu }\qquad ;~~F_{~\alpha }^{i}:={\frac {\partial x^{i}}{\partial X^{\alpha }}}}
From finite strain theory we also have
{\displaystyle {\begin{aligned}_{(X)}\Gamma _{\alpha \beta \gamma }&={\frac {1}{2}}\left({\frac {\partial g_{\alpha \gamma }}{\partial X^{\beta }}}+{\frac {\partial g_{\beta \gamma }}{\partial X^{\alpha }}}-{\frac {\partial g_{\alpha \beta }}{\partial X^{\gamma }}}\right);\\[2pt]_{(X)}\Gamma _{\alpha \beta }^{\nu }&=g^{\nu \gamma }\,_{(X)}\Gamma _{\alpha \beta \gamma }~;\\[2pt]g_{\alpha \beta }&=C_{\alpha \beta }~;~~g^{\alpha \beta }=C^{\alpha \beta }\end{aligned}}}
Therefore,
{\displaystyle \,_{(X)}\Gamma _{\alpha \beta }^{\mu }={\cfrac {C^{\mu \gamma }}{2}}\left({\frac {\partial C_{\alpha \gamma }}{\partial X^{\beta }}}+{\frac {\partial C_{\beta \gamma }}{\partial X^{\alpha }}}-{\frac {\partial C_{\alpha \beta }}{\partial X^{\gamma }}}\right)}
and we have
{\displaystyle {\frac {\partial F_{~\alpha }^{m}}{\partial X^{\beta }}}=F_{~\mu }^{m}~{\cfrac {C^{\mu \gamma }}{2}}\left({\frac {\partial C_{\alpha \gamma }}{\partial X^{\beta }}}+{\frac {\partial C_{\beta \gamma }}{\partial X^{\alpha }}}-{\frac {\partial C_{\alpha \beta }}{\partial X^{\gamma }}}\right)}
Again, using the commutative nature of the order of differentiation, we have
{\displaystyle {\begin{aligned}&{\frac {\partial ^{2}F_{~\alpha }^{m}}{\partial X^{\beta }\partial X^{\rho }}}={\frac {\partial ^{2}F_{~\alpha }^{m}}{\partial X^{\rho }\partial X^{\beta }}}\\[1.2ex]\implies &{\frac {\partial F_{~\mu }^{m}}{\partial X^{\rho }}}\,_{(X)}\Gamma _{\alpha \beta }^{\mu }+F_{~\mu }^{m}~{\frac {\partial }{\partial X^{\rho }}}\left[\,_{(X)}\Gamma _{\alpha \beta }^{\mu }\right]={\frac {\partial F_{~\mu }^{m}}{\partial X^{\beta }}}\,_{(X)}\Gamma _{\alpha \rho }^{\mu }+F_{~\mu }^{m}~{\frac {\partial }{\partial X^{\beta }}}\left[\,_{(X)}\Gamma _{\alpha \rho }^{\mu }\right]\end{aligned}}}
or
{\displaystyle F_{~\gamma }^{m}\,_{(X)}\Gamma _{\mu \rho }^{\gamma }\,_{(X)}\Gamma _{\alpha \beta }^{\mu }+F_{~\mu }^{m}~{\frac {\partial }{\partial X^{\rho }}}\left[\,_{(X)}\Gamma _{\alpha \beta }^{\mu }\right]=F_{~\gamma }^{m}\,_{(X)}\Gamma _{\mu \beta }^{\gamma }\,_{(X)}\Gamma _{\alpha \rho }^{\mu }+F_{~\mu }^{m}~{\frac {\partial }{\partial X^{\beta }}}\left[\,_{(X)}\Gamma _{\alpha \rho }^{\mu }\right]}
After collecting terms we get
{\displaystyle F_{~\gamma }^{m}\left(\,_{(X)}\Gamma _{\mu \rho }^{\gamma }\,_{(X)}\Gamma _{\alpha \beta }^{\mu }+{\frac {\partial }{\partial X^{\rho }}}[\,_{(X)}\Gamma _{\alpha \beta }^{\gamma }]-\,_{(X)}\Gamma _{\mu \beta }^{\gamma }\,_{(X)}\Gamma _{\alpha \rho }^{\mu }-{\frac {\partial }{\partial X^{\beta }}}[\,_{(X)}\Gamma _{\alpha \rho }^{\gamma }]\right)=0}
From the definition of {\displaystyle F_{\gamma }^{m}} we observe that it is invertible and hence cannot be zero. Therefore,
{\displaystyle R_{\alpha \beta \rho }^{\gamma }:={\frac {\partial }{\partial X^{\rho }}}[\,_{(X)}\Gamma _{\alpha \beta }^{\gamma }]-{\frac {\partial }{\partial X^{\beta }}}[\,_{(X)}\Gamma _{\alpha \rho }^{\gamma }]+\,_{(X)}\Gamma _{\mu \rho }^{\gamma }\,_{(X)}\Gamma _{\alpha \beta }^{\mu }-\,_{(X)}\Gamma _{\mu \beta }^{\gamma }\,_{(X)}\Gamma _{\alpha \rho }^{\mu }=0}
We can show these are the mixed components of the Riemann-Christoffel curvature tensor. Therefore, the necessary conditions for {\displaystyle {\boldsymbol {C}}}-compatibility are that the Riemann-Christoffel curvature of the deformation is zero.
=== Sufficient conditions ===
The proof of sufficiency is a bit more involved. We start with the assumption that
{\displaystyle R_{\alpha \beta \rho }^{\gamma }=0~;~~g_{\alpha \beta }=C_{\alpha \beta }}
We have to show that there exist {\displaystyle \mathbf {x} } and {\displaystyle \mathbf {X} } such that
{\displaystyle {\frac {\partial x^{i}}{\partial X^{\alpha }}}{\frac {\partial x^{i}}{\partial X^{\beta }}}=C_{\alpha \beta }}
From a theorem by T. Y. Thomas we know that the system of equations
{\displaystyle {\frac {\partial F_{~\alpha }^{i}}{\partial X^{\beta }}}=F_{~\gamma }^{i}~\,_{(X)}\Gamma _{\alpha \beta }^{\gamma }}
has unique solutions {\displaystyle F_{~\alpha }^{i}} over simply connected domains if
{\displaystyle _{(X)}\Gamma _{\alpha \beta }^{\gamma }=_{(X)}\Gamma _{\beta \alpha }^{\gamma }~;~~R_{\alpha \beta \rho }^{\gamma }=0}
The first of these is true from the definition of {\displaystyle \Gamma _{jk}^{i}} and the second is assumed. Hence the assumed condition gives us a unique {\displaystyle F_{~\alpha }^{i}} that is {\displaystyle C^{2}} continuous.
Next consider the system of equations
{\displaystyle {\frac {\partial x^{i}}{\partial X^{\alpha }}}=F_{~\alpha }^{i}}
Since {\displaystyle F_{~\alpha }^{i}} is {\displaystyle C^{2}} and the body is simply connected, there exists some solution {\displaystyle x^{i}(X^{\alpha })} to the above equations. We can show that the {\displaystyle x^{i}} also satisfy the property that
{\displaystyle \det \left|{\frac {\partial x^{i}}{\partial X^{\alpha }}}\right|\neq 0}
We can also show that the relation
{\displaystyle {\frac {\partial x^{i}}{\partial X^{\alpha }}}~g^{\alpha \beta }~{\frac {\partial x^{j}}{\partial X^{\beta }}}=\delta ^{ij}}
implies that
{\displaystyle g_{\alpha \beta }=C_{\alpha \beta }={\frac {\partial x^{k}}{\partial X^{\alpha }}}~{\frac {\partial x^{k}}{\partial X^{\beta }}}}
If we associate these quantities with tensor fields we can show that {\displaystyle {\frac {\partial \mathbf {x} }{\partial \mathbf {X} }}} is invertible and the constructed tensor field satisfies the expression for {\displaystyle {\boldsymbol {C}}}.
== See also ==
Saint-Venant's compatibility condition
Linear elasticity
Deformation (mechanics)
Infinitesimal strain theory
Finite strain theory
Tensor derivative (continuum mechanics)
Curvilinear coordinates
== References ==
== External links ==
Amit Acharya's notes on compatibility on iMechanica
Plasticity by J. Lubliner, sec. 1.2.4, p. 35
Fracture mechanics is the field of mechanics concerned with the study of the propagation of cracks in materials. It uses methods of analytical solid mechanics to calculate the driving force on a crack and those of experimental solid mechanics to characterize the material's resistance to fracture.
Theoretically, the stress ahead of a sharp crack tip becomes infinite, so the stress itself cannot be used to describe the state around a crack. Fracture mechanics is instead used to characterise the loads on a crack, typically using a single parameter to describe the complete loading state at the crack tip. A number of different parameters have been developed. When the plastic zone at the tip of the crack is small relative to the crack length, the stress state at the crack tip is the result of elastic forces within the material; this regime is termed linear elastic fracture mechanics (LEFM) and can be characterised using the stress intensity factor {\displaystyle K}. Although the load on a crack can be arbitrary, in 1957 G. Irwin found any state could be reduced to a combination of three independent stress intensity factors:
Mode I – Opening mode (a tensile stress normal to the plane of the crack),
Mode II – Sliding mode (a shear stress acting parallel to the plane of the crack and perpendicular to the crack front), and
Mode III – Tearing mode (a shear stress acting parallel to the plane of the crack and parallel to the crack front).
When the size of the plastic zone at the crack tip is too large, elastic-plastic fracture mechanics can be used with parameters such as the J-integral or the crack tip opening displacement.
The characterising parameter describes the state of the crack tip, which can then be related to experimental conditions to ensure similitude. Crack growth occurs when the parameters exceed certain critical values. Corrosion may cause a crack to grow slowly when the stress corrosion stress intensity threshold is exceeded. Similarly, small flaws may result in crack growth when subjected to cyclic loading; in this process, known as fatigue, it was found that for long cracks the rate of growth is largely governed by the range of the stress intensity {\displaystyle \Delta K} experienced by the crack due to the applied loading. Fast fracture will occur when the stress intensity exceeds the fracture toughness of the material. The prediction of crack growth is at the heart of the damage tolerance mechanical design discipline.
== Motivation ==
The processes of material manufacture, processing, machining, and forming may introduce flaws in a finished mechanical component. Arising from the manufacturing process, interior and surface flaws are found in all metal structures. Not all such flaws are unstable under service conditions. Fracture mechanics is the analysis of flaws to discover those that are safe (that is, do not grow) and those that are liable to propagate as cracks and so cause failure of the flawed structure. Despite these inherent flaws, it is possible, through damage tolerance analysis, to achieve the safe operation of a structure. Fracture mechanics as a subject for critical study has barely been around for a century and thus is relatively new.
Fracture mechanics should attempt to provide quantitative answers to the following questions:
What is the strength of the component as a function of crack size?
What crack size can be tolerated under service loading, i.e. what is the maximum permissible crack size?
How long does it take for a crack to grow from a certain initial size, for example the minimum detectable crack size, to the maximum permissible crack size?
What is the service life of a structure when a certain pre-existing flaw size (e.g. a manufacturing defect) is assumed to exist?
During the period available for crack detection how often should the structure be inspected for cracks?
== Linear elastic fracture mechanics ==
=== Griffith's criterion ===
Fracture mechanics was developed during World War I by English aeronautical engineer A. A. Griffith – thus the term Griffith crack – to explain the failure of brittle materials. Griffith's work was motivated by two contradictory facts:
The stress needed to fracture bulk glass is around 100 MPa (15,000 psi).
The theoretical stress needed for breaking atomic bonds of glass is approximately 10,000 MPa (1,500,000 psi).
A theory was needed to reconcile these conflicting observations. Also, experiments on glass fibers that Griffith himself conducted suggested that the fracture stress increases as the fiber diameter decreases. Hence the uniaxial tensile strength, which had been used extensively to predict material failure before Griffith, could not be a specimen-independent material property. Griffith suggested that the low fracture strength observed in experiments, as well as the size-dependence of strength, was due to the presence of microscopic flaws in the bulk material.
To verify the flaw hypothesis, Griffith introduced an artificial flaw in his experimental glass specimens. The artificial flaw was in the form of a surface crack which was much larger than other flaws in a specimen. The experiments showed that the product of the square root of the flaw length ({\displaystyle a}) and the stress at fracture ({\displaystyle \sigma _{f}}) was nearly constant, which is expressed by the equation:
{\displaystyle \sigma _{f}{\sqrt {a}}\approx C}
An explanation of this relation in terms of linear elasticity theory is problematic. Linear elasticity theory predicts that stress (and hence the strain) at the tip of a sharp flaw in a linear elastic material is infinite. To avoid that problem, Griffith developed a thermodynamic approach to explain the relation that he observed.
The growth of a crack, the extension of the surfaces on either side of the crack, requires an increase in the surface energy. Griffith found an expression for the constant {\displaystyle C} in terms of the surface energy of the crack by solving the elasticity problem of a finite crack in an elastic plate. Briefly, the approach was:
Compute the potential energy stored in a perfect specimen under a uniaxial tensile load.
Fix the boundary so that the applied load does no work and then introduce a crack into the specimen. The crack relaxes the stress and hence reduces the elastic energy near the crack faces. On the other hand, the crack increases the total surface energy of the specimen.
Compute the change in the free energy (surface energy − elastic energy) as a function of the crack length. Failure occurs when the free energy attains a peak value at a critical crack length, beyond which the free energy decreases as the crack length increases, i.e. by causing fracture. Using this procedure, Griffith found that
{\displaystyle C={\sqrt {\cfrac {2E\gamma }{\pi }}}}
where {\displaystyle E} is the Young's modulus of the material and {\displaystyle \gamma } is the surface energy density of the material. Assuming {\displaystyle E=62\ {\text{GPa}}} and {\displaystyle \gamma =1\ {\text{J/m}}^{2}} gives excellent agreement of Griffith's predicted fracture stress with experimental results for glass.
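A quick numerical check of the Griffith relation using the values quoted above for glass and an assumed 1 μm flaw (the flaw size is an illustrative assumption, not from the text):

```python
import math

E = 62e9      # Pa, Young's modulus of glass (value used above)
gamma = 1.0   # J/m^2, surface energy density (value used above)
a = 1e-6      # m, assumed flaw size of 1 micron (illustrative)

C = math.sqrt(2 * E * gamma / math.pi)
sigma_f = C / math.sqrt(a)
print(sigma_f / 1e6, "MPa")  # ~200 MPa, the same order as bulk glass strength
```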
For the simple case of a thin rectangular plate with a crack perpendicular to the load, the energy release rate, {\displaystyle G}, becomes:
{\displaystyle G={\frac {\pi \sigma ^{2}a}{E}}\,}
where {\displaystyle \sigma } is the applied stress, {\displaystyle a} is half the crack length, and {\displaystyle E} is the Young's modulus, which for the case of plane strain should be divided by the plate stiffness factor {\displaystyle (1-\nu ^{2})}. The strain energy release rate can physically be understood as the rate at which energy is absorbed by growth of the crack.
However, we also have that:
{\displaystyle G_{c}={\frac {\pi \sigma _{f}^{2}a}{E}}\,}
If {\displaystyle G\geq G_{c}}, the crack will begin to propagate.
For materials highly deformed before crack propagation, the linear elastic fracture mechanics formulation is no longer applicable, and an adapted model is necessary to describe the stress and displacement field close to the crack tip, such as in the fracture of soft materials.
=== Irwin's modification ===
Griffith's work was largely ignored by the engineering community until the early 1950s. The reasons for this appear to be (a) in the actual structural materials the level of energy needed to cause fracture is orders of magnitude higher than the corresponding surface energy, and (b) in structural materials there are always some inelastic deformations around the crack front that would make the assumption of linear elastic medium with infinite stresses at the crack tip highly unrealistic.
Griffith's theory provides excellent agreement with experimental data for brittle materials such as glass. For ductile materials such as steel, although the relation {\displaystyle \sigma _{f}{\sqrt {a}}=C} still holds, the surface energy (γ) predicted by Griffith's theory is usually unrealistically high. A group working under G. R. Irwin at the U.S. Naval Research Laboratory (NRL) during World War II realized that plasticity must play a significant role in the fracture of ductile materials.
In ductile materials (and even in materials that appear to be brittle), a plastic zone develops at the tip of the crack. As the applied load increases, the plastic zone increases in size until the crack grows and the elastically strained material behind the crack tip unloads. The plastic loading and unloading cycle near the crack tip leads to the dissipation of energy as heat. Hence, a dissipative term has to be added to the energy balance relation devised by Griffith for brittle materials. In physical terms, additional energy is needed for crack growth in ductile materials as compared to brittle materials.
Irwin's strategy was to partition the energy into two parts:
the stored elastic strain energy which is released as a crack grows. This is the thermodynamic driving force for fracture.
the dissipated energy which includes plastic dissipation and the surface energy (and any other dissipative forces that may be at work). The dissipated energy provides the thermodynamic resistance to fracture.
Then the total energy is:
{\displaystyle G=2\gamma +G_{p}}
where {\displaystyle \gamma } is the surface energy and {\displaystyle G_{p}} is the plastic dissipation (and dissipation from other sources) per unit area of crack growth.
The modified version of Griffith's energy criterion can then be written as
{\displaystyle \sigma _{f}{\sqrt {a}}={\sqrt {\cfrac {E~G}{\pi }}}.}
For brittle materials such as glass, the surface energy term dominates and {\displaystyle G\approx 2\gamma =2\,\,{\text{J/m}}^{2}}. For ductile materials such as steel, the plastic dissipation term dominates and {\displaystyle G\approx G_{p}=1000\,\,{\text{J/m}}^{2}}. For polymers close to the glass transition temperature, we have intermediate values of {\displaystyle G} between 2 and 1000 {\displaystyle {\text{J/m}}^{2}}.
=== Stress intensity factor ===
Another significant achievement of Irwin and his colleagues was to find a method of calculating the amount of energy available for fracture in terms of the asymptotic stress and displacement fields around a crack front in a linear elastic solid. This asymptotic expression for the stress field in mode I loading is related to the stress intensity factor {\displaystyle K_{I}} as follows:
{\displaystyle \sigma _{ij}=\left({\cfrac {K_{I}}{\sqrt {2\pi r}}}\right)~f_{ij}(\theta )}
where {\displaystyle \sigma _{ij}} are the Cauchy stresses, {\displaystyle r} is the distance from the crack tip, {\displaystyle \theta } is the angle with respect to the plane of the crack, and {\displaystyle f_{ij}} are functions that depend on the crack geometry and loading conditions. Irwin called the quantity {\displaystyle K} the stress intensity factor. Since the quantity {\displaystyle f_{ij}} is dimensionless, the stress intensity factor can be expressed in units of {\displaystyle {\text{MPa}}{\sqrt {\text{m}}}}.
Stress intensity replaced strain energy release rate and a term called fracture toughness replaced surface weakness energy. Both of these terms are simply related to the energy terms that Griffith used:
{\displaystyle K_{I}=\sigma {\sqrt {\pi a}}\,}
and
{\displaystyle K_{c}={\begin{cases}{\sqrt {EG_{c}}}&{\text{for plane stress}}\\\\{\sqrt {\cfrac {EG_{c}}{1-\nu ^{2}}}}&{\text{for plane strain}}\end{cases}}}
where {\displaystyle K_{I}} is the mode {\displaystyle I} stress intensity, {\displaystyle K_{c}} the fracture toughness, and {\displaystyle \nu } is Poisson's ratio.
Fracture occurs when {\displaystyle K_{I}\geq K_{c}}. For the special case of plane strain deformation, {\displaystyle K_{c}} becomes {\displaystyle K_{Ic}} and is considered a material property. The subscript {\displaystyle I} arises because of the different ways of loading a material to enable a crack to propagate. It refers to so-called "mode {\displaystyle I}" loading as opposed to mode {\displaystyle II} or {\displaystyle III}:
The expression for {\displaystyle K_{I}} will be different for geometries other than the center-cracked infinite plate, as discussed in the article on the stress intensity factor. Consequently, it is necessary to introduce a dimensionless correction factor, {\displaystyle Y}, in order to characterize the geometry. This correction factor, also often referred to as the geometric shape factor, is given by empirically determined series and accounts for the type and geometry of the crack or notch. We thus have:
{\displaystyle K_{I}=Y\sigma {\sqrt {\pi a}}\,}
where {\displaystyle Y} is a function of the crack length and the width of the sheet given, for a sheet of finite width {\displaystyle W} containing a through-thickness crack of length {\displaystyle 2a}, by:
{\displaystyle Y\left({\frac {a}{W}}\right)={\sqrt {\sec \left({\frac {\pi a}{W}}\right)}}\,}
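A minimal sketch (the applied stress and geometry are illustrative assumptions) evaluating the stress intensity factor with and without the finite-width correction:

```python
import math

def K_I(sigma, a, W=None):
    """Mode I stress intensity factor; if a finite sheet width W is given,
    apply the correction Y = sqrt(sec(pi a / W)) for a centre crack of length 2a."""
    Y = 1.0 if W is None else math.sqrt(1.0 / math.cos(math.pi * a / W))
    return Y * sigma * math.sqrt(math.pi * a)

# Illustrative: 100 MPa applied stress, 10 mm crack (a = 5 mm), 50 mm wide sheet
print(K_I(100e6, 5e-3) / 1e6)         # infinite-plate value, ~12.5 MPa sqrt(m)
print(K_I(100e6, 5e-3, 50e-3) / 1e6)  # ~2.5% higher with the width correction
```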
=== Strain energy release ===
Irwin was the first to observe that if the size of the plastic zone around a crack is small compared to the size of the crack, the energy required to grow the crack will not be critically dependent on the state of stress (the plastic zone) at the crack tip. In other words, a purely elastic solution may be used to calculate the amount of energy available for fracture.
The energy release rate for crack growth or strain energy release rate may then be calculated as the change in elastic strain energy per unit area of crack growth, i.e.,
{\displaystyle G:=\left[{\cfrac {\partial U}{\partial a}}\right]_{P}=-\left[{\cfrac {\partial U}{\partial a}}\right]_{u}}
where U is the elastic energy of the system and a is the crack length. Either the load P or the displacement u is held constant while evaluating the above expressions.
Irwin showed that for a mode I crack (opening mode) the strain energy release rate and the stress intensity factor are related by:
{\displaystyle G=G_{I}={\begin{cases}{\cfrac {K_{I}^{2}}{E}}&{\text{plane stress}}\\{\cfrac {(1-\nu ^{2})K_{I}^{2}}{E}}&{\text{plane strain}}\end{cases}}}
where E is the Young's modulus, ν is Poisson's ratio, and KI is the stress intensity factor in mode I. Irwin also showed that the strain energy release rate of a planar crack in a linear elastic body can be expressed in terms of the mode I, mode II (sliding mode), and mode III (tearing mode) stress intensity factors for the most general loading conditions.
Next, Irwin adopted the additional assumption that the size and shape of the energy dissipation zone remains approximately constant during brittle fracture. This assumption suggests that the energy needed to create a unit fracture surface is a constant that depends only on the material. This new material property was given the name fracture toughness and designated GIc. Today, it is the critical stress intensity factor KIc, found in the plane strain condition, which is accepted as the defining property in linear elastic fracture mechanics.
=== Crack tip plastic zone ===
In theory the stress at the crack tip where the radius is nearly zero, would tend to infinity. This would be considered a stress singularity, which is not possible in real-world applications. For this reason, in numerical studies in the field of fracture mechanics, it is often appropriate to represent cracks as round tipped notches, with a geometry dependent region of stress concentration replacing the crack-tip singularity. In actuality, the stress concentration at the tip of a crack within real materials has been found to have a finite value but larger than the nominal stress applied to the specimen.
Nevertheless, there must be some sort of mechanism or property of the material that prevents such a crack from propagating spontaneously. The assumption is, the plastic deformation at the crack tip effectively blunts the crack tip. This deformation depends primarily on the applied stress in the applicable direction (in most cases, this is the y-direction of a regular Cartesian coordinate system), the crack length, and the geometry of the specimen. To estimate how this plastic deformation zone extended from the crack tip, Irwin equated the yield strength of the material to the far-field stresses of the y-direction along the crack (x direction) and solved for the effective radius. From this relationship, and assuming that the crack is loaded to the critical stress intensity factor, Irwin developed the following expression for the idealized radius of the zone of plastic deformation at the crack tip:
{\displaystyle r_{p}={\frac {K_{C}^{2}}{2\pi \sigma _{Y}^{2}}}}
Models of ideal materials have shown that this zone of plasticity is centered at the crack tip. This equation gives the approximate ideal radius of the plastic zone deformation beyond the crack tip, which is useful to many structural scientists because it gives a good estimate of how the material behaves when subjected to stress. In the above equation, the parameters of the stress intensity factor and indicator of material toughness, {\displaystyle K_{C}}, and the yield stress, {\displaystyle \sigma _{Y}}, are of importance because they illustrate many things about the material and its properties, as well as about the plastic zone size. For example, if {\displaystyle K_{C}} is high, then it can be deduced that the material is tough, and if {\displaystyle \sigma _{Y}} is low, one knows that the material is more ductile. The ratio of these two parameters is important to the radius of the plastic zone. For instance, if {\displaystyle \sigma _{Y}} is small, then the squared ratio of {\displaystyle K_{C}} to {\displaystyle \sigma _{Y}} is large, which results in a larger plastic radius. This implies that the material can plastically deform, and, therefore, is tough. This estimate of the size of the plastic zone beyond the crack tip can then be used to more accurately analyze how a material will behave in the presence of a crack.
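A short sketch of Irwin's estimate, with illustrative values for a structural steel (the material numbers are assumptions, not from the text):

```python
import math

def plastic_zone_radius(K_c, sigma_y):
    """Irwin's estimate r_p = K_c^2 / (2 pi sigma_y^2)."""
    return K_c**2 / (2 * math.pi * sigma_y**2)

# Illustrative values for a structural steel
K_c = 50e6       # Pa sqrt(m)
sigma_y = 350e6  # Pa
print(plastic_zone_radius(K_c, sigma_y) * 1e3, "mm")  # ~3 mm
```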
The same process as described above for a single loading event also applies to cyclic loading. If a crack is present in a specimen that undergoes cyclic loading, the specimen will plastically deform at the crack tip and delay the crack growth. In the event of an overload or excursion, this model changes slightly to accommodate the sudden increase in stress from that which the material previously experienced. At a sufficiently high load (overload), the crack grows out of the plastic zone that contained it and leaves behind the pocket of the original plastic deformation. Now, assuming that the overload stress is not sufficiently high as to completely fracture the specimen, the crack will undergo further plastic deformation around the new crack tip, enlarging the zone of residual plastic stresses. This process further toughens and prolongs the life of the material because the new plastic zone is larger than what it would be under the usual stress conditions. This allows the material to undergo more cycles of loading. This idea can be illustrated further by the graph of aluminum with a center crack undergoing overloading events.
=== Limitations ===
But a problem arose for the NRL researchers because naval materials, e.g., ship-plate steel, are not perfectly elastic but undergo significant plastic deformation at the tip of a crack. One basic assumption in Irwin's linear elastic fracture mechanics is small scale yielding, the condition that the size of the plastic zone is small compared to the crack length. However, this assumption is quite restrictive for certain types of failure in structural steels, though such steels can be prone to brittle fracture, which has led to a number of catastrophic failures.
Linear-elastic fracture mechanics is of limited practical use for structural steels, and fracture toughness testing can be expensive.
== Elastic–plastic fracture mechanics ==
Most engineering materials show some nonlinear elastic and inelastic behavior under operating conditions that involve large loads. In such materials the assumptions of linear elastic fracture mechanics may not hold, that is,
the plastic zone at a crack tip may have a size of the same order of magnitude as the crack size
the size and shape of the plastic zone may change as the applied load is increased and also as the crack length increases.
Therefore, a more general theory of crack growth is needed for elastic-plastic materials that can account for:
the local conditions for initial crack growth which include the nucleation, growth, and coalescence of voids (decohesion) at a crack tip.
a global energy balance criterion for further crack growth and unstable fracture.
=== CTOD ===
Historically, the first parameter for the determination of fracture toughness in the elasto-plastic region was the crack tip opening displacement (CTOD), the "opening at the apex of the crack". This parameter was determined by Wells during studies of structural steels which, due to their high toughness, could not be characterized with the linear elastic fracture mechanics model. He noted that, before fracture happened, the walls of the crack were moving apart, and that the crack tip, after fracture, ranged from acute to rounded off due to plastic deformation. In addition, the rounding of the crack tip was more pronounced in steels with superior toughness.
There are a number of alternative definitions of CTOD. In the two most common definitions, CTOD is the displacement at the original crack tip, or at the 90 degree intercept. The latter definition was suggested by Rice and is commonly used to infer CTOD in finite element models. Note that these two definitions are equivalent if the crack tip blunts into a semicircle.
Most laboratory measurements of CTOD have been made on edge-cracked specimens loaded in three-point bending. Early experiments used a flat paddle-shaped gage that was inserted into the crack; as the crack opened, the paddle gage rotated, and an electronic signal was sent to an x-y plotter. This method was inaccurate, however, because it was difficult to reach the crack tip with the paddle gage. Today, the displacement V at the crack mouth is measured, and the CTOD is inferred by assuming the specimen halves are rigid and rotate about a hinge point (the crack tip).
=== R-curve ===
An early attempt in the direction of elastic-plastic fracture mechanics was Irwin's crack extension resistance curve, Crack growth resistance curve or R-curve. This curve acknowledges the fact that the resistance to fracture increases with growing crack size in elastic-plastic materials. The R-curve is a plot of the total energy dissipation rate as a function of the crack size and can be used to examine the processes of slow stable crack growth and unstable fracture. However, the R-curve was not widely used in applications until the early 1970s. The main reasons appear to be that the R-curve depends on the geometry of the specimen and the crack driving force may be difficult to calculate.
=== J-integral ===
In the mid-1960s James R. Rice (then at Brown University) and G. P. Cherepanov independently developed a new toughness measure to describe the case where there is sufficient crack-tip deformation that the part no longer obeys the linear-elastic approximation. Rice's analysis, which assumes non-linear elastic (or monotonic deformation theory plastic) deformation ahead of the crack tip, is designated the J-integral. This analysis is limited to situations where plastic deformation at the crack tip does not extend to the furthest edge of the loaded part. It also demands that the assumed non-linear elastic behavior of the material is a reasonable approximation in shape and magnitude to the real material's load response. The elastic-plastic failure parameter is designated JIc and is conventionally converted to KIc using the equation below. Also note that the J integral approach reduces to the Griffith theory for linear-elastic behavior.
The mathematical definition of J-integral is as follows:
{\displaystyle J=\int _{\Gamma }(w\,dy-T_{i}{\frac {\partial u_{i}}{\partial x}}\,ds)\quad {\text{with}}\quad w=\int _{0}^{\varepsilon _{ij}}\sigma _{ij}\,d\varepsilon _{ij}}
where
{\displaystyle \Gamma }
is an arbitrary path clockwise around the apex of the crack,
{\displaystyle w}
is the density of strain energy,
{\displaystyle T_{i}}
are the components of the traction vector,
{\displaystyle u_{i}}
are the components of the displacement vector,
{\displaystyle ds}
is an incremental length along the path
{\displaystyle \Gamma }
, and
{\displaystyle \sigma _{ij}}
and
{\displaystyle \varepsilon _{ij}}
are the stress and strain tensors.
Since engineers became accustomed to using KIc to characterise fracture toughness, a relation has been used to reduce JIc to it:
{\displaystyle K_{Ic}={\sqrt {E^{*}J_{Ic}}}\,}
where
{\displaystyle E^{*}=E}
for plane stress and
{\displaystyle E^{*}={\frac {E}{1-\nu ^{2}}}}
for plane strain.
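As a quick illustration of this conversion, the sketch below computes K_Ic from a given J_Ic; the modulus and Poisson's ratio are assumed, roughly steel-like values, not figures from the text.
```python
# A short sketch of the J-K conversion just given; E and nu below are
# illustrative assumptions.
import math

def K_Ic_from_J_Ic(J_Ic: float, E: float, nu: float, plane_strain: bool = True) -> float:
    """Convert J_Ic (J/m^2) to K_Ic (Pa*sqrt(m)) via K_Ic = sqrt(E* J_Ic)."""
    E_star = E / (1.0 - nu**2) if plane_strain else E  # E* as defined above
    return math.sqrt(E_star * J_Ic)

K = K_Ic_from_J_Ic(J_Ic=100e3, E=200e9, nu=0.3)  # plane strain
print(f"K_Ic = {K / 1e6:.1f} MPa*sqrt(m)")       # ~148 MPa*sqrt(m)
```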
=== Cohesive zone model ===
When a significant region around a crack tip has undergone plastic deformation, other approaches can be used to determine the possibility of further crack extension and the direction of crack growth and branching. A simple technique that is easily incorporated into numerical calculations is the cohesive zone model method which is based on concepts proposed independently by Barenblatt and Dugdale in the early 1960s. The relationship between the Dugdale-Barenblatt models and Griffith's theory was first discussed by Willis in 1967. The equivalence of the two approaches in the context of brittle fracture was shown by Rice in 1968.
=== Transition flaw size ===
Let a material have a yield strength
{\displaystyle \sigma _{Y}}
and a fracture toughness in mode I
{\displaystyle K_{Ic}}
. Based on fracture mechanics, the material will fail at stress
{\displaystyle \sigma _{\text{fail}}=K_{Ic}/{\sqrt {\pi a}}}
. Based on plasticity, the material will yield when
{\displaystyle \sigma _{\text{fail}}=\sigma _{Y}}
. These curves intersect when
{\displaystyle a=K_{Ic}^{2}/\pi \sigma _{Y}^{2}}
. This value of
{\displaystyle a}
is called the transition flaw size
{\displaystyle a_{t}}
, and depends on the material properties of the structure. When
{\displaystyle a<a_{t}}
, the failure is governed by plastic yielding, and when
{\displaystyle a>a_{t}}
the failure is governed by fracture mechanics. The value of
{\displaystyle a_{t}}
for engineering alloys is 100 mm and for ceramics is 0.001 mm. If we assume that manufacturing processes can give rise to flaws on the order of micrometers, then it can be seen that ceramics are more likely to fail by fracture, whereas engineering alloys would fail by plastic deformation.
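The criterion is easy to apply numerically. A minimal sketch, with illustrative alloy-like and ceramic-like property values (assumptions, not figures from the text):
```python
# A minimal sketch of the transition-flaw-size criterion above;
# the two property sets are illustrative assumptions.
import math

def transition_flaw_size(K_Ic: float, sigma_Y: float) -> float:
    """a_t = K_Ic^2 / (pi * sigma_Y^2): crossover between yielding and fracture."""
    return K_Ic**2 / (math.pi * sigma_Y**2)

def failure_mode(a: float, K_Ic: float, sigma_Y: float) -> str:
    return "plastic yielding" if a < transition_flaw_size(K_Ic, sigma_Y) else "fracture"

a_flaw = 1e-6  # a micrometre-scale manufacturing flaw
print(failure_mode(a_flaw, K_Ic=100e6, sigma_Y=400e6))  # alloy-like   -> plastic yielding
print(failure_mode(a_flaw, K_Ic=3e6, sigma_Y=2e9))      # ceramic-like -> fracture
```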
== Concrete fracture analysis ==
Concrete fracture analysis is the part of fracture mechanics that studies crack propagation and related failure modes in concrete. As concrete is widely used in construction, fracture analysis and modes of reinforcement are an important part of its study, and different concretes are characterized in part by their fracture properties. Common fractures include the cone-shaped fractures that form around anchors under tensile load.
Bažant (1983) proposed a crack band model for materials like concrete whose homogeneous nature changes randomly over a certain range. He also observed that in plain concrete, the size effect has a strong influence on the critical stress intensity factor, and proposed the relation
{\displaystyle \sigma =\tau /{\sqrt {1+d/(\lambda \delta )}},}
where
{\displaystyle \sigma }
= stress intensity factor,
{\displaystyle \tau }
= tensile strength,
{\displaystyle d}
= size of specimen,
{\displaystyle \delta }
= maximum aggregate size, and
{\displaystyle \lambda }
= an empirical constant.
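The relation can be evaluated directly; the sketch below uses assumed, illustrative numbers for τ, λ and δ to show the predicted drop in nominal strength with specimen size.
```python
# A sketch of Bazant's size-effect relation as written above,
# sigma = tau / sqrt(1 + d / (lambda * delta)); all numbers are illustrative.
import math

def bazant_size_effect(tau: float, d: float, lam: float, delta: float) -> float:
    """Nominal strength for specimen size d, aggregate size delta,
    empirical constant lam, and tensile strength tau."""
    return tau / math.sqrt(1.0 + d / (lam * delta))

# Larger specimens of the same concrete are predicted to fail at lower stress:
for d in (0.1, 0.5, 2.0):  # specimen sizes, m
    print(d, bazant_size_effect(tau=3e6, d=d, lam=25.0, delta=0.02))
```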
== Atomistic Fracture Mechanics ==
Atomistic Fracture Mechanics (AFM) is a relatively new field that studies the behavior and properties of materials at the atomic scale when subjected to fracture. It integrates concepts from fracture mechanics with atomistic simulations to understand how cracks initiate, propagate, and interact with the microstructure of materials. By using techniques like Molecular Dynamics (MD) simulations, AFM can provide insights into the fundamental mechanisms of crack formation and growth, the role of atomic bonds, and the influence of material defects and impurities on fracture behavior.
== See also ==
AFGROW – Fracture mechanics and fatigue crack growth analysis software
Concrete cone failure – Failure mode of anchors in concrete submitted to tensile force
Concrete degradation – Damage to concrete affecting its mechanical strength and its durability
Earthquake – Sudden movement of the Earth's crust
Fatigue – Initiation and propagation of cracks in a material due to cyclic loading
Fault (geology) – Fracture or discontinuity in displaced rock
Material failure theory – Science of predicting if, when, and how a given material will fail under loading
Notch (engineering) – Externally-produced indentation in a planar material
Peridynamics – Non-local formulation of continuum mechanics, a formulation of continuum mechanics that is oriented toward deformations with discontinuities, especially fractures
Shock (mechanics) – Sudden transient acceleration
Strength of materials
Stress corrosion cracking – Growth of cracks in a corrosive environment
Structural fracture mechanics – Field of structural engineering
== References ==
== Further reading ==
Buckley, C.P. "Material Failure", Lecture Notes (2005), University of Oxford.
Davidge, R.W., Mechanical Behavior of Ceramics, Cambridge Solid State Science Series, (1979)
Demaid, Adrian, Fail Safe, Open University (2004)
Green, D., An Introduction to the Mechanical Properties of Ceramics, Cambridge Solid State Science Series, Eds. Clarke, D.R., Suresh, S., Ward, I.M. (1998)
Tipper, Constance Fligg (1962). The brittle fracture story. Cambridge U.P.
Lawn, B.R., Fracture of Brittle Solids, Cambridge Solid State Science Series, 2nd Edn. (1993)
Farahmand, B., Bockrath, G., and Glassco, J. (1997) Fatigue and Fracture Mechanics of High-Risk Parts, Chapman & Hall. ISBN 978-0-412-12991-9.
Chen, X., Mai, Y.-W., Fracture Mechanics of Electromagnetic Materials: Nonlinear Field Theory and Applications, Imperial College Press, (2012)
A.N. Gent, W.V. Mars, In: James E. Mark, Burak Erman and Mike Roland, Editor(s), Chapter 10 – Strength of Elastomers, The Science and Technology of Rubber, Fourth edition, Academic Press, Boston, 2013, pp. 473–516, ISBN 9780123945846, 10.1016/B978-0-12-394584-6.00010-8
Zehnder, Alan. Fracture Mechanics, SpringerLink, (2012).
== External links ==
Nonlinear Fracture Mechanics Notes by Prof. John Hutchinson, Harvard University
Notes on Fracture of Thin Films and Multilayers by Prof. John Hutchinson, Harvard University
Fracture Mechanics by Piet Schreurs, TU Eindhoven, The Netherlands | Wikipedia/Fracture_mechanics |
In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.
== Fundamental theory of matrix eigenvectors and eigenvalues ==
A (nonzero) vector v of dimension N is an eigenvector of a square N × N matrix A if it satisfies a linear equation of the form
{\displaystyle \mathbf {A} \mathbf {v} =\lambda \mathbf {v} }
for some scalar λ. Then λ is called the eigenvalue corresponding to v. Geometrically speaking, the eigenvectors of A are the vectors that A merely elongates or shrinks, and the amount that they elongate/shrink by is the eigenvalue. The above equation is called the eigenvalue equation or the eigenvalue problem.
This yields an equation for the eigenvalues
{\displaystyle p\left(\lambda \right)=\det \left(\mathbf {A} -\lambda \mathbf {I} \right)=0.}
We call p(λ) the characteristic polynomial, and the equation, called the characteristic equation, is an Nth-order polynomial equation in the unknown λ. This equation will have Nλ distinct solutions, where 1 ≤ Nλ ≤ N. The set of solutions, that is, the eigenvalues, is called the spectrum of A.
If the field of scalars is algebraically closed, then we can factor p as
{\displaystyle p(\lambda )=\left(\lambda -\lambda _{1}\right)^{n_{1}}\left(\lambda -\lambda _{2}\right)^{n_{2}}\cdots \left(\lambda -\lambda _{N_{\lambda }}\right)^{n_{N_{\lambda }}}=0.}
The integer ni is termed the algebraic multiplicity of eigenvalue λi. The algebraic multiplicities sum to N:
{\textstyle \sum _{i=1}^{N_{\lambda }}{n_{i}}=N.}
For each eigenvalue λi, we have a specific eigenvalue equation
{\displaystyle \left(\mathbf {A} -\lambda _{i}\mathbf {I} \right)\mathbf {v} =0.}
There will be 1 ≤ mi ≤ ni linearly independent solutions to each eigenvalue equation. The linear combinations of the mi solutions (except the one which gives the zero vector) are the eigenvectors associated with the eigenvalue λi. The integer mi is termed the geometric multiplicity of λi. It is important to keep in mind that the algebraic multiplicity ni and geometric multiplicity mi may or may not be equal, but we always have mi ≤ ni. The simplest case is of course when mi = ni = 1. The total number of linearly independent eigenvectors, Nv, can be calculated by summing the geometric multiplicities
{\displaystyle \sum _{i=1}^{N_{\lambda }}{m_{i}}=N_{\mathbf {v} }.}
The eigenvectors can be indexed by eigenvalues, using a double index, with vij being the jth eigenvector for the ith eigenvalue. The eigenvectors can also be indexed using the simpler notation of a single index vk, with k = 1, 2, ..., Nv.
== Eigendecomposition of a matrix ==
Let A be a square n × n matrix with n linearly independent eigenvectors qi (where i = 1, ..., n). Then A can be factored as
{\displaystyle \mathbf {A} =\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}}
where Q is the square n × n matrix whose ith column is the eigenvector qi of A, and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, Λii = λi. Note that only diagonalizable matrices can be factorized in this way. For example, the defective matrix
{\displaystyle \left[{\begin{smallmatrix}1&1\\0&1\end{smallmatrix}}\right]}
(which is a shear matrix) cannot be diagonalized.
The n eigenvectors qi are usually normalized, but they don't have to be. A non-normalized set of n eigenvectors, vi can also be used as the columns of Q. That can be understood by noting that the magnitude of the eigenvectors in Q gets canceled in the decomposition by the presence of Q−1. If one of the eigenvalues λi has multiple linearly independent eigenvectors (that is, the geometric multiplicity of λi is greater than 1), then these eigenvectors for this eigenvalue λi can be chosen to be mutually orthogonal; however, if two eigenvectors belong to two different eigenvalues, it may be impossible for them to be orthogonal to each other (see Example below). One special case is that if A is a normal matrix, then by the spectral theorem, it's always possible to diagonalize A in an orthonormal basis {qi}.
The decomposition can be derived from the fundamental property of eigenvectors:
{\displaystyle {\begin{aligned}\mathbf {A} \mathbf {v} &=\lambda \mathbf {v} \\\mathbf {A} \mathbf {Q} &=\mathbf {Q} \mathbf {\Lambda } \\\mathbf {A} &=\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}.\end{aligned}}}
The linearly independent eigenvectors qi with nonzero eigenvalues form a basis (not necessarily orthonormal) for all possible products Ax, for x ∈ Cn, which is the same as the image (or range) of the corresponding matrix transformation, and also the column space of the matrix A. The number of linearly independent eigenvectors qi with nonzero eigenvalues is equal to the rank of the matrix A, and also the dimension of the image (or range) of the corresponding matrix transformation, as well as its column space.
The linearly independent eigenvectors qi with an eigenvalue of zero form a basis (which can be chosen to be orthonormal) for the null space (also known as the kernel) of the matrix transformation A.
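As a numerical illustration (not from the text), NumPy's eig returns exactly the Q and Λ of the factorization A = QΛQ⁻¹ above; the matrix below is an arbitrary diagonalizable example.
```python
# A small numerical illustration of A = Q Lambda Q^{-1} with NumPy;
# numpy.linalg.eig returns the eigenvalues and the matrix of column
# eigenvectors. The example matrix is an arbitrary assumption.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, Q = np.linalg.eig(A)   # columns of Q are the eigenvectors q_i
Lam = np.diag(eigvals)          # Lambda with eigenvalues on the diagonal

A_rebuilt = Q @ Lam @ np.linalg.inv(Q)
assert np.allclose(A, A_rebuilt)       # A = Q Lambda Q^{-1}
assert np.allclose(A @ Q, Q @ Lam)     # the intermediate step A Q = Q Lambda
```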
=== Example ===
The 2 × 2 real matrix A
{\displaystyle \mathbf {A} ={\begin{bmatrix}1&0\\1&3\\\end{bmatrix}}}
may be decomposed into a diagonal matrix through multiplication of a non-singular matrix Q
{\displaystyle \mathbf {Q} ={\begin{bmatrix}a&b\\c&d\end{bmatrix}}\in \mathbb {R} ^{2\times 2}.}
Then
{\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}^{-1}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}x&0\\0&y\end{bmatrix}},}
for some real diagonal matrix
{\displaystyle \left[{\begin{smallmatrix}x&0\\0&y\end{smallmatrix}}\right]}
.
Multiplying both sides of the equation on the left by Q:
{\displaystyle {\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}a&b\\c&d\end{bmatrix}}{\begin{bmatrix}x&0\\0&y\end{bmatrix}}.}
The above equation can be decomposed into two simultaneous equations:
{\displaystyle {\begin{cases}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a\\c\end{bmatrix}}={\begin{bmatrix}ax\\cx\end{bmatrix}}\\[1.2ex]{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}b\\d\end{bmatrix}}={\begin{bmatrix}by\\dy\end{bmatrix}}\end{cases}}.}
Factoring out the eigenvalues x and y:
{\displaystyle {\begin{cases}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a\\c\end{bmatrix}}=x{\begin{bmatrix}a\\c\end{bmatrix}}\\[1.2ex]{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}b\\d\end{bmatrix}}=y{\begin{bmatrix}b\\d\end{bmatrix}}\end{cases}}}
Letting
{\displaystyle \mathbf {a} ={\begin{bmatrix}a\\c\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}b\\d\end{bmatrix}},}
this gives us two vector equations:
{\displaystyle {\begin{cases}\mathbf {A} \mathbf {a} =x\mathbf {a} \\\mathbf {A} \mathbf {b} =y\mathbf {b} \end{cases}}}
These can be represented by a single vector equation involving two solutions as eigenvalues:
{\displaystyle \mathbf {A} \mathbf {u} =\lambda \mathbf {u} }
where λ represents the two eigenvalues x and y, and u represents the vectors a and b.
Shifting λu to the left hand side and factoring u out
{\displaystyle \left(\mathbf {A} -\lambda \mathbf {I} \right)\mathbf {u} =\mathbf {0} }
Since Q is non-singular, it is essential that u is nonzero. Therefore,
{\displaystyle \det(\mathbf {A} -\lambda \mathbf {I} )=0}
Thus
{\displaystyle (1-\lambda )(3-\lambda )=0}
giving us the solutions of the eigenvalues for the matrix A as λ = 1 or λ = 3, and the resulting diagonal matrix from the eigendecomposition of A is thus
{\displaystyle \left[{\begin{smallmatrix}1&0\\0&3\end{smallmatrix}}\right]}
.
Putting the solutions back into the above simultaneous equations
{\displaystyle {\begin{cases}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}a\\c\end{bmatrix}}=1{\begin{bmatrix}a\\c\end{bmatrix}}\\[1.2ex]{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}b\\d\end{bmatrix}}=3{\begin{bmatrix}b\\d\end{bmatrix}}\end{cases}}}
Solving the equations, we have
{\displaystyle a=-2c\quad {\text{and}}\quad b=0,\qquad c,d\in \mathbb {R} .}
Thus the matrix Q required for the eigendecomposition of A is
{\displaystyle \mathbf {Q} ={\begin{bmatrix}-2c&0\\c&d\end{bmatrix}},\qquad c,d\in \mathbb {R} ,}
that is:
{\displaystyle {\begin{bmatrix}-2c&0\\c&d\end{bmatrix}}^{-1}{\begin{bmatrix}1&0\\1&3\end{bmatrix}}{\begin{bmatrix}-2c&0\\c&d\end{bmatrix}}={\begin{bmatrix}1&0\\0&3\end{bmatrix}},\qquad c,d\in \mathbb {R} }
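The worked example can be checked numerically; the sketch below picks the illustrative choice c = d = 1 (any nonzero values work).
```python
# A quick numerical check of the worked example: Q^{-1} A Q = diag(1, 3).
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])
c, d = 1.0, 1.0                      # one illustrative choice; any nonzero c, d work
Q = np.array([[-2.0 * c, 0.0],
              [c,        d]])

D = np.linalg.inv(Q) @ A @ Q
assert np.allclose(D, np.diag([1.0, 3.0]))
```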
=== Matrix inverse via eigendecomposition ===
If a matrix A can be eigendecomposed and if none of its eigenvalues are zero, then A is invertible and its inverse is given by
{\displaystyle \mathbf {A} ^{-1}=\mathbf {Q} \mathbf {\Lambda } ^{-1}\mathbf {Q} ^{-1}}
If
{\displaystyle \mathbf {A} }
is a symmetric matrix, since
{\displaystyle \mathbf {Q} }
is formed from the eigenvectors of
{\displaystyle \mathbf {A} }
,
{\displaystyle \mathbf {Q} }
is guaranteed to be an orthogonal matrix, therefore
{\displaystyle \mathbf {Q} ^{-1}=\mathbf {Q} ^{\mathrm {T} }}
. Furthermore, because Λ is a diagonal matrix, its inverse is easy to calculate:
{\displaystyle \left[\mathbf {\Lambda } ^{-1}\right]_{ii}={\frac {1}{\lambda _{i}}}}
==== Practical implications ====
When eigendecomposition is used on a matrix of measured, real data, the inverse may be less valid when all eigenvalues are used unmodified in the form above. This is because as eigenvalues become relatively small, their contribution to the inversion is large. Those near zero or at the "noise" of the measurement system will have undue influence and could hamper solutions (detection) using the inverse.
Two mitigations have been proposed: truncating small or zero eigenvalues, and extending the lowest reliable eigenvalue to those below it. See also Tikhonov regularization as a statistically motivated but biased method for rolling off eigenvalues as they become dominated by noise.
The first mitigation method is similar to a sparse sample of the original matrix, removing components that are not considered valuable. However, if the solution or detection process is near the noise level, truncating may remove components that influence the desired solution.
The second mitigation extends the eigenvalue so that lower values have much less influence over inversion, but do still contribute, such that solutions near the noise will still be found.
The reliable eigenvalue can be found by assuming that eigenvalues of extremely similar and low value are a good representation of measurement noise (which is assumed low for most systems).
If the eigenvalues are rank-sorted by value, then the reliable eigenvalue can be found by minimization of the Laplacian of the sorted eigenvalues:
{\displaystyle \min \left|\nabla ^{2}\lambda _{\mathrm {s} }\right|}
where the eigenvalues are subscripted with an s to denote being sorted. The position of the minimization is the lowest reliable eigenvalue. In measurement systems, the square root of this reliable eigenvalue is the average noise over the components of the system.
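One rough sketch of this heuristic follows. The implementation details here are an assumption, since the text only describes the idea: rank-sort the eigenvalues, take a discrete second difference as the Laplacian, and truncate every eigenvalue below the minimizer when forming the inverse.
```python
# A rough sketch (implementation details assumed) of truncating unreliable
# eigenvalues before inversion. Requires at least 3 eigenvalues for the
# second difference.
import numpy as np

def truncated_inverse(A: np.ndarray) -> np.ndarray:
    """Invert A via eigendecomposition, zeroing the contribution of
    eigenvalues below an estimated noise floor."""
    eigvals, Q = np.linalg.eig(A)
    lam_s = np.sort(eigvals.real)               # rank-sorted eigenvalues
    lap = np.abs(np.diff(lam_s, n=2))           # discrete |second difference|
    cutoff = lam_s[np.argmin(lap) + 1]          # lowest reliable eigenvalue
    mask = eigvals.real >= cutoff
    inv_lam = np.zeros_like(eigvals)
    inv_lam[mask] = 1.0 / eigvals[mask]         # truncate everything below
    return (Q * inv_lam) @ np.linalg.inv(Q)     # Q Lambda^{-1} Q^{-1}, truncated
```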
== Functional calculus ==
The eigendecomposition allows for much easier computation of power series of matrices. If f (x) is given by
{\displaystyle f(x)=a_{0}+a_{1}x+a_{2}x^{2}+\cdots }
then we know that
{\displaystyle f\!\left(\mathbf {A} \right)=\mathbf {Q} \,f\!\left(\mathbf {\Lambda } \right)\mathbf {Q} ^{-1}}
Because Λ is a diagonal matrix, functions of Λ are very easy to calculate:
{\displaystyle \left[f\left(\mathbf {\Lambda } \right)\right]_{ii}=f\left(\lambda _{i}\right)}
The off-diagonal elements of f (Λ) are zero; that is, f (Λ) is also a diagonal matrix. Therefore, calculating f (A) reduces to just calculating the function on each of the eigenvalues.
A similar technique works more generally with the holomorphic functional calculus, using
{\displaystyle \mathbf {A} ^{-1}=\mathbf {Q} \mathbf {\Lambda } ^{-1}\mathbf {Q} ^{-1}}
from above. Once again, we find that
{\displaystyle \left[f\left(\mathbf {\Lambda } \right)\right]_{ii}=f\left(\lambda _{i}\right)}
=== Examples ===
{\displaystyle {\begin{aligned}\mathbf {A} ^{2}&=\left(\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}\right)\left(\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}\right)=\mathbf {Q} \mathbf {\Lambda } \left(\mathbf {Q} ^{-1}\mathbf {Q} \right)\mathbf {\Lambda } \mathbf {Q} ^{-1}=\mathbf {Q} \mathbf {\Lambda } ^{2}\mathbf {Q} ^{-1}\\[1.2ex]\mathbf {A} ^{n}&=\mathbf {Q} \mathbf {\Lambda } ^{n}\mathbf {Q} ^{-1}\\[1.2ex]\exp \mathbf {A} &=\mathbf {Q} \exp(\mathbf {\Lambda } )\mathbf {Q} ^{-1}\end{aligned}}}
which are examples for the functions
{\displaystyle f(x)=x^{2},\;f(x)=x^{n},\;f(x)=\exp {x}}
. Furthermore,
{\displaystyle \exp {\mathbf {A} }}
is the matrix exponential.
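As a numerical illustration (the matrix is an arbitrary example, not from the text), f(A) = Q f(Λ) Q⁻¹ with f = exp can be checked against scipy.linalg.expm:
```python
# Computing exp(A) via eigendecomposition and checking it against SciPy's
# general-purpose matrix exponential. The example matrix is an assumption.
import numpy as np
from scipy.linalg import expm

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, Q = np.linalg.eig(A)

exp_A = (Q * np.exp(eigvals)) @ np.linalg.inv(Q)  # Q exp(Lambda) Q^{-1}
assert np.allclose(exp_A, expm(A))
```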
== Decomposition for spectral matrices ==
Spectral matrices are matrices that possess distinct eigenvalues and a complete set of eigenvectors. This characteristic allows spectral matrices to be fully diagonalizable, meaning they can be decomposed into simpler forms using eigendecomposition. This decomposition process reveals fundamental insights into the matrix's structure and behavior, particularly in fields such as quantum mechanics, signal processing, and numerical analysis.
=== Normal matrices ===
A complex-valued square matrix
{\displaystyle A}
is normal (meaning
{\displaystyle \mathbf {A} ^{*}\mathbf {A} =\mathbf {A} \mathbf {A} ^{*}}
, where
{\displaystyle \mathbf {A} ^{*}}
is the conjugate transpose) if and only if it can be decomposed as
{\displaystyle \mathbf {A} =\mathbf {U} \mathbf {\Lambda } \mathbf {U} ^{*}}
, where
{\displaystyle \mathbf {U} }
is a unitary matrix (meaning
{\displaystyle \mathbf {U} ^{*}=\mathbf {U} ^{-1}}
) and
{\displaystyle \mathbf {\Lambda } =\operatorname {diag} (\lambda _{1},\ldots ,\lambda _{n})}
is a diagonal matrix. The columns
{\displaystyle \mathbf {u} _{1},\cdots ,\mathbf {u} _{n}}
of
{\displaystyle \mathbf {U} }
form an orthonormal basis and are eigenvectors of
{\displaystyle \mathbf {A} }
with corresponding eigenvalues
{\displaystyle \lambda _{1},\ldots ,\lambda _{n}}
.
For example, consider the 2 × 2 normal matrix
{\displaystyle \mathbf {A} ={\begin{bmatrix}1&2\\2&1\end{bmatrix}}}
.
The eigenvalues are
{\displaystyle \lambda _{1}=3}
and
{\displaystyle \lambda _{2}=-1}
.
The (normalized) eigenvectors corresponding to these eigenvalues are
{\displaystyle \mathbf {u} _{1}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\1\end{bmatrix}}}
and
{\displaystyle \mathbf {u} _{2}={\frac {1}{\sqrt {2}}}{\begin{bmatrix}-1\\1\end{bmatrix}}}
.
The diagonalization is
{\displaystyle \mathbf {A} =\mathbf {U} \mathbf {\Lambda } \mathbf {U} ^{*}}
, where
{\displaystyle \mathbf {U} ={\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}}
,
{\displaystyle \mathbf {\Lambda } ={\begin{bmatrix}3&0\\0&-1\end{bmatrix}}}
and
{\displaystyle \mathbf {U} ^{*}=\mathbf {U} ^{-1}={\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}}
.
The verification is
{\displaystyle \mathbf {U} \mathbf {\Lambda } \mathbf {U} ^{*}={\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}{\begin{bmatrix}3&0\\0&-1\end{bmatrix}}{\begin{bmatrix}1/{\sqrt {2}}&1/{\sqrt {2}}\\1/{\sqrt {2}}&-1/{\sqrt {2}}\end{bmatrix}}={\begin{bmatrix}1&2\\2&1\end{bmatrix}}=\mathbf {A} }
.
This example illustrates the process of diagonalizing a normal matrix
{\displaystyle \mathbf {A} }
by finding its eigenvalues and eigenvectors, forming the unitary matrix
{\displaystyle \mathbf {U} }
, the diagonal matrix
{\displaystyle \mathbf {\Lambda } }
, and verifying the decomposition.
=== Real symmetric matrices ===
As a special case, for every n × n real symmetric matrix, the eigenvalues are real and the eigenvectors can be chosen real and orthonormal. Thus a real symmetric matrix A can be decomposed as
{\displaystyle \mathbf {A} =\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{\mathsf {T}}}
, where Q is an orthogonal matrix whose columns are the real, orthonormal eigenvectors of A, and Λ is a diagonal matrix whose entries are the eigenvalues of A.
=== Diagonalizable matrices ===
Diagonalizable matrices can be decomposed using eigendecomposition, provided they have a full set of linearly independent eigenvectors. They can be expressed as
{\displaystyle \mathbf {A} =\mathbf {P} \mathbf {D} \mathbf {P} ^{-1}}
, where
{\displaystyle \mathbf {P} }
is a matrix whose columns are eigenvectors of
{\displaystyle \mathbf {A} }
and
{\displaystyle \mathbf {D} }
is a diagonal matrix consisting of the corresponding eigenvalues of
{\displaystyle \mathbf {A} }
.
=== Positive definite matrices ===
Positive definite matrices are matrices for which all eigenvalues are positive. They can be decomposed as
{\displaystyle \mathbf {A} =\mathbf {L} \mathbf {L} ^{\mathsf {T}}}
using the Cholesky decomposition, where
{\displaystyle \mathbf {L} }
is a lower triangular matrix.
=== Unitary and Hermitian matrices ===
Unitary matrices satisfy
{\displaystyle \mathbf {U} \mathbf {U} ^{*}=\mathbf {I} }
(real case) or
{\displaystyle \mathbf {U} \mathbf {U} ^{\dagger }=\mathbf {I} }
(complex case), where
{\displaystyle \mathbf {U} ^{*}}
denotes the transpose and
{\displaystyle \mathbf {U} ^{\dagger }}
denotes the conjugate transpose. They diagonalize using unitary transformations.
Hermitian matrices satisfy
{\displaystyle \mathbf {H} =\mathbf {H} ^{\dagger }}
, where
{\displaystyle \mathbf {H} ^{\dagger }}
denotes the conjugate transpose. They can be diagonalized using unitary or orthogonal matrices.
== Useful facts ==
=== Useful facts regarding eigenvalues ===
The product of the eigenvalues is equal to the determinant of A
{\displaystyle \det \left(\mathbf {A} \right)=\prod _{i=1}^{N_{\lambda }}{\lambda _{i}^{n_{i}}}}
Note that each eigenvalue is raised to the power ni, the algebraic multiplicity.
The sum of the eigenvalues is equal to the trace of A
{\displaystyle \operatorname {tr} \left(\mathbf {A} \right)=\sum _{i=1}^{N_{\lambda }}{{n_{i}}\lambda _{i}}}
Note that each eigenvalue is multiplied by ni, the algebraic multiplicity.
If the eigenvalues of A are λi, and A is invertible, then the eigenvalues of A−1 are simply λi−1.
If the eigenvalues of A are λi, then the eigenvalues of f (A) are simply f (λi), for any holomorphic function f.
=== Useful facts regarding eigenvectors ===
If A is Hermitian and full-rank, the basis of eigenvectors may be chosen to be mutually orthogonal. The eigenvalues are real.
The eigenvectors of A−1 are the same as the eigenvectors of A.
Eigenvectors are only defined up to a multiplicative constant. That is, if Av = λv then cv is also an eigenvector for any scalar c ≠ 0. In particular, −v and eiθv (for any θ) are also eigenvectors.
In the case of degenerate eigenvalues (an eigenvalue having more than one eigenvector), the eigenvectors have an additional freedom of linear transformation, that is to say, any linear (orthonormal) combination of eigenvectors sharing an eigenvalue (in the degenerate subspace) is itself an eigenvector (in the subspace).
=== Useful facts regarding eigendecomposition ===
A can be eigendecomposed if and only if the number of linearly independent eigenvectors, Nv, equals the dimension of an eigenvector: Nv = N
If the field of scalars is algebraically closed and if p(λ) has no repeated roots, that is, if
{\displaystyle N_{\lambda }=N,}
then A can be eigendecomposed.
The statement "A can be eigendecomposed" does not imply that A has an inverse, as some eigenvalues may be zero, and a matrix with a zero eigenvalue is not invertible.
The statement "A has an inverse" does not imply that A can be eigendecomposed. A counterexample is
{\displaystyle \left[{\begin{smallmatrix}1&1\\0&1\end{smallmatrix}}\right]}
, which is an invertible defective matrix.
=== Useful facts regarding matrix inverse ===
A can be inverted if and only if all eigenvalues are nonzero:
{\displaystyle \lambda _{i}\neq 0\quad \forall \,i}
If λi ≠ 0 and Nv = N, the inverse is given by
{\displaystyle \mathbf {A} ^{-1}=\mathbf {Q} \mathbf {\Lambda } ^{-1}\mathbf {Q} ^{-1}}
== Numerical computations ==
=== Numerical computation of eigenvalues ===
Suppose that we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically using the characteristic polynomial. However, this is often impossible for larger matrices, in which case we must use a numerical method.
In practice, eigenvalues of large matrices are not computed using the characteristic polynomial. Computing the polynomial becomes expensive in itself, and exact (symbolic) roots of a high-degree polynomial can be difficult to compute and express: the Abel–Ruffini theorem implies that the roots of high-degree (5 or above) polynomials cannot in general be expressed simply using nth roots. Therefore, general algorithms to find eigenvectors and eigenvalues are iterative.
Iterative numerical algorithms for approximating roots of polynomials exist, such as Newton's method, but in general it is impractical to compute the characteristic polynomial and then apply these methods. One reason is that small round-off errors in the coefficients of the characteristic polynomial can lead to large errors in the eigenvalues and eigenvectors: the roots are an extremely ill-conditioned function of the coefficients.
A simple and accurate iterative method is the power method: a random vector v is chosen and a sequence of unit vectors is computed as
{\displaystyle {\frac {\mathbf {A} \mathbf {v} }{\left\|\mathbf {A} \mathbf {v} \right\|}},{\frac {\mathbf {A} ^{2}\mathbf {v} }{\left\|\mathbf {A} ^{2}\mathbf {v} \right\|}},{\frac {\mathbf {A} ^{3}\mathbf {v} }{\left\|\mathbf {A} ^{3}\mathbf {v} \right\|}},\ldots }
This sequence will almost always converge to an eigenvector corresponding to the eigenvalue of greatest magnitude, provided that v has a nonzero component of this eigenvector in the eigenvector basis (and also provided that there is only one eigenvalue of greatest magnitude). This simple algorithm is useful in some practical applications; for example, Google uses it to calculate the page rank of documents in their search engine. Also, the power method is the starting point for many more sophisticated algorithms. For instance, by keeping not just the last vector in the sequence, but instead looking at the span of all the vectors in the sequence, one can get a better (faster converging) approximation for the eigenvector, and this idea is the basis of Arnoldi iteration. Alternatively, the important QR algorithm is also based on a subtle transformation of a power method.
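A minimal power-method sketch following this sequence (the starting vector, iteration count and test matrix are arbitrary choices):
```python
# A minimal power-method sketch: repeatedly apply A and renormalize;
# the Rayleigh quotient of the limit vector estimates the dominant eigenvalue.
import numpy as np

def power_method(A: np.ndarray, iters: int = 1000, seed: int = 0):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])   # random start: almost surely has a
    v /= np.linalg.norm(v)                # component along the top eigenvector
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)            # next unit vector in the sequence
    lam = v @ A @ v                       # Rayleigh quotient estimate
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_method(A)
print(lam)  # ~3.618, the eigenvalue of greatest magnitude
```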
=== Numerical computation of eigenvectors ===
Once the eigenvalues are computed, the eigenvectors could be calculated by solving the equation
{\displaystyle \left(\mathbf {A} -\lambda _{i}\mathbf {I} \right)\mathbf {v} _{i,j}=\mathbf {0} }
using Gaussian elimination or any other method for solving matrix equations.
However, in practical large-scale eigenvalue methods, the eigenvectors are usually computed in other ways, as a byproduct of the eigenvalue computation. In power iteration, for example, the eigenvector is actually computed before the eigenvalue (which is typically computed by the Rayleigh quotient of the eigenvector). In the QR algorithm for a Hermitian matrix (or any normal matrix), the orthonormal eigenvectors are obtained as a product of the Q matrices from the steps in the algorithm. (For more general matrices, the QR algorithm yields the Schur decomposition first, from which the eigenvectors can be obtained by a backsubstitution procedure.) For Hermitian matrices, the Divide-and-conquer eigenvalue algorithm is more efficient than the QR algorithm if both eigenvectors and eigenvalues are desired.
== Additional topics ==
=== Generalized eigenspaces ===
Recall that the geometric multiplicity of an eigenvalue can be described as the dimension of the associated eigenspace, the nullspace of λI − A. The algebraic multiplicity can also be thought of as a dimension: it is the dimension of the associated generalized eigenspace (1st sense), which is the nullspace of the matrix (λI − A)k for any sufficiently large k. That is, it is the space of generalized eigenvectors (first sense), where a generalized eigenvector is any vector which eventually becomes 0 if λI − A is applied to it enough times successively. Any eigenvector is a generalized eigenvector, and so each eigenspace is contained in the associated generalized eigenspace. This provides an easy proof that the geometric multiplicity is always less than or equal to the algebraic multiplicity.
This usage should not be confused with the generalized eigenvalue problem described below.
=== Conjugate eigenvector ===
A conjugate eigenvector or coneigenvector is a vector sent after transformation to a scalar multiple of its conjugate, where the scalar is called the conjugate eigenvalue or coneigenvalue of the linear transformation. The coneigenvectors and coneigenvalues represent essentially the same information and meaning as the regular eigenvectors and eigenvalues, but arise when an alternative coordinate system is used. The corresponding equation is
{\displaystyle \mathbf {A} \mathbf {v} =\lambda \mathbf {v} ^{*}.}
For example, in coherent electromagnetic scattering theory, the linear transformation A represents the action performed by the scattering object, and the eigenvectors represent polarization states of the electromagnetic wave. In optics, the coordinate system is defined from the wave's viewpoint, known as the Forward Scattering Alignment (FSA), and gives rise to a regular eigenvalue equation, whereas in radar, the coordinate system is defined from the radar's viewpoint, known as the Back Scattering Alignment (BSA), and gives rise to a coneigenvalue equation.
=== Generalized eigenvalue problem ===
A generalized eigenvalue problem (second sense) is the problem of finding a (nonzero) vector v that obeys
{\displaystyle \mathbf {A} \mathbf {v} =\lambda \mathbf {B} \mathbf {v} }
where A and B are matrices. If v obeys this equation, with some λ, then we call v the generalized eigenvector of A and B (in the second sense), and λ is called the generalized eigenvalue of A and B (in the second sense) which corresponds to the generalized eigenvector v. The possible values of λ must obey the following equation
{\displaystyle \det(\mathbf {A} -\lambda \mathbf {B} )=0.}
If n linearly independent vectors {v1, …, vn} can be found, such that for every i ∈ {1, …, n}, Avi = λiBvi, then we define the matrices P and D such that
{\displaystyle P={\begin{bmatrix}|&&|\\\mathbf {v} _{1}&\cdots &\mathbf {v} _{n}\\|&&|\end{bmatrix}}\equiv {\begin{bmatrix}(\mathbf {v} _{1})_{1}&\cdots &(\mathbf {v} _{n})_{1}\\\vdots &&\vdots \\(\mathbf {v} _{1})_{n}&\cdots &(\mathbf {v} _{n})_{n}\end{bmatrix}}}
{\displaystyle (D)_{ij}={\begin{cases}\lambda _{i},&{\text{if }}i=j\\0,&{\text{otherwise}}\end{cases}}}
Then the following equality holds
{\displaystyle \mathbf {A} =\mathbf {B} \mathbf {P} \mathbf {D} \mathbf {P} ^{-1}}
And the proof is
{\displaystyle \mathbf {A} \mathbf {P} =\mathbf {A} {\begin{bmatrix}|&&|\\\mathbf {v} _{1}&\cdots &\mathbf {v} _{n}\\|&&|\end{bmatrix}}={\begin{bmatrix}|&&|\\A\mathbf {v} _{1}&\cdots &A\mathbf {v} _{n}\\|&&|\end{bmatrix}}={\begin{bmatrix}|&&|\\\lambda _{1}B\mathbf {v} _{1}&\cdots &\lambda _{n}B\mathbf {v} _{n}\\|&&|\end{bmatrix}}={\begin{bmatrix}|&&|\\B\mathbf {v} _{1}&\cdots &B\mathbf {v} _{n}\\|&&|\end{bmatrix}}\mathbf {D} =\mathbf {B} \mathbf {P} \mathbf {D} }
And since P is invertible, we multiply the equation from the right by its inverse, finishing the proof.
The set of matrices of the form A − λB, where λ is a complex number, is called a pencil; the term matrix pencil can also refer to the pair (A, B) of matrices.
If B is invertible, then the original problem can be written in the form
{\displaystyle \mathbf {B} ^{-1}\mathbf {A} \mathbf {v} =\lambda \mathbf {v} }
which is a standard eigenvalue problem. However, in most situations it is preferable not to perform the inversion, but rather to solve the generalized eigenvalue problem as stated originally. This is especially important if A and B are Hermitian matrices, since in this case B−1A is not generally Hermitian and important properties of the solution are no longer apparent.
If A and B are both symmetric or Hermitian, and B is also a positive-definite matrix, the eigenvalues λi are real and eigenvectors v1 and v2 with distinct eigenvalues are B-orthogonal (v1*Bv2 = 0). In this case, eigenvectors can be chosen so that the matrix P defined above satisfies
{\displaystyle \mathbf {P} ^{*}\mathbf {B} \mathbf {P} =\mathbf {I} }
or
{\displaystyle \mathbf {P} \mathbf {P} ^{*}\mathbf {B} =\mathbf {I} ,}
and there exists a basis of generalized eigenvectors (it is not a defective problem). This case is sometimes called a Hermitian definite pencil or definite pencil.
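This is exactly what generalized Hermitian eigensolvers exploit in practice. The sketch below uses scipy.linalg.eigh, which accepts a pair (A, B) and solves Av = λBv directly, without forming B⁻¹A, returning B-orthonormal eigenvectors; the example matrices are assumptions.
```python
# Solving the generalized Hermitian definite problem A v = lambda B v with
# SciPy; the eigenvectors come back B-orthonormal (P* B P = I).
import numpy as np
from scipy.linalg import eigh

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # symmetric
B = np.array([[4.0, 1.0],
              [1.0, 2.0]])   # symmetric positive definite

lam, P = eigh(A, B)                          # generalized eigenpairs
assert np.allclose(A @ P, B @ P @ np.diag(lam))
assert np.allclose(P.T @ B @ P, np.eye(2))   # B-orthonormality
```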
== See also ==
Eigenvalue perturbation
Frobenius covariant
Householder transformation
Jordan normal form
List of matrices
Matrix decomposition
Singular value decomposition
Sylvester's formula
== Notes ==
== References ==
== External links ==
Interactive program & tutorial of Spectral Decomposition. | Wikipedia/Eigenvalue_decomposition |
In mechanics, compression is the application of balanced inward ("pushing") forces to different points on a material or structure, that is, forces with no net sum or torque directed so as to reduce its size in one or more directions. It is contrasted with tension or traction, the application of balanced outward ("pulling") forces; and with shearing forces, directed so as to displace layers of the material parallel to each other. The compressive strength of materials and structures is an important engineering consideration.
In uniaxial compression, the forces are directed along one direction only, so that they act towards decreasing the object's length along that direction. The compressive forces may also be applied in multiple directions; for example inwards along the edges of a plate or all over the side surface of a cylinder, so as to reduce its area (biaxial compression), or inwards over the entire surface of a body, so as to reduce its volume.
Technically, a material is under a state of compression, at some specific point and along a specific direction
{\displaystyle x}
, if the normal component of the stress vector across a surface with normal direction
{\displaystyle x}
is directed opposite to
{\displaystyle x}
. If the stress vector itself is opposite to
{\displaystyle x}
, the material is said to be under normal compression or pure compressive stress along
{\displaystyle x}
. In a solid, the amount of compression generally depends on the direction
{\displaystyle x}
, and the material may be under compression along some directions but under traction along others. If the stress vector is purely compressive and has the same magnitude for all directions, the material is said to be under isotropic compression, hydrostatic compression, or bulk compression. This is the only type of static compression that liquids and gases can bear. It affects the volume of the material, as quantified by the bulk modulus and the volumetric strain.
The inverse process of compression is called decompression, dilation, or expansion, in which the object enlarges or increases in volume.
In a mechanical wave, which is longitudinal, the medium is displaced in the wave's direction, resulting in areas of compression and rarefaction.
== Effects ==
When put under compression (or any other type of stress), every material will suffer some deformation, even if imperceptible, that causes the average relative positions of its atoms and molecules to change. The deformation may be permanent, or may be reversed when the compression forces disappear. In the latter case, the deformation gives rise to reaction forces that oppose the compression forces, and may eventually balance them.
Liquids and gases cannot bear steady uniaxial or biaxial compression; they will deform promptly and permanently and will not offer any permanent reaction force. However, they can bear isotropic compression, and may be compressed in other ways momentarily, for instance in a sound wave.
Every ordinary material will contract in volume when put under isotropic compression, contract in cross-section area when put under uniform biaxial compression, and contract in length when put into uniaxial compression. The deformation may not be uniform and may not be aligned with the compression forces. What happens in the directions where there is no compression depends on the material. Most materials will expand in those directions, but some special materials will remain unchanged or even contract. In general, the relation between the stress applied to a material and the resulting deformation is a central topic of continuum mechanics.
== Uses ==
Compression of solids has many implications in materials science, physics and structural engineering, for compression yields noticeable amounts of stress and tension.
By inducing compression, mechanical properties such as the compressive strength or the modulus of elasticity can be measured.
Compression machines range from very small table top systems to ones with over 53 MN capacity.
Gases are often stored and shipped in highly compressed form, to save space. Slightly compressed air or other gases are also used to fill balloons, rubber boats, and other inflatable structures. Compressed liquids are used in hydraulic equipment and in fracking.
== In engines ==
=== Internal combustion engines ===
In internal combustion engines the explosive mixture gets compressed before it is ignited; the compression improves the efficiency of the engine. In the Otto cycle, for instance, the second stroke of the piston effects the compression of the charge which has been drawn into the cylinder by the first forward stroke.
=== Steam engines ===
The term is applied to the arrangement by which the exhaust valve of a steam engine is made to close, shutting a portion of the exhaust steam in the cylinder, before the stroke of the piston is quite complete. This steam being compressed as the stroke is completed, a cushion is formed against which the piston does work while its velocity is being rapidly reduced, and thus the stresses in the mechanism due to the inertia of the reciprocating parts are lessened. This compression, moreover, obviates the shock which would otherwise be caused by the admission of the fresh steam for the return stroke.
== See also ==
Buckling
Container compression test
Compression member
Compressive strength
Longitudinal wave
P-wave
Rarefaction
Strength of materials
Résal effect
Plane strain compression test
== References == | Wikipedia/Dilation_(physics) |
In continuum mechanics, the finite strain theory—also called large strain theory, or large deformation theory—deals with deformations in which strains and/or rotations are large enough to invalidate assumptions inherent in infinitesimal strain theory. In this case, the undeformed and deformed configurations of the continuum are significantly different, requiring a clear distinction between them. This is commonly the case with elastomers, plastically deforming materials and other fluids and biological soft tissue.
== Displacement field ==
== Deformation gradient tensor ==
The deformation gradient tensor
{\displaystyle \mathbf {F} (\mathbf {X} ,t)=F_{jK}\mathbf {e} _{j}\otimes \mathbf {I} _{K}}
is related to both the reference and current configuration, as seen by the unit vectors
{\displaystyle \mathbf {e} _{j}}
and
{\displaystyle \mathbf {I} _{K}\,\!}
, therefore it is a two-point tensor.
Two types of deformation gradient tensor may be defined.
Due to the assumption of continuity of
{\displaystyle \chi (\mathbf {X} ,t)\,\!}
,
{\displaystyle \mathbf {F} }
has the inverse
{\displaystyle \mathbf {H} =\mathbf {F} ^{-1}\,\!}
, where
{\displaystyle \mathbf {H} }
is the spatial deformation gradient tensor. Then, by the implicit function theorem, the Jacobian determinant
{\displaystyle J(\mathbf {X} ,t)}
must be nonsingular, i.e.
{\displaystyle J(\mathbf {X} ,t)=\det \mathbf {F} (\mathbf {X} ,t)\neq 0}
The material deformation gradient tensor
{\displaystyle \mathbf {F} (\mathbf {X} ,t)=F_{jK}\mathbf {e} _{j}\otimes \mathbf {I} _{K}}
is a second-order tensor that represents the gradient of the mapping function or functional relation
{\displaystyle \chi (\mathbf {X} ,t)\,\!}
, which describes the motion of a continuum. The material deformation gradient tensor characterizes the local deformation at a material point with position vector
{\displaystyle \mathbf {X} \,\!}
, i.e., deformation at neighbouring points, by transforming (linear transformation) a material line element emanating from that point from the reference configuration to the current or deformed configuration, assuming continuity in the mapping function
{\displaystyle \chi (\mathbf {X} ,t)\,\!}
, i.e. differentiable function of
{\displaystyle \mathbf {X} }
and time
{\displaystyle t\,\!}
, which implies that cracks and voids do not open or close during the deformation. Thus we have,
{\displaystyle {\begin{aligned}d\mathbf {x} &={\frac {\partial \mathbf {x} }{\partial \mathbf {X} }}\,d\mathbf {X} \qquad &{\text{or}}&\qquad dx_{j}={\frac {\partial x_{j}}{\partial X_{K}}}\,dX_{K}\\&=\nabla \chi (\mathbf {X} ,t)\,d\mathbf {X} \qquad &{\text{or}}&\qquad dx_{j}=F_{jK}\,dX_{K}\,.\\&=\mathbf {F} (\mathbf {X} ,t)\,d\mathbf {X} \end{aligned}}}
=== Relative displacement vector ===
Consider a particle or material point
{\displaystyle P}
with position vector
{\displaystyle \mathbf {X} =X_{I}\mathbf {I} _{I}}
in the undeformed configuration (Figure 2). After a displacement of the body, the new position of the particle indicated by
{\displaystyle p}
in the new configuration is given by the vector position
{\displaystyle \mathbf {x} =x_{i}\mathbf {e} _{i}\,\!}
. The coordinate systems for the undeformed and deformed configuration can be superimposed for convenience.
Consider now a material point
{\displaystyle Q}
neighboring
{\displaystyle P\,\!}
, with position vector
{\displaystyle \mathbf {X} +\Delta \mathbf {X} =(X_{I}+\Delta X_{I})\mathbf {I} _{I}\,\!}
. In the deformed configuration this particle has a new position
{\displaystyle q}
given by the position vector
{\displaystyle \mathbf {x} +\Delta \mathbf {x} \,\!}
. Assuming that the line segments
{\displaystyle \Delta \mathbf {X} }
and
{\displaystyle \Delta \mathbf {x} }
joining the particles
{\displaystyle P}
and
{\displaystyle Q}
in the undeformed and deformed configurations, respectively, are very small, we can express them as
{\displaystyle d\mathbf {X} }
and
{\displaystyle d\mathbf {x} \,\!}
. Thus from Figure 2 we have
{\displaystyle {\begin{aligned}\mathbf {x} +d\mathbf {x} &=\mathbf {X} +d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )\\d\mathbf {x} &=\mathbf {X} -\mathbf {x} +d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )\\&=d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )-\mathbf {u} (\mathbf {X} )\\&=d\mathbf {X} +d\mathbf {u} \\\end{aligned}}}
where
{\displaystyle d\mathbf {u} }
is the relative displacement vector, which represents the relative displacement of
{\displaystyle Q}
with respect to
{\displaystyle P}
in the deformed configuration.
==== Taylor approximation ====
For an infinitesimal element
{\displaystyle d\mathbf {X} \,\!}
, and assuming continuity on the displacement field, it is possible to use a Taylor series expansion around point
{\displaystyle P\,\!}
, neglecting higher-order terms, to approximate the components of the relative displacement vector for the neighboring particle
{\displaystyle Q}
as
{\displaystyle {\begin{aligned}\mathbf {u} (\mathbf {X} +d\mathbf {X} )&=\mathbf {u} (\mathbf {X} )+d\mathbf {u} \quad &{\text{or}}&\quad u_{i}^{*}=u_{i}+du_{i}\\&\approx \mathbf {u} (\mathbf {X} )+\nabla _{\mathbf {X} }\mathbf {u} \cdot d\mathbf {X} \quad &{\text{or}}&\quad u_{i}^{*}\approx u_{i}+{\frac {\partial u_{i}}{\partial X_{J}}}dX_{J}\,.\end{aligned}}}
Thus, the previous equation
{\displaystyle d\mathbf {x} =d\mathbf {X} +d\mathbf {u} }
can be written as
{\displaystyle {\begin{aligned}d\mathbf {x} &=d\mathbf {X} +d\mathbf {u} \\&=d\mathbf {X} +\nabla _{\mathbf {X} }\mathbf {u} \cdot d\mathbf {X} \\&=\left(\mathbf {I} +\nabla _{\mathbf {X} }\mathbf {u} \right)d\mathbf {X} \\&=\mathbf {F} d\mathbf {X} \end{aligned}}}
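The identity F = I + ∇_X u can be checked symbolically; the sketch below uses SymPy on an illustrative planar motion whose coefficients are arbitrary assumptions.
```python
# A small symbolic check that F = I + grad_X(u) reproduces dx/dX,
# for an illustrative planar motion (coefficients assumed).
import sympy as sp

X1, X2 = sp.symbols("X1 X2")
# A simple planar motion x = chi(X) with stretch and shear:
x = sp.Matrix([sp.Rational(6, 5) * X1 + sp.Rational(3, 10) * X2,
               sp.Rational(9, 10) * X2])
X = sp.Matrix([X1, X2])

F = x.jacobian(X)                   # deformation gradient dx/dX
u = x - X                           # displacement field
grad_u = u.jacobian(X)              # material displacement gradient
assert F == sp.eye(2) + grad_u      # F = I + grad_X(u)
```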
=== Time-derivative of the deformation gradient ===
Calculations that involve the time-dependent deformation of a body often require a time derivative of the deformation gradient to be calculated. A geometrically consistent definition of such a derivative requires an excursion into differential geometry but we avoid those issues in this article.
The time derivative of
{\displaystyle \mathbf {F} }
is
{\displaystyle {\dot {\mathbf {F} }}={\frac {\partial \mathbf {F} }{\partial t}}={\frac {\partial }{\partial t}}\left[{\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial \mathbf {X} }}\right]={\frac {\partial }{\partial \mathbf {X} }}\left[{\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial t}}\right]={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {V} (\mathbf {X} ,t)\right]}
where
{\displaystyle \mathbf {V} }
is the (material) velocity. The derivative on the right hand side represents a material velocity gradient. It is common to convert that into a spatial gradient by applying the chain rule for derivatives, i.e.,
{\displaystyle {\dot {\mathbf {F} }}={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {V} (\mathbf {X} ,t)\right]={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {v} (\mathbf {x} (\mathbf {X} ,t),t)\right]=\left.{\frac {\partial }{\partial \mathbf {x} }}\left[\mathbf {v} (\mathbf {x} ,t)\right]\right|_{\mathbf {x} =\mathbf {x} (\mathbf {X} ,t)}\cdot {\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial \mathbf {X} }}={\boldsymbol {l}}\cdot \mathbf {F} }
where
{\displaystyle {\boldsymbol {l}}=(\nabla _{\mathbf {x} }\mathbf {v} )^{T}}
is the spatial velocity gradient and where
{\displaystyle \mathbf {v} (\mathbf {x} ,t)=\mathbf {V} (\mathbf {X} ,t)}
is the spatial (Eulerian) velocity at
{\displaystyle \mathbf {x} =\mathbf {x} (\mathbf {X} ,t)}
. If the spatial velocity gradient is constant in time, the above equation can be solved exactly to give
F
=
e
l
t
{\displaystyle \mathbf {F} =e^{{\boldsymbol {l}}\,t}}
assuming
F
=
1
{\displaystyle \mathbf {F} =\mathbf {1} }
at
t
=
0
{\displaystyle t=0}
. There are several methods of computing the exponential above.
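As an illustration, the constant-velocity-gradient solution above can be evaluated with an off-the-shelf matrix exponential. The following is a minimal sketch, assuming a hypothetical simple-shear velocity gradient; `scipy.linalg.expm` performs the exponentiation.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical constant spatial velocity gradient: simple shear at rate 1.5
l = np.array([[0.0, 1.5, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

t = 0.4
F = expm(l * t)          # F(t) = exp(l t), with F(0) = I

# Finite-difference check that F satisfies dF/dt = l . F
dt = 1e-6
F_dot = (expm(l * (t + dt)) - expm(l * (t - dt))) / (2 * dt)
assert np.allclose(F_dot, l @ F, atol=1e-6)
```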
Related quantities often used in continuum mechanics are the rate of deformation tensor and the spin tensor defined, respectively, as:
{\displaystyle {\boldsymbol {d}}={\tfrac {1}{2}}\left({\boldsymbol {l}}+{\boldsymbol {l}}^{T}\right)\,,~~{\boldsymbol {w}}={\tfrac {1}{2}}\left({\boldsymbol {l}}-{\boldsymbol {l}}^{T}\right)\,.}
The rate of deformation tensor gives the rate of stretching of line elements while the spin tensor indicates the rate of rotation or vorticity of the motion.
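A short sketch of this symmetric/antisymmetric split, reusing the hypothetical velocity gradient from the earlier example:

```python
import numpy as np

l = np.array([[0.0, 1.5, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])   # hypothetical spatial velocity gradient

d = 0.5 * (l + l.T)   # rate-of-deformation tensor (symmetric part)
w = 0.5 * (l - l.T)   # spin tensor (antisymmetric part)
assert np.allclose(d + w, l)
```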
The material time derivative of the inverse of the deformation gradient (keeping the reference configuration fixed) is often required in analyses that involve finite strains. This derivative is
{\displaystyle {\frac {\partial }{\partial t}}\left(\mathbf {F} ^{-1}\right)=-\mathbf {F} ^{-1}\cdot {\dot {\mathbf {F} }}\cdot \mathbf {F} ^{-1}\,.}
The above relation can be verified by taking the material time derivative of {\displaystyle \mathbf {F} ^{-1}\cdot d\mathbf {x} =d\mathbf {X} } and noting that {\displaystyle {\dot {\mathbf {X} }}=0}.
=== Polar decomposition of the deformation gradient tensor ===
The deformation gradient {\displaystyle \mathbf {F} }, like any invertible second-order tensor, can be decomposed, using the polar decomposition theorem, into a product of two second-order tensors (Truesdell and Noll, 1965): an orthogonal tensor and a positive definite symmetric tensor, i.e.,
{\displaystyle \mathbf {F} =\mathbf {R} \mathbf {U} =\mathbf {V} \mathbf {R} }
where the tensor {\displaystyle \mathbf {R} } is a proper orthogonal tensor, i.e., {\displaystyle \mathbf {R} ^{-1}=\mathbf {R} ^{T}} and {\displaystyle \det \mathbf {R} =+1}, representing a rotation; the tensor {\displaystyle \mathbf {U} } is the right stretch tensor; and {\displaystyle \mathbf {V} } the left stretch tensor. The terms right and left mean that they stand to the right and left of the rotation tensor {\displaystyle \mathbf {R} }, respectively. {\displaystyle \mathbf {U} } and {\displaystyle \mathbf {V} } are both positive definite, i.e. {\displaystyle \mathbf {x} \cdot \mathbf {U} \cdot \mathbf {x} >0} and {\displaystyle \mathbf {x} \cdot \mathbf {V} \cdot \mathbf {x} >0} for all non-zero {\displaystyle \mathbf {x} \in \mathbb {R} ^{3}}, and symmetric second-order tensors, i.e. {\displaystyle \mathbf {U} =\mathbf {U} ^{T}} and {\displaystyle \mathbf {V} =\mathbf {V} ^{T}}.
This decomposition implies that the deformation of a line element {\displaystyle d\mathbf {X} } in the undeformed configuration onto {\displaystyle d\mathbf {x} } in the deformed configuration, i.e., {\displaystyle d\mathbf {x} =\mathbf {F} \,d\mathbf {X} }, may be obtained either by first stretching the element by {\displaystyle \mathbf {U} }, i.e. {\displaystyle d\mathbf {x} '=\mathbf {U} \,d\mathbf {X} }, followed by a rotation {\displaystyle \mathbf {R} }, i.e., {\displaystyle d\mathbf {x} =\mathbf {R} \,d\mathbf {x} '}; or equivalently, by applying a rigid rotation {\displaystyle \mathbf {R} } first, i.e., {\displaystyle d\mathbf {x} '=\mathbf {R} \,d\mathbf {X} }, followed later by a stretching {\displaystyle \mathbf {V} }, i.e., {\displaystyle d\mathbf {x} =\mathbf {V} \,d\mathbf {x} '} (see Figure 3).
Due to the orthogonality of {\displaystyle \mathbf {R} },
{\displaystyle \mathbf {V} =\mathbf {R} \cdot \mathbf {U} \cdot \mathbf {R} ^{T}}
so that {\displaystyle \mathbf {U} } and {\displaystyle \mathbf {V} } have the same eigenvalues or principal stretches, but different eigenvectors or principal directions {\displaystyle \mathbf {N} _{i}} and {\displaystyle \mathbf {n} _{i}}, respectively. The principal directions are related by
{\displaystyle \mathbf {n} _{i}=\mathbf {R} \mathbf {N} _{i}.}
This polar decomposition, which is unique as {\displaystyle \mathbf {F} } is invertible with a positive determinant, is a corollary of the singular-value decomposition.
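Numerically, the polar decomposition is available off the shelf. A minimal sketch using `scipy.linalg.polar` on a hypothetical simple-shear deformation gradient, checking the identities stated above:

```python
import numpy as np
from scipy.linalg import polar

# Hypothetical deformation gradient: simple shear with gamma = 0.5
F = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

R, U = polar(F, side='right')   # F = R U  (right polar decomposition)
_, V = polar(F, side='left')    # F = V R  (left polar decomposition)

assert np.allclose(F, R @ U) and np.allclose(F, V @ R)
assert np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
assert np.allclose(V, R @ U @ R.T)   # V = R U R^T
```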
=== Transformation of a surface and volume element ===
To transform quantities that are defined with respect to areas in a deformed configuration to those relative to areas in a reference configuration, and vice versa, we use Nanson's relation, expressed as
{\displaystyle da~\mathbf {n} =J~dA~\mathbf {F} ^{-T}\cdot \mathbf {N} }
where {\displaystyle da} is an area of a region in the deformed configuration, {\displaystyle dA} is the same area in the reference configuration, {\displaystyle \mathbf {n} } is the outward normal to the area element in the current configuration, {\displaystyle \mathbf {N} } is the outward normal in the reference configuration, {\displaystyle \mathbf {F} } is the deformation gradient, and {\displaystyle J=\det \mathbf {F} }.
The corresponding formula for the transformation of the volume element is
{\displaystyle dv=J~dV}
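Both transformation formulas can be verified numerically. The sketch below builds an area element from two hypothetical reference edge vectors and checks Nanson's relation and the volume formula, relying on the identity (Fa)×(Fb) = J F^{-T}(a×b).

```python
import numpy as np

F = np.array([[1.2, 0.3, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.1]])          # hypothetical deformation gradient
J = np.linalg.det(F)

# Reference area element spanned by two edge vectors dX1, dX2
dX1, dX2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
dA_vec = np.cross(dX1, dX2)              # dA * N
da_vec = np.cross(F @ dX1, F @ dX2)      # da * n, after deformation

# Nanson's relation: da n = J F^{-T} (dA N)
assert np.allclose(da_vec, J * np.linalg.inv(F).T @ dA_vec)

# Volume element: dv = J dV, with dV the triple product of three edges
dX3 = np.array([0.0, 0.0, 1.0])
dV = np.dot(np.cross(dX1, dX2), dX3)
dv = np.dot(np.cross(F @ dX1, F @ dX2), F @ dX3)
assert np.isclose(dv, J * dV)
```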
== Fundamental strain tensors ==
A strain tensor is defined by the IUPAC as:
"A symmetric tensor that results when a deformation gradient tensor is factorized into a rotation tensor followed or preceded by a symmetric tensor".
Since a pure rotation should not induce any strains in a deformable body, it is often convenient to use rotation-independent measures of deformation in continuum mechanics. As a rotation followed by its inverse rotation leads to no change ({\displaystyle \mathbf {R} \mathbf {R} ^{T}=\mathbf {R} ^{T}\mathbf {R} =\mathbf {I} }), we can exclude the rotation by multiplying the deformation gradient tensor {\displaystyle \mathbf {F} } by its transpose.
Several rotation-independent deformation gradient tensors (or "deformation tensors", for short) are used in mechanics. In solid mechanics, the most popular of these are the right and left Cauchy–Green deformation tensors.
=== Cauchy strain tensor (right Cauchy–Green deformation tensor) ===
In 1839, George Green introduced a deformation tensor known as the right Cauchy–Green deformation tensor or Green's deformation tensor (the IUPAC recommends that this tensor be called the Cauchy strain tensor), defined as:
{\displaystyle \mathbf {C} =\mathbf {F} ^{T}\mathbf {F} =\mathbf {U} ^{2}\qquad {\text{or}}\qquad C_{IJ}=F_{kI}~F_{kJ}={\frac {\partial x_{k}}{\partial X_{I}}}{\frac {\partial x_{k}}{\partial X_{J}}}.}
Physically, the Cauchy–Green tensor gives us the square of local change in distances due to deformation, i.e.
{\displaystyle d\mathbf {x} ^{2}=d\mathbf {X} \cdot \mathbf {C} \cdot d\mathbf {X} }
Invariants of {\displaystyle \mathbf {C} } are often used in the expressions for strain energy density functions. The most commonly used invariants are
{\displaystyle {\begin{aligned}I_{1}^{C}&:={\text{tr}}(\mathbf {C} )=C_{II}=\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}\\I_{2}^{C}&:={\tfrac {1}{2}}\left[({\text{tr}}~\mathbf {C} )^{2}-{\text{tr}}(\mathbf {C} ^{2})\right]={\tfrac {1}{2}}\left[(C_{JJ})^{2}-C_{IK}C_{KI}\right]=\lambda _{1}^{2}\lambda _{2}^{2}+\lambda _{2}^{2}\lambda _{3}^{2}+\lambda _{3}^{2}\lambda _{1}^{2}\\I_{3}^{C}&:=\det(\mathbf {C} )=J^{2}=\lambda _{1}^{2}\lambda _{2}^{2}\lambda _{3}^{2}.\end{aligned}}}
where {\displaystyle J:=\det \mathbf {F} } is the determinant of the deformation gradient {\displaystyle \mathbf {F} } and {\displaystyle \lambda _{i}} are stretch ratios for the unit fibers that are initially oriented along the eigenvector directions of the right (reference) stretch tensor (these are not generally aligned with the three axes of the coordinate system).
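A minimal numerical sketch, assuming a hypothetical deformation gradient, that computes the three invariants and checks them against the squared principal stretches (the eigenvalues of C):

```python
import numpy as np

F = np.array([[1.1, 0.2, 0.0],
              [0.0, 0.9, 0.0],
              [0.0, 0.0, 1.05]])   # hypothetical deformation gradient

C = F.T @ F                        # right Cauchy-Green deformation tensor
I1 = np.trace(C)
I2 = 0.5 * (np.trace(C)**2 - np.trace(C @ C))
I3 = np.linalg.det(C)

lam2 = np.linalg.eigvalsh(C)       # eigenvalues of C are lambda_i^2
assert np.isclose(I1, lam2.sum())
assert np.isclose(I2, lam2[0]*lam2[1] + lam2[1]*lam2[2] + lam2[2]*lam2[0])
assert np.isclose(I3, lam2.prod())
assert np.isclose(I3, np.linalg.det(F)**2)   # I3 = J^2
```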
=== Finger strain tensor ===
The IUPAC recommends that the inverse of the right Cauchy–Green deformation tensor (called the Cauchy strain tensor in that document), i.e., {\displaystyle \mathbf {C} ^{-1}}, be called the Finger strain tensor. However, that nomenclature is not universally accepted in applied mechanics.
{\displaystyle \mathbf {f} =\mathbf {C} ^{-1}=\mathbf {F} ^{-1}\mathbf {F} ^{-T}\qquad {\text{or}}\qquad f_{IJ}={\frac {\partial X_{I}}{\partial x_{k}}}{\frac {\partial X_{J}}{\partial x_{k}}}}
=== Green strain tensor (left Cauchy–Green deformation tensor) ===
Reversing the order of multiplication in the formula for the right Cauchy–Green deformation tensor leads to the left Cauchy–Green deformation tensor, which is defined as:
{\displaystyle \mathbf {B} =\mathbf {F} \mathbf {F} ^{T}=\mathbf {V} ^{2}\qquad {\text{or}}\qquad B_{ij}={\frac {\partial x_{i}}{\partial X_{K}}}{\frac {\partial x_{j}}{\partial X_{K}}}}
The left Cauchy–Green deformation tensor is often called the Finger deformation tensor, named after Josef Finger (1894).
The IUPAC recommends that this tensor be called the Green strain tensor.
Invariants of {\displaystyle \mathbf {B} } are also used in the expressions for strain energy density functions. The conventional invariants are defined as
{\displaystyle {\begin{aligned}I_{1}&:={\text{tr}}(\mathbf {B} )=B_{ii}=\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}\\I_{2}&:={\tfrac {1}{2}}\left[({\text{tr}}~\mathbf {B} )^{2}-{\text{tr}}(\mathbf {B} ^{2})\right]={\tfrac {1}{2}}\left(B_{ii}^{2}-B_{jk}B_{kj}\right)=\lambda _{1}^{2}\lambda _{2}^{2}+\lambda _{2}^{2}\lambda _{3}^{2}+\lambda _{3}^{2}\lambda _{1}^{2}\\I_{3}&:=\det \mathbf {B} =J^{2}=\lambda _{1}^{2}\lambda _{2}^{2}\lambda _{3}^{2}\end{aligned}}}
where {\displaystyle J:=\det \mathbf {F} } is the determinant of the deformation gradient.
For compressible materials, a slightly different set of invariants is used:
{\displaystyle ({\bar {I}}_{1}:=J^{-2/3}I_{1}~;~~{\bar {I}}_{2}:=J^{-4/3}I_{2}~;~~J\neq 1)~.}
=== Piola strain tensor (Cauchy deformation tensor) ===
Earlier in 1828, Augustin-Louis Cauchy introduced a deformation tensor defined as the inverse of the left Cauchy–Green deformation tensor, {\displaystyle \mathbf {B} ^{-1}}. This tensor has also been called the Piola strain tensor by the IUPAC and the Finger tensor in the rheology and fluid dynamics literature.
{\displaystyle \mathbf {c} =\mathbf {B} ^{-1}=\mathbf {F} ^{-T}\mathbf {F} ^{-1}\qquad {\text{or}}\qquad c_{ij}={\frac {\partial X_{K}}{\partial x_{i}}}{\frac {\partial X_{K}}{\partial x_{j}}}}
=== Spectral representation ===
If there are three distinct principal stretches {\displaystyle \lambda _{i}}, the spectral decompositions of {\displaystyle \mathbf {C} } and {\displaystyle \mathbf {B} } are given by
{\displaystyle \mathbf {C} =\sum _{i=1}^{3}\lambda _{i}^{2}\mathbf {N} _{i}\otimes \mathbf {N} _{i}\qquad {\text{and}}\qquad \mathbf {B} =\sum _{i=1}^{3}\lambda _{i}^{2}\mathbf {n} _{i}\otimes \mathbf {n} _{i}}
Furthermore,
{\displaystyle \mathbf {U} =\sum _{i=1}^{3}\lambda _{i}\mathbf {N} _{i}\otimes \mathbf {N} _{i}~;~~\mathbf {V} =\sum _{i=1}^{3}\lambda _{i}\mathbf {n} _{i}\otimes \mathbf {n} _{i}}
{\displaystyle \mathbf {R} =\sum _{i=1}^{3}\mathbf {n} _{i}\otimes \mathbf {N} _{i}~;~~\mathbf {F} =\sum _{i=1}^{3}\lambda _{i}\mathbf {n} _{i}\otimes \mathbf {N} _{i}}
Observe that
{\displaystyle \mathbf {V} =\mathbf {R} ~\mathbf {U} ~\mathbf {R} ^{T}=\sum _{i=1}^{3}\lambda _{i}~\mathbf {R} ~(\mathbf {N} _{i}\otimes \mathbf {N} _{i})~\mathbf {R} ^{T}=\sum _{i=1}^{3}\lambda _{i}~(\mathbf {R} ~\mathbf {N} _{i})\otimes (\mathbf {R} ~\mathbf {N} _{i})}
Therefore, the uniqueness of the spectral decomposition also implies that {\displaystyle \mathbf {n} _{i}=\mathbf {R} ~\mathbf {N} _{i}}. The left stretch ({\displaystyle \mathbf {V} }) is also called the spatial stretch tensor, while the right stretch ({\displaystyle \mathbf {U} }) is called the material stretch tensor.
The effect of {\displaystyle \mathbf {F} } acting on {\displaystyle \mathbf {N} _{i}} is to stretch the vector by {\displaystyle \lambda _{i}} and to rotate it to the new orientation {\displaystyle \mathbf {n} _{i}}, i.e.,
{\displaystyle \mathbf {F} ~\mathbf {N} _{i}=\lambda _{i}~(\mathbf {R} ~\mathbf {N} _{i})=\lambda _{i}~\mathbf {n} _{i}}
In a similar vein,
{\displaystyle \mathbf {F} ^{-T}~\mathbf {N} _{i}={\cfrac {1}{\lambda _{i}}}~\mathbf {n} _{i}~;~~\mathbf {F} ^{T}~\mathbf {n} _{i}=\lambda _{i}~\mathbf {N} _{i}~;~~\mathbf {F} ^{-1}~\mathbf {n} _{i}={\cfrac {1}{\lambda _{i}}}~\mathbf {N} _{i}~.}
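These relations are easy to check numerically. The sketch below, assuming a hypothetical deformation gradient, recovers the material directions and principal stretches from the eigendecomposition of C, builds n_i = R N_i with the rotation from `scipy.linalg.polar`, and verifies F N_i = λ_i n_i.

```python
import numpy as np
from scipy.linalg import polar

F = np.array([[1.3, 0.4, 0.0],
              [0.1, 0.8, 0.0],
              [0.0, 0.0, 1.0]])   # hypothetical deformation gradient

C = F.T @ F
lam2, N = np.linalg.eigh(C)       # columns of N are material directions N_i
lam = np.sqrt(lam2)               # principal stretches lambda_i
R, _ = polar(F)                   # rotation from the polar decomposition

for i in range(3):
    n_i = R @ N[:, i]             # spatial direction n_i = R N_i
    assert np.allclose(F @ N[:, i], lam[i] * n_i)   # F N_i = lambda_i n_i
```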
==== Examples ====
Uniaxial extension of an incompressible material
This is the case where a specimen is stretched in the 1-direction with a stretch ratio of {\displaystyle \alpha =\alpha _{1}}. If the volume remains constant, the contraction in the other two directions is such that {\displaystyle \alpha _{1}\alpha _{2}\alpha _{3}=1} or {\displaystyle \alpha _{2}=\alpha _{3}=\alpha ^{-0.5}}. Then:
{\displaystyle \mathbf {F} ={\begin{bmatrix}\alpha &0&0\\0&\alpha ^{-0.5}&0\\0&0&\alpha ^{-0.5}\end{bmatrix}}}
{\displaystyle \mathbf {B} =\mathbf {C} ={\begin{bmatrix}\alpha ^{2}&0&0\\0&\alpha ^{-1}&0\\0&0&\alpha ^{-1}\end{bmatrix}}}
Simple shear
{\displaystyle \mathbf {F} ={\begin{bmatrix}1&\gamma &0\\0&1&0\\0&0&1\end{bmatrix}}}
{\displaystyle \mathbf {B} ={\begin{bmatrix}1+\gamma ^{2}&\gamma &0\\\gamma &1&0\\0&0&1\end{bmatrix}}}
{\displaystyle \mathbf {C} ={\begin{bmatrix}1&\gamma &0\\\gamma &1+\gamma ^{2}&0\\0&0&1\end{bmatrix}}}
Rigid body rotation
{\displaystyle \mathbf {F} ={\begin{bmatrix}\cos \theta &\sin \theta &0\\-\sin \theta &\cos \theta &0\\0&0&1\end{bmatrix}}}
{\displaystyle \mathbf {B} =\mathbf {C} ={\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}=\mathbf {1} }
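The simple-shear and rigid-rotation examples can be reproduced in a few lines; the values of γ and θ below are hypothetical.

```python
import numpy as np

gamma = 0.7                        # hypothetical shear amount
F = np.array([[1.0, gamma, 0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.0,   1.0]])

B, C = F @ F.T, F.T @ F
assert np.allclose(B, [[1 + gamma**2, gamma, 0], [gamma, 1, 0], [0, 0, 1]])
assert np.allclose(C, [[1, gamma, 0], [gamma, 1 + gamma**2, 0], [0, 0, 1]])

# A rigid rotation produces no deformation: B = C = I
th = 0.3
R = np.array([[np.cos(th),  np.sin(th), 0],
              [-np.sin(th), np.cos(th), 0],
              [0, 0, 1]])
assert np.allclose(R @ R.T, np.eye(3)) and np.allclose(R.T @ R, np.eye(3))
```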
=== Derivatives of stretch ===
Derivatives of the stretch with respect to the right Cauchy–Green deformation tensor are used to derive the stress-strain relations of many solids, particularly hyperelastic materials. These derivatives are
{\displaystyle {\cfrac {\partial \lambda _{i}}{\partial \mathbf {C} }}={\cfrac {1}{2\lambda _{i}}}~\mathbf {N} _{i}\otimes \mathbf {N} _{i}={\cfrac {1}{2\lambda _{i}}}~\mathbf {R} ^{T}~(\mathbf {n} _{i}\otimes \mathbf {n} _{i})~\mathbf {R} ~;~~i=1,2,3}
and follow from the observations that
{\displaystyle \mathbf {C} :(\mathbf {N} _{i}\otimes \mathbf {N} _{i})=\lambda _{i}^{2}~;~~~~{\cfrac {\partial \mathbf {C} }{\partial \mathbf {C} }}={\mathsf {I}}^{(s)}~;~~~~{\mathsf {I}}^{(s)}:(\mathbf {N} _{i}\otimes \mathbf {N} _{i})=\mathbf {N} _{i}\otimes \mathbf {N} _{i}.}
=== Physical interpretation of deformation tensors ===
Let {\displaystyle \mathbf {X} =X^{i}~{\boldsymbol {E}}_{i}} be a Cartesian coordinate system defined on the undeformed body and let {\displaystyle \mathbf {x} =x^{i}~{\boldsymbol {E}}_{i}} be another system defined on the deformed body. Let a curve {\displaystyle \mathbf {X} (s)} in the undeformed body be parametrized using {\displaystyle s\in [0,1]}. Its image in the deformed body is {\displaystyle \mathbf {x} (\mathbf {X} (s))}.
The undeformed length of the curve is given by
{\displaystyle l_{X}=\int _{0}^{1}\left|{\cfrac {d\mathbf {X} }{ds}}\right|~ds=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot {\boldsymbol {I}}\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds}
After deformation, the length becomes
{\displaystyle {\begin{aligned}l_{x}&=\int _{0}^{1}\left|{\cfrac {d\mathbf {x} }{ds}}\right|~ds=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {x} }{ds}}\cdot {\cfrac {d\mathbf {x} }{ds}}}}~ds=\int _{0}^{1}{\sqrt {\left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\cdot {\cfrac {d\mathbf {X} }{ds}}\right)\cdot \left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\cdot {\cfrac {d\mathbf {X} }{ds}}\right)}}~ds\\&=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot \left[\left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\right)^{T}\cdot {\cfrac {d\mathbf {x} }{d\mathbf {X} }}\right]\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds\end{aligned}}}
Note that the right Cauchy–Green deformation tensor is defined as
{\displaystyle {\boldsymbol {C}}:={\boldsymbol {F}}^{T}\cdot {\boldsymbol {F}}=\left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\right)^{T}\cdot {\cfrac {d\mathbf {x} }{d\mathbf {X} }}}
Hence,
{\displaystyle l_{x}=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot {\boldsymbol {C}}\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds}
which indicates that changes in length are characterized by {\displaystyle {\boldsymbol {C}}}.
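A numerical sketch of this length computation, assuming a hypothetical homogeneous deformation and a quarter-circle material curve; the integral of √(dX/ds · C · dX/ds) is approximated with the trapezoidal rule and compared against the directly deformed curve.

```python
import numpy as np

F = np.array([[1.2, 0.1, 0.0],
              [0.0, 0.8, 0.0],
              [0.0, 0.0, 1.0]])      # hypothetical homogeneous deformation
C = F.T @ F

# Material curve: quarter circle X(s) = (cos(pi s/2), sin(pi s/2), 0)
s, ds = np.linspace(0.0, 1.0, 20001, retstep=True)
X = np.stack([np.cos(np.pi * s / 2), np.sin(np.pi * s / 2), np.zeros_like(s)])
dXds = np.gradient(X, ds, axis=1)    # tangent dX/ds along the curve

# l_x = integral over s of sqrt(dX/ds . C . dX/ds), by the trapezoidal rule
integrand = np.sqrt(np.einsum('is,ij,js->s', dXds, C, dXds))
l_x = np.sum(0.5 * (integrand[:-1] + integrand[1:])) * ds

# Direct check: deform the sampled curve with F and sum segment lengths
x = F @ X
l_direct = np.sum(np.linalg.norm(np.diff(x, axis=1), axis=0))
assert np.isclose(l_x, l_direct, rtol=1e-5)
```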
== Finite strain tensors ==
The concept of strain is used to evaluate how much a given displacement differs locally from a rigid-body displacement. One such strain measure for large deformations is the Lagrangian finite strain tensor, also called the Green–Lagrangian strain tensor or Green–St-Venant strain tensor, defined as
{\displaystyle \mathbf {E} ={\frac {1}{2}}(\mathbf {C} -\mathbf {I} )\qquad {\text{or}}\qquad E_{KL}={\frac {1}{2}}\left({\frac {\partial x_{j}}{\partial X_{K}}}{\frac {\partial x_{j}}{\partial X_{L}}}-\delta _{KL}\right)}
or as a function of the displacement gradient tensor
{\displaystyle \mathbf {E} ={\frac {1}{2}}\left[(\nabla _{\mathbf {X} }\mathbf {u} )^{T}+\nabla _{\mathbf {X} }\mathbf {u} +(\nabla _{\mathbf {X} }\mathbf {u} )^{T}\cdot \nabla _{\mathbf {X} }\mathbf {u} \right]}
or
{\displaystyle E_{KL}={\frac {1}{2}}\left({\frac {\partial u_{K}}{\partial X_{L}}}+{\frac {\partial u_{L}}{\partial X_{K}}}+{\frac {\partial u_{M}}{\partial X_{K}}}{\frac {\partial u_{M}}{\partial X_{L}}}\right)}
The Green–Lagrangian strain tensor is a measure of how much {\displaystyle \mathbf {C} } differs from {\displaystyle \mathbf {I} }.
The Eulerian finite strain tensor, or Eulerian-Almansi finite strain tensor, referenced to the deformed configuration (i.e. Eulerian description) is defined as
{\displaystyle \mathbf {e} ={\frac {1}{2}}(\mathbf {I} -\mathbf {c} )={\frac {1}{2}}(\mathbf {I} -\mathbf {B} ^{-1})\qquad {\text{or}}\qquad e_{rs}={\frac {1}{2}}\left(\delta _{rs}-{\frac {\partial X_{M}}{\partial x_{r}}}{\frac {\partial X_{M}}{\partial x_{s}}}\right)}
or as a function of the displacement gradients we have
{\displaystyle e_{ij}={\frac {1}{2}}\left({\frac {\partial u_{i}}{\partial x_{j}}}+{\frac {\partial u_{j}}{\partial x_{i}}}-{\frac {\partial u_{k}}{\partial x_{i}}}{\frac {\partial u_{k}}{\partial x_{j}}}\right)}
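A brief sketch, assuming a hypothetical deformation gradient, that computes both finite strain tensors and checks the displacement-gradient form of E (with the frames superimposed, the material displacement gradient is F − I):

```python
import numpy as np

F = np.array([[1.1, 0.25, 0.0],
              [0.0, 0.95, 0.0],
              [0.0, 0.0,  1.0]])    # hypothetical deformation gradient
I = np.eye(3)

C = F.T @ F                          # right Cauchy-Green tensor
B = F @ F.T                          # left Cauchy-Green tensor
E = 0.5 * (C - I)                    # Green-Lagrangian strain
e = 0.5 * (I - np.linalg.inv(B))     # Euler-Almansi strain

# Equivalent displacement-gradient form of E, with grad_X u = F - I
H = F - I
E_disp = 0.5 * (H.T + H + H.T @ H)
assert np.allclose(E, E_disp)
```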
=== Seth–Hill family of generalized strain tensors ===
B. R. Seth from the Indian Institute of Technology Kharagpur was the first to show that the Green and Almansi strain tensors are special cases of a more general strain measure. The idea was further expanded upon by Rodney Hill in 1968. The Seth–Hill family of strain measures (also called Doyle-Ericksen tensors) can be expressed as
{\displaystyle \mathbf {E} _{(m)}={\frac {1}{2m}}(\mathbf {U} ^{2m}-\mathbf {I} )={\frac {1}{2m}}\left[\mathbf {C} ^{m}-\mathbf {I} \right]}
For different values of {\displaystyle m} we have:
Green-Lagrangian strain tensor
{\displaystyle \mathbf {E} _{(1)}={\frac {1}{2}}(\mathbf {U} ^{2}-\mathbf {I} )={\frac {1}{2}}(\mathbf {C} -\mathbf {I} )}
Biot strain tensor
{\displaystyle \mathbf {E} _{(1/2)}=(\mathbf {U} -\mathbf {I} )=\mathbf {C} ^{1/2}-\mathbf {I} }
Logarithmic strain, Natural strain, True strain, or Hencky strain
{\displaystyle \mathbf {E} _{(0)}=\ln \mathbf {U} ={\frac {1}{2}}\,\ln \mathbf {C} }
Almansi strain
{\displaystyle \mathbf {E} _{(-1)}={\frac {1}{2}}\left[\mathbf {I} -\mathbf {U} ^{-2}\right]}
The second-order approximation of these tensors is
{\displaystyle \mathbf {E} _{(m)}={\boldsymbol {\varepsilon }}+{\tfrac {1}{2}}(\nabla \mathbf {u} )^{T}\cdot \nabla \mathbf {u} -(1-m){\boldsymbol {\varepsilon }}^{T}\cdot {\boldsymbol {\varepsilon }}}
where {\displaystyle {\boldsymbol {\varepsilon }}} is the infinitesimal strain tensor.
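The whole family can be evaluated spectrally, since U^{2m} = C^m shares the eigenvectors of C. A minimal sketch under that observation (the helper name seth_hill is an assumption of this example):

```python
import numpy as np

def seth_hill(F, m):
    """Seth-Hill strain E_(m) = (C^m - I) / (2m), computed spectrally;
    m = 0 gives the Hencky (logarithmic) strain ln(U) = (1/2) ln(C)."""
    C = F.T @ F
    lam2, N = np.linalg.eigh(C)          # eigenvalues lambda_i^2, directions N_i
    if m == 0:
        f = 0.5 * np.log(lam2)           # ln(lambda_i)
    else:
        f = (lam2**m - 1.0) / (2.0 * m)
    return (N * f) @ N.T                 # sum_i f_i N_i (outer) N_i

F = np.array([[1.2, 0.3, 0.0],
              [0.0, 0.9, 0.0],
              [0.0, 0.0, 1.0]])          # hypothetical deformation gradient

E_green   = seth_hill(F, 1)              # equals (C - I)/2
E_hencky  = seth_hill(F, 0)
E_almansi = seth_hill(F, -1)
assert np.allclose(E_green, 0.5 * (F.T @ F - np.eye(3)))
```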
Many other different definitions of tensors {\displaystyle \mathbf {E} } are admissible, provided that they all satisfy the conditions that:
{\displaystyle \mathbf {E} } vanishes for all rigid-body motions
the dependence of {\displaystyle \mathbf {E} } on the displacement gradient tensor {\displaystyle \nabla \mathbf {u} } is continuous, continuously differentiable and monotonic
it is also desired that {\displaystyle \mathbf {E} } reduces to the infinitesimal strain tensor {\displaystyle {\boldsymbol {\varepsilon }}} as the norm {\displaystyle |\nabla \mathbf {u} |\to 0}
An example is the set of tensors
{\displaystyle \mathbf {E} ^{(n)}=\left({\mathbf {U} }^{n}-{\mathbf {U} }^{-n}\right)/2n}
which do not belong to the Seth–Hill class, but have the same 2nd-order approximation as the Seth–Hill measures at {\displaystyle m=0} for any value of {\displaystyle n}.
=== Physical interpretation of the finite strain tensor ===
The diagonal components {\displaystyle E_{KL}} of the Lagrangian finite strain tensor are related to the normal strain, e.g.
{\displaystyle E_{11}=e_{(\mathbf {I} _{1})}+{\frac {1}{2}}e_{(\mathbf {I} _{1})}^{2}}
where {\displaystyle e_{(\mathbf {I} _{1})}} is the normal strain or engineering strain in the direction {\displaystyle \mathbf {I} _{1}}.
The off-diagonal components {\displaystyle E_{KL}} of the Lagrangian finite strain tensor are related to shear strain, e.g.
{\displaystyle E_{12}={\frac {1}{2}}{\sqrt {2E_{11}+1}}{\sqrt {2E_{22}+1}}\sin \phi _{12}}
where {\displaystyle \phi _{12}} is the change in the angle between two line elements that were originally perpendicular with directions {\displaystyle \mathbf {I} _{1}} and {\displaystyle \mathbf {I} _{2}}, respectively.
Under certain circumstances, i.e. small displacements and small displacement rates, the components of the Lagrangian finite strain tensor may be approximated by the components of the infinitesimal strain tensor.
== Compatibility conditions ==
The problem of compatibility in continuum mechanics involves the determination of allowable single-valued continuous fields on bodies. These allowable conditions leave the body without unphysical gaps or overlaps after a deformation. Most such conditions apply to simply-connected bodies. Additional conditions are required for the internal boundaries of multiply connected bodies.
=== Compatibility of the deformation gradient ===
The necessary and sufficient conditions for the existence of a compatible {\displaystyle {\boldsymbol {F}}} field over a simply connected body are
{\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {F}}={\boldsymbol {0}}}
=== Compatibility of the right Cauchy–Green deformation tensor ===
The necessary and sufficient conditions for the existence of a compatible {\displaystyle {\boldsymbol {C}}} field over a simply connected body are
{\displaystyle R_{\alpha \beta \rho }^{\gamma }:={\frac {\partial }{\partial X^{\rho }}}[\,_{(X)}\Gamma _{\alpha \beta }^{\gamma }]-{\frac {\partial }{\partial X^{\beta }}}[\,_{(X)}\Gamma _{\alpha \rho }^{\gamma }]+\,_{(X)}\Gamma _{\mu \rho }^{\gamma }\,_{(X)}\Gamma _{\alpha \beta }^{\mu }-\,_{(X)}\Gamma _{\mu \beta }^{\gamma }\,_{(X)}\Gamma _{\alpha \rho }^{\mu }=0}
We can show that these are the mixed components of the Riemann–Christoffel curvature tensor. Therefore, the necessary conditions for {\displaystyle {\boldsymbol {C}}}-compatibility are that the Riemann–Christoffel curvature of the deformation is zero.
=== Compatibility of the left Cauchy–Green deformation tensor ===
General sufficiency conditions for the left Cauchy–Green deformation tensor in three dimensions were derived by Amit Acharya. Compatibility conditions for two-dimensional {\displaystyle {\boldsymbol {B}}} fields were found by Janet Blume.
== See also ==
Infinitesimal strain
Compatibility (mechanics)
Curvilinear coordinates
Piola–Kirchhoff stress tensor, the stress tensor for finite deformations.
Stress measures
Strain partitioning
== References ==
== Further reading ==
Dill, Ellis Harold (2006). Continuum Mechanics: Elasticity, Plasticity, Viscoelasticity. Germany: CRC Press. ISBN 0-8493-9779-0.
Dimitrienko, Yuriy (2011). Nonlinear Continuum Mechanics and Large Inelastic Deformations. Germany: Springer. ISBN 978-94-007-0033-8.
Hutter, Kolumban; Klaus Jöhnk (2004). Continuum Methods of Physical Modeling. Germany: Springer. ISBN 3-540-20619-1.
Lubarda, Vlado A. (2001). Elastoplasticity Theory. CRC Press. ISBN 0-8493-1138-1.
Macosko, C. W. (1994). Rheology: principles, measurement and applications. VCH Publishers. ISBN 1-56081-579-5.
Mase, George E. (1970). Continuum Mechanics. McGraw-Hill Professional. ISBN 0-07-040663-4.
Mase, G. Thomas; George E. Mase (1999). Continuum Mechanics for Engineers (Second ed.). CRC Press. ISBN 0-8493-1855-6.
Nemat-Nasser, Sia (2006). Plasticity: A Treatise on Finite Deformation of Heterogeneous Inelastic Materials. Cambridge: Cambridge University Press. ISBN 0-521-83979-3.
Rees, David (2006). Basic Engineering Plasticity – An Introduction with Engineering and Manufacturing Applications. Butterworth-Heinemann. ISBN 0-7506-8025-3.
== External links ==
Prof. Amit Acharya's notes on compatibility on iMechanica
In mechanics, a displacement field is the assignment of displacement vectors for all points in a region or body that are displaced from one state to another. A displacement vector specifies the position of a point or a particle in reference to an origin or to a previous position. For example, a displacement field may be used to describe the effects of deformation on a solid body.
== Formulation ==
Before considering displacement, the state before deformation must be defined. It is a state in which the coordinates of all points are known and described by the function:
{\displaystyle {\vec {R}}_{0}:\Omega \to P}
where
{\displaystyle {\vec {R}}_{0}} is a placement vector
{\displaystyle \Omega } are all the points of the body
{\displaystyle P} are all the points in the space in which the body is present
Most often it is a state of the body in which no forces are applied.
Then given any other state of this body in which coordinates of all its points are described as
{\displaystyle {\vec {R}}_{1}}, the displacement field is the difference between the two body states:
{\displaystyle {\vec {u}}={\vec {R}}_{1}-{\vec {R}}_{0}}
where {\displaystyle {\vec {u}}} is a displacement field, which for each point of the body specifies a displacement vector.
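For a discretely sampled body the definition is just a pointwise difference. A minimal sketch with hypothetical reference and deformed positions:

```python
import numpy as np

# Hypothetical discrete body: reference positions R0 and deformed positions R1
R0 = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
R1 = np.array([[0.1, 0.0], [1.2, 0.1], [1.3, 1.2], [0.2, 1.1]])

u = R1 - R0    # displacement field: one displacement vector per point
print(u)
```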
== Decomposition ==
The displacement of a body has two components: a rigid-body displacement and a deformation.
A rigid-body displacement consists of a translation and rotation of the body without changing its shape or size.
Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration
κ
0
(
B
)
{\displaystyle \kappa _{0}({\mathcal {B}})}
to a current or deformed configuration
κ
t
(
B
)
{\displaystyle \kappa _{t}({\mathcal {B}})}
(Figure 1).
A change in the configuration of a continuum body can be described by a displacement field. A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. The distance between any two particles changes if and only if deformation has occurred. If displacement occurs without deformation, then it is a rigid-body displacement.
== Displacement gradient tensor ==
Two types of displacement gradient tensor may be defined, following the Lagrangian and Eulerian specifications.
The displacement of particles indexed by variable i may be expressed as follows. The vector joining the positions of a particle in the undeformed configuration {\displaystyle P_{i}} and deformed configuration {\displaystyle p_{i}} is called the displacement vector, {\displaystyle p_{i}-P_{i}}, denoted {\displaystyle u_{i}} or {\displaystyle U_{i}} below.
=== Material coordinates (Lagrangian description) ===
Using {\displaystyle \mathbf {X} } in place of {\displaystyle P_{i}} and {\displaystyle \mathbf {x} } in place of {\displaystyle p_{i}}, both of which are vectors from the origin of the coordinate system to each respective point, we have the Lagrangian description of the displacement vector:
{\displaystyle \mathbf {u} (\mathbf {X} ,t)=u_{i}\mathbf {e} _{i}}
where {\displaystyle \mathbf {e} _{i}} are the unit vectors that define the basis of the material (body-frame) coordinate system.
Expressed in terms of the material coordinates, i.e. {\displaystyle \mathbf {u} } as a function of {\displaystyle \mathbf {X} }, the displacement field is:
{\displaystyle \mathbf {u} (\mathbf {X} ,t)=\mathbf {b} (t)+\mathbf {x} (\mathbf {X} ,t)-\mathbf {X} \qquad {\text{or}}\qquad u_{i}=\alpha _{iJ}b_{J}+x_{i}-\alpha _{iJ}X_{J}}
where {\displaystyle \mathbf {b} (t)} is the displacement vector representing rigid-body translation.
The partial derivative of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor {\displaystyle \nabla _{\mathbf {X} }\mathbf {u} }. Thus we have,
{\displaystyle \nabla _{\mathbf {X} }\mathbf {u} =\nabla _{\mathbf {X} }\mathbf {x} -\mathbf {R} =\mathbf {F} -\mathbf {R} \qquad {\text{or}}\qquad {\frac {\partial u_{i}}{\partial X_{K}}}={\frac {\partial x_{i}}{\partial X_{K}}}-\alpha _{iK}=F_{iK}-\alpha _{iK}}
where {\displaystyle \mathbf {F} } is the material deformation gradient tensor and {\displaystyle \mathbf {R} } is a rotation.
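For an affine motion with the material and spatial frames superimposed (so the direction cosines reduce to the identity), the material displacement gradient is simply F − I. The sketch below checks this by finite-differencing a hypothetical displacement field:

```python
import numpy as np

# Affine motion x = A X with the two frames superimposed; then u(X) = (A - I) X
A = np.array([[1.1, 0.2, 0.0],
              [0.0, 0.9, 0.0],
              [0.0, 0.0, 1.0]])    # hypothetical deformation gradient

def u(X):
    return A @ X - X

# Finite-difference the material displacement gradient grad_X u
h, I = 1e-6, np.eye(3)
X0 = np.array([0.3, 0.5, 0.7])
grad_u = np.column_stack([(u(X0 + h * I[:, k]) - u(X0 - h * I[:, k])) / (2 * h)
                          for k in range(3)])
assert np.allclose(grad_u, A - I)   # grad_X u = F - I
```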
=== Spatial coordinates (Eulerian description) ===
In the Eulerian description, the vector extending from a particle {\displaystyle P} in the undeformed configuration to its location in the deformed configuration is called the displacement vector:
{\displaystyle \mathbf {U} (\mathbf {x} ,t)=U_{J}\mathbf {E} _{J}}
where {\displaystyle \mathbf {E} _{i}} are the orthonormal unit vectors that define the basis of the spatial (lab-frame) coordinate system.
Expressed in terms of spatial coordinates, i.e. {\displaystyle \mathbf {U} } as a function of {\displaystyle \mathbf {x} }, the displacement field is:
{\displaystyle \mathbf {U} (\mathbf {x} ,t)=\mathbf {b} (t)+\mathbf {x} -\mathbf {X} (\mathbf {x} ,t)\qquad {\text{or}}\qquad U_{J}=b_{J}+\alpha _{Ji}x_{i}-X_{J}}
The spatial derivative, i.e., the partial derivative of the displacement vector with respect to the spatial coordinates, yields the spatial displacement gradient tensor {\displaystyle \nabla _{\mathbf {x} }\mathbf {U} }. Thus we have,
{\displaystyle \nabla _{\mathbf {x} }\mathbf {U} =\mathbf {R} ^{T}-\nabla _{\mathbf {x} }\mathbf {X} =\mathbf {R} ^{T}-\mathbf {F} ^{-1}\qquad {\text{or}}\qquad {\frac {\partial U_{J}}{\partial x_{k}}}=\alpha _{Jk}-{\frac {\partial X_{J}}{\partial x_{k}}}=\alpha _{Jk}-F_{Jk}^{-1}\,,}
where {\displaystyle \mathbf {F} ^{-1}=\mathbf {H} } is the spatial deformation gradient tensor.
=== Relationship between the material and spatial coordinate systems ===
{\displaystyle \alpha _{Ji}} are the direction cosines between the material and spatial coordinate systems with unit vectors {\displaystyle \mathbf {E} _{J}} and {\displaystyle \mathbf {e} _{i}}, respectively. Thus
{\displaystyle \mathbf {E} _{J}\cdot \mathbf {e} _{i}=\alpha _{Ji}=\alpha _{iJ}}
The relationship between {\displaystyle u_{i}} and {\displaystyle U_{J}} is then given by
{\displaystyle u_{i}=\alpha _{iJ}U_{J}\qquad {\text{or}}\qquad U_{J}=\alpha _{Ji}u_{i}}
Knowing that
{\displaystyle \mathbf {e} _{i}=\alpha _{iJ}\mathbf {E} _{J}}
then
{\displaystyle \mathbf {u} (\mathbf {X} ,t)=u_{i}\mathbf {e} _{i}=u_{i}(\alpha _{iJ}\mathbf {E} _{J})=U_{J}\mathbf {E} _{J}=\mathbf {U} (\mathbf {x} ,t)}
=== Combining the coordinate systems of deformed and undeformed configurations ===
It is common to superimpose the coordinate systems for the deformed and undeformed configurations, which results in {\displaystyle \mathbf {b} =0}, and the direction cosines become Kronecker deltas, i.e.,
{\displaystyle \mathbf {E} _{J}\cdot \mathbf {e} _{i}=\delta _{Ji}=\delta _{iJ}}
Thus in material (undeformed) coordinates, the displacement may be expressed as:
{\displaystyle \mathbf {u} (\mathbf {X} ,t)=\mathbf {x} (\mathbf {X} ,t)-\mathbf {X} \qquad {\text{or}}\qquad u_{i}=x_{i}-\delta _{iJ}X_{J}}
And in spatial (deformed) coordinates, the displacement may be expressed as:
{\displaystyle \mathbf {U} (\mathbf {x} ,t)=\mathbf {x} -\mathbf {X} (\mathbf {x} ,t)\qquad {\text{or}}\qquad U_{J}=\delta _{Ji}x_{i}-X_{J}}
== See also ==
Stress
Strain
== References ==
In physics and materials science, plasticity (also known as plastic deformation) is the ability of a solid material to undergo permanent deformation, a non-reversible change of shape in response to applied forces. For example, a solid piece of metal being bent or pounded into a new shape displays plasticity as permanent changes occur within the material itself. In engineering, the transition from elastic behavior to plastic behavior is known as yielding.
Plastic deformation is observed in most materials, particularly metals, soils, rocks, concrete, and foams. However, the physical mechanisms that cause plastic deformation can vary widely. At a crystalline scale, plasticity in metals is usually a consequence of dislocations. Such defects are relatively rare in most crystalline materials, but are numerous in some and part of their crystal structure; in such cases, plastic crystallinity can result. In brittle materials such as rock, concrete and bone, plasticity is caused predominantly by slip at microcracks. In cellular materials such as liquid foams or biological tissues, plasticity is mainly a consequence of bubble or cell rearrangements, notably T1 processes.
For many ductile metals, tensile loading applied to a sample will cause it to behave in an elastic manner. Each increment of load is accompanied by a proportional increment in extension. When the load is removed, the piece returns to its original size. However, once the load exceeds a threshold – the yield strength – the extension increases more rapidly than in the elastic region; now when the load is removed, some degree of extension will remain.
Elastic deformation, however, is an approximation and its quality depends on the time frame considered and loading speed. If, as indicated in the graph opposite, the deformation includes elastic deformation, it is also often referred to as "elasto-plastic deformation" or "elastic-plastic deformation".
Perfect plasticity is a property of materials to undergo irreversible deformation without any increase in stresses or loads. Plastic materials that have been hardened by prior deformation, such as cold forming, may need increasingly higher stresses to deform further. Generally, plastic deformation is also dependent on the deformation speed, i.e. higher stresses usually have to be applied to increase the rate of deformation. Such materials are said to deform visco-plastically.
== Contributing properties ==
The plasticity of a material is directly related to its ductility and malleability.
== Physical mechanisms ==
=== In metals ===
Plasticity in a crystal of pure metal is primarily caused by two modes of deformation in the crystal lattice: slip and twinning. Slip is a shear deformation which moves the atoms through many interatomic distances relative to their initial positions. Twinning is the plastic deformation which takes place along two planes due to a set of forces applied to a given metal piece.
Most metals show more plasticity when hot than when cold. Lead shows sufficient plasticity at room temperature, while cast iron does not possess sufficient plasticity for any forging operation even when hot. This property is of importance in forming, shaping and extruding operations on metals. Most metals are rendered plastic by heating and hence shaped hot.
==== Slip systems ====
Crystalline materials contain uniform planes of atoms organized with long-range order. Planes may slip past each other along their close-packed directions, as is shown on the slip systems page. The result is a permanent change of shape within the crystal and plastic deformation. The presence of dislocations increases the likelihood of planes slipping.
==== Reversible plasticity ====
On the nanoscale the primary plastic deformation in simple face-centered cubic metals is reversible, as long as there is no material transport in form of cross-slip. Shape-memory alloys such as Nitinol wire also exhibit a reversible form of plasticity which is more properly called pseudoelasticity.
==== Shear banding ====
The presence of other defects within a crystal may entangle dislocations or otherwise prevent them from gliding. When this happens, plasticity is localized to particular regions in the material. For crystals, these regions of localized plasticity are called shear bands.
==== Microplasticity ====
Microplasticity is a local phenomenon in metals. It occurs for stress values where the metal is globally in the elastic domain while some local areas are in the plastic domain.
=== Amorphous materials ===
==== Crazing ====
In amorphous materials, the discussion of "dislocations" is inapplicable, since the entire material lacks long range order. These materials can still undergo plastic deformation. Since amorphous materials, like polymers, are not well-ordered, they contain a large amount of free volume, or wasted space. Pulling these materials in tension opens up these regions and can give materials a hazy appearance. This haziness is the result of crazing, where fibrils are formed within the material in regions of high hydrostatic stress. The material may go from an ordered appearance to a "crazy" pattern of strain and stretch marks.
=== Cellular materials ===
These materials plastically deform when the bending moment exceeds the fully plastic moment. This applies to open cell foams where the bending moment is exerted on the cell walls. The foams can be made of any material with a plastic yield point which includes rigid polymers and metals. This method of modeling the foam as beams is only valid if the ratio of the density of the foam to the density of the matter is less than 0.3. This is because beams yield axially instead of bending. In closed cell foams, the yield strength is increased if the material is under tension because of the membrane that spans the face of the cells.
=== Soils and sand ===
Soils, particularly clays, display a significant amount of inelasticity under load. The causes of plasticity in soils can be quite complex and are strongly dependent on the microstructure, chemical composition, and water content. Plastic behavior in soils is caused primarily by the rearrangement of clusters of adjacent grains.
=== Rocks and concrete ===
Inelastic deformations of rocks and concrete are primarily caused by the formation of microcracks and sliding motions relative to these cracks. At high temperatures and pressures, plastic behavior can also be affected by the motion of dislocations in individual grains in the microstructure.
== Time-independent yielding and plastic flow in crystalline materials ==
Time-independent plastic flow in both single crystals and polycrystals is defined by a critical/maximum resolved shear stress (τCRSS), initiating dislocation migration along parallel slip planes of a single slip system, thereby defining the transition from elastic to plastic deformation behavior in crystalline materials.
=== Time-independent yielding and plastic flow in single crystals ===
The critical resolved shear stress for single crystals is defined by Schmid’s law τCRSS=σy/m, where σy is the yield strength of the single crystal and m is the Schmid factor. The Schmid factor comprises two variables λ and φ, defining the angle between the slip plane direction and the tensile force applied, and the angle between the slip plane normal and the tensile force applied, respectively. Notably, because m > 1, σy > τCRSS.
==== Critical resolved shear stress dependence on temperature, strain rate, and point defects ====
There are three characteristic regions of the critical resolved shear stress as a function of temperature. In the low temperature region 1 (T ≤ 0.25Tm), the strain rate must be high to achieve high τCRSS which is required to initiate dislocation glide and equivalently plastic flow. In region 1, the critical resolved shear stress has two components: athermal (τa) and thermal (τ*) shear stresses, arising from the stress required to move dislocations in the presence of other dislocations, and the resistance of point defect obstacles to dislocation migration, respectively.

At T = T*, the moderate temperature region 2 (0.25Tm < T < 0.7Tm) is defined, where the thermal shear stress component τ* → 0, representing the elimination of point defect impedance to dislocation migration. Thus the temperature-independent critical resolved shear stress τCRSS = τa remains so until region 3 is defined. Notably, in region 2 moderate temperature time-dependent plastic deformation (creep) mechanisms such as solute-drag should be considered.

Furthermore, in the high temperature region 3 (T ≥ 0.7Tm) the strain rate ε̇ can be low, contributing to low τCRSS, however plastic flow will still occur due to thermally activated high temperature time-dependent plastic deformation mechanisms such as Nabarro–Herring (NH) and Coble diffusional flow through the lattice and along the single crystal surfaces, respectively, as well as dislocation climb-glide creep.
==== Stages of time-independent plastic flow, post yielding ====
During the easy glide stage 1, the work hardening rate, defined by the change in shear stress with respect to shear strain (dτ/dγ), is low, representative of a small amount of applied shear stress necessary to induce a large amount of shear strain. Facile dislocation glide and corresponding flow is attributed to dislocation migration along parallel slip planes only (i.e. one slip system). Moderate impedance to dislocation migration along parallel slip planes is exhibited according to the weak stress field interactions between these dislocations, which heightens with smaller interplanar spacing. Overall, these migrating dislocations within a single slip system act as weak obstacles to flow, and a modest rise in stress is observed in comparison to the yield stress.

During the linear hardening stage 2 of flow, the work hardening rate becomes high as considerable stress is required to overcome the stress field interactions of dislocations migrating on non-parallel slip planes (i.e. multiple slip systems), acting as strong obstacles to flow. Much stress is required to drive continual dislocation migration for small strains. The shear flow stress is directly proportional to the square root of the dislocation density (τflow ~ρ½), irrespective of the evolution of dislocation configurations, displaying the reliance of hardening on the number of dislocations present. Regarding this evolution of dislocation configurations, at small strains the dislocation arrangement is a random 3D array of intersecting lines. Moderate strains correspond to cellular dislocation structures of heterogeneous dislocation distribution with large dislocation density at the cell boundaries, and small dislocation density within the cell interior. At even larger strains the cellular dislocation structure reduces in size until a minimum size is achieved.

Finally, the work hardening rate becomes low again in the exhaustion/saturation of hardening stage 3 of plastic flow, as small shear stresses produce large shear strains. Notably, in instances when multiple slip systems are oriented favorably with respect to the applied stress, the τCRSS for these systems may be similar and yielding may occur according to dislocation migration along multiple slip systems with non-parallel slip planes, displaying a stage 1 work-hardening rate typically characteristic of stage 2. Lastly, distinction between time-independent plastic deformation in body-centered cubic transition metals and face centered cubic metals is summarized below.
=== Time-independent yielding and plastic flow in polycrystals ===
Plasticity in polycrystals differs substantially from that in single crystals due to the presence of grain boundary (GB) planar defects, which act as very strong obstacles to plastic flow by impeding dislocation migration along the entire length of the activated slip plane(s). Hence, dislocations cannot pass from one grain to another across the grain boundary. The following sections explore specific GB requirements for extensive plastic deformation of polycrystals prior to fracture, as well as the influence of microscopic yielding within individual crystallites on macroscopic yielding of the polycrystal. The critical resolved shear stress for polycrystals is defined by Schmid’s law as well (τCRSS=σy/ṁ), where σy is the yield strength of the polycrystal and ṁ is the weighted Schmid factor. The weighted Schmid factor reflects the least favorably oriented slip system among the most favorably oriented slip systems of the grains constituting the GB.
==== Grain boundary constraint in polycrystals ====
The GB constraint for polycrystals can be explained by considering a grain boundary in the xz plane between two single crystals A and B of identical composition, structure, and slip systems, but misoriented with respect to each other. To ensure that voids do not form between individually deforming grains, the GB constraint for the bicrystal is as follows:
εxxA = εxxB (the x-axial strain at the GB must be equivalent for A and B), εzzA = εzzB (the z-axial strain at the GB must be equivalent for A and B), and εxzA = εxzB (the xz shear strain along the xz-GB plane must be equivalent for A and B). In addition, this GB constraint requires that five independent slip systems be activated per crystallite constituting the GB. Notably, because independent slip systems are defined as slip planes on which dislocation migrations cannot be reproduced by any combination of dislocation migrations along other slip system’s planes, the number of geometrical slip systems for a given crystal system - which by definition can be constructed by slip system combinations - is typically greater than that of independent slip systems. Significantly, there is a maximum of five independent slip systems for each of the seven crystal systems, however, not all seven crystal systems acquire this upper limit. In fact, even within a given crystal system, the composition and Bravais lattice diversifies the number of independent slip systems (see the table below). In cases for which crystallites of a polycrystal do not obtain five independent slip systems, the GB condition cannot be met, and thus the time-independent deformation of individual crystallites results in cracks and voids at the GBs of the polycrystal, and soon fracture is realized. Hence, for a given composition and structure, a single crystal with less than five independent slip systems is stronger (exhibiting a greater extent of plasticity) than its polycrystalline form.
==== Implications of the grain boundary constraint in polycrystals ====
Although the two crystallites A and B discussed in the above section have identical slip systems, they are misoriented with respect to each other, and therefore misoriented with respect to the applied force. Thus, microscopic yielding within a crystallite interior may occur according to the rules governing single crystal time-independent yielding. Eventually, the activated slip planes within the grain interiors will permit dislocation migration to the GB where many dislocations then pile up as geometrically necessary dislocations. This pile up corresponds to strain gradients across individual grains as the dislocation density near the GB is greater than that in the grain interior, imposing a stress on the adjacent grain in contact. When considering the AB bicrystal as a whole, the most favorably oriented slip system in A will not be the that in B, and hence τACRSS ≠ τBCRSS. Paramount is the fact that macroscopic yielding of the bicrystal is prolonged until the higher value of τCRSS between grains A and B is achieved, according to the GB constraint. Thus, for a given composition and structure, a polycrystal with five independent slip systems is stronger (greater extent of plasticity) than its single crystalline form. Correspondingly, the work hardening rate will be higher for the polycrystal than the single crystal, as more stress is required in the polycrystal to produce strains. Importantly, just as with single crystal flow stress, τflow ~ρ½, but is also inversely proportional to the square root of average grain diameter (τflow ~d-½ ). Therefore, the flow stress of a polycrystal, and hence the polycrystal’s strength, increases with small grain size. The reason for this is that smaller grains have a relatively smaller number of slip planes to be activated, corresponding to a fewer number of dislocations migrating to the GBs, and therefore less stress induced on adjacent grains due to dislocation pile up. In addition, for a given volume of polycrystal, smaller grains present more strong obstacle grain boundaries. These two factors provide an understanding as to why the onset of macroscopic flow in fine-grained polycrystals occurs at larger applied stresses than in coarse-grained polycrystals.
== Mathematical descriptions ==
=== Deformation theory ===
There are several mathematical descriptions of plasticity. One is deformation theory (see e.g. Hooke's law), where the Cauchy stress tensor (a second-order tensor) is a function of the strain tensor. Although this description is accurate when a small part of matter is subjected to increasing loading (such as strain loading), this theory cannot account for irreversibility.
Ductile materials can sustain large plastic deformations without fracture. However, even ductile metals will fracture when the strain becomes large enough—this is as a result of work hardening of the material, which causes it to become brittle. Heat treatment such as annealing can restore the ductility of a worked piece, so that shaping can continue.
=== Flow plasticity theory ===
In 1934, Egon Orowan, Michael Polanyi and Geoffrey Ingram Taylor, roughly simultaneously, realized that the plastic deformation of ductile materials could be explained in terms of the theory of dislocations. The mathematical theory of plasticity, flow plasticity theory, uses a set of non-linear, non-integrable equations to describe the set of changes on strain and stress with respect to a previous state and a small increase of deformation.
== Yield criteria ==
If the stress exceeds a critical value, as was mentioned above, the material will undergo plastic, or irreversible, deformation. This critical stress can be tensile or compressive. The Tresca and the von Mises criteria are commonly used to determine whether a material has yielded. However, these criteria have proved inadequate for a large range of materials and several other yield criteria are also in widespread use.
=== Tresca criterion ===
The Tresca criterion is based on the notion that when a material fails, it does so in shear, which is a relatively good assumption when considering metals. Given the principal stress state, we can use Mohr's circle to solve for the maximum shear stresses our material will experience and conclude that the material will fail if
{\displaystyle \sigma _{1}-\sigma _{3}\geq \sigma _{0}}
where σ1 is the maximum normal stress, σ3 is the minimum normal stress, and σ0 is the stress under which the material fails in uniaxial loading. A yield surface may be constructed, which provides a visual representation of this concept. Inside of the yield surface, deformation is elastic. On the surface, deformation is plastic. It is impossible for a material to have stress states outside its yield surface.
=== Huber–von Mises criterion ===
The Huber–von Mises criterion is based on the Tresca criterion but takes into account the assumption that hydrostatic stresses do not contribute to material failure. M. T. Huber was the first who proposed the criterion of shear energy. Von Mises solves for an effective stress under uniaxial loading, subtracting out hydrostatic stresses, and states that all effective stresses greater than that which causes material failure in uniaxial loading will result in plastic deformation.
{\displaystyle \sigma _{v}^{2}={\tfrac {1}{2}}[(\sigma _{11}-\sigma _{22})^{2}+(\sigma _{22}-\sigma _{33})^{2}+(\sigma _{11}-\sigma _{33})^{2}+6(\sigma _{23}^{2}+\sigma _{31}^{2}+\sigma _{12}^{2})]}
Again, a visual representation of the yield surface may be constructed using the above equation, which takes the shape of an ellipse. Inside the surface, materials undergo elastic deformation. Reaching the surface means the material undergoes plastic deformations.
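Both criteria are straightforward to evaluate for a given stress state. A minimal sketch with a hypothetical stress tensor and yield strength (the helper names are assumptions of this example); the von Mises stress is computed from the deviatoric part, reflecting the assumption that hydrostatic stress does not contribute:

```python
import numpy as np

def tresca_yields(sigma, sigma0):
    """Tresca criterion: yield when sigma_1 - sigma_3 >= sigma_0."""
    s = np.sort(np.linalg.eigvalsh(sigma))   # principal stresses, ascending
    return s[2] - s[0] >= sigma0

def von_mises_stress(sigma):
    """Effective (von Mises) stress of a symmetric 3x3 stress tensor."""
    dev = sigma - np.trace(sigma) / 3 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.sum(dev * dev))         # sqrt(3/2 s:s)

# Hypothetical stress state (units: MPa) and yield strength
sigma = np.array([[120.0, 40.0,  0.0],
                  [ 40.0, 60.0,  0.0],
                  [  0.0,  0.0, -30.0]])
sigma0 = 150.0
print(tresca_yields(sigma, sigma0), von_mises_stress(sigma) >= sigma0)
```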
== See also ==
Yield (engineering)
Atterberg limits
Deformation (mechanics)
Deformation (engineering)
Plastometer
Poisson's ratio
== References ==
== Further reading ==
Ashby, Michael F. (2001). "Plastic Deformation of Cellular Materials". Encyclopedia of Materials: Science and Technology. Vol. 7. Oxford: Elsevier. pp. 7068–7071. ISBN 0-08-043152-6.
Han, Weimin; Reddy, B. Daya (2013). Plasticity: Mathematical Theory and Numerical Analysis (2nd ed.). New York: Springer. ISBN 978-1-4614-5939-2.
Kachanov, Lazar' Markovich (2004). Fundamentals of the Theory of Plasticity. Dover Books. ISBN 0-486-43583-0.
Khan, Akhtar S.; Huang, Sujian (1995). Continuum Theory of Plasticity. Wiley. ISBN 0-471-31043-3.
Simo, Juan C.; Hughes, Thomas J. R. (1998). Computational Inelasticity. Springer. ISBN 0-387-97520-9.
Van Vliet, Krystyn J. (2006). "Mechanical Behavior of Materials". MIT Course Number 3.032. Massachusetts Institute of Technology.
In continuum mechanics, the finite strain theory—also called large strain theory, or large deformation theory—deals with deformations in which strains and/or rotations are large enough to invalidate assumptions inherent in infinitesimal strain theory. In this case, the undeformed and deformed configurations of the continuum are significantly different, requiring a clear distinction between them. This is commonly the case with elastomers, plastically deforming materials, fluids, and biological soft tissue.
== Displacement field ==
== Deformation gradient tensor ==
The deformation gradient tensor
{\displaystyle \mathbf {F} (\mathbf {X} ,t)=F_{jK}\mathbf {e} _{j}\otimes \mathbf {I} _{K}}
is related to both the reference and current configuration, as seen by the unit vectors ej and IK; it is therefore a two-point tensor.
Two types of deformation gradient tensor may be defined.
Due to the assumption of continuity of χ(X, t), F has the inverse H = F−1, where H is the spatial deformation gradient tensor. Then, by the implicit function theorem, the Jacobian determinant J(X, t) must be nonsingular, i.e.
{\displaystyle J(\mathbf {X} ,t)=\det \mathbf {F} (\mathbf {X} ,t)\neq 0}
The material deformation gradient tensor
{\displaystyle \mathbf {F} (\mathbf {X} ,t)=F_{jK}\mathbf {e} _{j}\otimes \mathbf {I} _{K}}
is a second-order tensor that represents the gradient of the mapping function or functional relation χ(X, t), which describes the motion of a continuum. The material deformation gradient tensor characterizes the local deformation at a material point with position vector X, i.e., deformation at neighbouring points, by transforming (as a linear transformation) a material line element emanating from that point from the reference configuration to the current or deformed configuration, assuming continuity in the mapping function χ(X, t), i.e., that it is a differentiable function of X and time t, which implies that cracks and voids do not open or close during the deformation. Thus we have,
{\displaystyle {\begin{aligned}d\mathbf {x} &={\frac {\partial \mathbf {x} }{\partial \mathbf {X} }}\,d\mathbf {X} \qquad &{\text{or}}&\qquad dx_{j}={\frac {\partial x_{j}}{\partial X_{K}}}\,dX_{K}\\&=\nabla \chi (\mathbf {X} ,t)\,d\mathbf {X} \qquad &{\text{or}}&\qquad dx_{j}=F_{jK}\,dX_{K}\,.\\&=\mathbf {F} (\mathbf {X} ,t)\,d\mathbf {X} \end{aligned}}}
=== Relative displacement vector ===
Consider a particle or material point P with position vector X = XIII in the undeformed configuration (Figure 2). After a displacement of the body, the new position of the particle, indicated by p in the new configuration, is given by the vector position x = xiei. The coordinate systems for the undeformed and deformed configurations can be superimposed for convenience.
Consider now a material point Q neighboring P, with position vector X + ΔX = (XI + ΔXI)II. In the deformed configuration this particle has a new position q given by the position vector x + Δx. Assuming that the line segments ΔX and Δx joining the particles P and Q in the undeformed and deformed configurations, respectively, are very small, we can express them as dX and dx. Thus from Figure 2 we have
{\displaystyle {\begin{aligned}\mathbf {x} +d\mathbf {x} &=\mathbf {X} +d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )\\d\mathbf {x} &=\mathbf {X} -\mathbf {x} +d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )\\&=d\mathbf {X} +\mathbf {u} (\mathbf {X} +d\mathbf {X} )-\mathbf {u} (\mathbf {X} )\\&=d\mathbf {X} +d\mathbf {u} \\\end{aligned}}}
where du is the relative displacement vector, which represents the relative displacement of Q with respect to P in the deformed configuration.
==== Taylor approximation ====
For an infinitesimal element dX, and assuming continuity of the displacement field, it is possible to use a Taylor series expansion around point P, neglecting higher-order terms, to approximate the components of the relative displacement vector for the neighboring particle Q as
{\displaystyle {\begin{aligned}\mathbf {u} (\mathbf {X} +d\mathbf {X} )&=\mathbf {u} (\mathbf {X} )+d\mathbf {u} \quad &{\text{or}}&\quad u_{i}^{*}=u_{i}+du_{i}\\&\approx \mathbf {u} (\mathbf {X} )+\nabla _{\mathbf {X} }\mathbf {u} \cdot d\mathbf {X} \quad &{\text{or}}&\quad u_{i}^{*}\approx u_{i}+{\frac {\partial u_{i}}{\partial X_{J}}}dX_{J}\,.\end{aligned}}}
Thus, the previous equation dx = dX + du can be written as
{\displaystyle {\begin{aligned}d\mathbf {x} &=d\mathbf {X} +d\mathbf {u} \\&=d\mathbf {X} +\nabla _{\mathbf {X} }\mathbf {u} \cdot d\mathbf {X} \\&=\left(\mathbf {I} +\nabla _{\mathbf {X} }\mathbf {u} \right)d\mathbf {X} \\&=\mathbf {F} d\mathbf {X} \end{aligned}}}
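As a numerical illustration of F = I + ∇Xu, the following Python sketch (an assumed simple-shear displacement field; the values are illustrative) builds F from a displacement gradient and maps a material line element:

```python
import numpy as np

# Assumed displacement field u(X) = (0.1 X_2, 0, 0): a simple shear.
grad_u = np.array([[0.0, 0.1, 0.0],   # (grad_X u)_{iJ} = du_i/dX_J
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])

F = np.eye(3) + grad_u                # F = I + grad_X u
dX = np.array([0.0, 1.0, 0.0])        # material line element
dx = F @ dX                           # deformed line element, dx = F dX
print(dx)                             # [0.1, 1.0, 0.0]
```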
=== Time-derivative of the deformation gradient ===
Calculations that involve the time-dependent deformation of a body often require a time derivative of the deformation gradient to be calculated. A geometrically consistent definition of such a derivative requires an excursion into differential geometry but we avoid those issues in this article.
The time derivative of F is
{\displaystyle {\dot {\mathbf {F} }}={\frac {\partial \mathbf {F} }{\partial t}}={\frac {\partial }{\partial t}}\left[{\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial \mathbf {X} }}\right]={\frac {\partial }{\partial \mathbf {X} }}\left[{\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial t}}\right]={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {V} (\mathbf {X} ,t)\right]}
where V is the (material) velocity. The derivative on the right-hand side represents a material velocity gradient. It is common to convert that into a spatial gradient by applying the chain rule for derivatives, i.e.,
{\displaystyle {\dot {\mathbf {F} }}={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {V} (\mathbf {X} ,t)\right]={\frac {\partial }{\partial \mathbf {X} }}\left[\mathbf {v} (\mathbf {x} (\mathbf {X} ,t),t)\right]=\left.{\frac {\partial }{\partial \mathbf {x} }}\left[\mathbf {v} (\mathbf {x} ,t)\right]\right|_{\mathbf {x} =\mathbf {x} (\mathbf {X} ,t)}\cdot {\frac {\partial \mathbf {x} (\mathbf {X} ,t)}{\partial \mathbf {X} }}={\boldsymbol {l}}\cdot \mathbf {F} }
where l = (∇xv)T is the spatial velocity gradient and where v(x, t) = V(X, t) is the spatial (Eulerian) velocity at x = x(X, t).
. If the spatial velocity gradient is constant in time, the above equation can be solved exactly to give
{\displaystyle \mathbf {F} =e^{{\boldsymbol {l}}\,t}}
assuming F = 1 at t = 0. There are several methods of computing the exponential above.
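One common route, sketched below, is a dense matrix exponential; this assumes SciPy's scipy.linalg.expm and a hypothetical constant velocity gradient l:

```python
import numpy as np
from scipy.linalg import expm

# Assumed constant spatial velocity gradient: a steady simple-shear flow
l = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

t = 0.5
F = expm(l * t)        # F(t) = exp(l t), with F = 1 at t = 0
print(F)               # here l is nilpotent, so F = I + l t exactly
```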
Related quantities often used in continuum mechanics are the rate of deformation tensor and the spin tensor defined, respectively, as:
{\displaystyle {\boldsymbol {d}}={\tfrac {1}{2}}\left({\boldsymbol {l}}+{\boldsymbol {l}}^{T}\right)\,,~~{\boldsymbol {w}}={\tfrac {1}{2}}\left({\boldsymbol {l}}-{\boldsymbol {l}}^{T}\right)\,.}
The rate of deformation tensor gives the rate of stretching of line elements while the spin tensor indicates the rate of rotation or vorticity of the motion.
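A minimal sketch of this symmetric/skew split, for an assumed velocity gradient:

```python
import numpy as np

def stretching_and_spin(l):
    """Symmetric/skew split of a spatial velocity gradient l."""
    d = 0.5 * (l + l.T)    # rate-of-deformation tensor
    w = 0.5 * (l - l.T)    # spin tensor
    return d, w

l = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
d, w = stretching_and_spin(l)
print(d)
print(w)
```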
The material time derivative of the inverse of the deformation gradient (keeping the reference configuration fixed) is often required in analyses that involve finite strains. This derivative is
{\displaystyle {\frac {\partial }{\partial t}}\left(\mathbf {F} ^{-1}\right)=-\mathbf {F} ^{-1}\cdot {\dot {\mathbf {F} }}\cdot \mathbf {F} ^{-1}\,.}
The above relation can be verified by taking the material time derivative of F−1 · dx = dX and noting that Ẋ = 0.
=== Polar decomposition of the deformation gradient tensor ===
The deformation gradient F, like any invertible second-order tensor, can be decomposed, using the polar decomposition theorem, into a product of two second-order tensors (Truesdell and Noll, 1965): an orthogonal tensor and a positive definite symmetric tensor, i.e.,
{\displaystyle \mathbf {F} =\mathbf {R} \mathbf {U} =\mathbf {V} \mathbf {R} }
where the tensor R is a proper orthogonal tensor, i.e., R−1 = RT and det R = +1, representing a rotation; the tensor U is the right stretch tensor; and V the left stretch tensor. The terms right and left mean that they are to the right and left of the rotation tensor R, respectively. U and V are both positive definite, i.e., x · U · x > 0 and x · V · x > 0 for all non-zero x ∈ R3, and symmetric tensors, i.e., U = UT and V = VT, of second order.
This decomposition implies that the deformation of a line element dX in the undeformed configuration onto dx in the deformed configuration, i.e., dx = F dX, may be obtained either by first stretching the element by U, i.e., dx′ = U dX, followed by a rotation R, i.e., dx = R dx′; or equivalently, by applying a rigid rotation R first, i.e., dx′ = R dX, followed later by a stretching V, i.e., dx = V dx′ (see Figure 3).
Due to the orthogonality of R,
{\displaystyle \mathbf {V} =\mathbf {R} \cdot \mathbf {U} \cdot \mathbf {R} ^{T}}
so that U and V have the same eigenvalues or principal stretches, but different eigenvectors or principal directions Ni and ni, respectively. The principal directions are related by
{\displaystyle \mathbf {n} _{i}=\mathbf {R} \mathbf {N} _{i}.}
This polar decomposition, which is unique as F is invertible with a positive determinant, is a corollary of the singular-value decomposition.
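Numerically, the decomposition can be obtained from a polar or singular-value routine; the sketch below assumes SciPy's scipy.linalg.polar and a hypothetical simple-shear F:

```python
import numpy as np
from scipy.linalg import polar

F = np.array([[1.0, 0.5, 0.0],       # assumed simple shear, gamma = 0.5
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

R, U = polar(F, side='right')        # F = R U
_, V = polar(F, side='left')         # F = V R (same rotation R)
print(np.allclose(F, R @ U), np.allclose(F, V @ R))                 # True True
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```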
=== Transformation of a surface and volume element ===
To transform quantities that are defined with respect to areas in a deformed configuration to those relative to areas in a reference configuration, and vice versa, we use Nanson's relation, expressed as
{\displaystyle da~\mathbf {n} =J~dA~\mathbf {F} ^{-T}\cdot \mathbf {N} }
where da is an area of a region in the deformed configuration, dA is the same area in the reference configuration, n is the outward normal to the area element in the current configuration while N is the outward normal in the reference configuration, F is the deformation gradient, and J = det F.
The corresponding formula for the transformation of the volume element is
{\displaystyle dv=J~dV}
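The following sketch (hypothetical F, unit reference area) checks Nanson's relation and the volume map numerically:

```python
import numpy as np

F = np.array([[2.0, 0.3, 0.0],       # assumed deformation gradient
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.5]])
J = np.linalg.det(F)

N = np.array([0.0, 0.0, 1.0])        # reference unit normal
dA = 1.0                             # reference area element

da_n = J * dA * np.linalg.inv(F).T @ N   # Nanson: da n = J dA F^{-T} N
da = np.linalg.norm(da_n)                # deformed area
n = da_n / da                            # deformed unit normal
print(da, n)

print("dv =", J * 1.0)               # volume map: dv = J dV
```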
== Fundamental strain tensors ==
A strain tensor is defined by the IUPAC as:
"A symmetric tensor that results when a deformation gradient tensor is factorized into a rotation tensor followed or preceded by a symmetric tensor".
Since a pure rotation should not induce any strains in a deformable body, it is often convenient to use rotation-independent measures of deformation in continuum mechanics. As a rotation followed by its inverse rotation leads to no change (RRT = RTR = I), we can exclude the rotation by multiplying the deformation gradient tensor F by its transpose.
Several rotation-independent deformation gradient tensors (or "deformation tensors", for short) are used in mechanics. In solid mechanics, the most popular of these are the right and left Cauchy–Green deformation tensors.
=== Cauchy strain tensor (right Cauchy–Green deformation tensor) ===
In 1839, George Green introduced a deformation tensor known as the right Cauchy–Green deformation tensor or Green's deformation tensor (the IUPAC recommends that this tensor be called the Cauchy strain tensor), defined as:
{\displaystyle \mathbf {C} =\mathbf {F} ^{T}\mathbf {F} =\mathbf {U} ^{2}\qquad {\text{or}}\qquad C_{IJ}=F_{kI}~F_{kJ}={\frac {\partial x_{k}}{\partial X_{I}}}{\frac {\partial x_{k}}{\partial X_{J}}}.}
Physically, the Cauchy–Green tensor gives us the square of local change in distances due to deformation, i.e.
{\displaystyle d\mathbf {x} ^{2}=d\mathbf {X} \cdot \mathbf {C} \cdot d\mathbf {X} }
Invariants of C are often used in the expressions for strain energy density functions. The most commonly used invariants are
{\displaystyle {\begin{aligned}I_{1}^{C}&:={\text{tr}}(\mathbf {C} )=C_{II}=\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}\\I_{2}^{C}&:={\tfrac {1}{2}}\left[({\text{tr}}~\mathbf {C} )^{2}-{\text{tr}}(\mathbf {C} ^{2})\right]={\tfrac {1}{2}}\left[(C_{JJ})^{2}-C_{IK}C_{KI}\right]=\lambda _{1}^{2}\lambda _{2}^{2}+\lambda _{2}^{2}\lambda _{3}^{2}+\lambda _{3}^{2}\lambda _{1}^{2}\\I_{3}^{C}&:=\det(\mathbf {C} )=J^{2}=\lambda _{1}^{2}\lambda _{2}^{2}\lambda _{3}^{2}.\end{aligned}}}
where J := det F is the determinant of the deformation gradient F and λi are stretch ratios for the unit fibers that are initially oriented along the eigenvector directions of the right (reference) stretch tensor (these are not generally aligned with the three axes of the coordinate system).
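A short numerical cross-check of these invariants against the eigenvalues of C, for an assumed F:

```python
import numpy as np

F = np.array([[1.2, 0.4, 0.0],       # assumed deformation gradient
              [0.0, 0.9, 0.0],
              [0.0, 0.0, 1.1]])
C = F.T @ F                          # right Cauchy-Green tensor

I1 = np.trace(C)
I2 = 0.5 * (np.trace(C)**2 - np.trace(C @ C))
I3 = np.linalg.det(C)

lam_sq = np.linalg.eigvalsh(C)       # eigenvalues of C are the squared stretches
print(np.isclose(I1, lam_sq.sum()))          # I1 = sum of lambda_i^2
print(np.isclose(I3, np.prod(lam_sq)))       # I3 = product of lambda_i^2
print(np.isclose(I3, np.linalg.det(F)**2))   # I3 = J^2
```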
=== Finger strain tensor ===
The IUPAC recommends that the inverse of the right Cauchy–Green deformation tensor (called the Cauchy strain tensor in that document), i.e., C−1, be called the Finger strain tensor. However, that nomenclature is not universally accepted in applied mechanics.
{\displaystyle \mathbf {f} =\mathbf {C} ^{-1}=\mathbf {F} ^{-1}\mathbf {F} ^{-T}\qquad {\text{or}}\qquad f_{IJ}={\frac {\partial X_{I}}{\partial x_{k}}}{\frac {\partial X_{J}}{\partial x_{k}}}}
=== Green strain tensor (left Cauchy–Green deformation tensor) ===
Reversing the order of multiplication in the formula for the right Cauchy–Green deformation tensor leads to the left Cauchy–Green deformation tensor, which is defined as:
{\displaystyle \mathbf {B} =\mathbf {F} \mathbf {F} ^{T}=\mathbf {V} ^{2}\qquad {\text{or}}\qquad B_{ij}={\frac {\partial x_{i}}{\partial X_{K}}}{\frac {\partial x_{j}}{\partial X_{K}}}}
The left Cauchy–Green deformation tensor is often called the Finger deformation tensor, named after Josef Finger (1894).
The IUPAC recommends that this tensor be called the Green strain tensor.
Invariants of B are also used in the expressions for strain energy density functions. The conventional invariants are defined as
{\displaystyle {\begin{aligned}I_{1}&:={\text{tr}}(\mathbf {B} )=B_{ii}=\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2}\\I_{2}&:={\tfrac {1}{2}}\left[({\text{tr}}~\mathbf {B} )^{2}-{\text{tr}}(\mathbf {B} ^{2})\right]={\tfrac {1}{2}}\left(B_{ii}^{2}-B_{jk}B_{kj}\right)=\lambda _{1}^{2}\lambda _{2}^{2}+\lambda _{2}^{2}\lambda _{3}^{2}+\lambda _{3}^{2}\lambda _{1}^{2}\\I_{3}&:=\det \mathbf {B} =J^{2}=\lambda _{1}^{2}\lambda _{2}^{2}\lambda _{3}^{2}\end{aligned}}}
where J := det F is the determinant of the deformation gradient.
For compressible materials, a slightly different set of invariants is used:
{\displaystyle ({\bar {I}}_{1}:=J^{-2/3}I_{1}~;~~{\bar {I}}_{2}:=J^{-4/3}I_{2}~;~~J\neq 1)~.}
=== Piola strain tensor (Cauchy deformation tensor) ===
Earlier in 1828, Augustin-Louis Cauchy introduced a deformation tensor defined as the inverse of the left Cauchy–Green deformation tensor,
B−1. This tensor has also been called the Piola strain tensor by the IUPAC and the Finger tensor in the rheology and fluid dynamics literature.
{\displaystyle \mathbf {c} =\mathbf {B} ^{-1}=\mathbf {F} ^{-T}\mathbf {F} ^{-1}\qquad {\text{or}}\qquad c_{ij}={\frac {\partial X_{K}}{\partial x_{i}}}{\frac {\partial X_{K}}{\partial x_{j}}}}
=== Spectral representation ===
If there are three distinct principal stretches λi, the spectral decompositions of C and B are given by
{\displaystyle \mathbf {C} =\sum _{i=1}^{3}\lambda _{i}^{2}\mathbf {N} _{i}\otimes \mathbf {N} _{i}\qquad {\text{and}}\qquad \mathbf {B} =\sum _{i=1}^{3}\lambda _{i}^{2}\mathbf {n} _{i}\otimes \mathbf {n} _{i}}
Furthermore,
{\displaystyle \mathbf {U} =\sum _{i=1}^{3}\lambda _{i}\mathbf {N} _{i}\otimes \mathbf {N} _{i}~;~~\mathbf {V} =\sum _{i=1}^{3}\lambda _{i}\mathbf {n} _{i}\otimes \mathbf {n} _{i}}
{\displaystyle \mathbf {R} =\sum _{i=1}^{3}\mathbf {n} _{i}\otimes \mathbf {N} _{i}~;~~\mathbf {F} =\sum _{i=1}^{3}\lambda _{i}\mathbf {n} _{i}\otimes \mathbf {N} _{i}}
Observe that
{\displaystyle \mathbf {V} =\mathbf {R} ~\mathbf {U} ~\mathbf {R} ^{T}=\sum _{i=1}^{3}\lambda _{i}~\mathbf {R} ~(\mathbf {N} _{i}\otimes \mathbf {N} _{i})~\mathbf {R} ^{T}=\sum _{i=1}^{3}\lambda _{i}~(\mathbf {R} ~\mathbf {N} _{i})\otimes (\mathbf {R} ~\mathbf {N} _{i})}
Therefore, the uniqueness of the spectral decomposition also implies that ni = R Ni. The left stretch (V) is also called the spatial stretch tensor while the right stretch (U) is called the material stretch tensor.
The effect of F acting on Ni is to stretch the vector by λi and to rotate it to the new orientation ni, i.e.,
{\displaystyle \mathbf {F} ~\mathbf {N} _{i}=\lambda _{i}~(\mathbf {R} ~\mathbf {N} _{i})=\lambda _{i}~\mathbf {n} _{i}}
In a similar vein,
{\displaystyle \mathbf {F} ^{-T}~\mathbf {N} _{i}={\cfrac {1}{\lambda _{i}}}~\mathbf {n} _{i}~;~~\mathbf {F} ^{T}~\mathbf {n} _{i}=\lambda _{i}~\mathbf {N} _{i}~;~~\mathbf {F} ^{-1}~\mathbf {n} _{i}={\cfrac {1}{\lambda _{i}}}~\mathbf {N} _{i}~.}
==== Examples ====
Uniaxial extension of an incompressible material
This is the case where a specimen is stretched in the 1-direction with a stretch ratio of α = α1. If the volume remains constant, the contraction in the other two directions is such that α1α2α3 = 1, or α2 = α3 = α−0.5. Then:
{\displaystyle \mathbf {F} ={\begin{bmatrix}\alpha &0&0\\0&\alpha ^{-0.5}&0\\0&0&\alpha ^{-0.5}\end{bmatrix}}}
{\displaystyle \mathbf {B} =\mathbf {C} ={\begin{bmatrix}\alpha ^{2}&0&0\\0&\alpha ^{-1}&0\\0&0&\alpha ^{-1}\end{bmatrix}}}
Simple shear
{\displaystyle \mathbf {F} ={\begin{bmatrix}1&\gamma &0\\0&1&0\\0&0&1\end{bmatrix}}}
{\displaystyle \mathbf {B} ={\begin{bmatrix}1+\gamma ^{2}&\gamma &0\\\gamma &1&0\\0&0&1\end{bmatrix}}}
{\displaystyle \mathbf {C} ={\begin{bmatrix}1&\gamma &0\\\gamma &1+\gamma ^{2}&0\\0&0&1\end{bmatrix}}}
Rigid body rotation
{\displaystyle \mathbf {F} ={\begin{bmatrix}\cos \theta &\sin \theta &0\\-\sin \theta &\cos \theta &0\\0&0&1\end{bmatrix}}}
{\displaystyle \mathbf {B} =\mathbf {C} ={\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}}=\mathbf {1} }
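The three examples above can be reproduced numerically; the sketch below uses assumed values of α, γ and θ:

```python
import numpy as np

alpha, gamma, theta = 1.5, 0.3, 0.4   # assumed stretch, shear and rotation angle

F_ext = np.diag([alpha, alpha**-0.5, alpha**-0.5])   # incompressible uniaxial extension
F_shear = np.array([[1.0, gamma, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])                # simple shear
F_rot = np.array([[np.cos(theta), np.sin(theta), 0.0],
                  [-np.sin(theta), np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])                  # rigid body rotation

for F in (F_ext, F_shear, F_rot):
    B, C = F @ F.T, F.T @ F
    print(np.round(B, 3), np.round(C, 3), sep="\n", end="\n\n")
# For the rotation B = C = I; for simple shear, B and C differ as shown above.
```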
=== Derivatives of stretch ===
Derivatives of the stretch with respect to the right Cauchy–Green deformation tensor are used to derive the stress-strain relations of many solids, particularly hyperelastic materials. These derivatives are
{\displaystyle {\cfrac {\partial \lambda _{i}}{\partial \mathbf {C} }}={\cfrac {1}{2\lambda _{i}}}~\mathbf {N} _{i}\otimes \mathbf {N} _{i}={\cfrac {1}{2\lambda _{i}}}~\mathbf {R} ^{T}~(\mathbf {n} _{i}\otimes \mathbf {n} _{i})~\mathbf {R} ~;~~i=1,2,3}
and follow from the observations that
{\displaystyle \mathbf {C} :(\mathbf {N} _{i}\otimes \mathbf {N} _{i})=\lambda _{i}^{2}~;~~~~{\cfrac {\partial \mathbf {C} }{\partial \mathbf {C} }}={\mathsf {I}}^{(s)}~;~~~~{\mathsf {I}}^{(s)}:(\mathbf {N} _{i}\otimes \mathbf {N} _{i})=\mathbf {N} _{i}\otimes \mathbf {N} _{i}.}
=== Physical interpretation of deformation tensors ===
Let X = XiEi be a Cartesian coordinate system defined on the undeformed body and let x = xiEi be another system defined on the deformed body. Let a curve X(s) in the undeformed body be parametrized using s ∈ [0, 1]. Its image in the deformed body is x(X(s)).
The undeformed length of the curve is given by
{\displaystyle l_{X}=\int _{0}^{1}\left|{\cfrac {d\mathbf {X} }{ds}}\right|~ds=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot {\boldsymbol {I}}\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds}
After deformation, the length becomes
{\displaystyle {\begin{aligned}l_{x}&=\int _{0}^{1}\left|{\cfrac {d\mathbf {x} }{ds}}\right|~ds=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {x} }{ds}}\cdot {\cfrac {d\mathbf {x} }{ds}}}}~ds=\int _{0}^{1}{\sqrt {\left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\cdot {\cfrac {d\mathbf {X} }{ds}}\right)\cdot \left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\cdot {\cfrac {d\mathbf {X} }{ds}}\right)}}~ds\\&=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot \left[\left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\right)^{T}\cdot {\cfrac {d\mathbf {x} }{d\mathbf {X} }}\right]\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds\end{aligned}}}
Note that the right Cauchy–Green deformation tensor is defined as
{\displaystyle {\boldsymbol {C}}:={\boldsymbol {F}}^{T}\cdot {\boldsymbol {F}}=\left({\cfrac {d\mathbf {x} }{d\mathbf {X} }}\right)^{T}\cdot {\cfrac {d\mathbf {x} }{d\mathbf {X} }}}
Hence,
{\displaystyle l_{x}=\int _{0}^{1}{\sqrt {{\cfrac {d\mathbf {X} }{ds}}\cdot {\boldsymbol {C}}\cdot {\cfrac {d\mathbf {X} }{ds}}}}~ds}
which indicates that changes in length are characterized by C.
== Finite strain tensors ==
The concept of strain is used to evaluate how much a given displacement differs locally from a rigid body displacement. One such strain measure for large deformations is the Lagrangian finite strain tensor, also called the Green-Lagrangian strain tensor or Green–St-Venant strain tensor, defined as
{\displaystyle \mathbf {E} ={\frac {1}{2}}(\mathbf {C} -\mathbf {I} )\qquad {\text{or}}\qquad E_{KL}={\frac {1}{2}}\left({\frac {\partial x_{j}}{\partial X_{K}}}{\frac {\partial x_{j}}{\partial X_{L}}}-\delta _{KL}\right)}
or as a function of the displacement gradient tensor
{\displaystyle \mathbf {E} ={\frac {1}{2}}\left[(\nabla _{\mathbf {X} }\mathbf {u} )^{T}+\nabla _{\mathbf {X} }\mathbf {u} +(\nabla _{\mathbf {X} }\mathbf {u} )^{T}\cdot \nabla _{\mathbf {X} }\mathbf {u} \right]}
or
{\displaystyle E_{KL}={\frac {1}{2}}\left({\frac {\partial u_{K}}{\partial X_{L}}}+{\frac {\partial u_{L}}{\partial X_{K}}}+{\frac {\partial u_{M}}{\partial X_{K}}}{\frac {\partial u_{M}}{\partial X_{L}}}\right)}
The Green-Lagrangian strain tensor is a measure of how much C differs from I.
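A minimal numerical sketch, assuming a simple-shear F, confirming that the two expressions for E agree:

```python
import numpy as np

F = np.array([[1.0, 0.3, 0.0],       # assumed simple shear
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

E = 0.5 * (F.T @ F - np.eye(3))      # E = (C - I)/2

H = F - np.eye(3)                    # displacement gradient grad_X u
E_alt = 0.5 * (H.T + H + H.T @ H)    # same tensor via the displacement gradient
print(np.allclose(E, E_alt))         # True
```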
The Eulerian finite strain tensor, or Eulerian-Almansi finite strain tensor, referenced to the deformed configuration (i.e. Eulerian description) is defined as
{\displaystyle \mathbf {e} ={\frac {1}{2}}(\mathbf {I} -\mathbf {c} )={\frac {1}{2}}(\mathbf {I} -\mathbf {B} ^{-1})\qquad {\text{or}}\qquad e_{rs}={\frac {1}{2}}\left(\delta _{rs}-{\frac {\partial X_{M}}{\partial x_{r}}}{\frac {\partial X_{M}}{\partial x_{s}}}\right)}
or as a function of the displacement gradients we have
{\displaystyle e_{ij}={\frac {1}{2}}\left({\frac {\partial u_{i}}{\partial x_{j}}}+{\frac {\partial u_{j}}{\partial x_{i}}}-{\frac {\partial u_{k}}{\partial x_{i}}}{\frac {\partial u_{k}}{\partial x_{j}}}\right)}
=== Seth–Hill family of generalized strain tensors ===
B. R. Seth from the Indian Institute of Technology Kharagpur was the first to show that the Green and Almansi strain tensors are special cases of a more general strain measure. The idea was further expanded upon by Rodney Hill in 1968. The Seth–Hill family of strain measures (also called Doyle-Ericksen tensors) can be expressed as
{\displaystyle \mathbf {E} _{(m)}={\frac {1}{2m}}(\mathbf {U} ^{2m}-\mathbf {I} )={\frac {1}{2m}}\left[\mathbf {C} ^{m}-\mathbf {I} \right]}
For different values of m we have:
Green-Lagrangian strain tensor
{\displaystyle \mathbf {E} _{(1)}={\frac {1}{2}}(\mathbf {U} ^{2}-\mathbf {I} )={\frac {1}{2}}(\mathbf {C} -\mathbf {I} )}
Biot strain tensor
{\displaystyle \mathbf {E} _{(1/2)}=(\mathbf {U} -\mathbf {I} )=\mathbf {C} ^{1/2}-\mathbf {I} }
Logarithmic strain, Natural strain, True strain, or Hencky strain
{\displaystyle \mathbf {E} _{(0)}=\ln \mathbf {U} ={\frac {1}{2}}\,\ln \mathbf {C} }
Almansi strain
{\displaystyle \mathbf {E} _{(-1)}={\frac {1}{2}}\left[\mathbf {I} -\mathbf {U} ^{-2}\right]}
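The whole family can be evaluated through the eigendecomposition of C; the sketch below (an assumed diagonal F for readability) treats m = 0 as the logarithmic limit:

```python
import numpy as np

def seth_hill(F, m):
    """E_(m) = (U^{2m} - I)/(2m); the limit m -> 0 is the Hencky strain ln U."""
    lam_sq, N = np.linalg.eigh(F.T @ F)     # eigenpairs of C; stretches are sqrt
    lam = np.sqrt(lam_sq)
    e = np.log(lam) if m == 0 else (lam**(2 * m) - 1.0) / (2.0 * m)
    return (N * e) @ N.T                    # sum_i e_i N_i (x) N_i

F = np.diag([1.2, 1.0, 0.8])                # assumed pure stretch
for m in (1, 0.5, 0, -1):                   # Green, Biot, Hencky, Almansi
    print(m, np.round(np.diag(seth_hill(F, m)), 4))
```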
The second-order approximation of these tensors is
{\displaystyle \mathbf {E} _{(m)}={\boldsymbol {\varepsilon }}+{\tfrac {1}{2}}(\nabla \mathbf {u} )^{T}\cdot \nabla \mathbf {u} -(1-m){\boldsymbol {\varepsilon }}^{T}\cdot {\boldsymbol {\varepsilon }}}
where ε is the infinitesimal strain tensor.
Many other different definitions of tensors E are admissible, provided that they all satisfy the conditions that:
E vanishes for all rigid-body motions
the dependence of E on the displacement gradient tensor ∇u is continuous, continuously differentiable and monotonic
it is also desired that E reduces to the infinitesimal strain tensor ε as the norm |∇u| → 0
An example is the set of tensors
{\displaystyle \mathbf {E} ^{(n)}=\left({\mathbf {U} }^{n}-{\mathbf {U} }^{-n}\right)/2n}
which do not belong to the Seth–Hill class, but have the same 2nd-order approximation as the Seth–Hill measures at m = 0 for any value of n.
=== Physical interpretation of the finite strain tensor ===
The diagonal components EKL of the Lagrangian finite strain tensor are related to the normal strain, e.g.
{\displaystyle E_{11}=e_{(\mathbf {I} _{1})}+{\frac {1}{2}}e_{(\mathbf {I} _{1})}^{2}}
where e(I1) is the normal strain or engineering strain in the direction I1.
The off-diagonal components EKL of the Lagrangian finite strain tensor are related to shear strain, e.g.
{\displaystyle E_{12}={\frac {1}{2}}{\sqrt {2E_{11}+1}}{\sqrt {2E_{22}+1}}\sin \phi _{12}}
where φ12 is the change in the angle between two line elements that were originally perpendicular with directions I1 and I2, respectively.
Under certain circumstances, i.e. small displacements and small displacement rates, the components of the Lagrangian finite strain tensor may be approximated by the components of the infinitesimal strain tensor.
== Compatibility conditions ==
The problem of compatibility in continuum mechanics involves the determination of allowable single-valued continuous fields on bodies. These allowable conditions leave the body without unphysical gaps or overlaps after a deformation. Most such conditions apply to simply-connected bodies. Additional conditions are required for the internal boundaries of multiply connected bodies.
=== Compatibility of the deformation gradient ===
The necessary and sufficient conditions for the existence of a compatible F field over a simply connected body are
{\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {F}}={\boldsymbol {0}}}
=== Compatibility of the right Cauchy–Green deformation tensor ===
The necessary and sufficient conditions for the existence of a compatible C field over a simply connected body are
{\displaystyle R_{\alpha \beta \rho }^{\gamma }:={\frac {\partial }{\partial X^{\rho }}}[\,_{(X)}\Gamma _{\alpha \beta }^{\gamma }]-{\frac {\partial }{\partial X^{\beta }}}[\,_{(X)}\Gamma _{\alpha \rho }^{\gamma }]+\,_{(X)}\Gamma _{\mu \rho }^{\gamma }\,_{(X)}\Gamma _{\alpha \beta }^{\mu }-\,_{(X)}\Gamma _{\mu \beta }^{\gamma }\,_{(X)}\Gamma _{\alpha \rho }^{\mu }=0}
We can show these are the mixed components of the Riemann–Christoffel curvature tensor. Therefore, the necessary conditions for C-compatibility are that the Riemann–Christoffel curvature of the deformation is zero.
=== Compatibility of the left Cauchy–Green deformation tensor ===
General sufficiency conditions for the left Cauchy–Green deformation tensor in three dimensions were derived by Amit Acharya. Compatibility conditions for two-dimensional B fields were found by Janet Blume.
== See also ==
Infinitesimal strain
Compatibility (mechanics)
Curvilinear coordinates
Piola–Kirchhoff stress tensor, the stress tensor for finite deformations.
Stress measures
Strain partitioning
== References ==
== Further reading ==
Dill, Ellis Harold (2006). Continuum Mechanics: Elasticity, Plasticity, Viscoelasticity. Germany: CRC Press. ISBN 0-8493-9779-0.
Dimitrienko, Yuriy (2011). Nonlinear Continuum Mechanics and Large Inelastic Deformations. Germany: Springer. ISBN 978-94-007-0033-8.
Hutter, Kolumban; Klaus Jöhnk (2004). Continuum Methods of Physical Modeling. Germany: Springer. ISBN 3-540-20619-1.
Lubarda, Vlado A. (2001). Elastoplasticity Theory. CRC Press. ISBN 0-8493-1138-1.
Macosko, C. W. (1994). Rheology: principles, measurement and applications. VCH Publishers. ISBN 1-56081-579-5.
Mase, George E. (1970). Continuum Mechanics. McGraw-Hill Professional. ISBN 0-07-040663-4.
Mase, G. Thomas; George E. Mase (1999). Continuum Mechanics for Engineers (Second ed.). CRC Press. ISBN 0-8493-1855-6.
Nemat-Nasser, Sia (2006). Plasticity: A Treatise on Finite Deformation of Heterogeneous Inelastic Materials. Cambridge: Cambridge University Press. ISBN 0-521-83979-3.
Rees, David (2006). Basic Engineering Plasticity – An Introduction with Engineering and Manufacturing Applications. Butterworth-Heinemann. ISBN 0-7506-8025-3.
== External links ==
Prof. Amit Acharya's notes on compatibility on iMechanica
In mechanics and materials science, strain rate is the time derivative of strain of a material. Strain rate has dimension of inverse time and SI units of inverse second, s−1 (or its multiples).
The strain rate at some point within the material measures the rate at which the distances of adjacent parcels of the material change with time in the neighborhood of that point. It comprises both the rate at which the material is expanding or shrinking (expansion rate), and also the rate at which it is being deformed by progressive shearing without changing its volume (shear rate). It is zero if these distances do not change, as happens when all particles in some region are moving with the same velocity (same speed and direction) and/or rotating with the same angular velocity, as if that part of the medium were a rigid body.
The strain rate is a concept of materials science and continuum mechanics that plays an essential role in the physics of fluids and deformable solids. In an isotropic Newtonian fluid, in particular, the viscous stress is a linear function of the rate of strain, defined by two coefficients, one relating to the expansion rate (the bulk viscosity coefficient) and one relating to the shear rate (the "ordinary" viscosity coefficient). In solids, higher strain rates can often cause normally ductile materials to fail in a brittle manner.
== Definition ==
The definition of strain rate was first introduced in 1867 by American metallurgist Jade LeCocq, who defined it as "the rate at which strain occurs. It is the time rate of change of strain." In physics the strain rate is generally defined as the derivative of the strain with respect to time. Its precise definition depends on how strain is measured.
The strain is the ratio of two lengths, so it is a dimensionless quantity (a number that does not depend on the choice of measurement units). Thus, strain rate has dimension of inverse time and units of inverse second, s−1 (or its multiples).
== Simple deformations ==
In simple contexts, a single number may suffice to describe the strain, and therefore the strain rate. For example, when a long and uniform rubber band is gradually stretched by pulling at the ends, the strain can be defined as the ratio
ϵ between the amount of stretching and the original length of the band:
{\displaystyle \epsilon (t)={\frac {L(t)-L_{0}}{L_{0}}}}
where L0 is the original length and L(t) its length at each time t. Then the strain rate will be
{\displaystyle {\dot {\epsilon }}(t)={\frac {d\epsilon }{dt}}={\frac {d}{dt}}\left({\frac {L(t)-L_{0}}{L_{0}}}\right)={\frac {1}{L_{0}}}{\frac {dL(t)}{dt}}={\frac {v(t)}{L_{0}}}}
where v(t) is the speed at which the ends are moving away from each other.
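For example (hypothetical numbers), a band of original length 0.20 m stretched at a constant end-separation speed of 0.01 m/s:

```python
L0 = 0.20            # m, original length of the band
v = 0.01             # m/s, speed at which the ends separate

strain_rate = v / L0
print(strain_rate, "1/s")   # 0.05 s^-1, constant in time for constant v
```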
The strain rate can also be expressed by a single number when the material is being subjected to parallel shear without change of volume; namely, when the deformation can be described as a set of infinitesimally thin parallel layers sliding against each other as if they were rigid sheets, in the same direction, without changing their spacing. This description fits the laminar flow of a fluid between two solid plates that slide parallel to each other (a Couette flow) or inside a circular pipe of constant cross-section (a Poiseuille flow). In those cases, the state of the material at some time t can be described by the displacement X(y, t) of each layer, measured from an arbitrary starting time, as a function of its distance y from the fixed wall. Then the strain in each layer can be expressed as the limit of the ratio between the current relative displacement X(y + d, t) − X(y, t) of a nearby layer, divided by the spacing d between the layers:
{\displaystyle \epsilon (y,t)=\lim _{d\rightarrow 0}{\frac {X(y+d,t)-X(y,t)}{d}}={\frac {\partial X}{\partial y}}(y,t)}
Therefore, the strain rate is
{\displaystyle {\dot {\epsilon }}(y,t)=\left({\frac {\partial }{\partial t}}{\frac {\partial X}{\partial y}}\right)(y,t)=\left({\frac {\partial }{\partial y}}{\frac {\partial X}{\partial t}}\right)(y,t)={\frac {\partial V}{\partial y}}(y,t)}
where V(y, t) is the current linear speed of the material at distance y from the wall.
== The strain-rate tensor ==
In more general situations, when the material is being deformed in various directions at different rates, the strain (and therefore the strain rate) around a point within a material cannot be expressed by a single number, or even by a single vector. In such cases, the rate of deformation must be expressed by a tensor, a linear map between vectors, that expresses how the relative velocity of the medium changes when one moves by a small distance away from the point in a given direction. This strain rate tensor can be defined as the time derivative of the strain tensor, or as the symmetric part of the gradient (derivative with respect to position) of the velocity of the material.
With a chosen coordinate system, the strain rate tensor can be represented by a symmetric 3×3 matrix of real numbers. The strain rate tensor typically varies with position and time within the material, and is therefore a (time-varying) tensor field. It only describes the local rate of deformation to first order; but that is generally sufficient for most purposes, even when the viscosity of the material is highly non-linear.
== Strain rate testing ==
Materials can be tested using the so-called epsilon dot (ε̇) method, which can be used to derive viscoelastic parameters through lumped parameter analysis.
== Sliding rate or shear strain rate ==
Similarly, the sliding rate, also called the deviatoric strain rate or shear strain rate, is the derivative with respect to time of the shear strain. Engineering sliding strain can be defined as the angular displacement created by an applied shear stress, τ:
{\displaystyle \gamma ={\frac {w}{l}}=\tan(\theta )}
Therefore, the unidirectional sliding strain rate can be defined as:
{\displaystyle {\dot {\gamma }}={\frac {d\gamma }{dt}}}
== See also ==
Flow velocity
Strain
Strain gauge
Stress–strain curve
Stretch ratio
== References ==
== External links ==
Bar Technology for High-Strain-Rate Material Properties
In physics and continuum mechanics, deformation is the change in the shape or size of an object. It has dimension of length with SI unit of metre (m). It is quantified as the residual displacement of particles in a non-rigid body, from an initial configuration to a final configuration, excluding the body's average translation and rotation (its rigid transformation). A configuration is a set containing the positions of all particles of the body.
A deformation can occur because of external loads, intrinsic activity (e.g. muscle contraction), body forces (such as gravity or electromagnetic forces), or changes in temperature, moisture content, or chemical reactions, etc.
In a continuous body, a deformation field results from a stress field due to applied forces or because of some changes in the conditions of the body. The relation between stress and strain (relative deformation) is expressed by constitutive equations, e.g., Hooke's law for linear elastic materials.
Deformations which cease to exist after the stress field is removed are termed as elastic deformation. In this case, the continuum completely recovers its original configuration. On the other hand, irreversible deformations may remain, and these exist even after stresses have been removed. One type of irreversible deformation is plastic deformation, which occurs in material bodies after stresses have attained a certain threshold value known as the elastic limit or yield stress, and are the result of slip, or dislocation mechanisms at the atomic level. Another type of irreversible deformation is viscous deformation, which is the irreversible part of viscoelastic deformation.
In the case of elastic deformations, the response function linking strain to the deforming stress is the compliance tensor of the material.
== Definition and formulation ==
Deformation is the change in the metric properties of a continuous body, meaning that a curve drawn in the initial body placement changes its length when displaced to a curve in the final placement. If none of the curves changes length, it is said that a rigid body displacement occurred.
It is convenient to identify a reference configuration or initial geometric state of the continuum body which all subsequent configurations are referenced from. The reference configuration need not be one the body actually will ever occupy. Often, the configuration at t = 0 is considered the reference configuration, κ0(B). The configuration at the current time t is the current configuration.
For deformation analysis, the reference configuration is identified as undeformed configuration, and the current configuration as deformed configuration. Additionally, time is not considered when analyzing deformation, thus the sequence of configurations between the undeformed and deformed configurations are of no interest.
The components Xi of the position vector X of a particle in the reference configuration, taken with respect to the reference coordinate system, are called the material or reference coordinates. On the other hand, the components xi of the position vector x of a particle in the deformed configuration, taken with respect to the spatial coordinate system of reference, are called the spatial coordinates.
There are two methods for analysing the deformation of a continuum. One description is made in terms of the material or referential coordinates, and is called the material description or Lagrangian description. A second description of deformation is made in terms of the spatial coordinates, and is called the spatial description or Eulerian description.
There is continuity during deformation of a continuum body in the sense that:
The material points forming a closed curve at any instant will always form a closed curve at any subsequent time.
The material points forming a closed surface at any instant will always form a closed surface at any subsequent time and the matter within the closed surface will always remain within.
=== Affine deformation ===
An affine deformation is a deformation that can be completely described by an affine transformation. Such a transformation is composed of a linear transformation (such as rotation, shear, extension and compression) and a rigid body translation. Affine deformations are also called homogeneous deformations.
Therefore, an affine deformation has the form
{\displaystyle \mathbf {x} (\mathbf {X} ,t)={\boldsymbol {F}}(t)\cdot \mathbf {X} +\mathbf {c} (t)}
where x is the position of a point in the deformed configuration, X is the position in a reference configuration, t is a time-like parameter, F is the linear transformer and c is the translation. In matrix form, where the components are with respect to an orthonormal basis,
{\displaystyle {\begin{bmatrix}x_{1}(X_{1},X_{2},X_{3},t)\\x_{2}(X_{1},X_{2},X_{3},t)\\x_{3}(X_{1},X_{2},X_{3},t)\end{bmatrix}}={\begin{bmatrix}F_{11}(t)&F_{12}(t)&F_{13}(t)\\F_{21}(t)&F_{22}(t)&F_{23}(t)\\F_{31}(t)&F_{32}(t)&F_{33}(t)\end{bmatrix}}{\begin{bmatrix}X_{1}\\X_{2}\\X_{3}\end{bmatrix}}+{\begin{bmatrix}c_{1}(t)\\c_{2}(t)\\c_{3}(t)\end{bmatrix}}}
The above deformation becomes non-affine or inhomogeneous if F = F(X,t) or c = c(X,t).
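A minimal sketch of an affine deformation acting on a material point, with assumed F and c:

```python
import numpy as np

F = np.array([[1.1, 0.2, 0.0],     # assumed constant (homogeneous) F(t)
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
c = np.array([0.5, 0.0, 0.0])      # assumed rigid translation

X = np.array([1.0, 2.0, 0.0])      # material point in the reference configuration
x = F @ X + c                      # x(X, t) = F(t) X + c(t)
print(x)                           # [2.0, 2.0, 0.0]
```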
=== Rigid body motion ===
A rigid body motion is a special affine deformation that does not involve any shear, extension or compression. The transformation matrix F is proper orthogonal in order to allow rotations but no reflections.
A rigid body motion can be described by
{\displaystyle \mathbf {x} (\mathbf {X} ,t)={\boldsymbol {Q}}(t)\cdot \mathbf {X} +\mathbf {c} (t)}
where
{\displaystyle {\boldsymbol {Q}}\cdot {\boldsymbol {Q}}^{T}={\boldsymbol {Q}}^{T}\cdot {\boldsymbol {Q}}={\boldsymbol {\mathit {1}}}}
In matrix form,
{\displaystyle {\begin{bmatrix}x_{1}(X_{1},X_{2},X_{3},t)\\x_{2}(X_{1},X_{2},X_{3},t)\\x_{3}(X_{1},X_{2},X_{3},t)\end{bmatrix}}={\begin{bmatrix}Q_{11}(t)&Q_{12}(t)&Q_{13}(t)\\Q_{21}(t)&Q_{22}(t)&Q_{23}(t)\\Q_{31}(t)&Q_{32}(t)&Q_{33}(t)\end{bmatrix}}{\begin{bmatrix}X_{1}\\X_{2}\\X_{3}\end{bmatrix}}+{\begin{bmatrix}c_{1}(t)\\c_{2}(t)\\c_{3}(t)\end{bmatrix}}}
== Background: displacement ==
A change in the configuration of a continuum body results in a displacement. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration κ0(B) to a current or deformed configuration κt(B) (Figure 1).
If after a displacement of the continuum there is a relative displacement between particles, a deformation has occurred. On the other hand, if after displacement of the continuum the relative displacement between particles in the current configuration is zero, then there is no deformation and a rigid-body displacement is said to have occurred.
The vector joining the positions of a particle P in the undeformed configuration and deformed configuration is called the displacement vector u(X,t) = uiei in the Lagrangian description, or U(x,t) = UJEJ in the Eulerian description.
A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. It is convenient to do the analysis of deformation or motion of a continuum body in terms of the displacement field. In general, the displacement field is expressed in terms of the material coordinates as
{\displaystyle \mathbf {u} (\mathbf {X} ,t)=\mathbf {b} (\mathbf {X} ,t)+\mathbf {x} (\mathbf {X} ,t)-\mathbf {X} \qquad {\text{or}}\qquad u_{i}=\alpha _{iJ}b_{J}+x_{i}-\alpha _{iJ}X_{J}}
or in terms of the spatial coordinates as
{\displaystyle \mathbf {U} (\mathbf {x} ,t)=\mathbf {b} (\mathbf {x} ,t)+\mathbf {x} -\mathbf {X} (\mathbf {x} ,t)\qquad {\text{or}}\qquad U_{J}=b_{J}+\alpha _{Ji}x_{i}-X_{J}}
where αJi are the direction cosines between the material and spatial coordinate systems with unit vectors EJ and ei, respectively. Thus
{\displaystyle \mathbf {E} _{J}\cdot \mathbf {e} _{i}=\alpha _{Ji}=\alpha _{iJ}}
and the relationship between ui and UJ is then given by
{\displaystyle u_{i}=\alpha _{iJ}U_{J}\qquad {\text{or}}\qquad U_{J}=\alpha _{Ji}u_{i}}
Knowing that
{\displaystyle \mathbf {e} _{i}=\alpha _{iJ}\mathbf {E} _{J}}
then
{\displaystyle \mathbf {u} (\mathbf {X} ,t)=u_{i}\mathbf {e} _{i}=u_{i}(\alpha _{iJ}\mathbf {E} _{J})=U_{J}\mathbf {E} _{J}=\mathbf {U} (\mathbf {x} ,t)}
It is common to superimpose the coordinate systems for the undeformed and deformed configurations, which results in b = 0, and the direction cosines become Kronecker deltas:
{\displaystyle \mathbf {E} _{J}\cdot \mathbf {e} _{i}=\delta _{Ji}=\delta _{iJ}}
Thus, we have
{\displaystyle \mathbf {u} (\mathbf {X} ,t)=\mathbf {x} (\mathbf {X} ,t)-\mathbf {X} \qquad {\text{or}}\qquad u_{i}=x_{i}-\delta _{iJ}X_{J}=x_{i}-X_{i}}
or in terms of the spatial coordinates as
{\displaystyle \mathbf {U} (\mathbf {x} ,t)=\mathbf {x} -\mathbf {X} (\mathbf {x} ,t)\qquad {\text{or}}\qquad U_{J}=\delta _{Ji}x_{i}-X_{J}=x_{J}-X_{J}}
=== Displacement gradient tensor ===
The partial differentiation of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor ∇Xu. Thus we have:
{\displaystyle {\begin{aligned}\mathbf {u} (\mathbf {X} ,t)&=\mathbf {x} (\mathbf {X} ,t)-\mathbf {X} \\\nabla _{\mathbf {X} }\mathbf {u} &=\nabla _{\mathbf {X} }\mathbf {x} -\mathbf {I} \\\nabla _{\mathbf {X} }\mathbf {u} &=\mathbf {F} -\mathbf {I} \end{aligned}}}
or
{\displaystyle {\begin{aligned}u_{i}&=x_{i}-\delta _{iJ}X_{J}=x_{i}-X_{i}\\{\frac {\partial u_{i}}{\partial X_{K}}}&={\frac {\partial x_{i}}{\partial X_{K}}}-\delta _{iK}\end{aligned}}}
where F is the deformation gradient tensor.
Similarly, the partial differentiation of the displacement vector with respect to the spatial coordinates yields the spatial displacement gradient tensor ∇xU. Thus we have,
{\displaystyle {\begin{aligned}\mathbf {U} (\mathbf {x} ,t)&=\mathbf {x} -\mathbf {X} (\mathbf {x} ,t)\\\nabla _{\mathbf {x} }\mathbf {U} &=\mathbf {I} -\nabla _{\mathbf {x} }\mathbf {X} \\\nabla _{\mathbf {x} }\mathbf {U} &=\mathbf {I} -\mathbf {F} ^{-1}\end{aligned}}}
or
{\displaystyle {\begin{aligned}U_{J}&=\delta _{Ji}x_{i}-X_{J}=x_{J}-X_{J}\\{\frac {\partial U_{J}}{\partial x_{k}}}&=\delta _{Jk}-{\frac {\partial X_{J}}{\partial x_{k}}}\end{aligned}}}
== Examples ==
Homogeneous (or affine) deformations are useful in elucidating the behavior of materials. Some homogeneous deformations of interest are
uniform extension
pure dilation
equibiaxial tension
simple shear
pure shear
Linear or longitudinal deformations of long objects, such as beams and fibers, are called elongation or shortening; derived quantities are the relative elongation and the stretch ratio.
Plane deformations are also of interest, particularly in the experimental context.
Volume deformation is a uniform scaling due to isotropic compression; the relative volume deformation is called volumetric strain.
=== Plane deformation ===
A plane deformation, also called plane strain, is one where the deformation is restricted to one of the planes in the reference configuration. If the deformation is restricted to the plane described by the basis vectors e1, e2, the deformation gradient has the form
{\displaystyle {\boldsymbol {F}}=F_{11}\mathbf {e} _{1}\otimes \mathbf {e} _{1}+F_{12}\mathbf {e} _{1}\otimes \mathbf {e} _{2}+F_{21}\mathbf {e} _{2}\otimes \mathbf {e} _{1}+F_{22}\mathbf {e} _{2}\otimes \mathbf {e} _{2}+\mathbf {e} _{3}\otimes \mathbf {e} _{3}}
In matrix form,
{\displaystyle {\boldsymbol {F}}={\begin{bmatrix}F_{11}&F_{12}&0\\F_{21}&F_{22}&0\\0&0&1\end{bmatrix}}}
From the polar decomposition theorem, the deformation gradient, up to a change of coordinates, can be decomposed into a stretch and a rotation. Since all the deformation is in a plane, we can write
{\displaystyle {\boldsymbol {F}}={\boldsymbol {R}}\cdot {\boldsymbol {U}}={\begin{bmatrix}\cos \theta &\sin \theta &0\\-\sin \theta &\cos \theta &0\\0&0&1\end{bmatrix}}{\begin{bmatrix}\lambda _{1}&0&0\\0&\lambda _{2}&0\\0&0&1\end{bmatrix}}}
where θ is the angle of rotation and λ1, λ2 are the principal stretches.
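The decomposition can be carried out numerically. The following numpy sketch (the shear amount γ = 0.5 is a hypothetical example) recovers R and U from F via the right Cauchy–Green tensor C = FᵀF, whose eigenvalues are the squared principal stretches.

```python
# A small numpy sketch (illustrative, not from the article): polar
# decomposition F = R.U of a plane deformation gradient, recovering the
# rotation and the principal stretches lambda_1, lambda_2.
import numpy as np

gamma = 0.5                                   # hypothetical shear amount
F = np.array([[1.0, gamma, 0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.0,   1.0]])

C = F.T @ F                                   # right Cauchy-Green tensor
w, V = np.linalg.eigh(C)                      # eigenvalues/vectors of C
U = V @ np.diag(np.sqrt(w)) @ V.T             # right stretch U = sqrt(C)
R = F @ np.linalg.inv(U)                      # rotation R = F.U^{-1}

assert np.allclose(R @ R.T, np.eye(3))        # R is orthogonal
assert np.allclose(F, R @ U)                  # F = R.U
print("principal stretches:", np.sqrt(w))     # includes the out-of-plane 1
# rotation angle in the sign convention of the matrix above:
print("theta (deg):", np.degrees(np.arctan2(R[0, 1], R[0, 0])))
```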
==== Isochoric plane deformation ====
If the deformation is isochoric (volume preserving) then det(F) = 1 and we have
{\displaystyle F_{11}F_{22}-F_{12}F_{21}=1}
Alternatively,
{\displaystyle \lambda _{1}\lambda _{2}=1}
==== Simple shear ====
A simple shear deformation is defined as an isochoric plane deformation in which there is a set of line elements with a given reference orientation that do not change length and orientation during the deformation.
If e1 is the fixed reference orientation in which line elements do not deform during the deformation, then λ1 = 1 and F·e1 = e1.
Therefore,
{\displaystyle F_{11}\mathbf {e} _{1}+F_{21}\mathbf {e} _{2}=\mathbf {e} _{1}\quad \implies \quad F_{11}=1~;~~F_{21}=0}
Since the deformation is isochoric,
{\displaystyle F_{11}F_{22}-F_{12}F_{21}=1\quad \implies \quad F_{22}=1}
Define
{\displaystyle \gamma :=F_{12}}
Then, the deformation gradient in simple shear can be expressed as
{\displaystyle {\boldsymbol {F}}={\begin{bmatrix}1&\gamma &0\\0&1&0\\0&0&1\end{bmatrix}}}
Now,
{\displaystyle {\boldsymbol {F}}\cdot \mathbf {e} _{2}=F_{12}\mathbf {e} _{1}+F_{22}\mathbf {e} _{2}=\gamma \mathbf {e} _{1}+\mathbf {e} _{2}\quad \implies \quad {\boldsymbol {F}}\cdot (\mathbf {e} _{2}\otimes \mathbf {e} _{2})=\gamma \mathbf {e} _{1}\otimes \mathbf {e} _{2}+\mathbf {e} _{2}\otimes \mathbf {e} _{2}}
Since
{\displaystyle \mathbf {e} _{i}\otimes \mathbf {e} _{i}={\boldsymbol {\mathit {1}}}}
we can also write the deformation gradient as
{\displaystyle {\boldsymbol {F}}={\boldsymbol {\mathit {1}}}+\gamma \mathbf {e} _{1}\otimes \mathbf {e} _{2}}
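A quick symbolic check (an illustrative sketch, not part of the original derivation) confirms the three properties of simple shear obtained above.

```python
# Illustrative sympy check of the simple-shear properties derived above:
# det(F) = 1 (isochoric), F.e1 = e1 (fixed line elements), and
# F = 1 + gamma * e1 (x) e2.
import sympy as sp

gamma = sp.symbols('gamma')
F = sp.Matrix([[1, gamma, 0],
               [0, 1,     0],
               [0, 0,     1]])
e1 = sp.Matrix([1, 0, 0])
e2 = sp.Matrix([0, 1, 0])

assert sp.simplify(F.det()) == 1              # volume preserving
assert F * e1 == e1                           # e1 direction is unstretched
assert F == sp.eye(3) + gamma * (e1 * e2.T)   # F = 1 + gamma e1 (x) e2
print(F * e2)                                 # gamma*e1 + e2: sheared vector
```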
== See also ==
The deformation of long elements such as beams or studs due to bending forces is known as deflection.
Euler–Bernoulli beam theory
Deformation (engineering)
Finite strain theory
Infinitesimal strain theory
Moiré pattern
Shear modulus
Shear stress
Shear strength
Strain (mechanics)
Stress (mechanics)
Stress measures
== References ==
== Further reading ==
Bazant, Zdenek P.; Cedolin, Luigi (2010). Three-Dimensional Continuum Instabilities and Effects of Finite Strain Tensor, chapter 11 in "Stability of Structures", 3rd ed. Singapore, New Jersey, London: World Scientific Publishing. ISBN 978-9814317030.
Dill, Ellis Harold (2006). Continuum Mechanics: Elasticity, Plasticity, Viscoelasticity. Germany: CRC Press. ISBN 0-8493-9779-0.
Hutter, Kolumban; Jöhnk, Klaus (2004). Continuum Methods of Physical Modeling. Germany: Springer. ISBN 3-540-20619-1.
Jirasek, M; Bazant, Z.P. (2002). Inelastic Analysis of Structures. London and New York: J. Wiley & Sons. ISBN 0471987166.
Lubarda, Vlado A. (2001). Elastoplasticity Theory. CRC Press. ISBN 0-8493-1138-1.
Macosko, C. W. (1994). Rheology: principles, measurement and applications. VCH Publishers. ISBN 1-56081-579-5.
Mase, George E. (1970). Continuum Mechanics. McGraw-Hill Professional. ISBN 0-07-040663-4.
Mase, G. Thomas; Mase, George E. (1999). Continuum Mechanics for Engineers (2nd ed.). CRC Press. ISBN 0-8493-1855-6.
Nemat-Nasser, Sia (2006). Plasticity: A Treatise on Finite Deformation of Heterogeneous Inelastic Materials. Cambridge: Cambridge University Press. ISBN 0-521-83979-3.
Prager, William (1961). Introduction to Mechanics of Continua. Boston: Ginn and Co. ISBN 0486438090.
In mathematics, a quadratic differential on a Riemann surface is a section of the symmetric square of the holomorphic cotangent bundle. If the section is holomorphic, then the quadratic differential is said to be holomorphic. The vector space of holomorphic quadratic differentials on a Riemann surface has a natural interpretation as the cotangent space to the Riemann moduli space, or Teichmüller space.
== Local form ==
Each quadratic differential on a domain {\displaystyle U} in the complex plane may be written as {\displaystyle f(z)\,dz\otimes dz}, where {\displaystyle z} is the complex variable, and {\displaystyle f} is a complex-valued function on {\displaystyle U}.
Such a "local" quadratic differential is holomorphic if and only if
f
{\displaystyle f}
is holomorphic. Given a chart
μ
{\displaystyle \mu }
for a general Riemann surface
R
{\displaystyle R}
and a quadratic differential
q
{\displaystyle q}
on
R
{\displaystyle R}
, the pull-back
(
μ
−
1
)
∗
(
q
)
{\displaystyle (\mu ^{-1})^{*}(q)}
defines a quadratic differential on a domain in the complex plane.
== Relation to abelian differentials ==
If {\displaystyle \omega } is an abelian differential on a Riemann surface, then {\displaystyle \omega \otimes \omega } is a quadratic differential.
== Singular Euclidean structure ==
A holomorphic quadratic differential {\displaystyle q} determines a Riemannian metric {\displaystyle |q|} on the complement of its zeroes. If {\displaystyle q} is defined on a domain in the complex plane and {\displaystyle q=f(z)\,dz\otimes dz}, then the associated Riemannian metric is {\displaystyle |f(z)|(dx^{2}+dy^{2})}, where {\displaystyle z=x+iy}. Since {\displaystyle f} is holomorphic, the curvature of this metric is zero. Thus, a holomorphic quadratic differential defines a flat metric on the complement of the set of {\displaystyle z} such that {\displaystyle f(z)=0}.
In mathematics, differential forms provide a unified approach to define integrands over curves, surfaces, solids, and higher-dimensional manifolds. The modern notion of differential forms was pioneered by Élie Cartan. It has many applications, especially in geometry, topology and physics.
For instance, the expression {\displaystyle f(x)\,dx} is an example of a 1-form, and can be integrated over an interval {\displaystyle [a,b]} contained in the domain of {\displaystyle f}:
{\displaystyle \int _{a}^{b}f(x)\,dx.}
Similarly, the expression {\displaystyle f(x,y,z)\,dx\wedge dy+g(x,y,z)\,dz\wedge dx+h(x,y,z)\,dy\wedge dz} is a 2-form that can be integrated over a surface {\displaystyle S}:
{\displaystyle \int _{S}\left(f(x,y,z)\,dx\wedge dy+g(x,y,z)\,dz\wedge dx+h(x,y,z)\,dy\wedge dz\right).}
The symbol {\displaystyle \wedge } denotes the exterior product, sometimes called the wedge product, of two differential forms. Likewise, a 3-form {\displaystyle f(x,y,z)\,dx\wedge dy\wedge dz} represents a volume element that can be integrated over a region of space. In general, a k-form is an object that may be integrated over a k-dimensional manifold, and is homogeneous of degree k in the coordinate differentials {\displaystyle dx,dy,\ldots .}
On an n-dimensional manifold, a top-dimensional form (n-form) is called a volume form.
The differential forms form an alternating algebra. This implies that {\displaystyle dy\wedge dx=-dx\wedge dy} and {\displaystyle dx\wedge dx=0.}
This alternating property reflects the orientation of the domain of integration.
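One concrete model (a sketch for illustration, not the article's formal definition) evaluates an elementary k-form on k tangent vectors as the determinant of the selected components; the alternating identities then follow from antisymmetry of the determinant.

```python
# Illustrative numeric sketch: dx^{i1} ^ ... ^ dx^{ik}, evaluated on tangent
# vectors v1, ..., vk, is the determinant of the k x k matrix of the selected
# components. Antisymmetry of the determinant gives dy^dx = -dx^dy, dx^dx = 0.
import numpy as np

def elementary_form(indices):
    """Return dx^{i1} ^ ... ^ dx^{ik} as a function of k vectors."""
    def form(*vectors):
        return np.linalg.det(np.array([[v[i] for i in indices]
                                       for v in vectors]))
    return form

dx_dy = elementary_form([0, 1])   # dx ^ dy (indices into coordinates)
dy_dx = elementary_form([1, 0])   # dy ^ dx
dx_dx = elementary_form([0, 0])   # dx ^ dx

v, w = np.array([1.0, 2.0]), np.array([3.0, 4.0])
assert np.isclose(dy_dx(v, w), -dx_dy(v, w))   # dy^dx = -dx^dy
assert np.isclose(dx_dx(v, w), 0.0)            # dx^dx = 0
print(dx_dy(v, w))                             # -2.0: signed area of (v, w)
```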
The exterior derivative is an operation on differential forms that, given a k-form {\displaystyle \varphi }, produces a (k+1)-form {\displaystyle d\varphi .} This operation extends the differential of a function (a function can be considered as a 0-form, and its differential is {\displaystyle df(x)=f'(x)\,dx}). This allows expressing the fundamental theorem of calculus, the divergence theorem, Green's theorem, and Stokes' theorem as special cases of a single general result, the generalized Stokes theorem.
Differential 1-forms are naturally dual to vector fields on a differentiable manifold, and the pairing between vector fields and 1-forms is extended to arbitrary differential forms by the interior product. The algebra of differential forms along with the exterior derivative defined on it is preserved by the pullback under smooth functions between two manifolds. This feature allows geometrically invariant information to be moved from one space to another via the pullback, provided that the information is expressed in terms of differential forms. As an example, the change of variables formula for integration becomes a simple statement that an integral is preserved under pullback.
== History ==
Differential forms are part of the field of differential geometry, influenced by linear algebra. Although the notion of a differential is quite old, the initial attempt at an algebraic organization of differential forms is usually credited to Élie Cartan, with reference to his 1899 paper. Some aspects of the exterior algebra of differential forms appear in Hermann Grassmann's 1844 work, Die Lineale Ausdehnungslehre, ein neuer Zweig der Mathematik (The Theory of Linear Extension, a New Branch of Mathematics).
== Concept ==
Differential forms provide an approach to multivariable calculus that is independent of coordinates.
=== Integration and orientation ===
A differential k-form can be integrated over an oriented manifold of dimension k. A differential 1-form can be thought of as measuring an infinitesimal oriented length, or 1-dimensional oriented density. A differential 2-form can be thought of as measuring an infinitesimal oriented area, or 2-dimensional oriented density. And so on.
Integration of differential forms is well-defined only on oriented manifolds. An example of a 1-dimensional manifold is an interval [a, b], and intervals can be given an orientation: they are positively oriented if a < b, and negatively oriented otherwise. If a < b then the integral of the differential 1-form f(x) dx over the interval [a, b] (with its natural positive orientation) is
{\displaystyle \int _{a}^{b}f(x)\,dx}
which is the negative of the integral of the same differential form over the same interval, when equipped with the opposite orientation. That is:
{\displaystyle \int _{b}^{a}f(x)\,dx=-\int _{a}^{b}f(x)\,dx.}
This gives a geometrical context to the conventions for one-dimensional integrals, that the sign changes when the orientation of the interval is reversed. A standard explanation of this in one-variable integration theory is that, when the limits of integration are in the opposite order (b < a), the increment dx is negative in the direction of integration.
More generally, an m-form is an oriented density that can be integrated over an m-dimensional oriented manifold. (For example, a 1-form can be integrated over an oriented curve, a 2-form can be integrated over an oriented surface, etc.) If M is an oriented m-dimensional manifold, and M′ is the same manifold with opposite orientation and ω is an m-form, then one has:
{\displaystyle \int _{M}\omega =-\int _{M'}\omega \,.}
These conventions correspond to interpreting the integrand as a differential form, integrated over a chain. In measure theory, by contrast, one interprets the integrand as a function f with respect to a measure μ and integrates over a subset A, without any notion of orientation; one writes
{\textstyle \int _{A}f\,d\mu =\int _{[a,b]}f\,d\mu }
to indicate integration over a subset A. This is a minor distinction in one dimension, but becomes subtler on higher-dimensional manifolds; see below for details.
Making the notion of an oriented density precise, and thus of a differential form, involves the exterior algebra. The differentials of a set of coordinates, dx1, ..., dxn can be used as a basis for all 1-forms. Each of these represents a covector at each point on the manifold that may be thought of as measuring a small displacement in the corresponding coordinate direction. A general 1-form is a linear combination of these differentials at every point on the manifold:
{\displaystyle f_{1}\,dx^{1}+\cdots +f_{n}\,dx^{n},}
where the fk = fk(x1, ... , xn) are functions of all the coordinates. A differential 1-form is integrated along an oriented curve as a line integral.
The expressions dxi ∧ dxj, where i < j can be used as a basis at every point on the manifold for all 2-forms. This may be thought of as an infinitesimal oriented square parallel to the xi–xj-plane. A general 2-form is a linear combination of these at every point on the manifold:
{\textstyle \sum _{1\leq i<j\leq n}f_{i,j}\,dx^{i}\wedge dx^{j}}, and it is integrated just like a surface integral.
A fundamental operation defined on differential forms is the exterior product (the symbol is the wedge ∧). This is similar to the cross product from vector calculus, in that it is an alternating product. For instance,
{\displaystyle dx^{1}\wedge dx^{2}=-dx^{2}\wedge dx^{1}}
because the square whose first side is dx1 and second side is dx2 is to be regarded as having the opposite orientation as the square whose first side is dx2 and whose second side is dx1. This is why we only need to sum over expressions dxi ∧ dxj, with i < j; for example: a(dxi ∧ dxj) + b(dxj ∧ dxi) = (a − b) dxi ∧ dxj. The exterior product allows higher-degree differential forms to be built out of lower-degree ones, in much the same way that the cross product in vector calculus allows one to compute the area vector of a parallelogram from vectors pointing up the two sides. Alternating also implies that dxi ∧ dxi = 0, in the same way that the cross product of parallel vectors, whose magnitude is the area of the parallelogram spanned by those vectors, is zero. In higher dimensions, dxi1 ∧ ⋅⋅⋅ ∧ dxim = 0 if any two of the indices i1, ..., im are equal, in the same way that the "volume" enclosed by a parallelotope whose edge vectors are linearly dependent is zero.
=== Multi-index notation ===
A common notation for the wedge product of elementary k-forms is the so-called multi-index notation: in an n-dimensional context, for {\displaystyle I=(i_{1},i_{2},\ldots ,i_{k}),\ 1\leq i_{1}<i_{2}<\cdots <i_{k}\leq n}, we define {\textstyle dx^{I}:=dx^{i_{1}}\wedge \cdots \wedge dx^{i_{k}}=\bigwedge _{i\in I}dx^{i}}. Another useful notation is obtained by defining the set of all strictly increasing multi-indices of length k, in a space of dimension n, denoted {\displaystyle {\mathcal {J}}_{k,n}:=\{I=(i_{1},\ldots ,i_{k}):1\leq i_{1}<i_{2}<\cdots <i_{k}\leq n\}}. Then locally (wherever the coordinates apply), {\displaystyle \{dx^{I}\}_{I\in {\mathcal {J}}_{k,n}}} spans the space of differential k-forms in a manifold M of dimension n, when viewed as a module over the ring C∞(M) of smooth functions on M. By calculating the size of {\displaystyle {\mathcal {J}}_{k,n}} combinatorially, the module of k-forms on an n-dimensional manifold, and in general the space of k-covectors on an n-dimensional vector space, has dimension n choose k: {\textstyle |{\mathcal {J}}_{k,n}|={\binom {n}{k}}}. This also demonstrates that there are no nonzero differential forms of degree greater than the dimension of the underlying manifold.
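The count can be spot-checked in a few lines of Python (an illustrative sketch; n = 5, k = 3 is an arbitrary choice).

```python
# Illustrative check that the strictly increasing multi-indices J_{k,n}
# number exactly n-choose-k, the dimension of the space of k-covectors.
import itertools, math

n, k = 5, 3
J_kn = list(itertools.combinations(range(1, n + 1), k))  # increasing indices
print(len(J_kn), math.comb(n, k))   # 10 10
assert len(J_kn) == math.comb(n, k)
```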
=== The exterior derivative ===
In addition to the exterior product, there is also the exterior derivative operator d. The exterior derivative of a differential form is a generalization of the differential of a function, in the sense that the exterior derivative of f ∈ C∞(M) = Ω0(M) is exactly the differential of f. When generalized to higher forms, if ω = f dxI is a simple k-form, then its exterior derivative dω is a (k + 1)-form defined by taking the differential of the coefficient functions:
{\displaystyle d\omega =\sum _{i=1}^{n}{\frac {\partial f}{\partial x^{i}}}\,dx^{i}\wedge dx^{I}.}
with extension to general k-forms through linearity: if
{\textstyle \tau =\sum _{I\in {\mathcal {J}}_{k,n}}a_{I}\,dx^{I}\in \Omega ^{k}(M)}, then its exterior derivative is
{\displaystyle d\tau =\sum _{I\in {\mathcal {J}}_{k,n}}\left(\sum _{j=1}^{n}{\frac {\partial a_{I}}{\partial x^{j}}}\,dx^{j}\right)\wedge dx^{I}\in \Omega ^{k+1}(M)}
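The coordinate formula lends itself to a direct implementation. The sketch below (illustrative, not library code) represents a k-form as a dictionary from increasing index tuples to sympy coefficients, applies the formula above, and confirms the fundamental identity d(dτ) = 0 on an example.

```python
# Illustrative sympy sketch of the coordinate formula above: a k-form is a
# dict {increasing index tuple: coefficient}; d adds one differential per
# coordinate, re-sorting indices with the sign of the permutation.
import sympy as sp

x = sp.symbols('x0 x1 x2')

def sort_with_sign(idx):
    """Sort an index tuple by adjacent swaps, tracking the sign."""
    idx, sign = list(idx), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return tuple(idx), sign

def d(form):
    """Exterior derivative: d(a_I dx^I) = sum_j (da_I/dx^j) dx^j ^ dx^I."""
    out = {}
    for I, a in form.items():
        for j in range(len(x)):
            if j in I:
                continue                    # dx^j ^ dx^j = 0
            J, sign = sort_with_sign((j,) + I)
            out[J] = out.get(J, 0) + sign * sp.diff(a, x[j])
    return {I: sp.simplify(a) for I, a in out.items() if sp.simplify(a) != 0}

alpha = {(0,): x[1], (1,): -x[0]}           # alpha = x1 dx0 - x0 dx1
print(d(alpha))                             # {(0, 1): -2}
assert d(d(alpha)) == {}                    # d o d = 0
```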
In R3, with the Hodge star operator, the exterior derivative corresponds to gradient, curl, and divergence, although this correspondence, like the cross product, does not generalize to higher dimensions, and should be treated with some caution.
The exterior derivative itself applies in an arbitrary finite number of dimensions, and is a flexible and powerful tool with wide application in differential geometry, differential topology, and many areas in physics. Of note, although the above definition of the exterior derivative was defined with respect to local coordinates, it can be defined in an entirely coordinate-free manner, as an antiderivation of degree 1 on the exterior algebra of differential forms. The benefit of this more general approach is that it allows for a natural coordinate-free approach to integrate on manifolds. It also allows for a natural generalization of the fundamental theorem of calculus, called the (generalized) Stokes' theorem, which is a central result in the theory of integration on manifolds.
=== Differential calculus ===
Let U be an open set in Rn. A differential 0-form ("zero-form") is defined to be a smooth function f on U – the set of which is denoted C∞(U). If v is any vector in Rn, then f has a directional derivative ∂v f, which is another function on U whose value at a point p ∈ U is the rate of change (at p) of f in the v direction:
{\displaystyle (\partial _{\mathbf {v} }f)(p)=\left.{\frac {d}{dt}}f(p+t\mathbf {v} )\right|_{t=0}.}
(This notion can be extended pointwise to the case that v is a vector field on U by evaluating v at the point p in the definition.)
In particular, if v = ej is the jth coordinate vector then ∂v f is the partial derivative of f with respect to the jth coordinate vector, i.e., ∂f / ∂xj, where x1, x2, ..., xn are the coordinate vectors in U. By their very definition, partial derivatives depend upon the choice of coordinates: if new coordinates y1, y2, ..., yn are introduced, then
{\displaystyle {\frac {\partial f}{\partial x^{j}}}=\sum _{i=1}^{n}{\frac {\partial y^{i}}{\partial x^{j}}}{\frac {\partial f}{\partial y^{i}}}.}
The first idea leading to differential forms is the observation that ∂v f (p) is a linear function of v:
{\displaystyle {\begin{aligned}(\partial _{\mathbf {v} +\mathbf {w} }f)(p)&=(\partial _{\mathbf {v} }f)(p)+(\partial _{\mathbf {w} }f)(p)\\(\partial _{c\mathbf {v} }f)(p)&=c(\partial _{\mathbf {v} }f)(p)\end{aligned}}}
for any vectors v, w and any real number c. At each point p, this linear map from Rn to R is denoted dfp and called the derivative or differential of f at p. Thus dfp(v) = ∂v f (p). Extended over the whole set, the object df can be viewed as a function that takes a vector field on U, and returns a real-valued function whose value at each point is the derivative along the vector field of the function f. Note that at each p, the differential dfp is not a real number, but a linear functional on tangent vectors, and a prototypical example of a differential 1-form.
Since any vector v is a linear combination Σ vjej of its components, df is uniquely determined by dfp(ej) for each j and each p ∈ U, which are just the partial derivatives of f on U. Thus df provides a way of encoding the partial derivatives of f. It can be decoded by noticing that the coordinates x1, x2, ..., xn are themselves functions on U, and so define differential 1-forms dx1, dx2, ..., dxn. Let f = xi. Since ∂xi / ∂xj = δij, the Kronecker delta function, it follows that
{\displaystyle df=\sum _{i=1}^{n}{\frac {\partial f}{\partial x^{i}}}\,dx^{i}.\qquad {\text{(*)}}}
The meaning of this expression is given by evaluating both sides at an arbitrary point p: on the right hand side, the sum is defined "pointwise", so that
{\displaystyle df_{p}=\sum _{i=1}^{n}{\frac {\partial f}{\partial x^{i}}}(p)(dx^{i})_{p}.}
Applying both sides to ej, the result on each side is the jth partial derivative of f at p. Since p and j were arbitrary, this proves the formula (*).
More generally, for any smooth functions gi and hi on U, we define the differential 1-form α = Σi gi dhi pointwise by
{\displaystyle \alpha _{p}=\sum _{i}g_{i}(p)(dh_{i})_{p}}
for each p ∈ U. Any differential 1-form arises this way, and by using (*) it follows that any differential 1-form α on U may be expressed in coordinates as
{\displaystyle \alpha =\sum _{i=1}^{n}f_{i}\,dx^{i}}
for some smooth functions fi on U.
The second idea leading to differential forms arises from the following question: given a differential 1-form α on U, when does there exist a function f on U such that α = df? The above expansion reduces this question to the search for a function f whose partial derivatives ∂f / ∂xi are equal to n given functions fi. For n > 1, such a function does not always exist: any smooth function f satisfies
{\displaystyle {\frac {\partial ^{2}f}{\partial x^{i}\,\partial x^{j}}}={\frac {\partial ^{2}f}{\partial x^{j}\,\partial x^{i}}},}
so it will be impossible to find such an f unless
{\displaystyle {\frac {\partial f_{j}}{\partial x^{i}}}-{\frac {\partial f_{i}}{\partial x^{j}}}=0}
for all i and j.
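For a concrete illustration (a hedged sketch with hypothetical example forms), the 1-form 2xy dx + x² dy passes this test and equals d(x²y), while −y dx + x dy fails it.

```python
# Illustrative sympy check of the integrability condition for
# alpha = f1 dx + f2 dy to be df: d(f2)/dx - d(f1)/dy must vanish.
import sympy as sp

x, y = sp.symbols('x y')

def condition(f1, f2):
    """Necessary condition for (f1 dx + f2 dy) = df."""
    return sp.simplify(sp.diff(f2, x) - sp.diff(f1, y)) == 0

print(condition(2*x*y, x**2))    # True:  alpha = d(x**2 * y)
print(condition(-y, x))          # False: d(beta) = 2 dx^dy != 0
```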
The skew-symmetry of the left hand side in i and j suggests introducing an antisymmetric product ∧ on differential 1-forms, the exterior product, so that these equations can be combined into a single condition
{\displaystyle \sum _{i,j=1}^{n}{\frac {\partial f_{j}}{\partial x^{i}}}\,dx^{i}\wedge dx^{j}=0,}
where ∧ is defined so that:
{\displaystyle dx^{i}\wedge dx^{j}=-dx^{j}\wedge dx^{i}.}
This is an example of a differential 2-form. This 2-form is called the exterior derivative dα of {\textstyle \alpha =\sum _{j=1}^{n}f_{j}\,dx^{j}}. It is given by
{\displaystyle d\alpha =\sum _{j=1}^{n}df_{j}\wedge dx^{j}=\sum _{i,j=1}^{n}{\frac {\partial f_{j}}{\partial x^{i}}}\,dx^{i}\wedge dx^{j}.}
To summarize: dα = 0 is a necessary condition for the existence of a function f with α = df.
Differential 0-forms, 1-forms, and 2-forms are special cases of differential forms. For each k, there is a space of differential k-forms, which can be expressed in terms of the coordinates as
{\displaystyle \sum _{i_{1},i_{2}\ldots i_{k}=1}^{n}f_{i_{1}i_{2}\ldots i_{k}}\,dx^{i_{1}}\wedge dx^{i_{2}}\wedge \cdots \wedge dx^{i_{k}}}
for a collection of functions fi1i2⋅⋅⋅ik. Antisymmetry, which was already present for 2-forms, makes it possible to restrict the sum to those sets of indices for which i1 < i2 < ... < ik−1 < ik.
Differential forms can be multiplied together using the exterior product, and for any differential k-form α, there is a differential (k + 1)-form dα called the exterior derivative of α.
Differential forms, the exterior product and the exterior derivative are independent of a choice of coordinates. Consequently, they may be defined on any smooth manifold M. One way to do this is cover M with coordinate charts and define a differential k-form on M to be a family of differential k-forms on each chart which agree on the overlaps. However, there are more intrinsic definitions which make the independence of coordinates manifest.
== Intrinsic definitions ==
Let M be a smooth manifold. A smooth differential form of degree k is a smooth section of the kth exterior power of the cotangent bundle of M. The set of all differential k-forms on a manifold M is a vector space, often denoted
{\displaystyle \Omega ^{k}(M)}.
The definition of a differential form may be restated as follows. At any point
{\displaystyle p\in M}, a k-form {\displaystyle \beta } defines an element {\displaystyle \beta _{p}\in {\textstyle \bigwedge }^{k}T_{p}^{*}M,} where {\displaystyle T_{p}M} is the tangent space to M at p and {\displaystyle T_{p}^{*}(M)} is its dual space. This space is naturally isomorphic to the fiber at p of the dual bundle of the kth exterior power of the tangent bundle of M. That is, {\displaystyle \beta } is also a linear functional {\textstyle \beta _{p}\colon {\textstyle \bigwedge }^{k}T_{p}M\to \mathbf {R} }, i.e. the dual of the kth exterior power is isomorphic to the kth exterior power of the dual:
{\displaystyle {\textstyle \bigwedge }^{k}T_{p}^{*}M\cong {\Big (}{\textstyle \bigwedge }^{k}T_{p}M{\Big )}^{*}}
By the universal property of exterior powers, this is equivalently an alternating multilinear map:
{\displaystyle \beta _{p}\colon \bigoplus _{n=1}^{k}T_{p}M\to \mathbf {R} .}
Consequently, a differential k-form may be evaluated against any k-tuple of tangent vectors to the same point p of M. For example, a differential 1-form α assigns to each point
{\displaystyle p\in M} a linear functional αp on {\displaystyle T_{p}M}. In the presence of an inner product on {\displaystyle T_{p}M} (induced by a Riemannian metric on M), αp may be represented as the inner product with a tangent vector {\displaystyle X_{p}}. Differential 1-forms are sometimes called covariant vector fields, covector fields, or "dual vector fields", particularly within physics.
The exterior algebra may be embedded in the tensor algebra by means of the alternation map. The alternation map is defined as a mapping
{\displaystyle \operatorname {Alt} \colon {\bigotimes }^{k}T^{*}M\to {\bigotimes }^{k}T^{*}M.}
For a tensor {\displaystyle \tau } at a point p,
{\displaystyle \operatorname {Alt} (\tau _{p})(x_{1},\dots ,x_{k})={\frac {1}{k!}}\sum _{\sigma \in S_{k}}\operatorname {sgn}(\sigma )\tau _{p}(x_{\sigma (1)},\dots ,x_{\sigma (k)}),}
where Sk is the symmetric group on k elements. The alternation map is constant on the cosets of the ideal in the tensor algebra generated by the symmetric 2-forms, and therefore descends to an embedding
{\displaystyle \operatorname {Alt} \colon {\textstyle \bigwedge }^{k}T^{*}M\to {\bigotimes }^{k}T^{*}M.}
This map exhibits {\displaystyle \beta } as a totally antisymmetric covariant tensor field of rank k. The differential forms on M are in one-to-one correspondence with such tensor fields.
== Operations ==
As well as the addition and multiplication by scalar operations which arise from the vector space structure, there are several other standard operations defined on differential forms. The most important operations are the exterior product of two differential forms, the exterior derivative of a single differential form, the interior product of a differential form and a vector field, the Lie derivative of a differential form with respect to a vector field and the covariant derivative of a differential form with respect to a vector field on a manifold with a defined connection.
=== Exterior product ===
The exterior product of a k-form α and an ℓ-form β, denoted α ∧ β, is a (k + ℓ)-form. At each point p of the manifold M, the forms α and β are elements of an exterior power of the cotangent space at p. When the exterior algebra is viewed as a quotient of the tensor algebra, the exterior product corresponds to the tensor product (modulo the equivalence relation defining the exterior algebra).
The antisymmetry inherent in the exterior algebra means that when α ∧ β is viewed as a multilinear functional, it is alternating. However, when the exterior algebra is embedded as a subspace of the tensor algebra by means of the alternation map, the tensor product α ⊗ β is not alternating. There is an explicit formula which describes the exterior product in this situation. The exterior product is
{\displaystyle \alpha \wedge \beta =\operatorname {Alt} (\alpha \otimes \beta ).}
If the embedding of {\displaystyle {\textstyle \bigwedge }^{n}T^{*}M} into {\displaystyle {\bigotimes }^{n}T^{*}M} is done via the map {\displaystyle n!\operatorname {Alt} } instead of {\displaystyle \operatorname {Alt} }, the exterior product is
{\displaystyle \alpha \wedge \beta ={\frac {(k+\ell )!}{k!\ell !}}\operatorname {Alt} (\alpha \otimes \beta ).}
This description is useful for explicit computations. For example, if k = ℓ = 1, then α ∧ β is the 2-form whose value at a point p is the alternating bilinear form defined by
{\displaystyle (\alpha \wedge \beta )_{p}(v,w)=\alpha _{p}(v)\beta _{p}(w)-\alpha _{p}(w)\beta _{p}(v)}
for v, w ∈ TpM.
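Numerically this is immediate (an illustrative sketch with arbitrary component values).

```python
# Illustrative numeric check of the formula above: for 1-forms alpha, beta
# and tangent vectors v, w,
# (alpha ^ beta)(v, w) = alpha(v)*beta(w) - alpha(w)*beta(v).
import numpy as np

alpha = np.array([1.0, 2.0, 0.0])      # components of a 1-form at p
beta  = np.array([0.0, 1.0, 3.0])
v     = np.array([1.0, 0.0, 2.0])
w     = np.array([0.0, 1.0, 1.0])

wedge_vw = (alpha @ v) * (beta @ w) - (alpha @ w) * (beta @ v)
print(wedge_vw)                        # -8.0: value of (alpha ^ beta)_p(v, w)
# The result is alternating: swapping v and w flips the sign.
assert np.isclose(wedge_vw,
                  -((alpha @ w) * (beta @ v) - (alpha @ v) * (beta @ w)))
```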
The exterior product is bilinear: If α, β, and γ are any differential forms, and if f is any smooth function, then
{\displaystyle \alpha \wedge (\beta +\gamma )=\alpha \wedge \beta +\alpha \wedge \gamma ,}
{\displaystyle \alpha \wedge (f\cdot \beta )=f\cdot (\alpha \wedge \beta ).}
It is skew commutative (also known as graded commutative), meaning that it satisfies a variant of anticommutativity that depends on the degrees of the forms: if α is a k-form and β is an ℓ-form, then
{\displaystyle \alpha \wedge \beta =(-1)^{k\ell }\beta \wedge \alpha .}
One also has the graded Leibniz rule:
{\displaystyle d(\alpha \wedge \beta )=d\alpha \wedge \beta +(-1)^{k}\alpha \wedge d\beta .}
=== Riemannian manifold ===
On a Riemannian manifold, or more generally a pseudo-Riemannian manifold, the metric defines a fibre-wise isomorphism of the tangent and cotangent bundles. This makes it possible to convert vector fields to covector fields and vice versa. It also enables the definition of additional operations such as the Hodge star operator
{\displaystyle \star \colon \Omega ^{k}(M)\ {\stackrel {\sim }{\to }}\ \Omega ^{n-k}(M)} and the codifferential {\displaystyle \delta \colon \Omega ^{k}(M)\rightarrow \Omega ^{k-1}(M)}, which has degree −1 and is adjoint to the exterior differential d.
==== Vector field structures ====
On a pseudo-Riemannian manifold, 1-forms can be identified with vector fields; vector fields have additional distinct algebraic structures, which are listed here for context and to avoid confusion.
Firstly, each (co)tangent space generates a Clifford algebra, where the product of a (co)vector with itself is given by the value of a quadratic form – in this case, the natural one induced by the metric. This algebra is distinct from the exterior algebra of differential forms, which can be viewed as a Clifford algebra where the quadratic form vanishes (since the exterior product of any vector with itself is zero). Clifford algebras are thus non-anticommutative ("quantum") deformations of the exterior algebra. They are studied in geometric algebra.
Another alternative is to consider vector fields as derivations. The (noncommutative) algebra of differential operators they generate is the Weyl algebra and is a noncommutative ("quantum") deformation of the symmetric algebra in the vector fields.
=== Exterior differential complex ===
One important property of the exterior derivative is that d2 = 0. This means that the exterior derivative defines a cochain complex:
{\displaystyle 0\ \to \ \Omega ^{0}(M)\ {\stackrel {d}{\to }}\ \Omega ^{1}(M)\ {\stackrel {d}{\to }}\ \Omega ^{2}(M)\ {\stackrel {d}{\to }}\ \Omega ^{3}(M)\ \to \ \cdots \ \to \ \Omega ^{n}(M)\ \to \ 0.}
This complex is called the de Rham complex, and its cohomology is by definition the de Rham cohomology of M. By the Poincaré lemma, the de Rham complex is locally exact except at Ω0(M). The kernel at Ω0(M) is the space of locally constant functions on M. Therefore, the complex is a resolution of the constant sheaf R, which in turn implies a form of de Rham's theorem: de Rham cohomology computes the sheaf cohomology of R.
== Pullback ==
Suppose that f : M → N is smooth. The differential of f is a smooth map df : TM → TN between the tangent bundles of M and N. This map is also denoted f∗ and called the pushforward. For any point p ∈ M and any tangent vector v ∈ TpM, there is a well-defined pushforward vector f∗(v) in Tf(p)N. However, the same is not true of a vector field. If f is not injective, say because q ∈ N has two or more preimages, then the vector field may determine two or more distinct vectors in TqN. If f is not surjective, then there will be a point q ∈ N at which f∗ does not determine any tangent vector at all. Since a vector field on N determines, by definition, a unique tangent vector at every point of N, the pushforward of a vector field does not always exist.
By contrast, it is always possible to pull back a differential form. A differential form on N may be viewed as a linear functional on each tangent space. Precomposing this functional with the differential df : TM → TN defines a linear functional on each tangent space of M and therefore a differential form on M. The existence of pullbacks is one of the key features of the theory of differential forms. It leads to the existence of pullback maps in other situations, such as pullback homomorphisms in de Rham cohomology.
Formally, let f : M → N be smooth, and let ω be a smooth k-form on N. Then there is a differential form f∗ω on M, called the pullback of ω, which captures the behavior of ω as seen relative to f. To define the pullback, fix a point p of M and tangent vectors v1, ..., vk to M at p. The pullback of ω is defined by the formula
{\displaystyle (f^{*}\omega )_{p}(v_{1},\ldots ,v_{k})=\omega _{f(p)}(f_{*}v_{1},\ldots ,f_{*}v_{k}).}
There are several more abstract ways to view this definition. If ω is a 1-form on N, then it may be viewed as a section of the cotangent bundle T∗N of N. Using ∗ to denote a dual map, the dual to the differential of f is (df)∗ : T∗N → T∗M. The pullback of ω may be defined to be the composite
{\displaystyle M\ {\stackrel {f}{\to }}\ N\ {\stackrel {\omega }{\to }}\ T^{*}N\ {\stackrel {(df)^{*}}{\longrightarrow }}\ T^{*}M.}
This is a section of the cotangent bundle of M and hence a differential 1-form on M. In full generality, let
{\textstyle \bigwedge ^{k}(df)^{*}} denote the kth exterior power of the dual map to the differential. Then the pullback of a k-form ω is the composite
{\displaystyle M\ {\stackrel {f}{\to }}\ N\ {\stackrel {\omega }{\to }}\ {\textstyle \bigwedge }^{k}T^{*}N\ {\stackrel {{\bigwedge }^{k}(df)^{*}}{\longrightarrow }}\ {\textstyle \bigwedge }^{k}T^{*}M.}
Another abstract way to view the pullback comes from viewing a k-form ω as a linear functional on tangent spaces. From this point of view, ω is a morphism of vector bundles
{\displaystyle {\textstyle \bigwedge }^{k}TN\ {\stackrel {\omega }{\to }}\ N\times \mathbf {R} ,}
where N × R is the trivial rank one bundle on N. The composite map
{\displaystyle {\textstyle \bigwedge }^{k}TM\ {\stackrel {{\bigwedge }^{k}df}{\longrightarrow }}\ {\textstyle \bigwedge }^{k}TN\ {\stackrel {\omega }{\to }}\ N\times \mathbf {R} }
defines a linear functional on each tangent space of M, and therefore it factors through the trivial bundle M × R. The vector bundle morphism
{\textstyle {\textstyle \bigwedge }^{k}TM\to M\times \mathbf {R} } defined in this way is f∗ω.
Pullback respects all of the basic operations on forms. If ω and η are forms and c is a real number, then
{\displaystyle {\begin{aligned}f^{*}(c\omega )&=c(f^{*}\omega ),\\f^{*}(\omega +\eta )&=f^{*}\omega +f^{*}\eta ,\\f^{*}(\omega \wedge \eta )&=f^{*}\omega \wedge f^{*}\eta ,\\f^{*}(d\omega )&=d(f^{*}\omega ).\end{aligned}}}
The pullback of a form can also be written in coordinates. Assume that x1, ..., xm are coordinates on M, that y1, ..., yn are coordinates on N, and that these coordinate systems are related by the formulas yi = fi(x1, ..., xm) for all i. Locally on N, ω can be written as
{\displaystyle \omega =\sum _{i_{1}<\cdots <i_{k}}\omega _{i_{1}\cdots i_{k}}\,dy^{i_{1}}\wedge \cdots \wedge dy^{i_{k}},}
where, for each choice of i1, ..., ik, ωi1⋅⋅⋅ik is a real-valued function of y1, ..., yn. Using the linearity of pullback and its compatibility with exterior product, the pullback of ω has the formula
{\displaystyle f^{*}\omega =\sum _{i_{1}<\cdots <i_{k}}(\omega _{i_{1}\cdots i_{k}}\circ f)\,df_{i_{1}}\wedge \cdots \wedge df_{i_{k}}.}
Each exterior derivative dfi can be expanded in terms of dx1, ..., dxm. The resulting k-form can be written using Jacobian matrices:
{\displaystyle f^{*}\omega =\sum _{i_{1}<\cdots <i_{k}}\sum _{j_{1}<\cdots <j_{k}}(\omega _{i_{1}\cdots i_{k}}\circ f){\frac {\partial (f_{i_{1}},\ldots ,f_{i_{k}})}{\partial (x^{j_{1}},\ldots ,x^{j_{k}})}}\,dx^{j_{1}}\wedge \cdots \wedge dx^{j_{k}}.}
Here, {\textstyle {\frac {\partial (f_{i_{1}},\ldots ,f_{i_{k}})}{\partial (x^{j_{1}},\ldots ,x^{j_{k}})}}} denotes the determinant of the matrix whose entries are {\textstyle {\frac {\partial f_{i_{m}}}{\partial x^{j_{n}}}}}, {\displaystyle 1\leq m,n\leq k}.
== Integration ==
A differential k-form can be integrated over an oriented k-dimensional manifold. When the k-form is defined on an n-dimensional manifold with n > k, then the k-form can be integrated over oriented k-dimensional submanifolds. If k = 0, integration over oriented 0-dimensional submanifolds is just the summation of the integrand evaluated at points, according to the orientation of those points. Other values of k = 1, 2, 3, ... correspond to line integrals, surface integrals, volume integrals, and so on. There are several equivalent ways to formally define the integral of a differential form, all of which depend on reducing to the case of Euclidean space.
=== Integration on Euclidean space ===
Let U be an open subset of Rn. Give Rn its standard orientation and U the restriction of that orientation. Every smooth n-form ω on U has the form
{\displaystyle \omega =f(x)\,dx^{1}\wedge \cdots \wedge dx^{n}}
for some smooth function f : Rn → R. Such a function has an integral in the usual Riemann or Lebesgue sense. This allows us to define the integral of ω to be the integral of f:
{\displaystyle \int _{U}\omega \ {\stackrel {\text{def}}{=}}\int _{U}f(x)\,dx^{1}\cdots dx^{n}.}
Fixing an orientation is necessary for this to be well-defined. The skew-symmetry of differential forms means that the integral of, say, dx1 ∧ dx2 must be the negative of the integral of dx2 ∧ dx1. Riemann and Lebesgue integrals cannot see this dependence on the ordering of the coordinates, so they leave the sign of the integral undetermined. The orientation resolves this ambiguity.
=== Integration over chains ===
Let M be an n-manifold and ω an n-form on M. First, assume that there is a parametrization of M by an open subset of Euclidean space. That is, assume that there exists a diffeomorphism
{\displaystyle \varphi \colon D\to M}
where D ⊆ Rn. Give M the orientation induced by φ. Then (Rudin 1976) defines the integral of ω over M to be the integral of φ∗ω over D. In coordinates, this has the following expression. Fix an embedding of M in RI with coordinates x1, ..., xI. Then
{\displaystyle \omega =\sum _{i_{1}<\cdots <i_{n}}a_{i_{1},\ldots ,i_{n}}({\mathbf {x} })\,dx^{i_{1}}\wedge \cdots \wedge dx^{i_{n}}.}
Suppose that φ is defined by
{\displaystyle \varphi ({\mathbf {u} })=(x^{1}({\mathbf {u} }),\ldots ,x^{I}({\mathbf {u} })).}
Then the integral may be written in coordinates as
{\displaystyle \int _{M}\omega =\int _{D}\sum _{i_{1}<\cdots <i_{n}}a_{i_{1},\ldots ,i_{n}}(\varphi ({\mathbf {u} })){\frac {\partial (x^{i_{1}},\ldots ,x^{i_{n}})}{\partial (u^{1},\dots ,u^{n})}}\,du^{1}\cdots du^{n},}
where {\displaystyle {\frac {\partial (x^{i_{1}},\ldots ,x^{i_{n}})}{\partial (u^{1},\ldots ,u^{n})}}} is the determinant of the Jacobian. The Jacobian exists because φ is differentiable.
In general, an n-manifold cannot be parametrized by an open subset of Rn. But such a parametrization is always possible locally, so it is possible to define integrals over arbitrary manifolds by defining them as sums of integrals over collections of local parametrizations. Moreover, it is also possible to define parametrizations of k-dimensional subsets for k < n, and this makes it possible to define integrals of k-forms. To make this precise, it is convenient to fix a standard domain D in Rk, usually a cube or a simplex. A k-chain is a formal sum of smooth embeddings D → M. That is, it is a collection of smooth embeddings, each of which is assigned an integer multiplicity. Each smooth embedding determines a k-dimensional submanifold of M. If the chain is
{\displaystyle c=\sum _{i=1}^{r}m_{i}\varphi _{i},}
then the integral of a k-form ω over c is defined to be the sum of the integrals over the terms of c:
{\displaystyle \int _{c}\omega =\sum _{i=1}^{r}m_{i}\int _{D}\varphi _{i}^{*}\omega .}
This approach to defining integration does not assign a direct meaning to integration over the whole manifold M. However, it is still possible to assign such a meaning indirectly because every smooth manifold may be smoothly triangulated in an essentially unique way, and the integral over M may be defined to be the integral over the chain determined by a triangulation.
=== Integration using partitions of unity ===
There is another approach, expounded in (Dieudonné 1972), which does directly assign a meaning to integration over M, but this approach requires fixing an orientation of M. The integral of an n-form ω on an n-dimensional manifold is defined by working in charts. Suppose first that ω is supported on a single positively oriented chart. On this chart, it may be pulled back to an n-form on an open subset of Rn. Here, the form has a well-defined Riemann or Lebesgue integral as before. The change of variables formula and the assumption that the chart is positively oriented together ensure that the integral of ω is independent of the chosen chart. In the general case, use a partition of unity to write ω as a sum of n-forms, each of which is supported in a single positively oriented chart, and define the integral of ω to be the sum of the integrals of each term in the partition of unity.
It is also possible to integrate k-forms on oriented k-dimensional submanifolds using this more intrinsic approach. The form is pulled back to the submanifold, where the integral is defined using charts as before. For example, given a path γ(t) : [0, 1] → R2, integrating a 1-form on the path is simply pulling back the form to a form f(t) dt on [0, 1], and this integral is the integral of the function f(t) on the interval.
=== Integration along fibers ===
Fubini's theorem states that the integral over a set that is a product may be computed as an iterated integral over the two factors in the product. This suggests that the integral of a differential form over a product ought to be computable as an iterated integral as well. The geometric flexibility of differential forms ensures that this is possible not just for products, but in more general situations as well. Under some hypotheses, it is possible to integrate along the fibers of a smooth map, and the analog of Fubini's theorem is the case where this map is the projection from a product to one of its factors.
Because integrating a differential form over a submanifold requires fixing an orientation, a prerequisite to integration along fibers is the existence of a well-defined orientation on those fibers. Let M and N be two orientable manifolds of pure dimensions m and n, respectively. Suppose that f : M → N is a surjective submersion. This implies that each fiber f−1(y) is (m − n)-dimensional and that, around each point of M, there is a chart on which f looks like the projection from a product onto one of its factors. Fix x ∈ M and set y = f(x). Suppose that
{\displaystyle {\begin{aligned}\omega _{x}&\in {\textstyle \bigwedge }^{m}T_{x}^{*}M,\\[2pt]\eta _{y}&\in {\textstyle \bigwedge }^{n}T_{y}^{*}N,\end{aligned}}}
and that ηy does not vanish. Following (Dieudonné 1972), there is a unique
{\displaystyle \sigma _{x}\in {\textstyle \bigwedge }^{m-n}T_{x}^{*}(f^{-1}(y))}
which may be thought of as the fibral part of ωx with respect to ηy. More precisely, define j : f−1(y) → M to be the inclusion. Then σx is defined by the property that
{\displaystyle \omega _{x}=(f^{*}\eta _{y})_{x}\wedge \sigma '_{x}\in {\textstyle \bigwedge }^{m}T_{x}^{*}M,}
where {\displaystyle \sigma '_{x}\in {\textstyle \bigwedge }^{m-n}T_{x}^{*}M} is any (m − n)-covector for which {\displaystyle \sigma _{x}=j^{*}\sigma '_{x}.}
The form σx may also be notated ωx / ηy.
Moreover, for fixed y, σx varies smoothly with respect to x. That is, suppose that
{\displaystyle \omega \colon f^{-1}(y)\to T^{*}M}
is a smooth section of the projection map; we say that ω is a smooth differential m-form on M along f−1(y). Then there is a smooth differential (m − n)-form σ on f−1(y) such that, at each x ∈ f−1(y),
{\displaystyle \sigma _{x}=\omega _{x}/\eta _{y}.}
This form is denoted ω / ηy. The same construction works if ω is an m-form in a neighborhood of the fiber, and the same notation is used. A consequence is that each fiber f−1(y) is orientable. In particular, a choice of orientation forms on M and N defines an orientation of every fiber of f.
The analog of Fubini's theorem is as follows. As before, M and N are two orientable manifolds of pure dimensions m and n, and f : M → N is a surjective submersion. Fix orientations of M and N, and give each fiber of f the induced orientation. Let ω be an m-form on M, and let η be an n-form on N that is almost everywhere positive with respect to the orientation of N. Then, for almost every y ∈ N, the form ω / ηy is a well-defined integrable m − n form on f−1(y). Moreover, there is an integrable n-form on N defined by
{\displaystyle y\mapsto {\bigg (}\int _{f^{-1}(y)}\omega /\eta _{y}{\bigg )}\,\eta _{y}.}
Denote this form by
{\displaystyle {\bigg (}\int _{f^{-1}(y)}\omega /\eta {\bigg )}\,\eta .}
Then (Dieudonné 1972) proves the generalized Fubini formula
{\displaystyle \int _{M}\omega =\int _{N}{\bigg (}\int _{f^{-1}(y)}\omega /\eta {\bigg )}\,\eta .}
It is also possible to integrate forms of other degrees along the fibers of a submersion. Assume the same hypotheses as before, and let α be a compactly supported (m − n + k)-form on M. Then there is a k-form γ on N which is the result of integrating α along the fibers of f. The form γ is defined by specifying, at each y ∈ N, how it pairs with each k-vector v at y, and the value of that pairing is an integral over f−1(y) that depends only on α, v, and the orientations of M and N. More precisely, at each y ∈ N, there is an isomorphism
{\displaystyle {\textstyle \bigwedge }^{k}T_{y}N\to {\textstyle \bigwedge }^{n-k}T_{y}^{*}N}
defined by the interior product
{\displaystyle \mathbf {v} \mapsto \mathbf {v} \,\lrcorner \,\zeta _{y},}
for any choice of volume form ζ in the orientation of N. If x ∈ f−1(y), then a k-vector v at y determines an (n − k)-covector at x by pullback:
{\displaystyle f^{*}(\mathbf {v} \,\lrcorner \,\zeta _{y})\in {\textstyle \bigwedge }^{n-k}T_{x}^{*}M.}
Each of these covectors has an exterior product against α, so there is an (m − n)-form βv on M along f−1(y) defined by
{\displaystyle (\beta _{\mathbf {v} })_{x}=\left(\alpha _{x}\wedge f^{*}(\mathbf {v} \,\lrcorner \,\zeta _{y})\right){\big /}\zeta _{y}\in {\textstyle \bigwedge }^{m-n}T_{x}^{*}M.}
This form depends on the orientation of N but not the choice of ζ. Then the k-form γ is uniquely defined by the property
{\displaystyle \langle \gamma _{y},\mathbf {v} \rangle =\int _{f^{-1}(y)}\beta _{\mathbf {v} },}
and γ is smooth (Dieudonné 1972). This form is also denoted α♭ and called the integral of α along the fibers of f. Integration along fibers is important for the construction of Gysin maps in de Rham cohomology.
Integration along fibers satisfies the projection formula (Dieudonné 1972). If λ is any ℓ-form on N, then
{\displaystyle \alpha ^{\flat }\wedge \lambda =(\alpha \wedge f^{*}\lambda )^{\flat }.}
=== Stokes's theorem ===
The fundamental relationship between the exterior derivative and integration is given by Stokes' theorem: If ω is an (n − 1)-form with compact support on M and ∂M denotes the boundary of M with its induced orientation, then
{\displaystyle \int _{M}d\omega =\int _{\partial M}\omega .}
A key consequence of this is that "the integral of a closed form over homologous chains is equal": If ω is a closed k-form and M and N are k-chains that are homologous (such that M − N is the boundary of a (k + 1)-chain W), then
{\textstyle \int _{M}\omega =\int _{N}\omega }, since the difference is the integral {\textstyle \int _{W}d\omega =\int _{W}0=0}.
For example, if ω = df is the derivative of a potential function on the plane or Rn, then the integral of ω over a path from a to b does not depend on the choice of path (the integral is f(b) − f(a)), since different paths with given endpoints are homotopic, hence homologous (a weaker condition). This case is called the gradient theorem, and generalizes the fundamental theorem of calculus. This path independence is very useful in contour integration.
This theorem also underlies the duality between de Rham cohomology and the homology of chains.
=== Relation with measures ===
On a general differentiable manifold (without additional structure), differential forms cannot be integrated over subsets of the manifold; this distinction is key to the distinction between differential forms, which are integrated over chains or oriented submanifolds, and measures, which are integrated over subsets. The simplest example is attempting to integrate the 1-form dx over the interval [0, 1]. Assuming the usual distance (and thus measure) on the real line, this integral is either 1 or −1, depending on orientation:
{\textstyle \int _{0}^{1}dx=1}, while {\textstyle \int _{1}^{0}dx=-\int _{0}^{1}dx=-1}. By contrast, the integral of the measure |dx| on the interval is unambiguously 1 (i.e. the integral of the constant function 1 with respect to this measure is 1). Similarly, under a change of coordinates a differential n-form changes by the Jacobian determinant J, while a measure changes by the absolute value of the Jacobian determinant, |J|, which further reflects the issue of orientation. For example, under the map x ↦ −x on the line, the differential form dx pulls back to −dx, so the orientation is reversed, while the Lebesgue measure, which here we denote |dx|, pulls back to |dx| and does not change.
In the presence of the additional data of an orientation, it is possible to integrate n-forms (top-dimensional forms) over the entire manifold or over compact subsets; integration over the entire manifold corresponds to integrating the form over the fundamental class of the manifold, [M]. Formally, in the presence of an orientation, one may identify n-forms with densities on a manifold; densities in turn define a measure, and thus can be integrated (Folland 1999, Section 11.4, pp. 361–362).
On an orientable but not oriented manifold, there are two choices of orientation; either choice allows one to integrate n-forms over compact subsets, with the two choices differing by a sign. On a non-orientable manifold, n-forms and densities cannot be identified: notably, any top-dimensional form must vanish somewhere (there are no volume forms on non-orientable manifolds), but there are nowhere-vanishing densities; thus while one can integrate densities over compact subsets, one cannot integrate n-forms. One can instead identify densities with top-dimensional pseudoforms.
Even in the presence of an orientation, there is in general no meaningful way to integrate k-forms over subsets for k < n because there is no consistent way to use the ambient orientation to orient k-dimensional subsets. Geometrically, a k-dimensional subset can be turned around in place, yielding the same subset with the opposite orientation; for example, the horizontal axis in a plane can be rotated by 180 degrees. Compare the Gram determinant of a set of k vectors in an n-dimensional space, which, unlike the determinant of n vectors, is always positive, corresponding to a squared number. An orientation of a k-submanifold is therefore extra data not derivable from the ambient manifold.
On a Riemannian manifold, one may define a k-dimensional Hausdorff measure for any k (integer or real), which may be integrated over k-dimensional subsets of the manifold. A function times this Hausdorff measure can then be integrated over k-dimensional subsets, providing a measure-theoretic analog to integration of k-forms. The n-dimensional Hausdorff measure yields a density, as above.
=== Currents ===
The differential form analog of a distribution or generalized function is called a current. The space of k-currents on M is the dual space to an appropriate space of differential k-forms. Currents play the role of generalized domains of integration, similar to but even more flexible than chains.
== Applications in physics ==
Differential forms arise in some important physical contexts. For example, in Maxwell's theory of electromagnetism, the Faraday 2-form, or electromagnetic field strength, is
$$\mathbf{F} = \frac{1}{2} f_{ab}\, dx^a \wedge dx^b,$$
where the $f_{ab}$ are formed from the electromagnetic fields $\vec{E}$ and $\vec{B}$; e.g., $f_{12} = E_z/c$, $f_{23} = -B_z$, or equivalent definitions.
This form is a special case of the curvature form on the U(1) principal bundle on which both electromagnetism and general gauge theories may be described. The connection form for the principal bundle is the vector potential, typically denoted by A, when represented in some gauge. One then has
$$\mathbf{F} = d\mathbf{A}.$$
The current 3-form is
$$\mathbf{J} = \frac{1}{6} j^a\, \varepsilon_{abcd}\, dx^b \wedge dx^c \wedge dx^d,$$
where $j^a$ are the four components of the current density. (Here it is a matter of convention to write $F_{ab}$ instead of $f_{ab}$, i.e. to use capital letters, and to write $J^a$ instead of $j^a$. However, the vector and tensor components, respectively, and the above-mentioned forms have different physical dimensions. Moreover, by decision of an international commission of the International Union of Pure and Applied Physics, the magnetic polarization vector has been called $\vec{J}$ for several decades, and by some publishers $J$; i.e., the same name is used for different quantities.)
Using the above-mentioned definitions, Maxwell's equations can be written very compactly in geometrized units as
$$\begin{aligned} d\mathbf{F} &= \mathbf{0}, \\ d{\star\mathbf{F}} &= \mathbf{J}, \end{aligned}$$
where $\star$ denotes the Hodge star operator. Similar considerations describe the geometry of gauge theories in general.
The 2-form $\star\mathbf{F}$, which is dual to the Faraday form, is also called the Maxwell 2-form.
Electromagnetism is an example of a U(1) gauge theory. Here the Lie group is U(1), the one-dimensional unitary group, which is in particular abelian. There are gauge theories, such as Yang–Mills theory, in which the Lie group is not abelian. In that case, one gets relations which are similar to those described here. The analog of the field F in such theories is the curvature form of the connection, which is represented in a gauge by a Lie algebra-valued one-form A. The Yang–Mills field F is then defined by
$$\mathbf{F} = d\mathbf{A} + \mathbf{A} \wedge \mathbf{A}.$$
In the abelian case, such as electromagnetism, A ∧ A = 0, but this does not hold in general. Likewise the field equations are modified by additional terms involving exterior products of A and F, owing to the structure equations of the gauge group.
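The difference between the abelian and non-abelian cases can be made concrete by computing wedge components directly. Below is a minimal sketch (our own construction, with illustrative Pauli-matrix coefficients, not from the article): representing a 1-form by its coefficient list, the components $(A \wedge A)_{ij} = A_i A_j - A_j A_i$ vanish for scalar (U(1)) coefficients but reduce to commutators, generally nonzero, for Lie-algebra-valued coefficients.

```python
import numpy as np

def wedge(A, B):
    # components (A ∧ B)_{ij} = A_i B_j - A_j B_i of the wedge product of
    # two 1-forms given by coefficient lists; np.dot handles both scalar
    # and matrix-valued coefficients
    n = len(A)
    return [[np.dot(A[i], B[j]) - np.dot(A[j], B[i]) for j in range(n)]
            for i in range(n)]

# abelian case: scalar coefficients, so A ∧ A = 0 identically
A_u1 = [1.0, 2.0, 3.0]
print(wedge(A_u1, A_u1))                         # all entries 0.0

# non-abelian case: matrix (Lie-algebra-valued) coefficients; A ∧ A picks
# up the commutators and is nonzero
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
print(wedge([sx, sy, sz], [sx, sy, sz])[0][1])   # equals 2i·σ_z, not zero
```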
== Applications in geometric measure theory ==
Numerous minimality results for complex analytic manifolds are based on the Wirtinger inequality for 2-forms. A succinct proof may be found in Herbert Federer's classic text Geometric Measure Theory. The Wirtinger inequality is also a key ingredient in Gromov's inequality for complex projective space in systolic geometry.
== See also ==
Closed and exact differential forms
Complex differential form
Vector-valued differential form
Equivariant differential form
Calculus on Manifolds
Multilinear form
Polynomial differential form
Presymplectic form
== Notes ==
== References ==
== External links ==
Weisstein, Eric W. "Differential form". MathWorld.
Sjamaar, Reyer (2006), Manifolds and differential forms lecture notes (PDF), a course taught at Cornell University.
Bachman, David (2003), A Geometric Approach to Differential Forms, arXiv:math/0306194, Bibcode:2003math......6194B, an undergraduate text.
Needham, Tristan (2021). Visual Differential Geometry and Forms: A Mathematical Drama in Five Acts. Princeton University Press.
A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is also a stochastic process. SDEs have many applications throughout pure mathematics and are used to model various behaviours of stochastic models such as stock prices, random growth models or physical systems that are subjected to thermal fluctuations.
SDEs have a random differential, which in the most basic case is random white noise calculated as the distributional derivative of a Brownian motion or, more generally, a semimartingale. However, other types of random behaviour are possible, such as jump processes like Lévy processes or semimartingales with jumps.
Stochastic differential equations are in general neither differential equations nor random differential equations. Random differential equations are conjugate to stochastic differential equations. Stochastic differential equations can also be extended to differential manifolds.
== Background ==
Stochastic differential equations originated in the theory of Brownian motion, in the work of Albert Einstein and Marian Smoluchowski in 1905, although Louis Bachelier was the first person credited with modeling Brownian motion in 1900, giving a very early example of a stochastic differential equation now known as the Bachelier model. Some of these early examples were linear stochastic differential equations, also called Langevin equations after French physicist Paul Langevin, describing the motion of a harmonic oscillator subject to a random force.
The mathematical theory of stochastic differential equations was developed in the 1940s through the groundbreaking work of Japanese mathematician Kiyosi Itô, who introduced the concept of stochastic integral and initiated the study of nonlinear stochastic differential equations. Another approach was later proposed by Russian physicist Stratonovich, leading to a calculus similar to ordinary calculus.
=== Terminology ===
The most common form of SDEs in the literature is an ordinary differential equation with the right hand side perturbed by a term dependent on a white noise variable. In most cases, SDEs are understood as a continuous time limit of the corresponding stochastic difference equations. This understanding of SDEs is ambiguous and must be complemented by a proper mathematical definition of the corresponding integral. Such a mathematical definition was first proposed by Kiyosi Itô in the 1940s, leading to what is known today as the Itô calculus.
Another construction was later proposed by Russian physicist Stratonovich, leading to what is known as the Stratonovich integral.
The Itô integral and Stratonovich integral are related, but different, objects and the choice between them depends on the application considered. The Itô calculus is based on the concept of non-anticipativeness or causality, which is natural in applications where the variable is time.
The Stratonovich calculus, on the other hand, has rules which resemble ordinary calculus and has intrinsic geometric properties which render it more natural when dealing with geometric problems such as random motion on manifolds, although it is possible and in some cases preferable to model random motion on manifolds through Itô SDEs, for example when trying to optimally approximate SDEs on submanifolds.
An alternative view on SDEs is the stochastic flow of diffeomorphisms. This understanding is unambiguous and corresponds to the Stratonovich version of the continuous time limit of stochastic difference equations. Associated with SDEs is the Smoluchowski equation or the Fokker–Planck equation, an equation describing the time evolution of probability distribution functions. The generalization of the Fokker–Planck evolution to temporal evolution of differential forms is provided by the concept of stochastic evolution operator.
In physical science, there is an ambiguity in the usage of the term "Langevin SDEs". While Langevin SDEs can be of a more general form, this term typically refers to a narrow class of SDEs with gradient flow vector fields. This class of SDEs is particularly popular because it is a starting point of the Parisi–Sourlas stochastic quantization procedure, leading to an N=2 supersymmetric model closely related to supersymmetric quantum mechanics. From the physical point of view, however, this class of SDEs is not very interesting because it never exhibits spontaneous breakdown of topological supersymmetry, i.e., (overdamped) Langevin SDEs are never chaotic.
=== Stochastic calculus ===
Brownian motion or the Wiener process was discovered to be exceptionally complex mathematically. The Wiener process is almost surely nowhere differentiable; thus, it requires its own rules of calculus. There are two dominating versions of stochastic calculus, the Itô stochastic calculus and the Stratonovich stochastic calculus. Each of the two has advantages and disadvantages, and newcomers are often confused about which one is more appropriate in a given situation. Guidelines exist (e.g. Øksendal, 2003), and conveniently one can readily convert an Itô SDE to an equivalent Stratonovich SDE and back again. Still, one must be careful which calculus to use when the SDE is initially written down.
=== Numerical solutions ===
Numerical methods for solving stochastic differential equations include the Euler–Maruyama method, Milstein method, Runge–Kutta method (SDE), Rosenbrock method, and methods based on different representations of iterated stochastic integrals.
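As an illustration of the simplest of these schemes, here is a minimal Euler–Maruyama sketch in Python; the drift and diffusion functions chosen at the bottom are our own illustrative examples, not prescribed by the method:

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T, n, seed=0):
    # approximate dX_t = mu(X_t, t) dt + sigma(X_t, t) dB_t on [0, T]
    rng = np.random.default_rng(seed)
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))     # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + mu(x[k], t[k]) * dt + sigma(x[k], t[k]) * dB
    return t, x

# example: mean-reverting drift with constant noise intensity
t, x = euler_maruyama(mu=lambda x, t: -x, sigma=lambda x, t: 0.3,
                      x0=1.0, T=5.0, n=5000)
```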
== Use in physics ==
In physics, SDEs have wide applicability ranging from molecular dynamics to neurodynamics and to the dynamics of astrophysical objects. More specifically, SDEs describe all dynamical systems, in which quantum effects are either unimportant or can be taken into account as perturbations. SDEs can be viewed as a generalization of the dynamical systems theory to models with noise. This is an important generalization because real systems cannot be completely isolated from their environments and for this reason always experience external stochastic influence.
There are standard techniques for transforming higher-order equations into several coupled first-order equations by introducing new unknowns. Therefore, the following is the most general class of SDEs:
$$\frac{dx(t)}{dt} = F(x(t)) + \sum_{\alpha=1}^{n} g_\alpha(x(t))\, \xi^\alpha(t),$$
where $x \in X$ is the position in the system in its phase (or state) space, $X$, assumed to be a differentiable manifold, $F \in TX$ is a flow vector field representing the deterministic law of evolution, and $g_\alpha \in TX$ is a set of vector fields that define the coupling of the system to Gaussian white noise, $\xi^\alpha$. If $X$ is a linear space and the $g_\alpha$ are constants, the system is said to be subject to additive noise; otherwise it is said to be subject to multiplicative noise. For additive noise, the Itô and Stratonovich forms of the SDE generate the same solution, and it is not important which definition is used to solve the SDE. For multiplicative noise SDEs the Itô and Stratonovich forms of the SDE are different, and care should be used in mapping between them.
For a fixed configuration of noise, an SDE has a unique solution differentiable with respect to the initial condition. The nontriviality of the stochastic case shows up when one tries to average various objects of interest over noise configurations. In this sense, an SDE is not a uniquely defined entity when the noise is multiplicative and when the SDE is understood as a continuous time limit of a stochastic difference equation. In this case, the SDE must be complemented by what is known as an "interpretation of the SDE", such as the Itô or the Stratonovich interpretation. Nevertheless, when an SDE is viewed as a continuous-time stochastic flow of diffeomorphisms, it is a uniquely defined mathematical object that corresponds to the Stratonovich approach to a continuous time limit of a stochastic difference equation.
In physics, the main method of solution is to find the probability distribution function as a function of time using the equivalent Fokker–Planck equation (FPE). The Fokker–Planck equation is a deterministic partial differential equation. It tells how the probability distribution function evolves in time, similarly to how the Schrödinger equation gives the time evolution of the quantum wave function or the diffusion equation gives the time evolution of chemical concentration. Alternatively, numerical solutions can be obtained by Monte Carlo simulation. Other techniques include path integration, which draws on the analogy between statistical physics and quantum mechanics (for example, the Fokker–Planck equation can be transformed into the Schrödinger equation by rescaling a few variables), or writing down ordinary differential equations for the statistical moments of the probability distribution function.
== Use in probability and mathematical finance ==
The notation used in probability theory (and in many applications of probability theory, for instance in signal processing with the filtering problem and in mathematical finance) is slightly different. It is also the notation used in publications on numerical methods for solving stochastic differential equations. This notation makes the exotic nature of the random function of time $\xi^\alpha$ in the physics formulation more explicit. In strict mathematical terms, $\xi^\alpha$ cannot be chosen as an ordinary function, but only as a generalized function. The mathematical formulation treats this complication with less ambiguity than the physics formulation.
A typical equation is of the form
$$dX_t = \mu(X_t, t)\,dt + \sigma(X_t, t)\,dB_t,$$
where $B$ denotes a Wiener process (standard Brownian motion).
This equation should be interpreted as an informal way of expressing the corresponding integral equation
$$X_{t+s} - X_t = \int_t^{t+s} \mu(X_u, u)\,du + \int_t^{t+s} \sigma(X_u, u)\,dB_u.$$
The equation above characterizes the behavior of the continuous time stochastic process Xt as the sum of an ordinary Lebesgue integral and an Itô integral. A heuristic (but very helpful) interpretation of the stochastic differential equation is that in a small time interval of length δ the stochastic process Xt changes its value by an amount that is normally distributed with expectation μ(Xt, t) δ and variance σ(Xt, t)2 δ and is independent of the past behavior of the process. This is so because the increments of a Wiener process are independent and normally distributed. The function μ is referred to as the drift coefficient, while σ is called the diffusion coefficient. The stochastic process Xt is called a diffusion process, and satisfies the Markov property.
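A quick Monte Carlo experiment makes this heuristic concrete; the coefficients below are illustrative choices of ours:

```python
import numpy as np

# over a small step delta, X_{t+delta} - X_t should be approximately
# N(mu(X_t, t) * delta, sigma(X_t, t)^2 * delta); here mu(x, t) = -x and
# sigma(x, t) = 0.5 at the state x0 = 1
rng = np.random.default_rng(0)
x0, delta, n = 1.0, 1e-3, 200_000
dB = rng.normal(0.0, np.sqrt(delta), n)
increments = (-x0) * delta + 0.5 * dB
print(increments.mean(), (-x0) * delta)    # both ≈ -0.001
print(increments.var(), 0.25 * delta)      # both ≈ 0.00025
```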
The formal interpretation of an SDE is given in terms of what constitutes a solution to the SDE. There are two main definitions of a solution to an SDE, a strong solution and a weak solution. Both require the existence of a process $X_t$ that solves the integral equation version of the SDE. The difference between the two lies in the underlying probability space $(\Omega, \mathcal{F}, P)$. A weak solution consists of a probability space and a process that satisfies the integral equation, while a strong solution is a process that satisfies the equation and is defined on a given probability space. The Yamada–Watanabe theorem makes a connection between the two.
An important example is the equation for geometric Brownian motion
$$dX_t = \mu X_t\,dt + \sigma X_t\,dB_t,$$
which is the equation for the dynamics of the price of a stock in the Black–Scholes options pricing model of financial mathematics.
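This SDE has the closed-form solution $X_t = X_0 \exp\left((\mu - \sigma^2/2)t + \sigma B_t\right)$, so sample paths can be generated exactly; a minimal sketch with illustrative parameter values:

```python
import numpy as np

# exact sampling of geometric Brownian motion on a time grid
rng = np.random.default_rng(0)
mu, sigma, x0, T, n = 0.05, 0.2, 100.0, 1.0, 1000
t = np.linspace(0.0, T, n + 1)
# Brownian path from cumulative sums of independent N(0, dt) increments
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / n), n))))
X = x0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * B)
print(X[-1])    # one exact sample of X_T
```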
Generalizing the geometric Brownian motion, it is also possible to define SDEs admitting strong solutions whose distribution is a convex combination of densities coming from different geometric Brownian motions or Black–Scholes models, obtaining a single SDE whose solution is distributed as a mixture dynamics of lognormal distributions of different Black–Scholes models. This leads to models that can deal with the volatility smile in financial mathematics.
The simpler SDE called arithmetic Brownian motion
$$dX_t = \mu\,dt + \sigma\,dB_t$$
was used by Louis Bachelier as the first model for stock prices in 1900, known today as the Bachelier model.
There are also more general stochastic differential equations where the coefficients μ and σ depend not only on the present value of the process Xt, but also on previous values of the process and possibly on present or previous values of other processes too. In that case the solution process, X, is not a Markov process, and it is called an Itô process and not a diffusion process. When the coefficients depend only on present and past values of X, the defining equation is called a stochastic delay differential equation.
A generalization of stochastic differential equations with the Fisk-Stratonovich integral to semimartingales with jumps are the SDEs of Marcus type. The Marcus integral is an extension of McShane's stochastic calculus.
An innovative application in stochastic finance derives from the usage of the equation for the Ornstein–Uhlenbeck process
$$dR_t = \mu R_t\,dt + \sigma_t\,dB_t,$$
which is the equation for the dynamics of the return of the price of a stock under the hypothesis that returns display a lognormal distribution. Under this hypothesis, the methodologies developed by Marcello Minenna determine prediction intervals able to identify abnormal returns that could hide market abuse phenomena.
=== SDEs on manifolds ===
More generally one can extend the theory of stochastic calculus onto differential manifolds, and for this purpose one uses the Fisk–Stratonovich integral. Consider a manifold $M$, some finite-dimensional vector space $E$, a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \in \mathbb{R}_+}, P)$ with $(\mathcal{F}_t)_{t \in \mathbb{R}_+}$ satisfying the usual conditions, and let $\widehat{M} = M \cup \{\infty\}$ be the one-point compactification and $x_0$ be $\mathcal{F}_0$-measurable. A stochastic differential equation on $M$, written
$$dX = A(X) \circ dZ,$$
is a pair $(A, Z)$ such that $Z$ is a continuous $E$-valued semimartingale and $A : M \times E \to TM,\ (x, e) \mapsto A(x)e$ is a homomorphism of vector bundles over $M$. For each $x \in M$ the map $A(x) : E \to T_x M$ is linear and $A(\cdot)e \in \Gamma(TM)$ for each $e \in E$.
A solution to the SDE on $M$ with initial condition $X_0 = x_0$ is a continuous $\{\mathcal{F}_t\}$-adapted $M$-valued process $(X_t)_{t < \zeta}$ up to life time $\zeta$, such that for each test function $f \in C_c^\infty(M)$ the process $f(X)$ is a real-valued semimartingale, and for each stopping time $\tau$ with $0 \leq \tau < \zeta$ the equation
$$f(X_\tau) = f(x_0) + \int_0^\tau (df)_X\, A(X) \circ dZ$$
holds $P$-almost surely, where $(df)_X$ is the differential of $f$ at $X$. It is a maximal solution if the life time is maximal, i.e.,
$$\{\zeta < \infty\} \subset \left\{ \lim_{t \nearrow \zeta} X_t = \infty \text{ in } \widehat{M} \right\}$$
$P$-almost surely. From the fact that $f(X)$ is a semimartingale for each test function $f \in C_c^\infty(M)$, it follows that $X$ is a semimartingale on $M$. Given a maximal solution we can extend the time of $X$ onto all of $\mathbb{R}_+$, and after a continuation of $f$ on $\widehat{M}$ we get
$$f(X_t) = f(X_0) + \int_0^t (df)_X\, A(X) \circ dZ, \quad t \geq 0,$$
up to indistinguishable processes.
Although Stratonovich SDEs are the natural choice for SDEs on manifolds, given that they satisfy the chain rule and that their drift and diffusion coefficients behave as vector fields under changes of coordinates, there are cases where Itô calculus on manifolds is preferable. A theory of Itô calculus on manifolds was first developed by Laurent Schwartz through the concept of Schwartz morphism; see also the related 2-jet interpretation of Itô SDEs on manifolds based on the jet bundle. This interpretation is helpful when trying to optimally approximate the solution of an SDE given on a large space with the solutions of an SDE given on a submanifold of that space, in that a Stratonovich-based projection does not turn out to be optimal. This has been applied to the filtering problem, leading to optimal projection filters.
== As rough paths ==
Usually the solution of an SDE requires a probabilistic setting, as the integral implicit in the solution is a stochastic integral. If it were possible to deal with the differential equation path by path, one would not need to define a stochastic integral and one could develop a theory independently of probability theory.
This points to considering the SDE
$$dX_t(\omega) = \mu(X_t(\omega), t)\,dt + \sigma(X_t(\omega), t)\,dB_t(\omega)$$
as a single deterministic differential equation for every $\omega \in \Omega$, where $\Omega$ is the sample space in the given probability space $(\Omega, \mathcal{F}, P)$. However, a direct path-wise interpretation of the SDE is not possible, as the Brownian motion paths have unbounded variation and are nowhere differentiable with probability one, so that there is no naive way to give meaning to terms like $dB_t(\omega)$, precluding also a naive path-wise definition of the stochastic integral as an integral against every single $dB_t(\omega)$. However, motivated by the Wong–Zakai result for limits of solutions of SDEs with regular noise and using rough paths theory, while adding a chosen definition of iterated integrals of Brownian motion, it is possible to define a deterministic rough integral for every single $\omega \in \Omega$ that coincides, for example, with the Itô integral with probability one for a particular choice of the iterated Brownian integral. Other definitions of the iterated integral lead to deterministic pathwise equivalents of different stochastic integrals, like the Stratonovich integral. This has been used for example in financial mathematics to price options without probability.
== Existence and uniqueness of solutions ==
As with deterministic ordinary and partial differential equations, it is important to know whether a given SDE has a solution, and whether or not it is unique. The following is a typical existence and uniqueness theorem for Itô SDEs taking values in n-dimensional Euclidean space Rn and driven by an m-dimensional Brownian motion B; the proof may be found in Øksendal (2003, §5.2).
Let $T > 0$, and let
$$\mu : \mathbb{R}^n \times [0, T] \to \mathbb{R}^n, \qquad \sigma : \mathbb{R}^n \times [0, T] \to \mathbb{R}^{n \times m}$$
be measurable functions for which there exist constants C and D such that
$$|\mu(x, t)| + |\sigma(x, t)| \leq C\big(1 + |x|\big);$$
$$|\mu(x, t) - \mu(y, t)| + |\sigma(x, t) - \sigma(y, t)| \leq D\,|x - y|;$$
for all t ∈ [0, T] and all x and y ∈ Rn, where
$$|\sigma|^2 = \sum_{i,j=1}^{n} |\sigma_{ij}|^2.$$
Let Z be a random variable that is independent of the σ-algebra generated by Bs, s ≥ 0, and with finite second moment:
$$\mathbb{E}\big[|Z|^2\big] < +\infty.$$
Then the stochastic differential equation/initial value problem
$$dX_t = \mu(X_t, t)\,dt + \sigma(X_t, t)\,dB_t \quad \text{for } t \in [0, T];$$
$$X_0 = Z;$$
has a P-almost surely unique t-continuous solution (t, ω) ↦ Xt(ω) such that X is adapted to the filtration FtZ generated by Z and Bs, s ≤ t, and
$$\mathbb{E}\left[\int_0^T |X_t|^2\,dt\right] < +\infty.$$
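For concrete coefficients, the two conditions above are easy to probe numerically; the sketch below (the coefficients are our own illustrative choices) estimates the constants C and D on random samples:

```python
import numpy as np

rng = np.random.default_rng(1)
mu = lambda x, t: -2.0 * x              # linear drift (globally Lipschitz)
sigma = lambda x, t: np.sin(x) + 2.0    # bounded, Lipschitz diffusion

x, y = rng.normal(size=(2, 100_000)) * 10.0
t = rng.uniform(0.0, 1.0, size=100_000)

# sample-based estimates of the linear-growth and Lipschitz ratios
growth = (np.abs(mu(x, t)) + np.abs(sigma(x, t))) / (1.0 + np.abs(x))
lipschitz = (np.abs(mu(x, t) - mu(y, t))
             + np.abs(sigma(x, t) - sigma(y, t))) / np.abs(x - y)
print(growth.max(), lipschitz.max())    # bounded ratios: C and D exist
```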
=== General case: local Lipschitz condition and maximal solutions ===
The stochastic differential equation above is only a special case of a more general form
$$dY_t = \alpha(t, Y_t)\,dX_t,$$
where $X$ is a continuous semimartingale in $\mathbb{R}^n$, $Y$ is a continuous semimartingale in $\mathbb{R}^d$, and $\alpha : \mathbb{R}_+ \times U \to \operatorname{Lin}(\mathbb{R}^n; \mathbb{R}^d)$ is a map defined on $\mathbb{R}_+ \times U$ for some open nonempty set $U \subset \mathbb{R}^d$, where $\operatorname{Lin}(\mathbb{R}^n; \mathbb{R}^d)$ is the space of all linear maps from $\mathbb{R}^n$ to $\mathbb{R}^d$.
More generally one can also look at stochastic differential equations on manifolds.
Whether the solution of this equation explodes depends on the choice of $\alpha$. Suppose $\alpha$ satisfies a local Lipschitz condition, i.e., for each $t \geq 0$ and each compact set $K \subset U$ there is a constant $L(t, K)$ such that
$$|\alpha(s, y) - \alpha(s, x)| \leq L(t, K)\,|y - x|, \quad x, y \in K,\; 0 \leq s \leq t,$$
where $|\cdot|$ denotes the Euclidean norm. This condition guarantees the existence and uniqueness of a so-called maximal solution.
Suppose $\alpha$ is continuous and satisfies the above local Lipschitz condition, and let $F : \Omega \to U$ be some initial condition, meaning it is a measurable function with respect to the initial σ-algebra. Let $\zeta : \Omega \to \overline{\mathbb{R}}_+$ be a predictable stopping time with $\zeta > 0$ almost surely. A $U$-valued semimartingale $(Y_t)_{t < \zeta}$ is called a maximal solution of
$$dY_t = \alpha(t, Y_t)\,dX_t, \quad Y_0 = F$$
with life time $\zeta$ if, for one (and hence all) announcing sequences $\zeta_n \nearrow \zeta$, the stopped process $Y^{\zeta_n}$ is a solution to the stopped stochastic differential equation $dY = \alpha(t, Y)\,dX^{\zeta_n}$, and if on the set $\{\zeta < \infty\}$ we have almost surely that $Y_t \to \partial U$ as $t \to \zeta$. Such a $\zeta$ is also a so-called explosion time.
== Some explicitly solvable examples ==
Explicitly solvable SDEs include:
=== Linear SDE: General case ===
$$dX_t = \big(a(t)X_t + c(t)\big)\,dt + \big(b(t)X_t + d(t)\big)\,dW_t$$
$$X_t = \Phi_{t,t_0}\left(X_{t_0} + \int_{t_0}^{t} \Phi_{s,t_0}^{-1}\big(c(s) - b(s)d(s)\big)\,ds + \int_{t_0}^{t} \Phi_{s,t_0}^{-1} d(s)\,dW_s\right)$$
where
$$\Phi_{t,t_0} = \exp\left(\int_{t_0}^{t} \left(a(s) - \frac{b^2(s)}{2}\right) ds + \int_{t_0}^{t} b(s)\,dW_s\right)$$
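As a sanity check of this formula in the constant-coefficient case $a(t) = a$, $b(t) = b$, $c = d = 0$, where $\Phi_{t,0} = \exp\left((a - b^2/2)t + bW_t\right)$ and hence $X_t = X_0 \Phi_{t,0}$ (geometric Brownian motion), the sketch below compares the closed form with an Euler–Maruyama path driven by the same Brownian increments; the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, x0, T, n = 0.5, 0.4, 1.0, 1.0, 20_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))
t = np.linspace(0.0, T, n + 1)

closed = x0 * np.exp((a - 0.5 * b**2) * t + b * W)   # solution formula

x = np.empty(n + 1)
x[0] = x0
for k in range(n):           # Euler-Maruyama with the same noise path
    x[k + 1] = x[k] + a * x[k] * dt + b * x[k] * dW[k]

print(np.max(np.abs(x - closed)))   # small, and shrinks as dt -> 0
```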
=== Reducible SDEs: Case 1 ===
$$dX_t = \frac{1}{2} f(X_t) f'(X_t)\,dt + f(X_t)\,dW_t$$
for a given differentiable function $f$ is equivalent to the Stratonovich SDE
$$dX_t = f(X_t) \circ dW_t,$$
which has a general solution
$$X_t = h^{-1}\big(W_t + h(X_0)\big),$$
where
$$h(x) = \int^{x} \frac{ds}{f(s)}.$$
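For the illustrative choice $f(x) = x$ (an assumption of ours, made only to keep the algebra short), $h(x) = \ln x$ and the solution becomes $X_t = X_0 e^{W_t}$. By Itô's formula, $X = g(W)$ with $g(w) = h^{-1}(w + h(X_0))$ solves $dX = \tfrac{1}{2}g''(W)\,dt + g'(W)\,dW$, so the claim reduces to $g' = f(g)$ and $g'' = f(g)f'(g)$, which can be checked symbolically:

```python
import sympy as sp

w, x0 = sp.symbols('w x0', positive=True)
g = x0 * sp.exp(w)          # candidate solution X_t = g(W_t) for f(x) = x

f = lambda x: x             # the chosen f
fprime = lambda x: 1        # its derivative f'

print(sp.simplify(sp.diff(g, w) - f(g)))                  # 0: g' = f(g)
print(sp.simplify(sp.diff(g, w, 2) - f(g) * fprime(g)))   # 0: g'' = f(g) f'(g)
```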
=== Reducible SDEs: Case 2 ===
$$dX_t = \left(\alpha f(X_t) + \frac{1}{2} f(X_t) f'(X_t)\right) dt + f(X_t)\,dW_t$$
for a given differentiable function $f$ is equivalent to the Stratonovich SDE
$$dX_t = \alpha f(X_t)\,dt + f(X_t) \circ dW_t,$$
which is reducible to
$$dY_t = \alpha\,dt + dW_t,$$
where $Y_t = h(X_t)$ and $h$ is defined as before.
Its general solution is
$$X_t = h^{-1}\big(\alpha t + W_t + h(X_0)\big).$$
== SDEs and supersymmetry ==
In the supersymmetric theory of SDEs, stochastic dynamics is defined via a stochastic evolution operator acting on the differential forms on the phase space of the model. In this exact formulation of stochastic dynamics, all SDEs possess topological supersymmetry, which represents the preservation of the continuity of the phase space by continuous time flow. The spontaneous breakdown of this supersymmetry is the mathematical essence of the ubiquitous dynamical phenomenon known across disciplines as chaos, turbulence, self-organized criticality, etc., and the Goldstone theorem explains the associated long-range dynamical behavior, i.e., the butterfly effect, 1/f and crackling noises, and the scale-free statistics of earthquakes, neuroavalanches, solar flares, etc.
== See also ==
Backward stochastic differential equation
Langevin dynamics
Local volatility
Stochastic process
Stochastic volatility
Stochastic partial differential equations
Diffusion process
Stochastic difference equation
== References ==
== Further reading ==
Evans, Lawrence C. (2013). An Introduction to Stochastic Differential Equations. American Mathematical Society.
Adomian, George (1983). Stochastic systems. Mathematics in Science and Engineering (169). Orlando, FL: Academic Press Inc.
Adomian, George (1986). Nonlinear stochastic operator equations. Orlando, FL: Academic Press Inc. ISBN 978-0-12-044375-8.
Adomian, George (1989). Nonlinear stochastic systems theory and applications to physics. Mathematics and its Applications (46). Dordrecht: Kluwer Academic Publishers Group.
Calin, Ovidiu (2015). An Informal Introduction to Stochastic Calculus with Applications. Singapore: World Scientific Publishing. p. 315. ISBN 978-981-4678-93-3.
Teugels, J.; Sund, B., eds. (2004). Encyclopedia of Actuarial Science. Chichester: Wiley. pp. 523–527.
Gardiner, C. W. (2004). Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences. Springer. p. 415.
Mikosch, Thomas (1998). Elementary Stochastic Calculus: with Finance in View. Singapore: World Scientific Publishing. p. 212. ISBN 981-02-3543-7.
Kadry, Seifedine (2007). "A Solution of Linear Stochastic Differential Equation". WSEAS Transactions on Mathematics (April 2007): 618. ISSN 1109-2769.
Higham, Desmond J. (January 2001). "An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations". SIAM Review. 43 (3): 525–546. Bibcode:2001SIAMR..43..525H. CiteSeerX 10.1.1.137.6375. doi:10.1137/S0036144500378302.
Desmond Higham and Peter Kloeden: "An Introduction to the Numerical Simulation of Stochastic Differential Equations", SIAM, ISBN 978-1-611976-42-7 (2021).
Differential may refer to:
== Mathematics ==
Differential (mathematics) comprises multiple related meanings of the word, both in calculus and differential geometry, such as an infinitesimal change in the value of a function
Differential algebra
Differential calculus
Differential of a function, representing a change in the linearization of a function
Total differential is its generalization for functions of multiple variables
Differential (infinitesimal) (e.g. dx, dy, dt etc.) are interpreted as infinitesimals
Differential topology
Differential (pushforward), the total derivative of a map between manifolds
Differential exponent, an exponent in the factorisation of the different ideal
Differential geometry, exterior differential, or exterior derivative, is a generalization to differential forms of the notion of differential of a function on a differentiable manifold
Differential (coboundary), in homological algebra and algebraic topology, one of the maps of a cochain complex
Differential cryptanalysis, a pair consisting of the difference, usually computed by XOR, between two plaintexts, and the difference of the corresponding ciphertexts
== Science and technology ==
Differential (mechanical device), as part of a motor vehicle drivetrain, the device that allows driving wheels or axles on opposite sides to rotate at different speeds
Limited-slip differential
Differential steering, the steering method used by tanks and similar tracked vehicles
Electronic differential, an electric motor controller which substitutes its mechanical counterpart with significant advantages in electric vehicle application
Differential signaling, in electronics, applies to a method of transmitting electronic signals over a pair of wires to reduce interference
Differential amplifier, an electronic amplifier that amplifies the difference between two input signals
== Social sciences ==
Semantic and structural differentials in psychology
Quality spread differential, in finance
Compensating differential, in labor economics
== Medicine ==
Differential diagnosis, the characterization of the underlying cause of pathological states based on specific tests
White blood cell differential, the enumeration of each type of white blood cell either manually or using automated analyzers
== Other ==
Differential hardening, in metallurgy
Differential rotation, in astronomy
Differential centrifugation, in cell biology
Differential scanning calorimetry, in materials science
Differential signalling, in communications
Differential GPS, in satellite navigation technology
Differential interferometry in radar
Differential, an extended play by The Sixth Lie
Handicap differential, part of the calculation used in producing golf handicaps
== See also ==
All pages with titles beginning with Differential
All pages with titles containing Differential
Different (disambiguation)
In mathematics, differential of the first kind is a traditional term used in the theories of Riemann surfaces (more generally, complex manifolds) and algebraic curves (more generally, algebraic varieties), for everywhere-regular differential 1-forms. Given a complex manifold M, a differential of the first kind ω is therefore the same thing as a 1-form that is everywhere holomorphic; on an algebraic variety V that is non-singular it would be a global section of the coherent sheaf Ω1 of Kähler differentials. In either case the definition has its origins in the theory of abelian integrals.
The dimension of the space of differentials of the first kind, by means of this identification, is the Hodge number $h^{1,0}$.
The differentials of the first kind, when integrated along paths, give rise to integrals that generalise the elliptic integrals to all curves over the complex numbers. They include for example the hyperelliptic integrals of type
$$\int \frac{x^k\,dx}{\sqrt{Q(x)}}$$
where Q is a square-free polynomial of any given degree > 4. The allowable power k has to be determined by analysis of the possible pole at the point at infinity on the corresponding hyperelliptic curve. When this is done, one finds that the condition is
k ≤ g − 1,
or in other words, k is at most 1 when Q has degree 5 or 6, at most 2 for degree 7 or 8, and so on (as g = [(1 + deg Q)/2]).
Quite generally, as this example illustrates, for a compact Riemann surface or algebraic curve, the Hodge number is the genus g. For the case of algebraic surfaces, this is the quantity known classically as the irregularity q. It is also, in general, the dimension of the Albanese variety, which takes the place of the Jacobian variety.
== Differentials of the second and third kind ==
The traditional terminology also included differentials of the second kind and of the third kind. The idea behind this has been supported by modern theories of algebraic differential forms, both from the side of more Hodge theory, and through the use of morphisms to commutative algebraic groups.
The Weierstrass zeta function was called an integral of the second kind in elliptic function theory; it is a logarithmic derivative of a theta function, and therefore has simple poles, with integer residues. The decomposition of a (meromorphic) elliptic function into pieces of 'three kinds' parallels the representation as (i) a constant, plus (ii) a linear combination of translates of the Weierstrass zeta function, plus (iii) a function with arbitrary poles but no residues at them.
The same type of decomposition exists in general, mutatis mutandis, though the terminology is not completely consistent. In the algebraic group (generalized Jacobian) theory the three kinds are abelian varieties, algebraic tori, and affine spaces, and the decomposition is in terms of a composition series.
On the other hand, a meromorphic abelian differential of the second kind has traditionally been one with residues at all poles being zero. One of the third kind is one where all poles are simple. There is a higher-dimensional analogue available, using the Poincaré residue.
== See also ==
Logarithmic form
== References ==
"Abelian differential", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Abelian_differential |
In mathematics, the derivative is a fundamental tool that quantifies the sensitivity to change of a function's output with respect to its input. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. The process of finding a derivative is called differentiation.
There are multiple different notations for differentiation. Leibniz notation, named after Gottfried Wilhelm Leibniz, is represented as the ratio of two differentials, whereas prime notation is written by adding a prime mark. Higher order notations represent repeated differentiation, and they are usually denoted in Leibniz notation by adding superscripts to the differentials, and in prime notation by adding additional prime marks. The higher order derivatives can be applied in physics; for example, while the first derivative of the position of a moving object with respect to time is the object's velocity, how the position changes as time advances, the second derivative is the object's acceleration, how the velocity changes as time advances.
Derivatives can be generalized to functions of several real variables. In this case, the derivative is reinterpreted as a linear transformation whose graph is (after an appropriate translation) the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables. It can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector.
== Definition ==
=== As a limit ===
A function of a real variable $f(x)$ is differentiable at a point $a$ of its domain, if its domain contains an open interval containing $a$, and the limit
$$L = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}$$
exists. This means that, for every positive real number $\varepsilon$, there exists a positive real number $\delta$ such that, for every $h$ with $|h| < \delta$ and $h \neq 0$, $f(a+h)$ is defined and
$$\left| L - \frac{f(a+h) - f(a)}{h} \right| < \varepsilon,$$
where the vertical bars denote the absolute value. This is an example of the (ε, δ)-definition of limit.
If the function $f$ is differentiable at $a$, that is if the limit $L$ exists, then this limit is called the derivative of $f$ at $a$. Multiple notations for the derivative exist. The derivative of $f$ at $a$ can be denoted $f'(a)$, read as "$f$ prime of $a$"; or it can be denoted $\frac{df}{dx}(a)$, read as "the derivative of $f$ with respect to $x$ at $a$" or "$df$ by (or over) $dx$ at $a$". See § Notation below. If $f$ is a function that has a derivative at every point in its domain, then a function can be defined by mapping every point $x$ to the value of the derivative of $f$ at $x$. This function is written $f'$ and is called the derivative function or the derivative of $f$. The function $f$ sometimes has a derivative at most, but not all, points of its domain. The function whose value at $a$ equals $f'(a)$ whenever $f'(a)$ is defined and elsewhere is undefined is also called the derivative of $f$. It is still a function, but its domain may be smaller than the domain of $f$.
For example, let $f$ be the squaring function: $f(x) = x^2$. Then the quotient in the definition of the derivative is
$$\frac{f(a+h) - f(a)}{h} = \frac{(a+h)^2 - a^2}{h} = \frac{a^2 + 2ah + h^2 - a^2}{h} = 2a + h.$$
The division in the last step is valid as long as $h \neq 0$. The closer $h$ is to $0$, the closer this expression becomes to the value $2a$. The limit exists, and for every input $a$ the limit is $2a$. So, the derivative of the squaring function is the doubling function: $f'(x) = 2x$.
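Numerically, the difference quotients visibly approach this value; a tiny sketch:

```python
# difference quotients of the squaring function at a = 3 approach f'(3) = 6
f = lambda x: x * x
a = 3.0
for h in [0.1, 0.01, 0.001, 1e-6]:
    print(h, (f(a + h) - f(a)) / h)   # 6.1, 6.01, 6.001, ~6.000001
```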
The ratio in the definition of the derivative is the slope of the line through two points on the graph of the function $f$, specifically the points $(a, f(a))$ and $(a+h, f(a+h))$. As $h$ is made smaller, these points grow closer together, and the slope of this line approaches the limiting value, the slope of the tangent to the graph of $f$ at $a$. In other words, the derivative is the slope of the tangent.
=== Using infinitesimals ===
One way to think of the derivative $\frac{df}{dx}(a)$ is as the ratio of an infinitesimal change in the output of the function $f$ to an infinitesimal change in its input. In order to make this intuition rigorous, a system of rules for manipulating infinitesimal quantities is required. The system of hyperreal numbers is a way of treating infinite and infinitesimal quantities. The hyperreals are an extension of the real numbers that contain numbers greater than anything of the form $1 + 1 + \cdots + 1$ for any finite number of terms. Such numbers are infinite, and their reciprocals are infinitesimals. The application of hyperreal numbers to the foundations of calculus is called nonstandard analysis. This provides a way to define the basic concepts of calculus such as the derivative and integral in terms of infinitesimals, thereby giving a precise meaning to the $d$ in the Leibniz notation. Thus, the derivative of $f(x)$ becomes
$$f'(x) = \operatorname{st}\left(\frac{f(x + dx) - f(x)}{dx}\right)$$
for an arbitrary infinitesimal $dx$, where $\operatorname{st}$ denotes the standard part function, which "rounds off" each finite hyperreal to the nearest real. Taking the squaring function $f(x) = x^2$ as an example again,
$$\begin{aligned} f'(x) &= \operatorname{st}\left(\frac{x^2 + 2x \cdot dx + (dx)^2 - x^2}{dx}\right) \\ &= \operatorname{st}\left(\frac{2x \cdot dx + (dx)^2}{dx}\right) \\ &= \operatorname{st}\left(\frac{2x \cdot dx}{dx} + \frac{(dx)^2}{dx}\right) \\ &= \operatorname{st}\left(2x + dx\right) \\ &= 2x. \end{aligned}$$
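The same algebra can be carried out with a computer algebra system, treating $dx$ as a formal symbol and taking a limit in place of the standard part; a short SymPy sketch:

```python
import sympy as sp

x, dx = sp.symbols('x dx')
quotient = ((x + dx)**2 - x**2) / dx
print(sp.expand(quotient))         # 2*x + dx
print(sp.limit(quotient, dx, 0))   # 2*x (the analogue of st(2x + dx))
```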
== Continuity and differentiability ==
If $f$ is differentiable at $a$, then $f$ must also be continuous at $a$. As an example, choose a point $a$ and let $f$ be the step function that returns the value 1 for all $x$ less than $a$, and returns a different value 10 for all $x$ greater than or equal to $a$. The function $f$ cannot have a derivative at $a$. If $h$ is negative, then $a + h$ is on the low part of the step, so the secant line from $a$ to $a + h$ is very steep; as $h$ tends to zero, the slope tends to infinity. If $h$ is positive, then $a + h$ is on the high part of the step, so the secant line from $a$ to $a + h$ has slope zero. Consequently, the secant lines do not approach any single slope, so the limit of the difference quotient does not exist. However, even if a function is continuous at a point, it may not be differentiable there. For example, the absolute value function given by $f(x) = |x|$ is continuous at $x = 0$, but it is not differentiable there. If $h$ is positive, then the slope of the secant line from 0 to $h$ is one; if $h$ is negative, then the slope of the secant line from $0$ to $h$ is $-1$. This can be seen graphically as a "kink" or a "cusp" in the graph at $x = 0$. Even a function with a smooth graph is not differentiable at a point where its tangent is vertical: for instance, the function given by $f(x) = x^{1/3}$ is not differentiable at $x = 0$. In summary, a function that has a derivative is continuous, but there are continuous functions that do not have a derivative.
Most functions that occur in practice have derivatives at all points or almost every point. Early in the history of calculus, many mathematicians assumed that a continuous function was differentiable at most points. Under mild conditions (for example, if the function is a monotone or a Lipschitz function), this is true. However, in 1872, Weierstrass found the first example of a function that is continuous everywhere but differentiable nowhere. This example is now known as the Weierstrass function. In 1931, Stefan Banach proved that the set of functions that have a derivative at some point is a meager set in the space of all continuous functions. Informally, this means that hardly any random continuous functions have a derivative at even one point.
== Notation ==
One common way of writing the derivative of a function is Leibniz notation, introduced by Gottfried Wilhelm Leibniz in 1675, which denotes a derivative as the quotient of two differentials, such as $dy$ and $dx$. It is still commonly used when the equation $y = f(x)$ is viewed as a functional relationship between dependent and independent variables. The first derivative is denoted by $\frac{dy}{dx}$, read as "the derivative of $y$ with respect to $x$". This derivative can alternately be treated as the application of a differential operator to a function, $\frac{dy}{dx} = \frac{d}{dx} f(x)$. Higher derivatives are expressed using the notation $\frac{d^n y}{dx^n}$ for the $n$-th derivative of $y = f(x)$. These are abbreviations for multiple applications of the derivative operator; for example, $\frac{d^2 y}{dx^2} = \frac{d}{dx}\left(\frac{d}{dx} f(x)\right)$. Unlike some alternatives, Leibniz notation involves explicit specification of the variable for differentiation, in the denominator, which removes ambiguity when working with multiple interrelated quantities. The derivative of a composed function can be expressed using the chain rule: if $u = g(x)$ and $y = f(g(x))$ then $\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}$.
Another common notation for differentiation is by using the prime mark in the symbol of a function $f(x)$. This notation, due to Joseph-Louis Lagrange, is now known as prime notation. The first derivative is written as $f'(x)$, read as "$f$ prime of $x$", or $y'$, read as "$y$ prime". Similarly, the second and the third derivatives can be written as $f''$ and $f'''$, respectively. For denoting the number of higher derivatives beyond this point, some authors use Roman numerals in superscript, whereas others place the number in parentheses, such as $f^{\mathrm{iv}}$ or $f^{(4)}$. The latter notation generalizes to yield the notation $f^{(n)}$ for the $n$th derivative of $f$.
In Newton's notation or the dot notation, a dot is placed over a symbol to represent a time derivative. If $y$ is a function of $t$, then the first and second derivatives can be written as $\dot{y}$ and $\ddot{y}$, respectively. This notation is used exclusively for derivatives with respect to time or arc length. It is typically used in differential equations in physics and differential geometry. However, the dot notation becomes unmanageable for high-order derivatives (of order 4 or more) and cannot deal with multiple independent variables.
Another notation is D-notation, which represents the differential operator by the symbol $D$. The first derivative is written $Df(x)$ and higher derivatives are written with a superscript, so the $n$-th derivative is $D^n f(x)$. This notation is sometimes called Euler notation, although it seems that Leonhard Euler did not use it, and the notation was introduced by Louis François Antoine Arbogast. To indicate a partial derivative, the variable differentiated by is indicated with a subscript; for example, given the function $u = f(x, y)$, its partial derivative with respect to $x$ can be written $D_x u$ or $D_x f(x, y)$. Higher partial derivatives can be indicated by superscripts or multiple subscripts, e.g. $D_{xy} f(x, y) = \frac{\partial}{\partial y}\left(\frac{\partial}{\partial x} f(x, y)\right)$ and $D_x^2 f(x, y) = \frac{\partial}{\partial x}\left(\frac{\partial}{\partial x} f(x, y)\right)$.
== Rules of computation ==
In principle, the derivative of a function can be computed from the definition by considering the difference quotient and computing its limit. Once the derivatives of a few simple functions are known, the derivatives of other functions are more easily computed using rules for obtaining derivatives of more complicated functions from simpler ones. This process of finding a derivative is known as differentiation.
=== Rules for basic functions ===
The following are the rules for the derivatives of the most common basic functions. Here, $a$ is a real number, and $e$ is the base of the natural logarithm, approximately 2.71828.
Derivatives of powers:
$$\frac{d}{dx}x^{a}=ax^{a-1}$$
Functions of exponential, natural logarithm, and logarithm with general base:
$$\frac{d}{dx}e^{x}=e^{x}$$
$$\frac{d}{dx}a^{x}=a^{x}\ln(a),\quad\text{for }a>0$$
$$\frac{d}{dx}\ln(x)=\frac{1}{x},\quad\text{for }x>0$$
$$\frac{d}{dx}\log_{a}(x)=\frac{1}{x\ln(a)},\quad\text{for }x,a>0$$
Trigonometric functions:
$$\frac{d}{dx}\sin(x)=\cos(x)$$
$$\frac{d}{dx}\cos(x)=-\sin(x)$$
$$\frac{d}{dx}\tan(x)=\sec^{2}(x)=\frac{1}{\cos^{2}(x)}=1+\tan^{2}(x)$$
Inverse trigonometric functions:
$$\frac{d}{dx}\arcsin(x)=\frac{1}{\sqrt{1-x^{2}}},\quad\text{for }-1<x<1$$
$$\frac{d}{dx}\arccos(x)=-\frac{1}{\sqrt{1-x^{2}}},\quad\text{for }-1<x<1$$
$$\frac{d}{dx}\arctan(x)=\frac{1}{1+x^{2}}$$
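As a quick sanity check, these rules can be verified symbolically. The following is a minimal sketch using Python's sympy library (the calls `sp.diff`, `sp.log`, `sp.atan` are standard sympy API; the snippet itself is illustrative, not part of the article's source):

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)

# Power rule: d/dx x^a = a*x^(a-1)
assert sp.simplify(sp.diff(x**a, x) - a*x**(a - 1)) == 0

# Exponential with general base: d/dx a^x = a^x * ln(a)
assert sp.simplify(sp.diff(a**x, x) - a**x*sp.log(a)) == 0

# Logarithm with general base: d/dx log_a(x) = 1/(x*ln(a))
assert sp.simplify(sp.diff(sp.log(x, a), x) - 1/(x*sp.log(a))) == 0

# Inverse tangent: d/dx arctan(x) = 1/(1 + x^2)
assert sp.diff(sp.atan(x), x) == 1/(1 + x**2)
```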
=== Rules for combined functions ===
Given two functions $f$ and $g$, the following are some of the most basic rules for deducing the derivative of a combined function from the derivatives of the basic functions.
Constant rule: if $f$ is constant, then for all $x$, $f'(x)=0.$
Sum rule: $(\alpha f+\beta g)'=\alpha f'+\beta g'$ for all functions $f$ and $g$ and all real numbers $\alpha$ and $\beta$.
Product rule: $(fg)'=f'g+fg'$ for all functions $f$ and $g$. As a special case, this rule includes the fact $(\alpha f)'=\alpha f'$ whenever $\alpha$ is a constant, because $\alpha'f=0\cdot f=0$ by the constant rule.
Quotient rule: $\left(\frac{f}{g}\right)'=\frac{f'g-fg'}{g^{2}}$ for all functions $f$ and $g$ at all inputs where $g\neq 0$.
Chain rule for composite functions: If $f(x)=h(g(x))$, then
$$f'(x)=h'(g(x))\cdot g'(x).$$
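These rules can also be checked numerically against the limit definition of the derivative. Below is a small plain-Python sketch (no external libraries; the helper name `numderiv` is ours) comparing a centered difference quotient with the values predicted by the product and chain rules at a sample point:

```python
import math

def numderiv(f, x, h=1e-6):
    # Centered difference quotient approximating f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 1.3
f = math.sin          # f(x) = sin(x),  f'(x) = cos(x)
g = lambda x: x**3    # g(x) = x^3,     g'(x) = 3x^2

# Product rule: (f*g)' = f'*g + f*g'
lhs = numderiv(lambda x: f(x) * g(x), x0)
rhs = math.cos(x0) * g(x0) + f(x0) * 3 * x0**2
assert abs(lhs - rhs) < 1e-6

# Chain rule: (f∘g)'(x) = f'(g(x)) * g'(x)
lhs = numderiv(lambda x: f(g(x)), x0)
rhs = math.cos(g(x0)) * 3 * x0**2
assert abs(lhs - rhs) < 1e-6
```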
=== Computation example ===
The derivative of the function given by
$$f(x)=x^{4}+\sin\left(x^{2}\right)-\ln(x)e^{x}+7$$
is
$$\begin{aligned}f'(x)&=4x^{(4-1)}+\frac{d\left(x^{2}\right)}{dx}\cos\left(x^{2}\right)-\frac{d\left(\ln x\right)}{dx}e^{x}-\ln(x)\frac{d\left(e^{x}\right)}{dx}+0\\&=4x^{3}+2x\cos\left(x^{2}\right)-\frac{1}{x}e^{x}-\ln(x)e^{x}.\end{aligned}$$
Here the second term was computed using the chain rule and the third term using the product rule. The known derivatives of the elementary functions $x^{2}$, $x^{4}$, $\sin(x)$, $\ln(x)$, and $\exp(x)=e^{x}$, as well as the constant $7$, were also used.
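The result of this worked example can be confirmed symbolically; a minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = x**4 + sp.sin(x**2) - sp.log(x)*sp.exp(x) + 7
expected = 4*x**3 + 2*x*sp.cos(x**2) - sp.exp(x)/x - sp.log(x)*sp.exp(x)

# The difference between sympy's derivative and the hand computation is zero
assert sp.simplify(sp.diff(f, x) - expected) == 0
```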
== Higher-order derivatives ==
Higher-order derivatives are the result of differentiating a function repeatedly. Given that $f$ is a differentiable function, the derivative of $f$ is the first derivative, denoted as $f'$. The derivative of $f'$ is the second derivative, denoted as $f''$, and the derivative of $f''$ is the third derivative, denoted as $f'''$. By continuing this process, if it exists, the $n$th derivative is the derivative of the $(n-1)$th derivative, also called the derivative of order $n$. As discussed above, the $n$th derivative of a function $f$ may be denoted as $f^{(n)}$. A function that has $k$ successive derivatives is called $k$ times differentiable. If the $k$-th derivative is continuous, then the function is said to be of differentiability class $C^{k}$. A function that has infinitely many derivatives is called infinitely differentiable or smooth. Any polynomial function is infinitely differentiable; taking derivatives repeatedly eventually results in a constant function, and all subsequent derivatives of that function are zero.
One application of higher-order derivatives is in physics. Suppose that a function represents the position of an object at a given time. The first derivative of that function is the velocity of the object with respect to time, the second derivative is its acceleration, and the third derivative is the jerk.
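For instance, repeated differentiation of a position function can be carried out symbolically; a brief sympy sketch (the position formula here is an arbitrary illustration):

```python
import sympy as sp

t = sp.symbols('t')
position = 5*t**3 - 2*t**2 + t           # an arbitrary position function of time
velocity = sp.diff(position, t)          # first derivative:  15*t**2 - 4*t + 1
acceleration = sp.diff(position, t, 2)   # second derivative: 30*t - 4
jerk = sp.diff(position, t, 3)           # third derivative:  30
print(velocity, acceleration, jerk)
```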
== In other dimensions ==
=== Vector-valued functions ===
A vector-valued function $\mathbf{y}$ of a real variable sends real numbers to vectors in some vector space $\mathbb{R}^{n}$. A vector-valued function can be split up into its coordinate functions $y_{1}(t),y_{2}(t),\dots,y_{n}(t)$, meaning that $\mathbf{y}=(y_{1}(t),y_{2}(t),\dots,y_{n}(t))$. This includes, for example, parametric curves in $\mathbb{R}^{2}$ or $\mathbb{R}^{3}$. The coordinate functions are real-valued functions, so the above definition of derivative applies to them. The derivative of $\mathbf{y}(t)$ is defined to be the vector, called the tangent vector, whose coordinates are the derivatives of the coordinate functions. That is,
$$\mathbf{y}'(t)=\lim_{h\to 0}\frac{\mathbf{y}(t+h)-\mathbf{y}(t)}{h},$$
if the limit exists. The subtraction in the numerator is the subtraction of vectors, not scalars. If the derivative of $\mathbf{y}$ exists for every value of $t$, then $\mathbf{y}'$ is another vector-valued function.
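A concrete illustration: the tangent vector of the circle parametrization $\mathbf{y}(t)=(\cos t,\sin t)$ is obtained by differentiating each coordinate. A minimal sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Matrix([sp.cos(t), sp.sin(t)])  # parametric curve in R^2
tangent = y.diff(t)                    # componentwise derivative: (-sin t, cos t)
print(tangent.T)                       # Matrix([[-sin(t), cos(t)]])
```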
=== Partial derivatives ===
Functions can depend upon more than one variable. A partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant. Partial derivatives are used in vector calculus and differential geometry. As with ordinary derivatives, multiple notations exist: the partial derivative of a function $f(x,y,\dots)$ with respect to the variable $x$ is variously denoted by $f_{x}$, $\partial_{x}f$, or $\frac{\partial f}{\partial x}$, among other possibilities. It can be thought of as the rate of change of the function in the $x$-direction. Here $\partial$ is a rounded d called the partial derivative symbol. To distinguish it from the letter d, $\partial$ is sometimes pronounced "der", "del", or "partial" instead of "dee". For example, let $f(x,y)=x^{2}+xy+y^{2}$; then the partial derivatives of the function $f$ with respect to the variables $x$ and $y$ are, respectively:
$$\frac{\partial f}{\partial x}=2x+y,\qquad \frac{\partial f}{\partial y}=x+2y.$$
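This example can be reproduced with a couple of lines of sympy:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + x*y + y**2
print(sp.diff(f, x))  # 2*x + y
print(sp.diff(f, y))  # x + 2*y
```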
In general, the partial derivative of a function $f(x_{1},\dots,x_{n})$ in the direction $x_{i}$ at the point $(a_{1},\dots,a_{n})$ is defined to be:
$$\frac{\partial f}{\partial x_{i}}(a_{1},\ldots,a_{n})=\lim_{h\to 0}\frac{f(a_{1},\ldots,a_{i}+h,\ldots,a_{n})-f(a_{1},\ldots,a_{i},\ldots,a_{n})}{h}.$$
This is fundamental for the study of functions of several real variables. Let $f(x_{1},\dots,x_{n})$ be such a real-valued function. If all partial derivatives of $f$ with respect to $x_{j}$ are defined at the point $(a_{1},\dots,a_{n})$, these partial derivatives define the vector
$$\nabla f(a_{1},\ldots,a_{n})=\left(\frac{\partial f}{\partial x_{1}}(a_{1},\ldots,a_{n}),\ldots,\frac{\partial f}{\partial x_{n}}(a_{1},\ldots,a_{n})\right),$$
which is called the gradient of $f$ at $a$. If $f$ is differentiable at every point in some domain, then the gradient is a vector-valued function $\nabla f$ that maps the point $(a_{1},\dots,a_{n})$ to the vector $\nabla f(a_{1},\dots,a_{n})$. Consequently, the gradient determines a vector field.
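In sympy the gradient can be assembled directly from the partial derivatives; a minimal sketch, reusing the example $f(x,y)=x^{2}+xy+y^{2}$ from above:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + x*y + y**2
grad_f = sp.Matrix([sp.diff(f, v) for v in (x, y)])  # (2x + y, x + 2y)

# Evaluate the gradient vector field at the point (1, 2)
print(grad_f.subs({x: 1, y: 2}))  # Matrix([[4], [5]])
```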
=== Directional derivatives ===
If $f$ is a real-valued function on $\mathbb{R}^{n}$, then the partial derivatives of $f$ measure its variation in the direction of the coordinate axes. For example, if $f$ is a function of $x$ and $y$, then its partial derivatives measure the variation of $f$ in the $x$ and $y$ directions. However, they do not directly measure the variation of $f$ in any other direction, such as along the diagonal line $y=x$. These are measured using directional derivatives. Given a vector $\mathbf{v}=(v_{1},\ldots,v_{n})$, the directional derivative of $f$ in the direction of $\mathbf{v}$ at the point $\mathbf{x}$ is:
$$D_{\mathbf{v}}f(\mathbf{x})=\lim_{h\to 0}\frac{f(\mathbf{x}+h\mathbf{v})-f(\mathbf{x})}{h}.$$
If all the partial derivatives of $f$ exist and are continuous at $\mathbf{x}$, then they determine the directional derivative of $f$ in the direction $\mathbf{v}$ by the formula:
$$D_{\mathbf{v}}f(\mathbf{x})=\sum_{j=1}^{n}v_{j}\frac{\partial f}{\partial x_{j}}.$$
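The agreement of the limit definition with the sum formula can be checked on the running example; a short sympy sketch along the diagonal direction $\mathbf{v}=(1,1)$:

```python
import sympy as sp

x, y, h = sp.symbols('x y h')
f = x**2 + x*y + y**2
v = (1, 1)  # direction of the diagonal line y = x

# Limit definition of the directional derivative
limit_def = sp.limit((f.subs({x: x + h*v[0], y: y + h*v[1]}) - f)/h, h, 0)

# Sum formula: v1 * df/dx + v2 * df/dy
formula = v[0]*sp.diff(f, x) + v[1]*sp.diff(f, y)

assert sp.simplify(limit_def - formula) == 0  # both give 3*x + 3*y
```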
=== Total derivative and Jacobian matrix ===
When $f$ is a function from an open subset of $\mathbb{R}^{n}$ to $\mathbb{R}^{m}$, the directional derivative of $f$ in a chosen direction is the best linear approximation to $f$ at that point and in that direction. However, when $n>1$, no single directional derivative can give a complete picture of the behavior of $f$. The total derivative gives a complete picture by considering all directions at once. That is, for any vector $\mathbf{v}$ starting at $\mathbf{a}$, the linear approximation formula holds:
$$f(\mathbf{a}+\mathbf{v})\approx f(\mathbf{a})+f'(\mathbf{a})\mathbf{v}.$$
As with the single-variable derivative, $f'(\mathbf{a})$ is chosen so that the error in this approximation is as small as possible. The total derivative of $f$ at $\mathbf{a}$ is the unique linear transformation $f'(\mathbf{a})\colon\mathbb{R}^{n}\to\mathbb{R}^{m}$ such that
$$\lim_{\mathbf{h}\to 0}\frac{\lVert f(\mathbf{a}+\mathbf{h})-(f(\mathbf{a})+f'(\mathbf{a})\mathbf{h})\rVert}{\lVert\mathbf{h}\rVert}=0.$$
Here $\mathbf{h}$ is a vector in $\mathbb{R}^{n}$, so the norm in the denominator is the standard length on $\mathbb{R}^{n}$. However, $f'(\mathbf{a})\mathbf{h}$ is a vector in $\mathbb{R}^{m}$, and the norm in the numerator is the standard length on $\mathbb{R}^{m}$. If $\mathbf{v}$ is a vector starting at $\mathbf{a}$, then $f'(\mathbf{a})\mathbf{v}$ is called the pushforward of $\mathbf{v}$ by $f$.
If the total derivative exists at $\mathbf{a}$, then all the partial derivatives and directional derivatives of $f$ exist at $\mathbf{a}$, and for all $\mathbf{v}$, $f'(\mathbf{a})\mathbf{v}$ is the directional derivative of $f$ in the direction $\mathbf{v}$. If $f$ is written using coordinate functions, so that $f=(f_{1},f_{2},\dots,f_{m})$, then the total derivative can be expressed using the partial derivatives as a matrix. This matrix is called the Jacobian matrix of $f$ at $\mathbf{a}$:
$$f'(\mathbf{a})=\operatorname{Jac}_{\mathbf{a}}=\left(\frac{\partial f_{i}}{\partial x_{j}}\right)_{ij}.$$
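sympy computes Jacobian matrices directly via `Matrix.jacobian`; a minimal sketch for a map from $\mathbb{R}^{2}$ to $\mathbb{R}^{2}$ (polar-to-Cartesian coordinates, a standard illustration):

```python
import sympy as sp

r, theta = sp.symbols('r theta')
F = sp.Matrix([r*sp.cos(theta), r*sp.sin(theta)])  # f: (r, θ) ↦ (x, y)
J = F.jacobian([r, theta])
print(J)
# Matrix([[cos(theta), -r*sin(theta)],
#         [sin(theta),  r*cos(theta)]])
```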
== Generalizations ==
The concept of a derivative can be extended to many other settings. The common thread is that the derivative of a function at a point serves as a linear approximation of the function at that point.
An important generalization of the derivative concerns complex functions of complex variables, such as functions from (a domain in) the complex numbers $\mathbb{C}$ to $\mathbb{C}$. The notion of the derivative of such a function is obtained by replacing real variables with complex variables in the definition. If $\mathbb{C}$ is identified with $\mathbb{R}^{2}$ by writing a complex number $z$ as $x+iy$, then a differentiable function from $\mathbb{C}$ to $\mathbb{C}$ is certainly differentiable as a function from $\mathbb{R}^{2}$ to $\mathbb{R}^{2}$ (in the sense that its partial derivatives all exist), but the converse is not true in general: the complex derivative only exists if the real derivative is complex linear, and this imposes relations between the partial derivatives called the Cauchy–Riemann equations; see holomorphic functions.
Another generalization concerns functions between differentiable or smooth manifolds. Intuitively speaking, such a manifold $M$ is a space that can be approximated near each point $x$ by a vector space called its tangent space; the prototypical example is a smooth surface in $\mathbb{R}^{3}$. The derivative (or differential) of a (differentiable) map $f:M\to N$ between manifolds, at a point $x$ in $M$, is then a linear map from the tangent space of $M$ at $x$ to the tangent space of $N$ at $f(x)$. The derivative function becomes a map between the tangent bundles of $M$ and $N$. This definition is used in differential geometry.
Differentiation can also be defined for maps between vector spaces, such as Banach spaces; the relevant generalizations are the Gateaux derivative and the Fréchet derivative.
One deficiency of the classical derivative is that very many functions are not differentiable. Nevertheless, there is a way of extending the notion of the derivative so that all continuous functions and many other functions can be differentiated using a concept known as the weak derivative. The idea is to embed the continuous functions in a larger space called the space of distributions and only require that a function is differentiable "on average".
Properties of the derivative have inspired the introduction and study of many similar objects in algebra and topology; an example is differential algebra. Here, a derivation is an abstract operation, satisfying rules analogous to those of the derivative, defined on structures from abstract algebra such as rings, ideals, and fields.
The discrete equivalent of differentiation is finite differences. The study of differential calculus is unified with the calculus of finite differences in time scale calculus.
The arithmetic derivative is a function defined on the integers via their prime factorizations, by analogy with the product rule.
== See also ==
Covariant derivative
Derivation
Exterior derivative
Functional derivative
Integral
Lie derivative
== Notes ==
== References ==
== External links ==
"Derivative", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Khan Academy: "Newton, Leibniz, and Usain Bolt"
Weisstein, Eric W. "Derivative". MathWorld.
Online Derivative Calculator from Wolfram Alpha.
In calculus, the differential represents the principal part of the change in a function $y=f(x)$ with respect to changes in the independent variable. The differential $dy$ is defined by
$$dy=f'(x)\,dx,$$
where $f'(x)$ is the derivative of $f$ with respect to $x$, and $dx$ is an additional real variable (so that $dy$ is a function of $x$ and $dx$). The notation is such that the equation
$$dy=\frac{dy}{dx}\,dx$$
holds, where the derivative is represented in the Leibniz notation $dy/dx$, and this is consistent with regarding the derivative as the quotient of the differentials. One also writes
$$df(x)=f'(x)\,dx.$$
The precise meaning of the variables $dy$ and $dx$ depends on the context of the application and the required level of mathematical rigor. The domain of these variables may take on a particular geometrical significance if the differential is regarded as a particular differential form, or analytical significance if the differential is regarded as a linear approximation to the increment of a function. Traditionally, the variables $dx$ and $dy$ are considered to be very small (infinitesimal), and this interpretation is made rigorous in non-standard analysis.
== History and usage ==
The differential was first introduced via an intuitive or heuristic definition by Isaac Newton and furthered by Gottfried Leibniz, who thought of the differential $dy$ as an infinitely small (or infinitesimal) change in the value $y$ of the function, corresponding to an infinitely small change $dx$ in the function's argument $x$. For that reason, the instantaneous rate of change of $y$ with respect to $x$, which is the value of the derivative of the function, is denoted by the fraction
$$\frac{dy}{dx}$$
in what is called the Leibniz notation for derivatives. The quotient $dy/dx$ is not infinitely small; rather it is a real number.
The use of infinitesimals in this form was widely criticized, for instance by the famous pamphlet The Analyst by Bishop Berkeley. Augustin-Louis Cauchy (1823) defined the differential without appeal to the atomism of Leibniz's infinitesimals. Instead, Cauchy, following d'Alembert, inverted the logical order of Leibniz and his successors: the derivative itself became the fundamental object, defined as a limit of difference quotients, and the differentials were then defined in terms of it. That is, one was free to define the differential $dy$ by an expression
$$dy=f'(x)\,dx$$
in which $dy$ and $dx$ are simply new variables taking finite real values, not fixed infinitesimals as they had been for Leibniz.
According to Boyer (1959, p. 12), Cauchy's approach was a significant logical improvement over the infinitesimal approach of Leibniz because, instead of invoking the metaphysical notion of infinitesimals, the quantities $dy$ and $dx$ could now be manipulated in exactly the same manner as any other real quantities in a meaningful way. Cauchy's overall conceptual approach to differentials remains the standard one in modern analytical treatments, although the final word on rigor, a fully modern notion of the limit, was ultimately due to Karl Weierstrass.
In physical treatments, such as those applied to the theory of thermodynamics, the infinitesimal view still prevails. Courant & John (1999, p. 184) reconcile the physical use of infinitesimal differentials with the mathematical impossibility of them as follows. The differentials represent finite non-zero values that are smaller than the degree of accuracy required for the particular purpose for which they are intended. Thus "physical infinitesimals" need not appeal to a corresponding mathematical infinitesimal in order to have a precise sense.
Following twentieth-century developments in mathematical analysis and differential geometry, it became clear that the notion of the differential of a function could be extended in a variety of ways. In real analysis, it is more desirable to deal directly with the differential as the principal part of the increment of a function. This leads directly to the notion that the differential of a function at a point is a linear functional of an increment $\Delta x$. This approach allows the differential (as a linear map) to be developed for a variety of more sophisticated spaces, ultimately giving rise to such notions as the Fréchet or Gateaux derivative. Likewise, in differential geometry, the differential of a function at a point is a linear function of a tangent vector (an "infinitely small displacement"), which exhibits it as a kind of one-form: the exterior derivative of the function. In non-standard calculus, differentials are regarded as infinitesimals, which can themselves be put on a rigorous footing (see differential (infinitesimal)).
== Definition ==
The differential is defined in modern treatments of differential calculus as follows. The differential of a function $f(x)$ of a single real variable $x$ is the function $df$ of two independent real variables $x$ and $\Delta x$ given by
$$df(x,\Delta x)\ \stackrel{\mathrm{def}}{=}\ f'(x)\,\Delta x.$$
One or both of the arguments may be suppressed, i.e., one may see $df(x)$ or simply $df$. If $y=f(x)$, the differential may also be written as $dy$. Since $dx(x,\Delta x)=\Delta x$, it is conventional to write $dx=\Delta x$, so that the following equality holds:
$$df(x)=f'(x)\,dx$$
This notion of differential is broadly applicable when a linear approximation to a function is sought, in which the value of the increment $\Delta x$ is small enough. More precisely, if $f$ is a differentiable function at $x$, then the difference in $y$-values
$$\Delta y\ \stackrel{\mathrm{def}}{=}\ f(x+\Delta x)-f(x)$$
satisfies
$$\Delta y=f'(x)\,\Delta x+\varepsilon=df(x)+\varepsilon,$$
where the error $\varepsilon$ in the approximation satisfies $\varepsilon/\Delta x\to 0$ as $\Delta x\to 0$. In other words, one has the approximate identity
$$\Delta y\approx dy$$
in which the error can be made as small as desired relative to $\Delta x$ by constraining $\Delta x$ to be sufficiently small; that is to say,
$$\frac{\Delta y-dy}{\Delta x}\to 0$$
as $\Delta x\to 0$. For this reason, the differential of a function is known as the principal (linear) part in the increment of a function: the differential is a linear function of the increment $\Delta x$, and although the error $\varepsilon$ may be nonlinear, it tends to zero rapidly as $\Delta x$ tends to zero.
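The claim that the error $\varepsilon=\Delta y-dy$ vanishes faster than $\Delta x$ is easy to observe numerically; a small plain-Python sketch for $f(x)=x^{2}$ at $x=1$:

```python
f = lambda x: x**2
fprime = lambda x: 2*x
x = 1.0

for dx in (0.1, 0.01, 0.001):
    dy = fprime(x) * dx             # differential: principal linear part
    delta_y = f(x + dx) - f(x)      # actual increment
    print(dx, (delta_y - dy) / dx)  # ratio tends to 0 with dx
# For this f the ratio equals dx exactly, since Δy - dy = (Δx)^2
```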
== Differentials in several variables ==
Following Goursat (1904, I, §15), for functions of more than one independent variable,
$$y=f(x_{1},\dots,x_{n}),$$
the partial differential of $y$ with respect to any one of the variables $x_{i}$ is the principal part of the change in $y$ resulting from a change $dx_{i}$ in that one variable. The partial differential is therefore
$$\frac{\partial y}{\partial x_{i}}dx_{i},$$
involving the partial derivative of $y$ with respect to $x_{i}$. The sum of the partial differentials with respect to all of the independent variables is the total differential
$$dy=\frac{\partial y}{\partial x_{1}}dx_{1}+\cdots+\frac{\partial y}{\partial x_{n}}dx_{n},$$
which is the principal part of the change in $y$ resulting from changes in the independent variables $x_{i}$.
More precisely, in the context of multivariable calculus, following Courant (1937b), if $f$ is a differentiable function, then by the definition of differentiability, the increment
$$\begin{aligned}\Delta y&\ \stackrel{\mathrm{def}}{=}\ f(x_{1}+\Delta x_{1},\dots,x_{n}+\Delta x_{n})-f(x_{1},\dots,x_{n})\\&=\frac{\partial y}{\partial x_{1}}\Delta x_{1}+\cdots+\frac{\partial y}{\partial x_{n}}\Delta x_{n}+\varepsilon_{1}\Delta x_{1}+\cdots+\varepsilon_{n}\Delta x_{n}\end{aligned}$$
where the error terms $\varepsilon_{i}$ tend to zero as the increments $\Delta x_{i}$ jointly tend to zero. The total differential is then rigorously defined as
$$dy=\frac{\partial y}{\partial x_{1}}\Delta x_{1}+\cdots+\frac{\partial y}{\partial x_{n}}\Delta x_{n}.$$
Since, with this definition,
$$dx_{i}(\Delta x_{1},\dots,\Delta x_{n})=\Delta x_{i},$$
one has
$$dy=\frac{\partial y}{\partial x_{1}}\,dx_{1}+\cdots+\frac{\partial y}{\partial x_{n}}\,dx_{n}.$$
As in the case of one variable, the approximate identity
$$dy\approx\Delta y$$
holds, in which the total error can be made as small as desired relative to $\sqrt{\Delta x_{1}^{2}+\cdots+\Delta x_{n}^{2}}$ by confining attention to sufficiently small increments.
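The total differential can be formed symbolically by summing the partial differentials; a short sympy sketch for $y=x_{1}^{2}x_{2}$ (an arbitrary illustration, with `dx1`, `dx2` as formal symbols):

```python
import sympy as sp

x1, x2, dx1, dx2 = sp.symbols('x1 x2 dx1 dx2')
y = x1**2 * x2
dy = sp.diff(y, x1)*dx1 + sp.diff(y, x2)*dx2  # total differential
print(dy)  # 2*x1*x2*dx1 + x1**2*dx2
```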
=== Application of the total differential to error estimation ===
In measurement, the total differential is used in estimating the error $\Delta f$ of a function $f$ based on the errors $\Delta x,\Delta y,\ldots$ of the parameters $x,y,\ldots$. Assuming that the interval is short enough for the change to be approximately linear,
$$\Delta f(x)=f'(x)\,\Delta x,$$
and that all variables are independent, then for all variables,
$$\Delta f=f_{x}\Delta x+f_{y}\Delta y+\cdots$$
This is because the derivative $f_{x}$ with respect to the particular parameter $x$ gives the sensitivity of the function $f$ to a change in $x$, in particular the error $\Delta x$. As they are assumed to be independent, the analysis describes the worst-case scenario. The absolute values of the component errors are used, because after simple computation, the derivative may have a negative sign. From this principle the error rules of summation, multiplication etc. are derived, e.g.:
Let $f(a,b)=ab$; then
$$\frac{\Delta f}{f}=\frac{\Delta a}{a}+\frac{\Delta b}{b}.$$
That is to say, in multiplication, the total relative error is the sum of the relative errors of the parameters.
To illustrate how this depends on the function considered, consider the case where the function is $f(a,b)=a\ln b$ instead. Then, it can be computed that the error estimate is
$$\frac{\Delta f}{f}=\frac{\Delta a}{a}+\frac{\Delta b}{b\ln b}$$
with an extra $\ln b$ factor not found in the case of a simple product. This additional factor tends to make the error smaller, as the denominator $b\ln b$ is larger than a bare $b$.
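A numeric sketch (plain Python; the sample values and error magnitudes are arbitrary) comparing the two error estimates:

```python
import math

a, b = 2.0, 10.0
da, db = 0.01, 0.05  # measurement errors in a and b

# Relative error for f = a*b: da/a + db/b
rel_err_product = da/a + db/b

# Relative error for f = a*ln(b): da/a + db/(b*ln(b))
rel_err_alnb = da/a + db/(b*math.log(b))

print(rel_err_product)  # 0.01
print(rel_err_alnb)     # ≈ 0.00717, smaller because b*ln(b) > b once b > e
```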
== Higher-order differentials ==
Higher-order differentials of a function $y=f(x)$ of a single variable $x$ can be defined via:
$$d^{2}y=d(dy)=d(f'(x)dx)=(df'(x))dx=f''(x)\,(dx)^{2},$$
and, in general,
$$d^{n}y=f^{(n)}(x)\,(dx)^{n}.$$
Informally, this motivates Leibniz's notation for higher-order derivatives
$$f^{(n)}(x)=\frac{d^{n}f}{dx^{n}}.$$
When the independent variable $x$ itself is permitted to depend on other variables, then the expression becomes more complicated, as it must also include higher-order differentials in $x$ itself. Thus, for instance,
$$\begin{aligned}d^{2}y&=f''(x)\,(dx)^{2}+f'(x)\,d^{2}x\\d^{3}y&=f'''(x)\,(dx)^{3}+3f''(x)\,dx\,d^{2}x+f'(x)\,d^{3}x\end{aligned}$$
and so forth.
Similar considerations apply to defining higher-order differentials of functions of several variables. For example, if $f$ is a function of two variables $x$ and $y$, then
$$d^{n}f=\sum_{k=0}^{n}\binom{n}{k}\frac{\partial^{n}f}{\partial x^{k}\,\partial y^{n-k}}(dx)^{k}(dy)^{n-k},$$
where $\binom{n}{k}$ is a binomial coefficient. In more variables, an analogous expression holds, but with an appropriate multinomial expansion rather than binomial expansion.
Higher-order differentials in several variables also become more complicated when the independent variables are themselves allowed to depend on other variables. For instance, for a function $f$ of $x$ and $y$ which are allowed to depend on auxiliary variables, one has
$$d^{2}f=\left(\frac{\partial^{2}f}{\partial x^{2}}(dx)^{2}+2\frac{\partial^{2}f}{\partial x\,\partial y}dx\,dy+\frac{\partial^{2}f}{\partial y^{2}}(dy)^{2}\right)+\frac{\partial f}{\partial x}d^{2}x+\frac{\partial f}{\partial y}d^{2}y.$$
Because of this notational awkwardness, the use of higher order differentials was roundly criticized by Hadamard (1935), who concluded:
Enfin, que signifie ou que représente l'égalité
$$d^{2}z=r\,dx^{2}+2s\,dx\,dy+t\,dy^{2}\,?$$
A mon avis, rien du tout.
That is: Finally, what is meant, or represented, by the equality [...]? In my opinion, nothing at all. In spite of this skepticism, higher order differentials did emerge as an important tool in analysis.
In these contexts, the n-th order differential of the function f applied to an increment Δx is defined by
$$d^{n}f(x,\Delta x)=\left.\frac{d^{n}}{dt^{n}}f(x+t\Delta x)\right|_{t=0}$$
or an equivalent expression, such as
$$\lim_{t\to 0}\frac{\Delta_{t\Delta x}^{n}f}{t^{n}},$$
where $\Delta_{t\Delta x}^{n}f$ is an $n$th forward difference with increment $t\Delta x$.
This definition makes sense as well if f is a function of several variables (for simplicity taken here as a vector argument). Then the n-th differential defined in this way is a homogeneous function of degree n in the vector increment Δx. Furthermore, the Taylor series of f at the point x is given by
$$f(x+\Delta x)\sim f(x)+df(x,\Delta x)+\frac{1}{2}d^{2}f(x,\Delta x)+\cdots+\frac{1}{n!}d^{n}f(x,\Delta x)+\cdots$$
The higher order Gateaux derivative generalizes these considerations to infinite dimensional spaces.
== Properties ==
A number of properties of the differential follow in a straightforward manner from the corresponding properties of the derivative, partial derivative, and total derivative. These include:
Linearity: For constants $a$ and $b$ and differentiable functions $f$ and $g$,
$$d(af+bg)=a\,df+b\,dg.$$
Product rule: For two differentiable functions $f$ and $g$,
$$d(fg)=f\,dg+g\,df.$$
An operation $d$ with these two properties is known in abstract algebra as a derivation. They imply the power rule
$$d(f^{n})=nf^{n-1}\,df.$$
In addition, various forms of the chain rule hold, in increasing level of generality:
If $y=f(u)$ is a differentiable function of the variable $u$ and $u=g(x)$ is a differentiable function of $x$, then
$$dy=f'(u)\,du=f'(g(x))g'(x)\,dx.$$
If $y=f(x_{1},\dots,x_{n})$ and all of the variables $x_{1},\dots,x_{n}$ depend on another variable $t$, then by the chain rule for partial derivatives, one has
$$\begin{aligned}dy=\frac{dy}{dt}dt&=\frac{\partial y}{\partial x_{1}}dx_{1}+\cdots+\frac{\partial y}{\partial x_{n}}dx_{n}\\&=\frac{\partial y}{\partial x_{1}}\frac{dx_{1}}{dt}\,dt+\cdots+\frac{\partial y}{\partial x_{n}}\frac{dx_{n}}{dt}\,dt.\end{aligned}$$
Heuristically, the chain rule for several variables can itself be understood by dividing through both sides of this equation by the infinitely small quantity dt.
More general analogous expressions hold, in which the intermediate variables xi depend on more than one variable.
== General formulation ==
A consistent notion of differential can be developed for a function $f:\mathbb{R}^{n}\to\mathbb{R}^{m}$ between two Euclidean spaces. Let $\mathbf{x},\Delta\mathbf{x}\in\mathbb{R}^{n}$ be a pair of Euclidean vectors. The increment in the function $f$ is
$$\Delta f=f(\mathbf{x}+\Delta\mathbf{x})-f(\mathbf{x}).$$
If there exists an $m\times n$ matrix $A$ such that
$$\Delta f=A\,\Delta\mathbf{x}+\|\Delta\mathbf{x}\|{\boldsymbol{\varepsilon}}$$
in which the vector ε → 0 as Δx → 0, then f is by definition differentiable at the point x. The matrix A is sometimes known as the Jacobian matrix, and the linear transformation that associates to the increment Δx ∈ Rn the vector AΔx ∈ Rm is, in this general setting, known as the differential df(x) of f at the point x. This is precisely the Fréchet derivative, and the same construction can be made to work for a function between any Banach spaces.
Another fruitful point of view is to define the differential directly as a kind of directional derivative:
$$df(\mathbf{x},\mathbf{h})=\lim_{t\to 0}\frac{f(\mathbf{x}+t\mathbf{h})-f(\mathbf{x})}{t}=\left.\frac{d}{dt}f(\mathbf{x}+t\mathbf{h})\right|_{t=0},$$
which is the approach already taken for defining higher order differentials (and is most nearly the definition set forth by Cauchy). If t represents time and x position, then h represents a velocity instead of a displacement as we have heretofore regarded it. This yields yet another refinement of the notion of differential: that it should be a linear function of a kinematic velocity. The set of all velocities through a given point of space is known as the tangent space, and so df gives a linear function on the tangent space: a differential form. With this interpretation, the differential of f is known as the exterior derivative, and has broad application in differential geometry because the notion of velocities and the tangent space makes sense on any differentiable manifold. If, in addition, the output value of f also represents a position (in a Euclidean space), then a dimensional analysis confirms that the output value of df must be a velocity. If one treats the differential in this manner, then it is known as the pushforward since it "pushes" velocities from a source space into velocities in a target space.
== Other approaches ==
Although the notion of having an infinitesimal increment dx is not well-defined in modern mathematical analysis, a variety of techniques exist for defining the infinitesimal differential so that the differential of a function can be handled in a manner that does not clash with the Leibniz notation. These include:
Defining the differential as a kind of differential form, specifically the exterior derivative of a function. The infinitesimal increments are then identified with vectors in the tangent space at a point. This approach is popular in differential geometry and related fields, because it readily generalizes to mappings between differentiable manifolds.
Differentials as nilpotent elements of commutative rings. This approach is popular in algebraic geometry.
Differentials in smooth models of set theory. This approach is known as synthetic differential geometry or smooth infinitesimal analysis and is closely related to the algebraic geometric approach, except that ideas from topos theory are used to hide the mechanisms by which nilpotent infinitesimals are introduced.
Differentials as infinitesimals in hyperreal number systems, which are extensions of the real numbers which contain invertible infinitesimals and infinitely large numbers. This is the approach of nonstandard analysis pioneered by Abraham Robinson.
== Examples and applications ==
Differentials may be effectively used in numerical analysis to study the propagation of experimental errors in a calculation, and thus the overall numerical stability of a problem (Courant 1937a). Suppose that the variable x represents the outcome of an experiment and y is the result of a numerical computation applied to x. The question is to what extent errors in the measurement of x influence the outcome of the computation of y. If the x is known to within Δx of its true value, then Taylor's theorem gives the following estimate on the error Δy in the computation of y:
$$\Delta y=f'(x)\Delta x+\frac{(\Delta x)^{2}}{2}f''(\xi)$$
where ξ = x + θΔx for some 0 < θ < 1. If Δx is small, then the second order term is negligible, so that Δy is, for practical purposes, well-approximated by dy = f'(x) Δx.
The differential is often useful for rewriting a differential equation
$$\frac{dy}{dx}=g(x)$$
in the form
$$dy=g(x)\,dx,$$
in particular when one wants to separate the variables.
== Notes ==
== See also ==
Notation for differentiation
== References ==
Boyer, Carl B. (1959), The history of the calculus and its conceptual development, New York: Dover Publications, MR 0124178.
Cauchy, Augustin-Louis (1823), Résumé des Leçons données à l'Ecole royale polytechnique sur les applications du calcul infinitésimal, archived from the original on 2007-07-08, retrieved 2009-08-19.
Courant, Richard (1937a), Differential and integral calculus. Vol. I, Wiley Classics Library, New York: John Wiley & Sons (published 1988), ISBN 978-0-471-60842-4, MR 1009558.
Courant, Richard (1937b), Differential and integral calculus. Vol. II, Wiley Classics Library, New York: John Wiley & Sons (published 1988), ISBN 978-0-471-60840-0, MR 1009559.
Courant, Richard; John, Fritz (1999), Introduction to Calculus and Analysis Volume 1, Classics in Mathematics, Berlin, New York: Springer-Verlag, ISBN 3-540-65058-X, MR 1746554
Eisenbud, David; Harris, Joe (1998), The Geometry of Schemes, Springer-Verlag, ISBN 0-387-98637-5.
Fréchet, Maurice (1925), "La notion de différentielle dans l'analyse générale", Annales Scientifiques de l'École Normale Supérieure, Série 3, 42: 293–323, doi:10.24033/asens.766, ISSN 0012-9593, MR 1509268.
Goursat, Édouard (1904), A course in mathematical analysis: Vol 1: Derivatives and differentials, definite integrals, expansion in series, applications to geometry, E. R. Hedrick, New York: Dover Publications (published 1959), MR 0106155.
Hadamard, Jacques (1935), "La notion de différentiel dans l'enseignement", Mathematical Gazette, XIX (236): 341–342, doi:10.2307/3606323, JSTOR 3606323.
Hardy, Godfrey Harold (1908), A Course of Pure Mathematics, Cambridge University Press, ISBN 978-0-521-09227-2.
Hille, Einar; Phillips, Ralph S. (1974), Functional analysis and semi-groups, Providence, R.I.: American Mathematical Society, MR 0423094.
Itô, Kiyosi (1993), Encyclopedic Dictionary of Mathematics (2nd ed.), MIT Press, ISBN 978-0-262-59020-4.
Kline, Morris (1977), "Chapter 13: Differentials and the law of the mean", Calculus: An intuitive and physical approach, John Wiley and Sons.
Kline, Morris (1972), Mathematical thought from ancient to modern times (3rd ed.), Oxford University Press (published 1990), ISBN 978-0-19-506136-9
Keisler, H. Jerome (1986), Elementary Calculus: An Infinitesimal Approach (2nd ed.).
Kock, Anders (2006), Synthetic Differential Geometry (PDF) (2nd ed.), Cambridge University Press.
Moerdijk, I.; Reyes, G.E. (1991), Models for Smooth Infinitesimal Analysis, Springer-Verlag.
Robinson, Abraham (1996), Non-standard analysis, Princeton University Press, ISBN 978-0-691-04490-3.
Tolstov, G.P. (2001) [1994], "Differential", Encyclopedia of Mathematics, EMS Press.
== External links ==
Differential Of A Function at Wolfram Demonstrations Project
In mathematics, Kähler differentials provide an adaptation of differential forms to arbitrary commutative rings or schemes. The notion was introduced by Erich Kähler in the 1930s. It was adopted as standard in commutative algebra and algebraic geometry somewhat later, once the need was felt to adapt methods from calculus and geometry over the complex numbers to contexts where such methods are not available.
== Definition ==
Let $R$ and $S$ be commutative rings and $\varphi:R\to S$ be a ring homomorphism. An important example is for $R$ a field and $S$ a unital algebra over $R$ (such as the coordinate ring of an affine variety). Kähler differentials formalize the observation that the derivatives of polynomials are again polynomial. In this sense, differentiation is a notion which can be expressed in purely algebraic terms. This observation can be turned into a definition of the module $\Omega_{S/R}$ of differentials in different, but equivalent, ways.
=== Definition using derivations ===
An $R$-linear derivation on $S$ is an $R$-module homomorphism $d:S\to M$ to an $S$-module $M$ satisfying the Leibniz rule
$$d(fg)=f\,dg+g\,df$$
(it automatically follows from this definition that the image of $R$ is in the kernel of $d$). The module of Kähler differentials is defined as the $S$-module $\Omega_{S/R}$ for which there is a universal derivation $d:S\to\Omega_{S/R}$. As with other universal properties, this means that $d$ is the best possible derivation in the sense that any other derivation may be obtained from it by composition with an $S$-module homomorphism. In other words, composition with $d$ provides, for every $S$-module $M$, an $S$-module isomorphism
$$\operatorname{Hom}_{S}(\Omega_{S/R},M)\xrightarrow{\;\cong\;}\operatorname{Der}_{R}(S,M).$$
One construction of ΩS/R and d proceeds by constructing a free S-module with one formal generator ds for each s in S, and imposing the relations
dr = 0,
d(s + t) = ds + dt,
d(st) = s dt + t ds,
for all r in R and all s and t in S. The universal derivation sends s to ds. The relations imply that the universal derivation is a homomorphism of R-modules.
=== Definition using the augmentation ideal ===
Another construction proceeds by letting $I$ be the ideal in the tensor product $S\otimes_{R}S$ defined as the kernel of the multiplication map
$$\begin{cases}S\otimes_{R}S\to S\\\sum s_{i}\otimes t_{i}\mapsto\sum s_{i}\cdot t_{i}\end{cases}$$
Then the module of Kähler differentials of $S$ can be equivalently defined by
$$\Omega_{S/R}=I/I^{2},$$
and the universal derivation is the homomorphism $d$ defined by
$$ds=1\otimes s-s\otimes 1.$$
This construction is equivalent to the previous one because $I$ is the kernel of the projection
$$\begin{cases}S\otimes_{R}S\to S\otimes_{R}R\\\sum s_{i}\otimes t_{i}\mapsto\sum s_{i}\cdot t_{i}\otimes 1\end{cases}$$
Thus we have:
$$S\otimes_{R}S\equiv I\oplus S\otimes_{R}R.$$
Then $S\otimes_{R}S/S\otimes_{R}R$ may be identified with $I$ by the map induced by the complementary projection
$$\sum s_{i}\otimes t_{i}\mapsto\sum s_{i}\otimes t_{i}-\sum s_{i}\cdot t_{i}\otimes 1.$$
This identifies I with the S-module generated by the formal generators ds for s in S, subject to d being a homomorphism of R-modules which sends each element of R to zero. Taking the quotient by I2 precisely imposes the Leibniz rule.
== Examples and basic facts ==
For any commutative ring $R$, the Kähler differentials of the polynomial ring $S=R[t_{1},\dots,t_{n}]$ are a free $S$-module of rank $n$ generated by the differentials of the variables:
$$\Omega_{R[t_{1},\dots,t_{n}]/R}^{1}=\bigoplus_{i=1}^{n}R[t_{1},\dots,t_{n}]\,dt_{i}.$$
Kähler differentials are compatible with extension of scalars, in the sense that for a second $R$-algebra $R'$ and $S'=S\otimes_{R}R'$, there is an isomorphism
$$\Omega_{S/R}\otimes_{S}S'\cong\Omega_{S'/R'}.$$
As a particular case of this, Kähler differentials are compatible with localizations, meaning that if $W$ is a multiplicative set in $S$, then there is an isomorphism
$$W^{-1}\Omega_{S/R}\cong\Omega_{W^{-1}S/R}.$$
Given two ring homomorphisms $R\to S\to T$, there is a short exact sequence of $T$-modules
$$\Omega_{S/R}\otimes_{S}T\to\Omega_{T/R}\to\Omega_{T/S}\to 0.$$
If $T=S/I$ for some ideal $I$, the term $\Omega_{T/S}$ vanishes and the sequence can be continued at the left as follows:
$$I/I^{2}\xrightarrow{\;[f]\mapsto df\otimes 1\;}\Omega_{S/R}\otimes_{S}T\to\Omega_{T/R}\to 0.$$
A generalization of these two short exact sequences is provided by the cotangent complex.
The latter sequence and the above computation for the polynomial ring allow the computation of the Kähler differentials of finitely generated $R$-algebras $T=R[t_{1},\ldots,t_{n}]/(f_{1},\ldots,f_{m})$. Briefly, these are generated by the differentials of the variables and have relations coming from the differentials of the equations. For example, for a single polynomial in a single variable,
$$\Omega_{(R[t]/(f))/R}\cong\left(R[t]\,dt\otimes R[t]/(f)\right)/(df)\cong R[t]/(f,df/dt)\,dt.$$
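To make this concrete, here is a worked instance (assuming $k$ is a field of characteristic zero): take $R=k$ and $f=t^{2}$, so that $S=k[t]/(t^{2})$ is the ring of dual numbers. Since $df/dt=2t$ and the ideal $(t^{2},2t)$ equals $(t)$ in characteristic zero,
$$\Omega_{(k[t]/(t^{2}))/k}\;\cong\;k[t]/(t^{2},\,2t)\,dt\;=\;k[t]/(t)\,dt\;\cong\;k\,dt,$$
a one-dimensional $k$-vector space: $dt$ survives even though $t$ is nilpotent, while the relation $d(t^{2})=2t\,dt$ is imposed.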
== Kähler differentials for schemes ==
Because Kähler differentials are compatible with localization, they may be constructed on a general scheme by performing either of the two definitions above on affine open subschemes and gluing. However, the second definition has a geometric interpretation that globalizes immediately. In this interpretation, $I$ represents the ideal defining the diagonal in the fiber product of $\operatorname{Spec}(S)$ with itself over $\operatorname{Spec}(S)\to\operatorname{Spec}(R)$. This construction therefore has a more geometric flavor, in the sense that the notion of first infinitesimal neighbourhood of the diagonal is thereby captured, via functions vanishing modulo functions vanishing at least to second order (see cotangent space for related notions). Moreover, it extends to a general morphism of schemes $f:X\to Y$ by setting $\mathcal{I}$ to be the ideal of the diagonal in the fiber product $X\times_{Y}X$. The cotangent sheaf $\Omega_{X/Y}=\mathcal{I}/\mathcal{I}^{2}$, together with the derivation $d:\mathcal{O}_{X}\to\Omega_{X/Y}$ defined analogously to before, is universal among $f^{-1}\mathcal{O}_{Y}$-linear derivations of $\mathcal{O}_{X}$-modules. If $U$ is an open affine subscheme of $X$ whose image in $Y$ is contained in an open affine subscheme $V$, then the cotangent sheaf restricts to a sheaf on $U$ which is similarly universal. It is therefore the sheaf associated to the module of Kähler differentials for the rings underlying $U$ and $V$.
Similar to the commutative algebra case, there exist exact sequences associated to morphisms of schemes. Given morphisms $f:X\to Y$ and $g:Y\to Z$ of schemes, there is an exact sequence of sheaves on $X$:
$$f^{*}\Omega_{Y/Z}\to\Omega_{X/Z}\to\Omega_{X/Y}\to 0$$
Also, if $X\subset Y$ is a closed subscheme given by the ideal sheaf $\mathcal{I}$, then $\Omega_{X/Y}=0$ and there is an exact sequence of sheaves on $X$:
$$\mathcal{I}/\mathcal{I}^{2}\to\Omega_{Y/Z}|_{X}\to\Omega_{X/Z}\to 0$$
=== Examples ===
==== Finite separable field extensions ====
If
K
/
k
{\displaystyle K/k}
is a finite field extension, then
Ω
K
/
k
1
=
0
{\displaystyle \Omega _{K/k}^{1}=0}
if and only if
K
/
k
{\displaystyle K/k}
is separable. Consequently, if
K
/
k
{\displaystyle K/k}
is a finite separable field extension and
π
:
Y
→
Spec
(
K
)
{\displaystyle \pi :Y\to \operatorname {Spec} (K)}
is a smooth variety (or scheme), then the relative cotangent sequence
π
∗
Ω
K
/
k
1
→
Ω
Y
/
k
1
→
Ω
Y
/
K
1
→
0
{\displaystyle \pi ^{*}\Omega _{K/k}^{1}\to \Omega _{Y/k}^{1}\to \Omega _{Y/K}^{1}\to 0}
proves
Ω
Y
/
k
1
≅
Ω
Y
/
K
1
{\displaystyle \Omega _{Y/k}^{1}\cong \Omega _{Y/K}^{1}}
.
==== Cotangent modules of a projective variety ====
Given a projective scheme $X\in\operatorname{Sch}/k$, its cotangent sheaf can be computed from the sheafification of the cotangent module on the underlying graded algebra. For example, consider the complex curve
$$\operatorname{Proj}\left(\frac{\mathbb{C}[x,y,z]}{(x^{n}+y^{n}-z^{n})}\right)=\operatorname{Proj}(R);$$
then we can compute the cotangent module as
$$\Omega_{R/\mathbb{C}}=\frac{R\cdot dx\oplus R\cdot dy\oplus R\cdot dz}{nx^{n-1}dx+ny^{n-1}dy-nz^{n-1}dz}.$$
Then,
$$\Omega_{X/\mathbb{C}}=\widetilde{\Omega_{R/\mathbb{C}}}.$$
==== Morphisms of schemes ====
Consider the morphism
$$X=\operatorname{Spec}\left(\frac{\mathbb{C}[t,x,y]}{(xy-t)}\right)=\operatorname{Spec}(R)\to\operatorname{Spec}(\mathbb{C}[t])=Y$$
in $\operatorname{Sch}/\mathbb{C}$. Then, using the first sequence we see that
$$\widetilde{R\cdot dt}\to\widetilde{\frac{R\cdot dt\oplus R\cdot dx\oplus R\cdot dy}{y\,dx+x\,dy-dt}}\to\Omega_{X/Y}\to 0,$$
hence
$$\Omega_{X/Y}=\widetilde{\frac{R\cdot dx\oplus R\cdot dy}{y\,dx+x\,dy}}.$$
== Higher differential forms and algebraic de Rham cohomology ==
=== de Rham complex ===
As before, fix a map $X\to Y$. Differential forms of higher degree are defined as the exterior powers (over $\mathcal{O}_{X}$),
$$\Omega_{X/Y}^{n}:=\bigwedge^{n}\Omega_{X/Y}.$$
The derivation $\mathcal{O}_{X}\to\Omega_{X/Y}$ extends in a natural way to a sequence of maps
$$0\to\mathcal{O}_{X}\xrightarrow{d}\Omega_{X/Y}^{1}\xrightarrow{d}\Omega_{X/Y}^{2}\xrightarrow{d}\cdots$$
satisfying $d\circ d=0$. This is a cochain complex known as the de Rham complex.
The de Rham complex enjoys an additional multiplicative structure, the wedge product
$$\Omega_{X/Y}^{n}\otimes\Omega_{X/Y}^{m}\to\Omega_{X/Y}^{n+m}.$$
This turns the de Rham complex into a commutative differential graded algebra. It also has a coalgebra structure inherited from the one on the exterior algebra.
=== de Rham cohomology ===
The hypercohomology of the de Rham complex of sheaves is called the algebraic de Rham cohomology of $X$ over $Y$ and is denoted by $H_{\text{dR}}^{n}(X/Y)$ or just $H_{\text{dR}}^{n}(X)$ if $Y$ is clear from the context. (In many situations, $Y$ is the spectrum of a field of characteristic zero.) Algebraic de Rham cohomology was introduced by Grothendieck (1966a). It is closely related to crystalline cohomology.
As is familiar from coherent cohomology of other quasi-coherent sheaves, the computation of de Rham cohomology is simplified when $X=\operatorname{Spec}S$ and $Y=\operatorname{Spec}R$ are affine schemes. In this case, because affine schemes have no higher cohomology, $H_{\text{dR}}^{n}(X/Y)$ can be computed as the cohomology of the complex of abelian groups
$$0\to S\xrightarrow{d}\Omega_{S/R}^{1}\xrightarrow{d}\Omega_{S/R}^{2}\xrightarrow{d}\cdots$$
which is, termwise, the global sections of the sheaves $\Omega_{X/Y}^{r}$.
To take a very particular example, suppose that $X=\operatorname{Spec}\mathbb{Q}\left[x,x^{-1}\right]$ is the multiplicative group over $\mathbb{Q}$. Because this is an affine scheme, hypercohomology reduces to ordinary cohomology. The algebraic de Rham complex is
$$\mathbb{Q}[x,x^{-1}]\xrightarrow{d}\mathbb{Q}[x,x^{-1}]\,dx.$$
The differential $d$ obeys the usual rules of calculus, meaning
$$d(x^{n})=nx^{n-1}\,dx.$$
The kernel and cokernel compute algebraic de Rham cohomology, so
H
dR
0
(
X
)
=
Q
H
dR
1
(
X
)
=
Q
⋅
x
−
1
d
x
{\displaystyle {\begin{aligned}H_{\text{dR}}^{0}(X)&=\mathbb {Q} \\H_{\text{dR}}^{1}(X)&=\mathbb {Q} \cdot x^{-1}dx\end{aligned}}}
All other algebraic de Rham cohomology groups are zero. By way of comparison, the algebraic de Rham cohomology groups of {\displaystyle Y=\operatorname {Spec} \mathbb {F} _{p}\left[x,x^{-1}\right]} are much larger, namely,
{\displaystyle {\begin{aligned}H_{\text{dR}}^{0}(Y)&=\bigoplus _{k\in \mathbb {Z} }\mathbb {F} _{p}\cdot x^{kp}\\H_{\text{dR}}^{1}(Y)&=\bigoplus _{k\in \mathbb {Z} }\mathbb {F} _{p}\cdot x^{kp-1}\,dx\end{aligned}}}
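The difference stems from the failure of the usual calculus rules in characteristic p: {\displaystyle d(x^{kp})=kpx^{kp-1}\,dx=0} for every integer k, so each power {\displaystyle x^{kp}} contributes to the kernel, while the corresponding forms {\displaystyle x^{kp-1}\,dx} are never hit by d.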
Since the Betti numbers of these cohomology groups are not what is expected, crystalline cohomology was developed to remedy this issue; it defines a Weil cohomology theory over finite fields.
=== Grothendieck's comparison theorem ===
If X is a smooth complex algebraic variety, there is a natural comparison map of complexes of sheaves
{\displaystyle \Omega _{X/\mathbb {C} }^{\bullet }(-)\to \Omega _{X^{\text{an}}}^{\bullet }((-)^{\text{an}})}
between the algebraic de Rham complex and the smooth de Rham complex defined in terms of (complex-valued) differential forms on {\displaystyle X^{\text{an}}}, the complex manifold associated to X. Here, {\textstyle (-)^{\text{an}}} denotes the complex analytification functor. This map is far from being an isomorphism. Nonetheless, Grothendieck (1966a) showed that the comparison map induces an isomorphism
{\displaystyle H_{\text{dR}}^{\ast }(X/\mathbb {C} )\cong H_{\text{dR}}^{\ast }(X^{\text{an}})}
from algebraic to smooth de Rham cohomology (and thus to singular cohomology {\textstyle H_{\text{sing}}^{*}(X^{\text{an}};\mathbb {C} )} by de Rham's theorem). In particular, if X is a smooth affine algebraic variety embedded in {\textstyle \mathbb {C} ^{n}}, then the inclusion of the subcomplex of algebraic differential forms into that of all smooth forms on X is a quasi-isomorphism. For example, if {\displaystyle X=\{(w,z)\in \mathbb {C} ^{2}:wz=1\}}, then as shown above, the computation of algebraic de Rham cohomology gives explicit generators {\textstyle \{1,z^{-1}dz\}} for {\displaystyle H_{\text{dR}}^{0}(X/\mathbb {C} )} and {\displaystyle H_{\text{dR}}^{1}(X/\mathbb {C} )}, respectively, while all other cohomology groups vanish. Since X is homotopy equivalent to a circle, this is as predicted by Grothendieck's theorem.
Counter-examples in the singular case can be found with non-Du Bois singularities such as the graded ring {\displaystyle k[x,y]/(y^{2}-x^{3})} where {\displaystyle \deg(y)=3} and {\displaystyle \deg(x)=2}. Other counterexamples can be found in algebraic plane curves with isolated singularities whose Milnor and Tjurina numbers are not equal.
A proof of Grothendieck's theorem using the concept of a mixed Weil cohomology theory was given by Cisinski & Déglise (2013).
== Applications ==
=== Canonical divisor ===
If X is a smooth variety over a field k, then {\displaystyle \Omega _{X/k}} is a vector bundle (i.e., a locally free {\displaystyle {\mathcal {O}}_{X}}-module) of rank equal to the dimension of X. This implies, in particular, that
{\displaystyle \omega _{X/k}:=\bigwedge ^{\dim X}\Omega _{X/k}}
is a line bundle or, equivalently, a divisor. It is referred to as the canonical divisor. The canonical divisor is, as it turns out, a dualizing complex and therefore appears in various important theorems in algebraic geometry such as Serre duality or Verdier duality.
=== Classification of algebraic curves ===
The geometric genus of a smooth algebraic variety X of dimension d over a field k is defined as the dimension
{\displaystyle g:=\dim H^{0}(X,\Omega _{X/k}^{d}).}
For curves, this purely algebraic definition agrees with the topological definition (for {\displaystyle k=\mathbb {C} }) as the "number of handles" of the Riemann surface associated to X. There is a rather sharp trichotomy of geometric and arithmetic properties depending on the genus of a curve: g = 0 (rational curves), g = 1 (elliptic curves), and g greater than 1 (hyperbolic Riemann surfaces, including hyperelliptic curves).
=== Tangent bundle and Riemann–Roch theorem ===
The tangent bundle of a smooth variety X is, by definition, the dual of the cotangent sheaf {\displaystyle \Omega _{X/k}}. The Riemann–Roch theorem and its far-reaching generalization, the Grothendieck–Riemann–Roch theorem, contain as a crucial ingredient the Todd class of the tangent bundle.
=== Unramified and smooth morphisms ===
The sheaf of differentials is related to various algebro-geometric notions. A morphism {\displaystyle f:X\to Y} of schemes is unramified if and only if {\displaystyle \Omega _{X/Y}} is zero. A special case of this assertion is that for a field k, {\displaystyle K:=k[t]/f} is separable over k iff {\displaystyle \Omega _{K/k}=0}, which can also be read off the above computation.
A morphism f of finite type is a smooth morphism if it is flat and if {\displaystyle \Omega _{X/Y}} is a locally free {\displaystyle {\mathcal {O}}_{X}}-module of appropriate rank. The computation of {\displaystyle \Omega _{R[t_{1},\ldots ,t_{n}]/R}} above shows that the projection from affine space {\displaystyle \mathbb {A} _{R}^{n}\to \operatorname {Spec} (R)} is smooth.
=== Periods ===
Periods are, broadly speaking, integrals of certain arithmetically defined differential forms. The simplest example of a period is {\displaystyle 2\pi i}, which arises as
{\displaystyle \int _{S^{1}}{\frac {dz}{z}}=2\pi i.}
Algebraic de Rham cohomology is used to construct periods as follows: For an algebraic variety X defined over {\displaystyle \mathbb {Q} }, the above-mentioned compatibility with base change yields a natural isomorphism
{\displaystyle H_{\text{dR}}^{n}(X/\mathbb {Q} )\otimes _{\mathbb {Q} }\mathbb {C} =H_{\text{dR}}^{n}(X\otimes _{\mathbb {Q} }\mathbb {C} /\mathbb {C} ).}
On the other hand, the right-hand cohomology group is isomorphic to the de Rham cohomology of the complex manifold {\displaystyle X^{\text{an}}} associated to X, denoted here {\displaystyle H_{\text{dR}}^{n}(X^{\text{an}}).} Yet another classical result, de Rham's theorem, asserts an isomorphism of the latter cohomology group with singular cohomology (or sheaf cohomology) with complex coefficients, {\displaystyle H^{n}(X^{\text{an}},\mathbb {C} )}, which by the universal coefficient theorem is in turn isomorphic to {\displaystyle H^{n}(X^{\text{an}},\mathbb {Q} )\otimes _{\mathbb {Q} }\mathbb {C} .}
Composing these isomorphisms yields two rational vector spaces which, after tensoring with {\displaystyle \mathbb {C} }, become isomorphic. Choosing bases of these rational subspaces (also called lattices), the determinant of the base-change matrix is a complex number, well defined up to multiplication by a rational number. Such numbers are periods.
=== Algebraic number theory ===
In algebraic number theory, Kähler differentials may be used to study the ramification in an extension of algebraic number fields. If L / K is a finite extension with rings of integers R and S respectively, then the different ideal δL/K, which encodes the ramification data, is the annihilator of the R-module ΩR/S:
{\displaystyle \delta _{L/K}=\{x\in R:x\,dy=0{\text{ for all }}y\in R\}.}
== Related notions ==
Hochschild homology is a homology theory for associative rings that turns out to be closely related to Kähler differentials. This is because of the Hochschild-Kostant-Rosenberg theorem, which states that the Hochschild homology {\displaystyle HH_{\bullet }(R)} of the algebra of a smooth variety is isomorphic to the de Rham complex {\displaystyle \Omega _{R/k}^{\bullet }} for {\displaystyle k} a field of characteristic {\displaystyle 0}. A derived enhancement of this theorem states that the Hochschild homology of a differential graded algebra is isomorphic to the derived de Rham complex.
The de Rham–Witt complex is, in very rough terms, an enhancement of the de Rham complex for the ring of Witt vectors.
== Notes ==
== References ==
Cisinski, Denis-Charles; Déglise, Frédéric (2013), "Mixed Weil cohomologies", Advances in Mathematics, 230 (1): 55–130, arXiv:0712.3291, doi:10.1016/j.aim.2011.10.021
Grothendieck, Alexander (1966a), "On the de Rham cohomology of algebraic varieties", Publications Mathématiques de l'IHÉS, 29 (29): 95–103, doi:10.1007/BF02684807, ISSN 0073-8301, MR 0199194, S2CID 123434721 (letter to Michael Atiyah, October 14, 1963)
Grothendieck, Alexander (1966b), Letter to John Tate (PDF)
Grothendieck, Alexander (1968), "Crystals and the de Rham cohomology of schemes" (PDF), in Giraud, Jean; Grothendieck, Alexander; Kleiman, Steven L.; et al. (eds.), Dix Exposés sur la Cohomologie des Schémas, Advanced studies in pure mathematics, vol. 3, Amsterdam: North-Holland, pp. 306–358, MR 0269663
Johnson, James (1969), "Kähler differentials and differential algebra", Annals of Mathematics, 89 (1): 92–98, doi:10.2307/1970810, JSTOR 1970810, Zbl 0179.34302
Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157
Matsumura, Hideyuki (1986), Commutative ring theory, Cambridge University Press
Neukirch, Jürgen (1999), Algebraische Zahlentheorie, Grundlehren der mathematischen Wissenschaften, vol. 322, Berlin: Springer-Verlag, ISBN 978-3-540-65399-8, MR 1697859, Zbl 0956.11021
Rosenlicht, M. (1976), "On Liouville's theory of elementary functions" (PDF), Pacific Journal of Mathematics, 65 (2): 485–492, doi:10.2140/pjm.1976.65.485, Zbl 0318.12107
Fu, Guofeng; Halás, Miroslav; Li, Ziming (2011), "Some remarks on Kähler differentials and ordinary differentials in nonlinear control systems", Systems and Control Letters, 60: 699–703, doi:10.1016/j.sysconle.2011.05.006
== External links ==
Notes on p-adic algebraic de-Rham cohomology - gives many computations over characteristic 0 as motivation
A thread devoted to the relation on algebraic and analytic differential forms
Differentials (Stacks project) | Wikipedia/Kähler_differential |
Lambda calculus is a formal mathematical system based on lambda abstraction and function application. Two definitions of the language are given here: a standard definition, and a definition using mathematical formulas.
== Standard definition ==
This formal definition was given by Alonzo Church.
=== Definition ===
Lambda expressions are composed of
variables {\displaystyle v_{1}}, {\displaystyle v_{2}}, ..., {\displaystyle v_{n}}, ...
the abstraction symbols lambda '{\displaystyle \lambda }' and dot '.'
parentheses ( )
The set of lambda expressions, {\displaystyle \Lambda }, can be defined inductively:
If {\displaystyle x} is a variable, then {\displaystyle x\in \Lambda }
If {\displaystyle x} is a variable and {\displaystyle M\in \Lambda }, then {\displaystyle (\lambda x.M)\in \Lambda }
If {\displaystyle M,N\in \Lambda }, then {\displaystyle (M\ N)\in \Lambda }
Instances of rule 2 are known as abstractions and instances of rule 3 are known as applications.
=== Notation ===
To keep the notation of lambda expressions uncluttered, the following conventions are usually applied.
Outermost parentheses are dropped: {\displaystyle M\ N} instead of {\displaystyle (M\ N)}
Applications are assumed to be left-associative: {\displaystyle M\ N\ P} may be written instead of {\displaystyle ((M\ N)\ P)}
The body of an abstraction extends as far right as possible: {\displaystyle \lambda x.M\ N} means {\displaystyle \lambda x.(M\ N)} and not {\displaystyle (\lambda x.M)\ N}
A sequence of abstractions is contracted: {\displaystyle \lambda x.\lambda y.\lambda z.N} is abbreviated as {\displaystyle \lambda xyz.N}
=== Free and bound variables ===
The abstraction operator, {\displaystyle \lambda }, is said to bind its variable wherever it occurs in the body of the abstraction. Variables that fall within the scope of an abstraction are said to be bound. All other variables are called free. For example, in the following expression {\displaystyle y} is a bound variable and {\displaystyle x} is free: {\displaystyle \lambda y.x\ x\ y}. Also note that a variable is bound by its "nearest" abstraction. In the following example the single occurrence of {\displaystyle x} in the expression is bound by the second lambda: {\displaystyle \lambda x.y\ (\lambda x.z\ x)}
The set of free variables of a lambda expression, {\displaystyle M}, is denoted as {\displaystyle \operatorname {FV} (M)} and is defined by recursion on the structure of the terms, as follows:
{\displaystyle \operatorname {FV} (x)=\{x\}}, where {\displaystyle x} is a variable
{\displaystyle \operatorname {FV} (\lambda x.M)=\operatorname {FV} (M)\setminus \{x\}}
{\displaystyle \operatorname {FV} (M\ N)=\operatorname {FV} (M)\cup \operatorname {FV} (N)}
An expression that contains no free variables is said to be closed. Closed lambda expressions are also known as combinators and are equivalent to terms in combinatory logic.
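The recursion above can be transcribed almost verbatim into code. A sketch in Haskell, reusing the Term type assumed earlier:

import qualified Data.Set as Set

-- Free variables, one clause per rule above.
freeVars :: Term -> Set.Set String
freeVars (Var x)   = Set.singleton x
freeVars (Lam x m) = Set.delete x (freeVars m)
freeVars (App m n) = freeVars m `Set.union` freeVars n

-- A term is closed (a combinator) exactly when freeVars yields the empty set.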
=== Reduction ===
The meaning of lambda expressions is defined by how expressions can be reduced.
There are three kinds of reduction:
α-conversion: changing bound variables (alpha);
β-reduction: applying functions to their arguments (beta);
η-reduction: which captures a notion of extensionality (eta).
We also speak of the resulting equivalences: two expressions are β-equivalent, if they can be β-converted into the same expression, and α/η-equivalence are defined similarly.
The term redex, short for reducible expression, refers to subterms that can be reduced by one of the reduction rules. For example, {\displaystyle (\lambda x.M)\ N} is a β-redex, expressing the substitution of {\displaystyle N} for {\displaystyle x} in {\displaystyle M}; if {\displaystyle x} is not free in {\displaystyle M}, {\displaystyle \lambda x.M\ x} is an η-redex. The expression to which a redex reduces is called its reduct; using the previous example, the reducts of these expressions are respectively {\displaystyle M[x:=N]} and {\displaystyle M}.
==== α-conversion ====
Alpha-conversion, sometimes known as alpha-renaming, allows bound variable names to be changed. For example, alpha-conversion of {\displaystyle \lambda x.x} might yield {\displaystyle \lambda y.y}. Terms that differ only by alpha-conversion are called α-equivalent. Frequently in uses of lambda calculus, α-equivalent terms are considered to be equivalent.
The precise rules for alpha-conversion are not completely trivial. First, when alpha-converting an abstraction, the only variable occurrences that are renamed are those that are bound by the same abstraction. For example, an alpha-conversion of {\displaystyle \lambda x.\lambda x.x} could result in {\displaystyle \lambda y.\lambda x.x}, but it could not result in {\displaystyle \lambda y.\lambda x.y}. The latter has a different meaning from the original.
Second, alpha-conversion is not possible if it would result in a variable getting captured by a different abstraction. For example, if we replace {\displaystyle x} with {\displaystyle y} in {\displaystyle \lambda x.\lambda y.x}, we get {\displaystyle \lambda y.\lambda y.y}, which is not at all the same.
In programming languages with static scope, alpha-conversion can be used to make name resolution simpler by ensuring that no variable name masks a name in a containing scope (see alpha renaming to make name resolution trivial).
===== Substitution =====
Substitution, written {\displaystyle E[V:=R]}, is the process of replacing all free occurrences of the variable {\displaystyle V} in the expression {\displaystyle E} with expression {\displaystyle R}.
Substitution on terms of the lambda calculus is defined by recursion on the structure of terms, as follows (note: x and y are only variables while M and N are any λ expression).
{\displaystyle {\begin{aligned}x[x:=N]&\equiv N\\y[x:=N]&\equiv y{\text{, if }}x\neq y\end{aligned}}}
{\displaystyle {\begin{aligned}(M_{1}\ M_{2})[x:=N]&\equiv (M_{1}[x:=N])\ (M_{2}[x:=N])\\(\lambda x.M)[x:=N]&\equiv \lambda x.M\\(\lambda y.M)[x:=N]&\equiv \lambda y.(M[x:=N]){\text{, if }}x\neq y{\text{, provided }}y\notin \operatorname {FV} (N)\end{aligned}}}
To substitute into a lambda abstraction, it is sometimes necessary to α-convert the expression. For example, it is not correct for {\displaystyle (\lambda x.y)[y:=x]} to result in {\displaystyle (\lambda x.x)}, because the substituted {\displaystyle x} was supposed to be free but ended up being bound. The correct substitution in this case is {\displaystyle (\lambda z.x)}, up to α-equivalence. Notice that substitution is defined uniquely up to α-equivalence.
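Putting the clauses together with this α-conversion proviso gives capture-avoiding substitution. A sketch in Haskell, reusing Term, freeVars, and the Set import from the earlier sketches; the helper fresh is illustrative, not part of the standard definition:

-- E[V := R]: replace free occurrences only, renaming to avoid capture.
subst :: String -> Term -> Term -> Term
subst x n (Var y)
  | x == y    = n
  | otherwise = Var y
subst x n (App m1 m2) = App (subst x n m1) (subst x n m2)
subst x n (Lam y m)
  | y == x                       = Lam y m                -- (λx.M)[x:=N] ≡ λx.M
  | y `Set.notMember` freeVars n = Lam y (subst x n m)    -- no capture possible
  | otherwise                    = Lam y' (subst x n m')  -- α-convert first
  where
    y' = fresh (Set.insert x (freeVars n `Set.union` freeVars m))
    m' = subst y (Var y') m

-- Invent a variable name not occurring in the given set.
fresh :: Set.Set String -> String
fresh used = head [v | i <- [0 :: Int ..], let v = "v" ++ show i, v `Set.notMember` used]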
==== β-reduction ====
β-reduction captures the idea of function application. β-reduction is defined in terms of substitution: the β-reduction of {\displaystyle ((\lambda V.E)\ E')} is {\displaystyle E[V:=E']}.
For example, assuming some encoding of {\displaystyle 2,7,\times }, we have the following β-reduction: {\displaystyle ((\lambda n.\ n\times 2)\ 7)\rightarrow 7\times 2}.
==== η-reduction ====
η-reduction expresses the idea of extensionality, which in this context is that two functions are the same if and only if they give the same result for all arguments. η-reduction converts between {\displaystyle \lambda x.(f\ x)} and {\displaystyle f} whenever {\displaystyle x} does not appear free in {\displaystyle f}.
=== Normalization ===
The purpose of β-reduction is to calculate a value. A value in lambda calculus is a function. So β-reduction continues until the expression looks like a function abstraction.
A lambda expression that cannot be reduced further, by either a β-redex or an η-redex, is in normal form. Alpha-conversion may still apply to such an expression; all normal forms that can be converted into each other by α-conversion are defined to be equal. See the main article on Beta normal form for details.
== Syntax definition in BNF ==
Lambda Calculus has a simple syntax. A lambda calculus program has the syntax of an expression where,
The variable list is defined as,
A variable as used by computer scientists has the syntax,
Mathematicians will sometimes restrict a variable to be a single alphabetic character. When using this convention the comma is omitted from the variable list.
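(The grammar tables for the expression, the variable list, and the variable were lost in extraction; the following BNF sketch is a plausible reconstruction consistent with the surrounding description, with nonterminal names chosen for illustration:)

<expression>    ::= <variable>
                  | "λ" <variable-list> "." <expression>   ; abstraction
                  | <expression> <expression>              ; application
                  | "(" <expression> ")"
<variable-list> ::= <variable> | <variable> "," <variable-list>
<variable>      ::= <alpha> | <variable> <alpha> | <variable> <digit>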
A lambda abstraction has a lower precedence than an application, so;
{\displaystyle \lambda x.y\ z=\lambda x.(y\ z)}
Applications are left associative;
{\displaystyle x\ y\ z=(x\ y)\ z}
An abstraction with multiple parameters is equivalent to multiple abstractions of one parameter.
{\displaystyle \lambda x.y.z=\lambda x.\lambda y.z}
where,
x is a variable
y is a variable list
z is an expression
== Definition as mathematical formulas ==
The problem of how variables may be renamed is difficult. This definition avoids the problem by substituting all names with canonical names, which are constructed based on the position of the definition of the name in the expression. The approach is analogous to what a compiler does, but has been adapted to work within the constraints of mathematics.
=== Semantics ===
The execution of a lambda expression proceeds using the following reductions and transformations,
α-conversion - {\displaystyle \operatorname {alpha-conv} (a)\to \operatorname {canonym} [A,P]=\operatorname {canonym} [a[A],P]}
β-reduction - {\displaystyle \operatorname {beta-redex} [\lambda p.b\ v]=b[p:=v]}
η-reduction - {\displaystyle x\not \in \operatorname {FV} (f)\to \operatorname {eta-redex} [\lambda x.(f\ x)]=f}
where,
canonym is a renaming of a lambda expression to give the expression standard names, based on the position of the name in the expression.
Substitution Operator: {\displaystyle b[p:=v]} is the substitution of the name {\displaystyle p} by the lambda expression {\displaystyle v} in the lambda expression {\displaystyle b}.
Free Variable Set: {\displaystyle \operatorname {FV} (f)} is the set of variables that do not belong to a lambda abstraction in {\displaystyle f}.
Execution is performing β-reductions and η-reductions on subexpressions in the canonym of a lambda expression until the result is a lambda function (abstraction) in the normal form.
All α-conversions of a λ-expression are considered to be equivalent.
=== Canonym - Canonical Names ===
Canonym is a function that takes a lambda expression and renames all names canonically, based on their positions in the expression. This might be implemented as,
{\displaystyle {\begin{aligned}\operatorname {canonym} [L,Q]&=\operatorname {canonym} [L,O,Q]\\\operatorname {canonym} [\lambda p.b,M,Q]&=\lambda \operatorname {name} (Q).\operatorname {canonym} [b,M[p:=Q],Q+N]\\\operatorname {canonym} [X\ Y,x,Q]&=\operatorname {canonym} [X,x,Q+F]\ \operatorname {canonym} [Y,x,Q+S]\\\operatorname {canonym} [x,M,Q]&=\operatorname {name} (M[x])\end{aligned}}}
Where, N is the string "N", F is the string "F", S is the string "S", + is concatenation, and "name" converts a string into a name
=== Map operators ===
Map from one value to another if the value is in the map. O is the empty map.
{\displaystyle O[x]=x}
{\displaystyle M[x:=y][x]=y}
{\displaystyle x\neq z\to M[x:=y][z]=M[z]}
=== Substitution operator ===
If L is a lambda expression, x is a name, and y is a lambda expression, {\displaystyle L[x:=y]} means substitute x by y in L. The rules are,
{\displaystyle (\lambda p.b)[x:=y]=\lambda p.b[x:=y]}
{\displaystyle (X\,Y)[x:=y]=X[x:=y]\,Y[x:=y]}
{\displaystyle z=x\to (z)[x:=y]=y}
{\displaystyle z\neq x\to (z)[x:=y]=z}
Note that rule 1 must be modified if it is to be used on non canonically renamed lambda expressions. See Changes to the substitution operator.
=== Free and bound variable sets ===
The set of free variables of a lambda expression, M, is denoted as FV(M). This is the set of variable names that have instances not bound (used) in a lambda abstraction, within the lambda expression. They are the variable names that may be bound to formal parameter variables from outside the lambda expression.
The set of bound variables of a lambda expression, M, is denoted as BV(M). This is the set of variable names that have instances bound (used) in a lambda abstraction, within the lambda expression.
The rules for the two sets are given below.
Usage;
The Free Variable Set, FV is used above in the definition of the η-reduction.
The Bound Variable Set, BV, is used in the rule for β-redex of non canonical lambda expression.
=== Evaluation strategy ===
This mathematical definition is structured so that it represents the result, and not the way it gets calculated. However the result may be different between lazy and eager evaluation. This difference is described in the evaluation formulas.
The definitions given here assume that the first definition that matches the lambda expression will be used. This convention is used to make the definition more readable. Otherwise some if conditions would be required to make the definition precise.
Running or evaluating a lambda expression L is,
{\displaystyle \operatorname {eval} [\operatorname {canonym} [L,Q]]}
where Q is a name prefix possibly an empty string and eval is defined by,
{\displaystyle {\begin{aligned}\operatorname {eval} [x\ y]&=\operatorname {eval} [\operatorname {apply} [\operatorname {eval} [x]\ \operatorname {strategy} [y]]]\\\operatorname {apply} [(\lambda x.y)\ z]&=\operatorname {canonym} [\operatorname {beta-redex} [(\lambda x.y)\ z],x]\\\operatorname {apply} [x]&=x{\text{ if x does not match the above}}\\\operatorname {eval} [\lambda x.(f\ x)]&=\operatorname {eval} [\operatorname {eta-redex} [\lambda x.(f\ x)]]\\\operatorname {eval} [L]&=L\\\operatorname {lazy} [X]&=X\\\operatorname {eager} [X]&=\operatorname {eval} [X]\end{aligned}}}
Then the evaluation strategy may be chosen as either,
{\displaystyle {\begin{aligned}\operatorname {strategy} &=\operatorname {lazy} \\\operatorname {strategy} &=\operatorname {eager} \end{aligned}}}
The result may be different depending on the strategy used. Eager evaluation will apply all reductions possible, leaving the result in normal form, while lazy evaluation will omit some reductions in parameters, leaving the result in "weak head normal form".
==== Normal form ====
All reductions that can be applied have been applied. This is the result obtained from applying eager evaluation.
{\displaystyle {\begin{aligned}\operatorname {normal} [(\lambda x.y)\ z]&=\operatorname {false} \\\operatorname {normal} [\lambda x.(f\ x)]&=\operatorname {false} \\\operatorname {normal} [x\ y]&=\operatorname {normal} [x]\land \operatorname {normal} [y]\end{aligned}}}
In all other cases, {\displaystyle \operatorname {normal} [x]=\operatorname {true} }.
==== Weak head normal form ====
(The definition below is flawed; it contradicts the definition saying that weak head normal form is either head normal form or an abstraction. The notion was introduced by Simon Peyton Jones.)
Reductions to the function (the head) have been applied, but not all reductions to the parameter have been applied. This is the result obtained from applying lazy evaluation.
{\displaystyle {\begin{aligned}\operatorname {whnf} [(\lambda x.y)\ z]&=\operatorname {false} \\\operatorname {whnf} [\lambda x.(f\ x)]&=\operatorname {false} \\\operatorname {whnf} [x\ y]&=\operatorname {whnf} [x]\end{aligned}}}
In all other cases, {\displaystyle \operatorname {whnf} [x]=\operatorname {true} }.
== Derivation of standard from the math definition ==
The standard definition of lambda calculus uses some definitions which may be considered as theorems, which can be proved based on the definition as mathematical formulas.
The canonical naming definition deals with the problem of variable identity by constructing a unique name for each variable based on the position of the lambda abstraction for the variable name in the expression.
This definition introduces the rules used in the standard definition and explains them in terms of the canonical renaming definition.
=== Free and bound variables ===
The lambda abstraction operator, λ, takes a formal parameter variable and a body expression. When evaluated the formal parameter variable is identified with the value of the actual parameter.
Variables in a lambda expression may either be "bound" or "free". Bound variables are variable names that are already attached to formal parameter variables in the expression.
The formal parameter variable is said to bind the variable name wherever it occurs free in the body. Variable (names) that have already been matched to formal parameter variable are said to be bound. All other variables in the expression are called free.
For example, in the following expression y is a bound variable and x is free: {\displaystyle \lambda y.x\ x\ y}. Also note that a variable is bound by its "nearest" lambda abstraction. In the following example the single occurrence of x in the expression is bound by the second lambda: {\displaystyle \lambda x.y\ (\lambda x.z\ x)}
=== Changes to the substitution operator ===
In the definition of the Substitution Operator the rule,
{\displaystyle (\lambda p.b)[x:=y]=\lambda p.b[x:=y]}
must be replaced with,
{\displaystyle (\lambda x.b)[x:=y]=\lambda x.b}
{\displaystyle z\neq x\to (\lambda z.b)[x:=y]=\lambda z.b[x:=y]}
This is to stop bound variables with the same name being substituted. This would not have occurred in a canonically renamed lambda expression.
For example, the previous rules would have wrongly translated,
{\displaystyle (\lambda x.x\ z)[x:=y]=(\lambda x.y\ z)}
The new rules block this substitution so that it remains as,
{\displaystyle (\lambda x.x\ z)[x:=y]=(\lambda x.x\ z)}
=== Transformation ===
The meaning of lambda expressions is defined by how expressions can be transformed or reduced.
There are three kinds of transformation:
α-conversion: changing bound variables (alpha);
β-reduction: applying functions to their arguments (beta), calling functions;
η-reduction: which captures a notion of extensionality (eta).
We also speak of the resulting equivalences: two expressions are β-equivalent, if they can be β-converted into the same expression, and α/η-equivalence are defined similarly.
The term redex, short for reducible expression, refers to subterms that can be reduced by one of the reduction rules.
==== α-conversion ====
Alpha-conversion, sometimes known as alpha-renaming, allows bound variable names to be changed. For example, alpha-conversion of {\displaystyle \lambda x.x} might give {\displaystyle \lambda y.y}. Terms that differ only by alpha-conversion are called α-equivalent.
In an α-conversion, a name may be substituted by a new name only if the new name is not free in the body, as substituting in a name that occurs free would lead to the capture of free variables.
{\displaystyle (y\not \in \operatorname {FV} (b)\land a(\lambda x.b)=\lambda y.b[x:=y])\to \operatorname {alpha-con} (a)}
Note that the substitution will not recurse into the body of lambda expressions with formal parameter {\displaystyle x} because of the change to the substitution operator described above.
See example;
==== β-reduction (capture avoiding) ====
β-reduction captures the idea of function application (also called a function call), and implements the substitution of the actual parameter expression for the formal parameter variable. β-reduction is defined in terms of substitution.
If no variable names are free in the actual parameter and bound in the body, β-reduction may be performed on the lambda abstraction without canonical renaming.
{\displaystyle (\forall z:z\not \in \operatorname {FV} (y)\lor z\not \in \operatorname {BV} (b))\to \operatorname {beta-redex} [\lambda x.b\ y]=b[x:=y]}
Alpha renaming may be used on {\displaystyle b} to rename names that are free in {\displaystyle y} but bound in {\displaystyle b}, to meet the pre-condition for this transformation.
See example;
{\displaystyle {\begin{array}{r}((\lambda x.z\ x)(\lambda y.z\ y))[z:=(x\ y)]\\((\lambda a.z\ a)(\lambda b.z\ b))[z:=(x\ y)]\end{array}}}
In this example, in the β-redex,
The free variables are, {\displaystyle \operatorname {FV} (x\ y)=\{x,y\}}
The bound variables are, {\displaystyle \operatorname {BV} ((\lambda x.z\ x)(\lambda y.z\ y))=\{x,y\}}
The naive β-redex changed the meaning of the expression because x and y from the actual parameter became captured when the expressions were substituted in the inner abstractions.
The alpha renaming removed the problem by changing the names of x and y in the inner abstraction so that they are distinct from the names of x and y in the actual parameter.
The free variables are, {\displaystyle \operatorname {FV} (x\ y)=\{x,y\}}
The bound variables are, {\displaystyle \operatorname {BV} ((\lambda a.z\ a)(\lambda b.z\ b))=\{a,b\}}
The β-redex then proceeded with the intended meaning.
==== η-reduction ====
η-reduction expresses the idea of extensionality, which in this context is that two functions are the same if and only if they give the same result for all arguments.
η-reduction may be used without change on lambda expressions that are not canonically renamed.
{\displaystyle x\not \in \operatorname {FV} (f)\to \operatorname {eta-redex} [\lambda x.(f\ x)]=f}
The problem with using an η-redex when f has free variables is shown in this example,
This improper use of η-reduction changes the meaning by leaving x in {\displaystyle \lambda y.y\,x} unsubstituted.
== References == | Wikipedia/Lambda_calculus_definition |
In computer science, a programming language is said to have first-class functions if it treats functions as first-class citizens. This means the language supports passing functions as arguments to other functions, returning them as the values from other functions, and assigning them to variables or storing them in data structures. Some programming language theorists require support for anonymous functions (function literals) as well. In languages with first-class functions, the names of functions do not have any special status; they are treated like ordinary variables with a function type. The term was coined by Christopher Strachey in the context of "functions as first-class citizens" in the mid-1960s.
First-class functions are a necessity for the functional programming style, in which the use of higher-order functions is a standard practice. A simple example of a higher-ordered function is the map function, which takes, as its arguments, a function and a list, and returns the list formed by applying the function to each member of the list. For a language to support map, it must support passing a function as an argument.
There are certain implementation difficulties in passing functions as arguments or returning them as results, especially in the presence of non-local variables introduced in nested and anonymous functions. Historically, these were termed the funarg problems, the name coming from function argument. In early imperative languages these problems were avoided by either not supporting functions as result types (e.g. ALGOL 60, Pascal) or omitting nested functions and thus non-local variables (e.g. C). The early functional language Lisp took the approach of dynamic scoping, where non-local variables refer to the closest definition of that variable at the point where the function is executed, instead of where it was defined. Proper support for lexically scoped first-class functions was introduced in Scheme and requires handling references to functions as closures instead of bare function pointers, which in turn makes garbage collection a necessity.
== Concepts ==
In this section, we compare how particular programming idioms are handled in a functional language with first-class functions (Haskell) compared to an imperative language where functions are second-class citizens (C).
=== Higher-order functions: passing functions as arguments ===
In languages where functions are first-class citizens, functions can be passed as arguments to other functions in the same way as other values (a function taking another function as argument is called a higher-order function). In the language Haskell:
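(The original code sample was lost in extraction; given the description that follows, it plausibly showed the standard recursive, polymorphic definition of map:)

import Prelude hiding (map)

-- map applies f to each element of a list, returning a new list.
map :: (a -> b) -> [a] -> [b]
map f []     = []
map f (x:xs) = f x : map f xs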
Languages where functions are not first-class often still allow one to write higher-order functions through the use of features such as function pointers or delegates. In the language C:
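(Likewise a reconstruction; a sketch consistent with the description below, taking a function pointer and an explicit length, and updating the array in place:)

#include <stddef.h>

/* Applies f to each element of the array x of length n, in place. */
void map(int (*f)(int), int x[], size_t n)
{
    for (size_t i = 0; i < n; i++)
        x[i] = f(x[i]);
}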
There are a number of differences between the two approaches that are not directly related to the support of first-class functions. The Haskell sample operates on lists, while the C sample operates on arrays. Both are the most natural compound data structures in the respective languages, and making the C sample operate on linked lists would have made it unnecessarily complex. This also accounts for the fact that the C function needs an additional parameter (giving the size of the array). The C function updates the array in-place, returning no value, whereas in Haskell data structures are persistent (a new list is returned while the old is left intact). The Haskell sample uses recursion to traverse the list, while the C sample uses iteration. Again, this is the most natural way to express this function in both languages, but the Haskell sample could easily have been expressed in terms of a fold and the C sample in terms of recursion. Finally, the Haskell function has a polymorphic type; as this is not supported by C, we have fixed all type variables to the type constant int.
=== Anonymous and nested functions ===
In languages supporting anonymous functions, we can pass such a function as an argument to a higher-order function:
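(The missing sample presumably applied map to a function literal; in Haskell, for instance:)

map (\x -> 3 * x + 1) [1, 2, 3, 4, 5]   -- evaluates to [4, 7, 10, 13, 16]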
In a language which does not support anonymous functions, we have to bind it to a name instead:
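(Again a reconstruction; in C, the function must be defined at the top level under its own name and then passed by pointer:)

int f(int x)
{
    return 3 * x + 1;
}

/* ... later, using the map sketched above on an int array a of length 5: */
/* map(f, a, 5); */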
=== Non-local variables and closures ===
Once we have anonymous or nested functions, it becomes natural for them to refer to variables outside of their body (called non-local variables):
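(A sketch of what such a sample might look like in Haskell, where the function literal refers to the non-local variables a and b:)

main :: IO ()
main = let a = 3
           b = 1
       in print (map (\x -> a * x + b) [1, 2, 3, 4, 5])  -- prints [4,7,10,13,16]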
If functions are represented with bare function pointers, we can no longer know how the values outside of the function's body should be passed to it, and because of that a closure needs to be built manually. Therefore we cannot speak of "first-class" functions here.
Also note that the map is now specialized to functions referring to two ints outside of their environment. This can be set up more generally, but requires more boilerplate code. If f had been a nested function, we would still have run into the same problem; this is the reason nested functions are not supported in C.
=== Higher-order functions: returning functions as results ===
When returning a function, we are in fact returning its closure. In the C example any local variables captured by the closure will go out of scope once we return from the function that builds the closure. Forcing the closure at a later point will result in undefined behaviour, possibly corrupting the stack. This is known as the upwards funarg problem.
=== Assigning functions to variables ===
Assigning functions to variables and storing them inside (global) datastructures potentially suffers from the same difficulties as returning functions.
=== Equality of functions ===
As one can test most literals and values for equality, it is natural to ask whether a programming language can support testing functions for equality. On further inspection, this question appears more difficult and one has to distinguish between several types of function equality:
Extensional equality
Two functions f and g are considered extensionally equal if they agree on their outputs for all inputs (∀x. f(x) = g(x)). Under this definition of equality, for example, any two implementations of a stable sorting algorithm, such as insertion sort and merge sort, would be considered equal. Deciding on extensional equality is undecidable in general and even for functions with finite domains often intractable. For this reason no programming language implements function equality as extensional equality.
Intensional equality
Under intensional equality, two functions f and g are considered equal if they have the same "internal structure". This kind of equality could be implemented in interpreted languages by comparing the source code of the function bodies (such as in Interpreted Lisp 1.5) or the object code in compiled languages. Intensional equality implies extensional equality (assuming the functions are deterministic and have no hidden inputs, such as the program counter or a mutable global variable.)
Reference equality
Given the impracticality of implementing extensional and intensional equality, most languages supporting testing functions for equality use reference equality. All functions or closures are assigned a unique identifier (usually the address of the function body or the closure) and equality is decided based on equality of the identifier. Two separately defined, but otherwise identical function definitions will be considered unequal. Referential equality implies intensional and extensional equality. Referential equality breaks referential transparency and is therefore not supported in pure languages, such as Haskell.
== Type theory ==
In type theory, the type of functions accepting values of type A and returning values of type B may be written as A → B or BA. In the Curry–Howard correspondence, function types are related to logical implication; lambda abstraction corresponds to discharging hypothetical assumptions and function application corresponds to the modus ponens inference rule. Besides the usual case of programming functions, type theory also uses first-class functions to model associative arrays and similar data structures.
In category-theoretical accounts of programming, the availability of first-class functions corresponds to the closed category assumption. For instance, the simply typed lambda calculus corresponds to the internal language of Cartesian closed categories.
== Language support ==
Functional programming languages, such as Erlang, Scheme, ML, Haskell, F#, and Scala, all have first-class functions. When Lisp, one of the earliest functional languages, was designed, not all aspects of first-class functions were then properly understood, resulting in functions being dynamically scoped. The later Scheme and Common Lisp dialects do have lexically scoped first-class functions.
Many scripting languages, including Perl, Python, PHP, Lua, Tcl/Tk, JavaScript and Io, have first-class functions.
For imperative languages, a distinction has to be made between Algol and its descendants such as Pascal, the traditional C family, and the modern garbage-collected variants. The Algol family has allowed nested functions and higher-order functions taking functions as arguments, but not higher-order functions that return functions as results (except Algol 68, which allows this). The reason for this was that it was not known how to deal with non-local variables if a nested function was returned as a result (and Algol 68 produces runtime errors in such cases).
The C family allowed both passing functions as arguments and returning them as results, but avoided any problems by not supporting nested functions. (The gcc compiler allows them as an extension.) As the usefulness of returning functions primarily lies in the ability to return nested functions that have captured non-local variables, instead of top-level functions, these languages are generally not considered to have first-class functions.
Modern imperative languages often support garbage collection, making the implementation of first-class functions feasible. First-class functions have often only been supported in later revisions of the language, including C# 2.0 and Apple's Blocks extension to C, C++, and Objective-C. C++11 has added support for anonymous functions and closures to the language, but because of the non-garbage-collected nature of the language, special care has to be taken for non-local variables in functions to be returned as results (see below).
C++
C++11 closures can capture non-local variables by copy construction, by reference (without extending their lifetime), or by move construction (the variable lives as long as the closure does). The first option is safe if the closure is returned but requires a copy and cannot be used to modify the original variable (which might not exist any more at the time the closure is called). The second option potentially avoids an expensive copy and allows to modify the original variable but is unsafe in case the closure is returned (see dangling references). The third option is safe if the closure is returned and does not require a copy but cannot be used to modify the original variable either.
Java
Java 8 closures can only capture final or "effectively final" non-local variables. Java's function types are represented as Classes. Anonymous functions take the type inferred from the context. Method references are limited. For more details, see Anonymous function § Java limitations.
Lisp
Lexically scoped Lisp variants support closures. Dynamically scoped variants do not support closures or need a special construct to create closures.
In Common Lisp, the identifier of a function in the function namespace cannot be used as a reference to a first-class value. The special operator function must be used to retrieve the function as a value: (function foo) evaluates to a function object. #'foo exists as a shorthand notation. To apply such a function object, one must use the funcall function: (funcall #'foo bar baz).
Python
Explicit partial application with functools.partial since version 2.5, and operator.methodcaller since version 2.6.
Ruby
The identifier of a regular "function" in Ruby (which is really a method) cannot be used as a value or passed. It must first be retrieved into a Method or Proc object to be used as first-class data. The syntax for calling such a function object differs from calling regular methods.
Nested method definitions do not actually nest the scope.
Explicit currying with Proc#curry.
== See also ==
Defunctionalization
eval
First-class message
Kappa calculus – a formalism which excludes first-class functions
Man or boy test
Partial application
== Notes ==
== References ==
Leonidas Fegaras. "Functional Languages and Higher-Order Functions". CSE5317/CSE4305: Design and Construction of Compilers. University of Texas at Arlington.
== External links ==
First-class functions on Rosetta Code.
Higher order functions Archived November 12, 2019, at the Wayback Machine at IBM developerWorks | Wikipedia/First-class_function |
The Knights of the Lambda Calculus is a semi-fictional organization of expert Lisp and Scheme hackers. The name refers to the lambda calculus, a mathematical formalism invented by Alonzo Church, with which Lisp is intimately connected, and references the Knights Templar.
There is no actual organization that goes by the name Knights of the Lambda Calculus; it mostly only exists as a hacker culture in-joke. The concept most likely originated at MIT. For example, in the Structure and Interpretation of Computer Programs video lectures, Gerald Jay Sussman presents the audience with the button, saying they are now members of this special group. However, according to the Jargon File, a "well-known LISPer" has been known to give out buttons with Knights insignia on them, and some people have claimed to have membership in the Knights.
== In popular culture ==
A group that evolved from, or is similar to them, called The Knights of Eastern Calculus, makes a major appearance in the anime series Serial Experiments Lain, the logo of which is a reference to Freemasonry. References to MIT professors and other American computer scientists are prominent in Episode 9 of the series. At one point in the anime, Lain is seen with code displayed on her handheld device that appears to be Lisp.
== References == | Wikipedia/Knights_of_the_Lambda_Calculus |
The SKI combinator calculus is a combinatory logic system and a computational system. It can be thought of as a computer programming language, though it is not convenient for writing software. Instead, it is important in the mathematical theory of algorithms because it is an extremely simple Turing complete language. It can be likened to a reduced version of the untyped lambda calculus. It was introduced by Moses Schönfinkel and Haskell Curry.
All operations in lambda calculus can be encoded via abstraction elimination into the SKI calculus as binary trees whose leaves are one of the three symbols S, K, and I (called combinators).
== Notation ==
Although the most formal representation of the objects in this system requires binary trees, for simpler typesetting they are often represented as parenthesized expressions, as a shorthand for the tree they represent. Any subtrees may be parenthesized, but often only the right-side subtrees are parenthesized, with left associativity implied for any unparenthesized applications. For example, ISK means ((IS)K). Using this notation, a tree whose left subtree is the tree KS and whose right subtree is the tree SK can be written as KS(SK). If more explicitness is desired, the implied parentheses can be included as well: ((KS)(SK)).
== Informal description ==
Informally, and using programming language jargon, a tree (xy) can be thought of as a function x applied to an argument y. When evaluated (i.e., when the function is "applied" to the argument), the tree "returns a value", i.e., transforms into another tree. The "function", "argument" and the "value" are either combinators or binary trees. If they are binary trees, they may be thought of as functions too, if needed.
The evaluation operation is defined as follows:
(x, y, and z represent expressions made from the functions S, K, and I, and set values):
I returns its argument:
Ix = x
K, when applied to any argument x, yields a one-argument constant function Kx, which, when applied to any argument y, returns x:
Kxy = x
S is a substitution operator. It takes three arguments and then returns the first argument applied to the third, which is then applied to the result of the second argument applied to the third. More clearly:
Sxyz = xz(yz)
Example computation: SKSK evaluates to KK(SK) by the S-rule. Then if we evaluate KK(SK), we get K by the K-rule. As no further rule can be applied, the computation halts here.
For all trees x and all trees y, SKxy will always evaluate to y in two steps, Ky(xy) = y, so the ultimate result of evaluating SKxy will always equal the result of evaluating y. We say that SKx and I are "functionally equivalent" for any x because they always yield the same result when applied to any y.
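These evaluation rules can be animated with a small interpreter. A minimal sketch in Haskell (the Ski type and the :@ operator are illustrative names, not part of the calculus), always reducing the leftmost-outermost redex:

data Ski = S | K | I | Ski :@ Ski deriving (Eq, Show)
infixl 9 :@

-- One reduction step, if any redex exists.
step :: Ski -> Maybe Ski
step (I :@ x)           = Just x                     -- Ix = x
step (K :@ x :@ _)      = Just x                     -- Kxy = x
step (S :@ x :@ y :@ z) = Just (x :@ z :@ (y :@ z))  -- Sxyz = xz(yz)
step (a :@ b) =
  case step a of
    Just a' -> Just (a' :@ b)
    Nothing -> (a :@) <$> step b
step _ = Nothing

-- Reduce to normal form (may diverge, e.g. on SII(SII)).
eval :: Ski -> Ski
eval t = maybe t eval (step t)

-- The example above: eval (S :@ K :@ S :@ K) == K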
From these definitions it can be shown that SKI calculus is not the minimum system that can fully perform the computations of lambda calculus, as all occurrences of I in any expression can be replaced by (SKK) or (SKS) or (SK x) for any x, and the resulting expression will yield the same result. So the "I" is merely syntactic sugar. Since I is optional, the system is also referred as SK calculus or SK combinator calculus.
It is possible to define a complete system using only one (improper) combinator. An example is Chris Barker's iota combinator, which can be expressed in terms of S and K as follows:
ιx = xSK = S(λx.xS)(λx.K) = S(S(λx.x)(λx.S))(KK) = S(SI(KS))(KK)
It is possible to reconstruct S, K, and I from the iota combinator. Applying ι to itself gives ιι = ιSK = SSKK = SK(KK) which is functionally equivalent to I. K can be constructed by applying ι twice to I (which is equivalent to application of ι to itself): ι(ι(ιι)) = ι(ιιSK) = ι(ISK) = ι(SK) = SKSK = K. Applying ι one more time gives ι(ι(ι(ιι))) = ιK = KSK = S.
The simplest possible term forming a basis is X = λf.f λxyz.x z (y z) λxyz.x, which satisfies X X = K, and X (X X) = S.
== Formal definition ==
The terms and derivations in this system can also be more formally defined:
Terms:
The set T of terms is defined recursively by the following rules.
S, K, and I are terms.
If τ1 and τ2 are terms, then (τ1τ2) is a term.
Nothing is a term if not required to be so by the first two rules.
Derivations:
A derivation is a finite sequence of terms defined recursively by the following rules (where α and ι are words over the alphabet {S, K, I, (, )} while β, γ and δ are terms):
If Δ is a derivation ending in an expression of the form α(Iβ)ι, then Δ followed by the term αβι is a derivation.
If Δ is a derivation ending in an expression of the form α((Kβ)γ)ι, then Δ followed by the term αβι is a derivation.
If Δ is a derivation ending in an expression of the form α(((Sβ)γ)δ)ι, then Δ followed by the term α((βδ)(γδ))ι is a derivation.
Assuming a sequence is a valid derivation to begin with, it can be extended using these rules. All derivations of length 1 are valid derivations.
== Converting lambda calculus expressions into SKI combinator calculus expressions ==
An expression in the lambda calculus can be converted into an SKI combinator calculus expression in accordance with the following rules:
λx. x = I
λx. c = Kc (provided that c does not depend on x)
λx. f x = f
λx. y z = S(λx.y)(λx.z)
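As a sketch of how these four rules can be made algorithmic, the following Haskell fragment (type and function names are illustrative) eliminates abstractions over a term language mixing the combinators with lambda syntax:

-- Terms mixing S, K, I with variables, lambda, and application.
data T = TS | TK | TI | TVar String | TLam String T | TApp T T
  deriving Show

occursFree :: String -> T -> Bool
occursFree x (TVar y)   = x == y
occursFree x (TLam y b) = x /= y && occursFree x b
occursFree x (TApp a b) = occursFree x a || occursFree x b
occursFree _ _          = False

-- Eliminate all lambdas, innermost first.
convert :: T -> T
convert (TLam x b) = abstract x (convert b)
convert (TApp a b) = TApp (convert a) (convert b)
convert t          = t

-- abstract x t implements "λx. t" by the four rules above
-- (t is assumed λ-free, which convert guarantees).
abstract :: String -> T -> T
abstract x (TVar y) | x == y = TI                 -- rule 1: λx.x = I
abstract x t
  | not (occursFree x t) = TApp TK t              -- rule 2: λx.c = Kc
abstract x (TApp f (TVar y))
  | x == y && not (occursFree x f) = f            -- rule 3: λx.f x = f
abstract x (TApp a b) =
  TApp (TApp TS (abstract x a)) (abstract x b)    -- rule 4: S(λx.y)(λx.z)
abstract _ t = t  -- only reachable for TLam, which convert has already removed

-- Example: convert (TLam "x" (TVar "x")) == TI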
== SKI expressions ==
=== Self-application and recursion ===
SII is an expression that takes an argument and applies that argument to itself:
SIIα = Iα(Iα) = αα
This is also known as U combinator, Ux = xx. One interesting property of it is that its self-application is irreducible:
SII(SII) = I(SII)(I(SII)) = SII(I(SII)) = SII(SII)
Or, using the equation as its definition directly, we immediately get U U = U U.
Another thing is that it allows one to write a function that applies one thing to the self application of another thing:
(S(Kα)(SII))β = Kαβ(SIIβ) = α(Iβ(Iβ)) = α(ββ)
or it can be seen as defining yet another combinator directly, Hxy = x(yy).
This function can be used to achieve recursion. If β is the function that applies α to the self application of something else,
β = Hα = S(Kα)(SII)
then the self-application of this β is the fixed point of that α:
SIIβ = ββ = α(ββ) = α(α(ββ)) = …
Or, directly again from the derived definition, Hα(Hα) = α(Hα(Hα)).
If α expresses a "computational step" computed by αρν for some ρ and ν, that assumes ρν' expresses "the rest of the computation" (for some ν' that α will "compute" from ν), then its fixed point ββ expresses the whole recursive computation, since using the same function ββ for the "rest of computation" call (with ββν = α(ββ)ν) is the very definition of recursion: ρν' = ββν' = α(ββ)ν' = ... . α will have to employ some kind of conditional to stop at some "base case" and not make the recursive call then, to avoid divergence.
This can be formalized, with
β = Hα = S(Kα)(SII) = S(KS)Kα(SII) = S(S(KS)K)(K(SII)) α
as
Yα = SIIβ = SII(Hα) = S(K(SII))H α = S(K(SII))(S(S(KS)K)(K(SII))) α
which gives us one possible encoding of the Y combinator. A shorter variation replaces its two leading subterms with just SSI, since Hα(Hα) = SHHα = SSIHα.
This becomes much shorter with the use of the B,C,W combinators, as the equivalent
Yα = S(KU)(SB(KU))α = U(BαU) = BU(CBU)α = SSI(CBU)α
And with a pseudo-Haskell syntax it becomes the exceptionally short Y = U . (. U).
Following this approach, other fixpoint combinator definitions are possible. Thus,
This Y, by Haskell Curry:
Hgx = g(xx) ; Yg = Hg(Hg) ; Y = SSI(SB(KU)) = SSI(S(S(KS)K)(K(SII)))
Turing's Θ:
Hhg = g(hhg) ; Θg = HHg ; Θ = U(B(SI)U) = SII(S(K(SI))(SII))
Y' (with SK-encoding by John Tromp):
Hgh = g(hgh) ; Y'g = HgH ; Y' = WC(SB(C(WC))) = SSK(S(K(SS(S(SSK))))K)
Θ₄ by R.Statman:
Hgyz = g(yyz) ; Θ₄g = Hg(Hg)(Hg) ; Θ₄ = B(WW)(BW(BBB))
or in general,
Hsomething = g(hsomething) ; YH g = H_____H__ g
(where anything goes instead of "_") or any other intermediary H combinator's definition, with its correspondent Y definition to jump-start it correctly. In particular,
Labcdefghijklmnopqstuvwxyzr = r(thisisafixedpointcombinator) ; Y = LLLLLLLLLLLLLLLLLLLLLLLLLL
In a strict programming language the Y combinator will expand until stack overflow, or never halt in case of tail call optimization. The Z combinator will work in strict languages (also called eager languages, where applicative evaluation order is applied).
{\displaystyle {\begin{aligned}Z&=\lambda f.(\lambda x.f(\lambda v.xxv))(\lambda x.f(\lambda v.xxv))\\&=\lambda f.U(\lambda x.f(\lambda v.Uxv))\\&=S(\lambda f.U)(\lambda f.\lambda x.f(\lambda v.Uxv))\\&=S(KU)(\lambda f.S(\lambda x.f)(\lambda x.\lambda v.Uxv))\\&=S(KU)(\lambda f.S(Kf)(\lambda x.\lambda v.Uxv))\\&=S(KU)(S(\lambda f.S(Kf))(\lambda f.\lambda x.\lambda v.Uxv))\\&=S(KU)(S(S(\lambda f.S)(\lambda f.Kf))(K(\lambda x.\lambda v.Uxv)))\\&=S(KU)(S(S(KS)K)(K(\lambda x.\lambda v.Uxv)))\\&=S(KU)(S(S(KS)K)(K(\lambda x.S({\color {Red}\lambda v.Ux})(\lambda v.v))))\\&=S(KU)(S(S(KS)K)(K(\lambda x.S(S(\lambda v.U)(\lambda v.x))I)))\\&=S(KU)(S(S(KS)K)(K(\lambda x.S(S(KU)(Kx))I)))\\&=S(KU)(S(S(KS)K)(K(S(\lambda x.S(S(KU)(Kx)))(\lambda x.I))))\\&=S(KU)(S(S(KS)K)(K(S(S(\lambda x.S)(\lambda x.S(KU)(Kx)))(KI))))\\&=S(KU)(S(S(KS)K)(K(S(S(KS)(S(\lambda x.S(KU))(\lambda x.Kx)))(KI))))\\&=S(KU)(S(S(KS)K)(K(S(S(KS)(S(K(S(KU)))K))(KI))))\\\end{aligned}}}
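Because Python is strict, it illustrates this directly: the extra λv in Z delays the self-application until an argument is supplied. A minimal sketch (fact is an illustrative use, not part of the calculus):

# The Z combinator in a strict language: the inner lambda v suspends the
# self-application x(x), so it only unfolds when an argument arrives.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A recursive function written without self-reference; Z ties the knot.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120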
=== The reversal expression ===
S(K(SI))K reverses the two terms following it:
S(K(SI))Kαβ →
K(SI)α(Kα)β →
SI(Kα)β →
Iβ(Kαβ) →
Iβα →
βα
It is thus equivalent to CI. And in general, S(K(Sf))K is equivalent to Cf, for any f.
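Using the reducer sketched in the derivation section above (the atoms "a" and "b" stand in for the free terms α and β, since any string other than "S", "K" and "I" is inert there):

# The reversal expression applied to two inert atoms, reusing app/normalize.
rev = app(app("S", app("K", app("S", "I"))), "K")
print(normalize(app(app(rev, "a"), "b")))   # ('b', 'a'), i.e. the application (b a)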
=== Boolean logic ===
SKI combinator calculus can also implement Boolean logic in the form of an if-then-else structure. An if-then-else structure consists of a Boolean expression that is either true (T) or false (F) and two arguments, such that:
Txy = x
and
Fxy = y
The key is in defining the two Boolean expressions. The first works just like one of our basic combinators:
T = K
Kxy = x
The second is also fairly simple:
F = SK
SKxy = Ky(xy) = y
Once true and false are defined, all Boolean logic can be implemented in terms of if-then-else structures.
Boolean NOT (which returns the opposite of a given Boolean) works the same as the if-then-else structure, with F and T as the second and third values, so it can be implemented as a postfix operation:
NOT = (F)(T) = (SK)(K)
If this is put in an if-then-else structure, it can be shown that this has the expected result:
(T)NOT = T(F)(T) = F
(F)NOT = F(F)(T) = T
Boolean OR (which returns T if either of the two Boolean values surrounding it is T) works the same as an if-then-else structure with T as the second value, so it can be implemented as an infix operation:
OR = T = K
If this is put in an if-then-else structure, it can be shown that this has the expected result:
(T)OR(T) = T(T)(T) = T
(T)OR(F) = T(T)(F) = T
(F)OR(T) = F(T)(T) = T
(F)OR(F) = F(T)(F) = F
Boolean AND (which returns T if both of the two Boolean values surrounding it are T) works the same as an if-then-else structure with F as the third value, so it can be implemented as a postfix operation:
AND = F = SK
If this is put in an if-then-else structure, it can be shown that this has the expected result:
(T)(T)AND = T(T)(F) = T
(T)(F)AND = T(F)(F) = F
(F)(T)AND = F(T)(F) = F
(F)(F)AND = F(F)(F) = F
Because this defines T, F, NOT (as a postfix operator), OR (as an infix operator), and AND (as a postfix operator) in terms of SKI notation, this proves that the SKI system can fully express Boolean logic.
As the SKI calculus is complete, it is also possible to express NOT, OR and AND as prefix operators:
NOT = S(SI(KF))(KT) (as S(SI(KF))(KT)x = SI(KF)x(KTx) = Ix(KFx)T = xFT)
OR = SI(KT) (as SI(KT)xy = Ix(KTx)y = xTy)
AND = SS(K(KF)) (as SS(K(KF))xy = Sx(K(KF)x)y = xy(KFy) = xyF)
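These prefix operators can be transliterated into Python to check the truth tables (a sketch; T, F, show and the curried encodings are illustrative):

# Txy = x and Fxy = y as curried functions.
T = lambda x: lambda y: x            # behaves like K
F = lambda x: lambda y: y            # behaves like SK given two arguments
NOT = lambda b: b(F)(T)              # S(SI(KF))(KT) reduces to b F T
OR  = lambda p: lambda q: p(T)(q)    # SI(KT) reduces to p T q
AND = lambda p: lambda q: p(q)(F)    # SS(K(KF)) reduces to p q F

show = lambda b: b("T")("F")         # render a Church-style boolean
print(show(NOT(T)), show(OR(F)(T)), show(AND(T)(F)))   # F T F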
== Connection to intuitionistic logic ==
The combinators K and S correspond to two well-known axioms of sentential logic:
AK: A → (B → A),
AS: (A → (B → C)) → ((A → B) → (A → C)).
Function application corresponds to the rule modus ponens:
MP: from A and A → B, infer B.
The axioms AK and AS, and the rule MP are complete for the implicational fragment of intuitionistic logic. In order for combinatory logic to have as a model:
The implicational fragment of classical logic would require the combinatory analog to the law of excluded middle, i.e., Peirce's law;
Complete classical logic would require the combinatory analog to the sentential axiom F → A.
This connection between the types of combinators and the corresponding logical axioms is an instance of the Curry–Howard isomorphism.
== Examples of reduction ==
There may be multiple ways to perform a reduction. All of them are equivalent, as long as the order of operations (the left-associative grouping of applications) is respected.
{\displaystyle {\mathsf {SKI(KIS)}}}
{\displaystyle {\mathsf {SKI(KIS)}}\Rightarrow {\mathsf {K(KIS)(I(KIS))}}\Rightarrow {\mathsf {KIS}}\Rightarrow {\mathsf {I}}}
{\displaystyle {\mathsf {SKI(KIS)}}\Rightarrow {\mathsf {SKII}}\Rightarrow {\mathsf {KI(II)}}\Rightarrow {\mathsf {KII}}\Rightarrow {\mathsf {I}}}
{\displaystyle {\mathsf {KS(I(SKSI))}}}
{\displaystyle {\mathsf {KS(I(SKSI))}}\Rightarrow {\mathsf {KS(I(KI(SI)))}}\Rightarrow {\mathsf {KS(I(I))}}\Rightarrow {\mathsf {KS(II)}}\Rightarrow {\mathsf {KSI}}\Rightarrow {\mathsf {S}}}
{\displaystyle {\mathsf {KS(I(SKSI))}}\Rightarrow {\mathsf {S}}}
{\displaystyle {\mathsf {SKIK}}\Rightarrow {\mathsf {KK(IK)}}\Rightarrow {\mathsf {KKK}}\Rightarrow {\mathsf {K}}}
== See also ==
Combinatory logic
B, C, K, W system
Fixed point combinator
Lambda calculus
Functional programming
Unlambda programming language
The Iota and Jot programming languages, designed to be even simpler than SKI.
To Mock a Mockingbird
== References ==
== External links ==
O'Donnell, Mike "The SKI Combinator Calculus as a Universal System."
Keenan, David C. (2001) "To Dissect a Mockingbird."
Rathman, Chris, "Combinator Birds."
""Drag 'n' Drop Combinators (Java Applet)."
A Calculus of Mobile Processes, Part I (PostScript) (by Milner, Parrow, and Walker) shows a scheme for combinator graph reduction for the SKI calculus in pages 25–28.
the Nock programming language may be seen as an assembly language based on SK combinator calculus in the same way that traditional assembly language is based on Turing machines. Nock instruction 2 (the "Nock operator") is the S combinator and Nock instruction 1 is the K combinator. The other primitive instructions in Nock (instructions 0,3,4,5, and the pseudo-instruction "implicit cons") are not necessary for universal computation, but make programming more convenient by providing facilities for dealing with binary tree data structures and arithmetic; Nock also provides 5 more instructions (6,7,8,9,10) that could have been built out of these primitives. | Wikipedia/SKI_combinator_calculus |
In mathematical logic and computer science, the lambda-mu calculus is an extension of the lambda calculus introduced by Michel Parigot. It introduces two new operators: the μ operator (which is completely different both from the μ operator found in computability theory and from the μ operator of modal μ-calculus) and the bracket operator. Proof-theoretically, it provides a well-behaved formulation of classical natural deduction.
One of the main goals of this extended calculus is to be able to describe expressions corresponding to theorems in classical logic. According to the Curry–Howard isomorphism, lambda calculus on its own can express theorems in intuitionistic logic only, and several classical logical theorems can't be written at all. However with these new operators one is able to write terms that have the type of, for example, Peirce's law.
The μ operator corresponds to Felleisen's undelimited control operator C and bracket corresponds to calling a captured continuation.
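As a rough operational intuition only (not a faithful model of C, which also allows the captured context to be resumed), a one-shot escaping continuation can be emulated in Python with exceptions; the names with_escape and _Escape below are illustrative:

# Calling k(v) inside body aborts the surrounding computation, the way
# invoking a μ-bound continuation variable discards its context.
class _Escape(Exception):
    def __init__(self, value):
        self.value = value

def with_escape(body):
    def k(value):
        raise _Escape(value)
    try:
        return body(k)
    except _Escape as e:
        return e.value

# The addition around k(41) is discarded, as in 1 + (μα.[α]41).
print(with_escape(lambda k: 1 + k(41)))   # 41, not 42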
== Formal definition ==
The three forms of expressions in lambda calculus are as follows:
A variable x, where x is any identifier.
An abstraction λx. M, where x is any identifier and M is any lambda expression.
An application (M N), where M and N are any lambda expressions.
In addition to the traditional λ-variables, the lambda-mu calculus includes a distinct set of μ-variables, which can be understood as continuation variables. The set of terms is divided into unnamed (all traditional lambda expressions are of this kind) and named terms. The terms that are added by the lambda-mu calculus are of the form:
[α]M is a named term, where α is a μ-variable and M is an unnamed term.
(μ α. t) is an unnamed term, where α is a μ-variable and t is a named term.
== Reduction ==
The basic reduction rules used in the lambda-mu calculus are the following:
beta reduction
{\displaystyle (\lambda x.M)N\to M[N/x]}
structural reduction
{\displaystyle (\mu \alpha .t)M\to \mu \alpha .t\left[[\alpha ](NM)/[\alpha ]N\right]}, where the substitutions are to be made for all subterms of {\displaystyle t} of the form {\displaystyle [\alpha ]N}.
renaming
{\displaystyle [\alpha ]\mu \beta .t\to t[\alpha /\beta ]}
μη-reduction
{\displaystyle \mu \alpha .[\alpha ]M\to M}, if α does not occur freely in M
These rules cause the calculus to be confluent.
== Variations ==
=== Call-by-value lambda-mu calculus ===
To obtain call-by-value semantics, one must refine the beta reduction rule and add another form of structural reduction:
call-by-value beta reduction
{\displaystyle (\lambda x.M)V\to M[V/x]}, where V is a value
right structural reduction
{\displaystyle V(\mu \alpha .t)\to \mu \alpha .t\left[[\alpha ](VN)/[\alpha ]N\right]}, where V is a value
This addition corresponds to the addition of an additional evaluation context former when moving from call-by-name to call-by-value evaluation.
=== De Groote's unstratified syntax ===
For a closer correspondence with conventional formalizations of control operators, the distinction between named and unnamed terms can be abolished, meaning that [α]M is of the same sort as other lambda-expressions and the body of μ-abstraction can also be any expression. Another variant in this vein is the Λμ-calculus.
=== Symmetric lambda-mu calculus ===
One can consider a structural reduction rule symmetric to the original one:
{\displaystyle M(\mu \alpha .t)\to \mu \alpha .t\left[[\alpha ](MN)/[\alpha ]N\right]}
This, however, breaks confluence and the correspondence to control operators.
== See also ==
Classical pure type systems for typed generalizations of lambda calculi with control
== References ==
== External links ==
Lambda-mu relevant discussion on Lambda the Ultimate. | Wikipedia/Lambda-mu_calculus |
In software development, an object is an entity that has state, behavior, and identity.: 78 An object can model some part of reality or can be an invention of the design process whose collaborations with other such objects serve as the mechanisms that provide some higher-level behavior. Put another way, an object represents an individual, identifiable item, unit, or entity, either real or abstract, with a well-defined role in the problem domain.: 76
A programming language can be classified based on its support for objects. A language that provides an encapsulation construct for state, behavior, and identity is classified as object-based. If the language also provides polymorphism and inheritance it is classified as object-oriented. A language that supports creating an object from a class is classified as class-based. A language that supports object creation via a template object is classified as prototype-based.
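For instance, a class-based language lets one template (the class) stamp out many objects, each with its own state and identity but shared behavior. A minimal Python sketch (Counter is an illustrative name):

class Counter:
    def __init__(self):
        self.count = 0           # state
    def increment(self):         # behavior
        self.count += 1
        return self.count

a, b = Counter(), Counter()
a.increment()
print(a.count, b.count, a is b)  # 1 0 False: shared behavior, distinct identities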
The concept of object is used in many different software contexts, including:
Possibly the most common use is in-memory objects in a computer program written in an object-based language.
Information systems can be modeled with objects representing their components and interfaces.: 39
In the relational model of database management, aspects such as table and column may act as objects.
Objects of a distributed computing system tend to be larger grained, longer lasting, and more service-oriented than programming objects.
== See also ==
Actor model – Model of concurrent computation
Business object – Entity within a multi-tiered software application
Instance (computer science) – Concrete manifestation of an object (class) in software development
Object lifetime – Time period between the creation and destruction of an object-oriented programming instance
Object copying – Technique in object-oriented programming
Semantic Web – Extension of the Web to facilitate data exchange
== References ==
== External links ==
What Is an Object? from The Java Tutorials | Wikipedia/Object_(computer_science) |
In rewriting, a reduction strategy or rewriting strategy is a relation specifying a rewrite for each object or term, compatible with a given reduction relation. Some authors use the term to refer to an evaluation strategy.
== Definitions ==
Formally, for an abstract rewriting system {\displaystyle (A,\to )}, a reduction strategy {\displaystyle \to _{S}} is a binary relation on {\displaystyle A} with {\displaystyle \to _{S}\subseteq {\overset {+}{\to }}}, where {\displaystyle {\overset {+}{\to }}} is the transitive closure of {\displaystyle \to } (but not the reflexive closure). In addition the normal forms of the strategy must be the same as the normal forms of the original rewriting system, i.e. for all {\displaystyle a}, there exists a {\displaystyle b} with {\displaystyle a\to b} iff {\displaystyle \exists b'.a\to _{S}b'}.
A one-step reduction strategy is one where {\displaystyle \to _{S}\subseteq \to }. Otherwise it is a many-step strategy.
A deterministic strategy is one where {\displaystyle \to _{S}} is a partial function, i.e. for each {\displaystyle a\in A} there is at most one {\displaystyle b} such that {\displaystyle a\to _{S}b}. Otherwise it is a nondeterministic strategy.
== Term rewriting ==
In a term rewriting system a rewriting strategy specifies, out of all the reducible subterms (redexes), which one should be reduced (contracted) within a term.
One-step strategies for term rewriting include:
leftmost-innermost: in each step the leftmost of the innermost redexes is contracted, where an innermost redex is a redex not containing any redexes
leftmost-outermost: in each step the leftmost of the outermost redexes is contracted, where an outermost redex is a redex not contained in any redexes
rightmost-innermost, rightmost-outermost: similarly
Many-step strategies include:
parallel-innermost: reduces all innermost redexes simultaneously. This is well-defined because the redexes are pairwise disjoint.
parallel-outermost: similarly
Gross-Knuth reduction, also called full substitution or Kleene reduction: all redexes in the term are simultaneously reduced
Parallel outermost and Gross-Knuth reduction are hypernormalizing for all almost-orthogonal term rewriting systems, meaning that these strategies will eventually reach a normal form if it exists, even when performing (finitely many) arbitrary reductions between successive applications of the strategy.
Stratego is a domain-specific language designed specifically for programming term rewriting strategies.
== Lambda calculus ==
In the context of the lambda calculus, normal-order reduction refers to leftmost-outermost reduction in the sense given above. Normal-order reduction is normalizing, in the sense that if a term has a normal form, then normal‐order reduction will eventually reach it, hence the name normal. This is known as the standardization theorem.
Leftmost reduction is sometimes used to refer to normal order reduction, as with a pre-order traversal the notions coincide, and similarly the leftmost-outermost redex is the redex with leftmost starting character when the lambda term is considered as a string of characters. When "leftmost" is defined using an in-order traversal the notions are distinct. For example, in the term
{\displaystyle (\lambda x.x\Omega )(\lambda y.I)} with {\displaystyle \Omega ,I} defined here, the leftmost redex of the in-order traversal is {\displaystyle \Omega } while the leftmost-outermost redex is the entire expression.
Applicative order reduction refers to leftmost-innermost reduction.
In contrast to normal order, applicative order reduction may not terminate, even when the term has a normal form. For example, using applicative order reduction, the following sequence of reductions is possible:
{\displaystyle {\begin{aligned}&(\mathbf {\lambda } x.z)((\lambda w.www)(\lambda w.www))\\\rightarrow &(\lambda x.z)((\lambda w.www)(\lambda w.www)(\lambda w.www))\\\rightarrow &(\lambda x.z)((\lambda w.www)(\lambda w.www)(\lambda w.www)(\lambda w.www))\\\rightarrow &(\lambda x.z)((\lambda w.www)(\lambda w.www)(\lambda w.www)(\lambda w.www)(\lambda w.www))\\&\ldots \end{aligned}}}
But using normal-order reduction, the same starting point reduces quickly to normal form:
{\displaystyle (\mathbf {\lambda } x.z)((\lambda w.www)(\lambda w.www))\rightarrow z}
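Python, being strict, can mimic this contrast by suspending the dangerous argument in a thunk (a sketch; loop and const_z are illustrative names):

def loop():                      # stands in for the divergent (λw.www)(λw.www)
    while True:
        pass

const_z = lambda x: "z"          # plays the role of λx.z
print(const_z(lambda: loop()))   # "z": the thunk is passed but never forced
# const_z(loop())                # applicative order: would never return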
Full β-reduction refers to the nondeterministic one-step strategy that allows reducing any redex at each step. Takahashi's parallel β-reduction is the strategy that reduces all redexes in the term simultaneously.
=== Weak reduction ===
Normal and applicative order reduction are strong in that they allow reduction under lambda abstractions. In contrast, weak reduction does not reduce under a lambda abstraction. Call-by-name reduction is the weak reduction strategy that reduces the leftmost outermost redex not inside a lambda abstraction, while call-by-value reduction is the weak reduction strategy that reduces the leftmost innermost redex not inside a lambda abstraction. These strategies were devised to reflect the call-by-name and call-by-value evaluation strategies. In fact, applicative order reduction was also originally introduced to model the call-by-value parameter passing technique found in Algol 60 and modern programming languages. When combined with the idea of weak reduction, the resulting call-by-value reduction is indeed a faithful approximation.
Unfortunately, weak reduction is not confluent, and the traditional reduction equations of the lambda calculus are useless, because they suggest relationships that violate the weak evaluation regime. However, it is possible to extend the system to be confluent by allowing a restricted form of reduction under an abstraction, in particular when the redex does not involve the variable bound by the abstraction. For example, λx.(λy.x)z is in normal form for a weak reduction strategy because the redex (λy.x)z is contained in a lambda abstraction. But the term λx.(λy.y)z can still be reduced under the extended weak reduction strategy, because the redex (λy.y)z does not refer to x.
=== Optimal reduction ===
Optimal reduction is motivated by the existence of lambda terms where there does not exist a sequence of reductions which reduces them without duplicating work. For example, consider
((λg.(g(g(λx.x))))
(λh.((λf.(f(f(λz.z))))
(λw.(h(w(λy.y)))))))
It is composed of three nested terms, x=((λg. ... ) (λh.y)), y=((λf. ...) (λw.z) ), and z=λw.(h(w(λy.y))). There are only two possible β-reductions to be done here, on x and on y. Reducing the outer x term first results in the inner y term being duplicated, and each copy will have to be reduced, but reducing the inner y term first will duplicate its argument z, which will cause work to be duplicated when the values of h and w are made known.
Optimal reduction is not a reduction strategy for the lambda calculus in a narrow sense because performing β-reduction loses the information about the substituted redexes being shared. Instead it is defined for the labelled lambda calculus, an annotated lambda calculus which captures a precise notion of the work that should be shared.: 113–114
Labels consist of a countably infinite set of atomic labels, and concatenations {\displaystyle ab}, overlinings {\displaystyle {\overline {a}}} and underlinings {\displaystyle {\underline {a}}} of labels. A labelled term is a lambda calculus term where each subterm has a label. The standard initial labeling of a lambda term gives each subterm a unique atomic label.: 132 Labelled β-reduction is given by:
{\displaystyle ((\lambda x.M)^{\alpha }N)^{\beta }\to \beta {\overline {\alpha }}\cdot M[x\mapsto {\underline {\alpha }}\cdot N]}
where {\displaystyle \cdot } concatenates labels, {\displaystyle \beta \cdot T^{\alpha }=T^{\beta \alpha }}, and substitution {\displaystyle M[x\mapsto N]} is defined as follows (using the Barendregt convention):
{\displaystyle {\begin{aligned}x^{\alpha }[x\mapsto N]&=\alpha \cdot N&\quad (\lambda y.M)^{\alpha }[x\mapsto N]&=(\lambda y.M[x\mapsto N])^{\alpha }\\y^{\alpha }[x\mapsto N]&=y^{\alpha }&\quad (MN)^{\alpha }[x\mapsto P]&=(M[x\mapsto P]N[x\mapsto P])^{\alpha }\end{aligned}}}
The system can be proven to be confluent. Optimal reduction is then defined to be normal order or leftmost-outermost reduction using reduction by families, i.e. the parallel reduction of all redexes with the same function part label. The strategy is optimal in the sense that it performs the optimal (minimal) number of family reduction steps.
A practical algorithm for optimal reduction was first described in 1989, more than a decade after optimal reduction was first defined in 1974. The Bologna Optimal Higher-order Machine (BOHM) is a prototype implementation of an extension of the technique to interaction nets.: 362 Lambdascope is a more recent implementation of optimal reduction, also using interaction nets.
=== Call by need reduction ===
Call by need reduction can be defined similarly to optimal reduction as weak leftmost-outermost reduction using parallel reduction of redexes with the same label, for a slightly different labelled lambda calculus. An alternate definition changes the beta rule to an operation that finds the next "needed" computation, evaluates it, and substitutes the result into all locations. This requires extending the beta rule to allow reducing terms that are not syntactically adjacent. As with call-by-name and call-by-value, call-by-need reduction was devised to mimic the behavior of the evaluation strategy known as "call-by-need" or lazy evaluation.
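The memoization aspect can be sketched in Python as a thunk that evaluates its suspended expression at most once (Thunk and force are illustrative names):

class Thunk:
    def __init__(self, compute):
        self.compute = compute
        self.done = False
        self.value = None
    def force(self):
        if not self.done:                # first demand: evaluate and cache
            self.value = self.compute()
            self.compute = None          # drop the closure once evaluated
            self.done = True
        return self.value                # later demands reuse the cache

t = Thunk(lambda: print("evaluating ...") or 42)
print(t.force())   # evaluating ... 42
print(t.force())   # 42, with no re-evaluation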
== See also ==
Reduction system
Reduction semantics
Thunk
== Notes ==
== References ==
== External links ==
Lambda calculus reduction workbench | Wikipedia/Reduction_strategy |
Combinatory logic is a notation to eliminate the need for quantified variables in mathematical logic. It was introduced by Moses Schönfinkel and Haskell Curry, and has more recently been used in computer science as a theoretical model of computation and also as a basis for the design of functional programming languages. It is based on combinators, which were introduced by Schönfinkel in 1920 with the idea of providing an analogous way to build up functions—and to remove any mention of variables—particularly in predicate logic. A combinator is a higher-order function that uses only function application and earlier defined combinators to define a result from its arguments.
== In mathematics ==
Combinatory logic was originally intended as a 'pre-logic' that would clarify the role of quantified variables in logic, essentially by eliminating them. Another way of eliminating quantified variables is Quine's predicate functor logic. While the expressive power of combinatory logic typically exceeds that of first-order logic, the expressive power of predicate functor logic is identical to that of first order logic (Quine 1960, 1966, 1976).
The original inventor of combinatory logic, Moses Schönfinkel, published nothing on combinatory logic after his original 1924 paper. Haskell Curry rediscovered the combinators while working as an instructor at Princeton University in late 1927. In the late 1930s, Alonzo Church and his students at Princeton invented a rival formalism for functional abstraction, the lambda calculus, which proved more popular than combinatory logic. The upshot of these historical contingencies was that until theoretical computer science began taking an interest in combinatory logic in the 1960s and 1970s, nearly all work on the subject was by Haskell Curry and his students, or by Robert Feys in Belgium. Curry and Feys (1958), and Curry et al. (1972) survey the early history of combinatory logic. For a more modern treatment of combinatory logic and the lambda calculus together, see the book by Barendregt, which reviews the models Dana Scott devised for combinatory logic in the 1960s and 1970s.
== In computing ==
In computer science, combinatory logic is used as a simplified model of computation, used in computability theory and proof theory. Despite its simplicity, combinatory logic captures many essential features of computation.
Combinatory logic can be viewed as a variant of the lambda calculus, in which lambda expressions (representing functional abstraction) are replaced by a limited set of combinators, primitive functions without free variables. It is easy to transform lambda expressions into combinator expressions, and combinator reduction is much simpler than lambda reduction. Hence combinatory logic has been used to model some non-strict functional programming languages and hardware. The purest form of this view is the programming language Unlambda, whose sole primitives are the S and K combinators augmented with character input/output. Although not a practical programming language, Unlambda is of some theoretical interest.
Combinatory logic can be given a variety of interpretations. Many early papers by Curry showed how to translate axiom sets for conventional logic into combinatory logic equations. Dana Scott in the 1960s and 1970s showed how to marry model theory and combinatory logic.
== Summary of lambda calculus ==
Lambda calculus is concerned with objects called lambda-terms, which can be represented by the following three forms of strings:
{\displaystyle v}
{\displaystyle \lambda v.E_{1}}
{\displaystyle (E_{1}E_{2})}
where {\displaystyle v} is a variable name drawn from a predefined infinite set of variable names, and {\displaystyle E_{1}} and {\displaystyle E_{2}} are lambda-terms.
Terms of the form {\displaystyle \lambda v.E_{1}} are called abstractions. The variable v is called the formal parameter of the abstraction, and {\displaystyle E_{1}} is the body of the abstraction. The term {\displaystyle \lambda v.E_{1}} represents the function which, applied to an argument, binds the formal parameter v to the argument and then computes the resulting value of {\displaystyle E_{1}} — that is, it returns {\displaystyle E_{1}}, with every occurrence of v replaced by the argument.
Terms of the form {\displaystyle (E_{1}E_{2})} are called applications. Applications model function invocation or execution: the function represented by {\displaystyle E_{1}} is to be invoked, with {\displaystyle E_{2}} as its argument, and the result is computed. If {\displaystyle E_{1}} (sometimes called the applicand) is an abstraction, the term may be reduced: {\displaystyle E_{2}}, the argument, may be substituted into the body of {\displaystyle E_{1}} in place of the formal parameter of {\displaystyle E_{1}}, and the result is a new lambda term which is equivalent to the old one. If a lambda term contains no subterms of the form {\displaystyle ((\lambda v.E_{1})E_{2})} then it cannot be reduced, and is said to be in normal form.
The expression {\displaystyle E[v:=a]} represents the result of taking the term E and replacing all free occurrences of v in it with a. Thus we write
{\displaystyle ((\lambda v.E)a)\Rightarrow E[v:=a]}
By convention, we take {\displaystyle (abc)} as shorthand for {\displaystyle ((ab)c)} (i.e., application is left associative).
The motivation for this definition of reduction is that it captures the essential behavior of all mathematical functions. For example, consider the function that computes the square of a number. We might write
The square of x is {\displaystyle x*x}
(Using "{\displaystyle *}" to indicate multiplication.) x here is the formal parameter of the function. To evaluate the square for a particular argument, say 3, we insert it into the definition in place of the formal parameter:
The square of 3 is {\displaystyle 3*3}
To evaluate the resulting expression {\displaystyle 3*3}, we would have to resort to our knowledge of multiplication and the number 3. Since any computation is simply a composition of the evaluation of suitable functions on suitable primitive arguments, this simple substitution principle suffices to capture the essential mechanism of computation.
Moreover, in lambda calculus, notions such as '3' and '{\displaystyle *}' can be represented without any need for externally defined primitive operators or constants. It is possible to identify terms in lambda calculus, which, when suitably interpreted, behave like the number 3 and like the multiplication operator, q.v. Church encoding.
Lambda calculus is known to be computationally equivalent in power to many other plausible models for computation (including Turing machines); that is, any calculation that can be accomplished in any of these other models can be expressed in lambda calculus, and vice versa. According to the Church–Turing thesis, both models can express any possible computation.
It is perhaps surprising that lambda-calculus can represent any conceivable computation using only the simple notions of function abstraction and application based on simple textual substitution of terms for variables. But even more remarkable is that abstraction is not even required. Combinatory logic is a model of computation equivalent to lambda calculus, but without abstraction. The advantage of this is that evaluating expressions in lambda calculus is quite complicated because the semantics of substitution must be specified with great care to avoid variable capture problems. In contrast, evaluating expressions in combinatory logic is much simpler, because there is no notion of substitution.
== Combinatory calculi ==
Since abstraction is the only way to manufacture functions in the lambda calculus, something must replace it in the combinatory calculus. Instead of abstraction, combinatory calculus provides a limited set of primitive functions out of which other functions may be built.
=== Combinatory terms ===
A combinatory term has one of the following forms: a variable x; a primitive function P (one of the combinators, e.g. S, K, I); or an application (E1 E2), where E1 and E2 are combinatory terms.
The primitive functions are combinators, or functions that, when seen as lambda terms, contain no free variables.
To shorten the notations, a general convention is that {\displaystyle (E_{1}E_{2}E_{3}...E_{n})}, or even {\displaystyle E_{1}E_{2}E_{3}...E_{n}}, denotes the term {\displaystyle (...((E_{1}E_{2})E_{3})...E_{n})}. This is the same general convention (left-associativity) as for multiple application in lambda calculus.
=== Reduction in combinatory logic ===
In combinatory logic, each primitive combinator comes with a reduction rule of the form
(P x1 ... xn) = E
where E is a term mentioning only variables from the set {x1 ... xn}. It is in this way that primitive combinators behave as functions.
=== Examples of combinators ===
The simplest example of a combinator is I, the identity combinator, defined by
(I x) = x
for all terms x. Another simple combinator is K, which manufactures constant functions: (K x) is the function which, for any argument, returns x, so we say
((K x) y) = x
for all terms x and y. Or, following the convention for multiple application,
(K x y) = x
A third combinator is S, which is a generalized version of application:
(S x y z) = (x z (y z))
S applies x to y after first substituting z into
each of them. Or put another way, x is applied to y inside the environment z.
Given S and K, I itself is unnecessary, since it can be built from the other two:
((S K K) x)
= (S K K x)
= (K x (K x))
= x
for any term x. Note that although ((S K K)
x) = (I x) for any x, (S K K)
itself is not equal to I. We say the terms are extensionally equal. Extensional equality captures the mathematical notion of the equality of functions: that two functions are equal if they always produce the same results for the same arguments. In contrast, the terms themselves, together with the reduction of primitive combinators, capture the notion of intensional equality of functions: that two functions are equal only if they have identical implementations up to the expansion of primitive combinators. There are many ways to implement an identity function; (S K K) and I are among these ways. (S K S) is yet another. We will use the word equivalent to indicate extensional equality, reserving equal for identical combinatorial terms.
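The reduction rules can be checked with curried Python functions (a sketch; the test argument "anything" is arbitrary):

# S, K and I as curried functions: S K K returns its argument unchanged,
# even though as a term (S K K) is distinct from I.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = lambda x: x

print(S(K)(K)("anything"), I("anything"))   # anything anything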
A more interesting combinator is the fixed point combinator or Y combinator, which can be used to implement recursion.
=== Completeness of the S-K basis ===
S and K can be composed to produce combinators that are extensionally equal to any lambda term, and therefore, by Church's thesis, to any computable function whatsoever. The proof is to present a transformation, T[ ], which converts an arbitrary lambda term into an equivalent combinator.
T[ ] may be defined as follows:
T[x] ⇒ x
T[(E1 E2)] ⇒ (T[E1] T[E2])
T[λx.E] ⇒ (K T[E]) (if x does not occur free in E)
T[λx.x] ⇒ I
T[λx.λy.E] ⇒ T[λx.T[λy.E]] (if x occurs free in E)
T[λx.(E1 E2)] ⇒ (S T[λx.E1] T[λx.E2]) (if x occurs free in E1 or E2)
Note that T[ ] as given is not a well-typed mathematical function, but rather a term rewriter: Although it eventually yields a combinator, the transformation may generate intermediary expressions that are neither lambda terms nor combinators, via rule (5).
This process is also known as abstraction elimination. This definition is exhaustive: any lambda expression will be subject to exactly one of these rules (see Summary of lambda calculus above).
It is related to the process of bracket abstraction, which takes an expression E built from variables and application and produces a combinator expression [x]E in which the variable x is not free, such that [x]E x = E holds. A very simple algorithm for bracket abstraction is defined by induction on the structure of expressions as follows:
[x]y := K y
[x]x := I
[x](E1 E2) := S([x]E1)([x]E2)
Bracket abstraction induces a translation from lambda terms to combinator expressions, by interpreting lambda-abstractions using the bracket abstraction algorithm.
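A direct transliteration of the three rules (a sketch: variables are strings, applications are pairs, and the combinators are the strings "S", "K", "I"):

def apply_(f, a):                 # build the application (f a)
    return (f, a)

def bracket(x, e):
    if e == x:
        return "I"                # [x]x := I
    if isinstance(e, tuple):      # [x](E1 E2) := S([x]E1)([x]E2)
        return apply_(apply_("S", bracket(x, e[0])), bracket(x, e[1]))
    return apply_("K", e)         # [x]y := K y

print(bracket("x", ("y", "x")))   # (('S', ('K', 'y')), 'I'), i.e. S(Ky)I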
==== Conversion of a lambda term to an equivalent combinatorial term ====
For example, we will convert the lambda term λx.λy.(y x) to a combinatorial term:
T[λx.λy.(y x)]
= T[λx.T[λy.(y x)]] (by 5)
= T[λx.(S T[λy.y] T[λy.x])] (by 6)
= T[λx.(S I T[λy.x])] (by 4)
= T[λx.(S I (K T[x]))] (by 3)
= T[λx.(S I (K x))] (by 1)
= (S T[λx.(S I)] T[λx.(K x)]) (by 6)
= (S (K (S I)) T[λx.(K x)]) (by 3)
= (S (K (S I)) (S T[λx.K] T[λx.x])) (by 6)
= (S (K (S I)) (S (K K) T[λx.x])) (by 3)
= (S (K (S I)) (S (K K) I)) (by 4)
If we apply this combinatorial term to any two terms x and y (by feeding them in a queue-like fashion into the combinator 'from the right'), it reduces as follows:
(S (K (S I)) (S (K K) I) x y)
= (K (S I) x (S (K K) I x) y)
= (S I (S (K K) I x) y)
= (I y (S (K K) I x y))
= (y (S (K K) I x y))
= (y (K K x (I x) y))
= (y (K (I x) y))
= (y (I x))
= (y x)
The combinatory representation, (S (K (S I)) (S (K K) I)) is much longer than the representation as a lambda term, λx.λy.(y x). This is typical. In general, the T[ ] construction may expand a lambda term of length n to a combinatorial term of length Θ(n³).
==== Explanation of the T[ ] transformation ====
The T[ ] transformation is motivated by a desire to eliminate abstraction. Two special cases, rules 3 and 4, are trivial: λx.x is clearly equivalent to I, and λx.E is clearly equivalent to (K T[E]) if x does not appear free in E.
The first two rules are also simple: Variables convert to themselves, and applications, which are allowed in combinatory terms, are converted to combinators simply by converting the applicand and the argument to combinators.
It is rules 5 and 6 that are of interest. Rule 5 simply says that to convert a complex abstraction to a combinator, we must first convert its body to a combinator, and then eliminate the abstraction. Rule 6 actually eliminates the abstraction.
λx.(E1 E2) is a function which takes an argument, say a, and substitutes it into the lambda term (E1 E2) in place of x, yielding (E1 E2)[x : = a]. But substituting a into (E1 E2) in place of x is just the same as substituting it into both E1 and E2, so
(E1 E2)[x := a] = (E1[x := a] E2[x := a])
(λx.(E1 E2) a) = ((λx.E1 a) (λx.E2 a))
= (S λx.E1 λx.E2 a)
= ((S λx.E1 λx.E2) a)
By extensional equality,
λx.(E1 E2) = (S λx.E1 λx.E2)
Therefore, to find a combinator equivalent to λx.(E1 E2), it is sufficient to find a combinator equivalent to (S λx.E1 λx.E2), and
(S T[λx.E1] T[λx.E2])
evidently fits the bill. E1 and E2 each contain strictly fewer applications than (E1 E2), so the recursion must terminate in a lambda term with no applications at all—either a variable, or a term of the
form λx.E.
=== Simplifications of the transformation ===
==== η-reduction ====
The combinators generated by the T[ ] transformation can be made smaller if we take into account the η-reduction rule:
T[λx.(E x)] = T[E] (if x is not free in E)
λx.(E x) is the function which takes an argument, x, and applies the function E to it; this is extensionally equal to the function E itself. It is therefore sufficient to convert E to combinatorial form.
Taking this simplification into account, the example above becomes:
T[λx.λy.(y x)]
= ...
= (S (K (S I)) T[λx.(K x)])
= (S (K (S I)) K) (by η-reduction)
This combinator is equivalent to the earlier, longer one:
(S (K (S I)) K x y)
= (K (S I) x (K x) y)
= (S I (K x) y)
= (I y (K x y))
= (y (K x y))
= (y x)
Similarly, the original version of the T[ ] transformation transformed the identity function λf.λx.(f x) into (S (S (K S) (S (K K) I)) (K I)). With the η-reduction rule, λf.λx.(f x) is
transformed into I.
==== One-point basis ====
There are one-point bases from which every combinator can be composed extensionally equal to any lambda term. A simple example of such a basis is {X} where:
X ≡ λx.((xS)K)
It is not difficult to verify that:
X (X (X X)) =β K and
X (X (X (X X))) =β S.
Since {K, S} is a basis, it follows that {X} is a basis too. The Iota programming language uses X as its sole combinator.
Another simple example of a one-point basis is:
X' ≡ λx.(x K S K) with
(X' X') X' =β K and
X' (X' X') =β S
The simplest known one-point basis is a slight modification of S:
S' ≡ λxλyλz. (x z) (y (λw. z)) with
S' (S' S') (S' (S' S') S' S' S' S' S') =β K and
S' (S' (S' S' (S' S' (S' S'))(S' (S' (S' S' (S' S')))))) S' S' =β S.
In fact, there exist infinitely many such bases.
==== Combinators B, C ====
In addition to S and K, Schönfinkel (1924) included two combinators which are now called B and C, with the following reductions:
(C f g x) = ((f x) g)
(B f g x) = (f (g x))
He also explains how they in turn can be expressed using only S and K:
B = (S (K S) K)
C = (S (S (K (S (K S) K)) S) (K K))
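These definitions can be checked against the reduction rules with the curried S and K from the sketch above (g and h are illustrative test functions):

B = S(K(S))(K)
C = S(S(K(S(K(S))(K)))(S))(K(K))

g = lambda a: ("g", a)
h = lambda a: lambda b: ("h", a, b)
print(B(g)(g)("x"))    # ('g', ('g', 'x')): B f g x = f (g x)
print(C(h)("g")("x"))  # ('h', 'x', 'g'):  C f g x = (f x) g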
These combinators are extremely useful when translating predicate logic or lambda calculus into combinator expressions. They were also used by Curry, and much later by David Turner, whose name has been associated with their computational use. Using them, we can extend the rules for the transformation as follows:
T[x] ⇒ x
T[(E1 E2)] ⇒ (T[E1] T[E2])
T[λx.E] ⇒ (K T[E]) (if x is not free in E)
T[λx.x] ⇒ I
T[λx.λy.E] ⇒ T[λx.T[λy.E]] (if x is free in E)
T[λx.(E1 E2)] ⇒ (S T[λx.E1] T[λx.E2]) (if x is free in both E1 and E2)
T[λx.(E1 E2)] ⇒ (C T[λx.E1] T[E2]) (if x is free in E1 but not E2)
T[λx.(E1 E2)] ⇒ (B T[E1] T[λx.E2]) (if x is free in E2 but not E1)
Using B and C combinators, the transformation of λx.λy.(y x) looks like this:
T[λx.λy.(y x)]
= T[λx.T[λy.(y x)]]
= T[λx.(C T[λy.y] x)] (by rule 7)
= T[λx.(C I x)]
= (C I) (η-reduction)
{\displaystyle ={\mathsf {C}}_{*}} (traditional canonical notation: {\displaystyle {\mathsf {X}}_{*}={\mathsf {XI}}})
{\displaystyle ={\mathsf {I}}'} (traditional canonical notation: {\displaystyle {\mathsf {X}}'={\mathsf {CX}}})
And indeed, (C I x y) does reduce to (y x):
(C I x y)
= (I y x)
= (y x)
The motivation here is that B and C are limited versions of S. Whereas S takes a value and substitutes it into both the applicand and its argument before performing the application, C performs the
substitution only in the applicand, and B only in the argument.
The modern names for the combinators come from Haskell Curry's doctoral thesis of 1930 (see B, C, K, W System). In Schönfinkel's original paper, what we now call S, K, I, B and C were called S, C, I, Z, and T respectively.
The reduction in combinator size that results from the new transformation rules can also be achieved without introducing B and C, as demonstrated in Section 3.2 of Tromp (2008).
===== CLK versus CLI calculus =====
A distinction must be made between the CLK as described in this article and the CLI calculus. The distinction corresponds to that between the λK and the λI calculus. Unlike the λK calculus, the λI calculus restricts abstractions to:
λx.E where x has at least one free occurrence in E.
As a consequence, combinator K is not present in the λI calculus nor in the CLI calculus. The constants of CLI are: I, B, C and S, which form a basis from which all CLI terms can be composed (modulo equality). Every λI term can be converted into an equal CLI combinator according to rules similar to those presented above for the conversion of λK terms into CLK combinators. See chapter 9 in Barendregt (1984).
=== Reverse conversion ===
The conversion L[ ] from combinatorial terms to lambda terms is trivial:
L[I] = λx.x
L[K] = λx.λy.x
L[C] = λx.λy.λz.(x z y)
L[B] = λx.λy.λz.(x (y z))
L[S] = λx.λy.λz.(x z (y z))
L[(E1 E2)] = (L[E1] L[E2])
Note, however, that this transformation is not the inverse
transformation of any of the versions of T[ ] that we have seen.
== Undecidability of combinatorial calculus ==
A normal form is any combinatory term in which the primitive combinators that occur, if any, are not applied to enough arguments to be simplified. It is undecidable whether a general combinatory term has a normal form; whether two combinatory terms are equivalent, etc. This can be shown in a similar way as for the corresponding problems for lambda terms.
=== Undefinability by predicates ===
The undecidable problems above (equivalence, existence of normal form, etc.) take as input syntactic representations of terms under a suitable encoding (e.g., Church encoding). One may also consider a toy trivial computation model where we "compute" properties of terms by means of combinators applied directly to the terms themselves as arguments, rather than to their syntactic representations. More precisely, let a predicate be a combinator that, when applied, returns either T or F (where T and F represent the conventional Church encodings of true and false, λx.λy.x and λx.λy.y, transformed into combinatory logic; the combinatory versions have T = K and F = (K I)). A predicate N is nontrivial if there are two arguments A and B such that N A = T and N B = F. A combinator N is complete if NM has a normal form for every argument M. An analogue of Rice's theorem for this toy model then says that every complete predicate is trivial. The proof of this theorem is rather simple.
From this undefinability theorem it immediately follows that there is no complete predicate that can discriminate between terms that have a normal form and terms that do not have a normal form. It also follows that there is no complete predicate, say EQUAL, such that:
(EQUAL A B) = T if A = B and
(EQUAL A B) = F if A ≠ B.
If EQUAL existed, then for all A, λx.(EQUAL x A) would have to be a complete nontrivial predicate.
However, note that it also immediately follows from this undefinability theorem that many properties of terms that are obviously decidable are not definable by complete predicates either: e.g., there is no predicate that could tell whether the first primitive function letter occurring in a term is a K. This shows that definability by predicates is not a reasonable model of decidability.
== Applications ==
=== Compilation of functional languages ===
David Turner used his combinators to implement the SASL programming language.
Kenneth E. Iverson used primitives based on Curry's combinators in his J programming language, a successor to APL. This enabled what Iverson called tacit programming, that is, programming in functional expressions containing no variables, along with powerful tools for working with such programs. It turns out that tacit programming is possible in any APL-like language with user-defined operators.
=== Logic ===
The Curry–Howard isomorphism implies a connection between logic and programming: every proof of a theorem of intuitionistic logic corresponds to a reduction of a typed lambda term, and conversely. Moreover, theorems can be identified with function type signatures. Specifically, a typed combinatory logic corresponds to a Hilbert system in proof theory.
The K and S combinators correspond to the axioms
AK: A → (B → A),
AS: (A → (B → C)) → ((A → B) → (A → C)),
and function application corresponds to the detachment (modus ponens) rule
MP: from A and A → B infer B.
The calculus consisting of AK, AS, and MP is complete for the implicational fragment of the intuitionistic logic, which can be seen as follows. Consider the set W of all deductively closed sets of formulas, ordered by inclusion. Then
{\displaystyle \langle W,\subseteq \rangle } is an intuitionistic Kripke frame, and we define a model {\displaystyle \Vdash } in this frame by {\displaystyle X\Vdash A\iff A\in X.}
This definition obeys the conditions on satisfaction of →: on one hand, if {\displaystyle X\Vdash A\to B}, and {\displaystyle Y\in W} is such that {\displaystyle Y\supseteq X} and {\displaystyle Y\Vdash A}, then {\displaystyle Y\Vdash B} by modus ponens. On the other hand, if {\displaystyle X\not \Vdash A\to B}, then {\displaystyle X,A\not \vdash B} by the deduction theorem, thus the deductive closure of {\displaystyle X\cup \{A\}} is an element {\displaystyle Y\in W} such that {\displaystyle Y\supseteq X}, {\displaystyle Y\Vdash A}, and {\displaystyle Y\not \Vdash B}.
Let A be any formula which is not provable in the calculus. Then A does not belong to the deductive closure X of the empty set, thus {\displaystyle X\not \Vdash A}, and A is not intuitionistically valid.
== See also ==
Applicative computing systems
B, C, K, W system
Categorical abstract machine
Combinatory categorial grammar
Explicit substitution
Fixed point combinator
Graph reduction machine
Lambda calculus and Cylindric algebra, other approaches to modelling quantification and eliminating variables
SKI combinator calculus
Supercombinator
To Mock a Mockingbird
== References ==
== Literature ==
Barendregt, Hendrik Pieter (1984). The Lambda Calculus, Its Syntax and Semantics. Studies in Logic and the Foundations of Mathematics. Vol. 103. North Holland. ISBN 0-444-87508-5.
Cherlin, Edward (1991). "Pure functions in APL and J". Proceedings of the international conference on APL '91 - APL '91. pp. 88–93. doi:10.1145/114054.114065. ISBN 0897914414. S2CID 25802202.
Curry, Haskell Brooks (1930). "Grundlagen der Kombinatorischen Logik" [Foundations of combinatorial logic]. American Journal of Mathematics (in German). 52 (3). The Johns Hopkins University Press: 509–536. doi:10.2307/2370619. JSTOR 2370619.
Curry, Haskell Brooks; Feys, Robert (1958). Combinatory Logic. Vol. I. Amsterdam: North Holland. ISBN 0-7204-2208-6. {{cite book}}: ISBN / Date incompatibility (help)
Curry, Haskell Brooks; Hindley, J. Roger; Seldin, Jonathan P. (1972). Combinatory Logic. Vol. II. Amsterdam: North Holland. ISBN 0-7204-2208-6.
Engeler, E. (1995). The Combinatory Programme (PDF). Birkhäuser. pp. 5–6.
Field, Anthony J.; Harrison, Peter G. (1998). Functional Programming. Addison-Wesley. ISBN 0-201-19249-7.
Goldberg, Mayer (2004). "A construction of one-point bases in extended lambda calculi". Information Processing Letters. 89 (6): 281–286. doi:10.1016/j.ipl.2003.12.005.
Hindley, J. Roger; Meredith, David (1990). "Principal type-schemes and condensed detachment". Journal of Symbolic Logic. 55 (1): 90–105. doi:10.2307/2274956. JSTOR 2274956. MR 1043546. S2CID 6930576.
Hindley, J. Roger; Seldin, Jonathan P. (2008) [1986]. Lambda-Calculus and Combinators: An Introduction (2nd ed.). Cambridge University Press. ISBN 9780521898850.
Lachowski, Łukasz (2018). "On the Complexity of the Standard Translation of Lambda Calculus into Combinatory Logic". Reports on Mathematical Logic. 2018 (53): 19–42. doi:10.4467/20842589RM.18.002.8835. Retrieved 9 September 2018.
Paulson, Lawrence C. (1995). Foundations of Functional Programming. University of Cambridge.
Quine, Willard Van Orman (1960). "Variables explained away". Proceedings of the American Philosophical Society. 104 (3): 343–347. JSTOR 985250. Reprinted as Chapter 23 of Quine (1996)
Quine, Willard Van Orman (1996) [1960]. "Variables explained away". Selected Logic Papers (Enl. ed., 2. print ed.). Cambridge, Mass.: Harvard University Press. pp. 227–235. ISBN 9780674798373.
Schönfinkel, Moses (1924). "Über die Bausteine der mathematischen Logik" (PDF). Mathematische Annalen (in German). 92 (3–4): 305–316. doi:10.1007/bf01448013. S2CID 118507515. The article that founded combinatory logic. English translation: Schönfinkel (1967)
Schönfinkel, Moses (1967) [1924]. Van Heijenoort, Jean (ed.). Über die Bausteine der mathematischen Logik [On the building blocks of mathematical logic]. From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931. Translated by Bauer-Mengelberg, Stefan. Cambridge, MA, USA: Harvard University Press. pp. 355–366. ISBN 978-0674324497. OCLC 503886453.
Seldin, Jonathan P. (3 March 2008). "The Logic of Curry and Church" (PDF). Retrieved 17 September 2023.
Smullyan, Raymond (1985). To Mock a Mockingbird And Other Logic Puzzles Including an Amazing Adventure in Combinatory Logic. Knopf. ISBN 0-394-53491-3. A gentle introduction to combinatory logic, presented as a series of recreational puzzles using bird watching metaphors.
Smullyan, Raymond (1994). Diagonalization and Self-Reference. Oxford logic guides. Vol. 27. Oxford and New York: Oxford University Press. ISBN 978-0198534501. Chapters 17–20 are a more formal introduction to combinatory logic, with a special emphasis on fixed point results.
Sørensen, Morten Heine B; Urzyczyn, Paweł (2006) [1999]. Lectures on the Curry–Howard Isomorphism (PDF). Studies in Logic and the Foundations of Mathematics. Vol. 149 (1st ed.). Elsevier. p. 442. ISBN 978-0444520777. Archived from the original (PDF) on 2005-10-16. Retrieved 2017-04-22.
Tromp, John (2008). "Binary Lambda Calculus and Combinatory Logic" (PDF). In Calude, Cristian S. (ed.). Randomness And Complexity, from Leibniz To Chaitin. World Scientific Publishing Company. Archived from the original (PDF) on 2016-03-04.
Turner, David A. (1979). "Another Algorithm for Bracket Abstraction". The Journal of Symbolic Logic. 44 (2): 267–270. doi:10.2307/2273733. JSTOR 2273733. S2CID 35835482.
Wolfengagen, V. E. (2003). Combinatory logic in programming: Computations with objects through examples and exercises (2nd ed.). Moscow: "Center JurInfoR" Ltd. ISBN 5-89158-101-9.
Wolfram, Stephen (2021). Combinators: A Centennial View. Wolfram Media. ISBN 978-1-57955-043-1. A celebration of the development of combinators, a hundred years after they were introduced by Schönfinkel (1924) (eBook: ISBN 978-1-57955-044-8)
== External links ==
Stanford Encyclopedia of Philosophy: "Combinatory Logic" by Katalin Bimbó.
1920–1931 Curry's block notes.
Keenan, David C. (2001) "To Dissect a Mockingbird: A Graphical Notation for the Lambda Calculus with Animated Reduction."
Rathman, Chris, "Combinator Birds." A table distilling much of the essence of Smullyan (1985).
Drag 'n' Drop Combinators. (Java Applet)
Binary Lambda Calculus and Combinatory Logic.
Combinatory logic reduction web server
Wolfram, Stephen (29 April 2020). Combinators: 100-Year Celebration. Wolfram Physics Project on YouTube. Retrieved 26 September 2023. | Wikipedia/Combinator_calculus |
In a programming language, an evaluation strategy is a set of rules for evaluating expressions. The term is often used to refer to the more specific notion of a parameter-passing strategy that defines the kind of value that is passed to the function for each parameter (the binding strategy) and whether to evaluate the parameters of a function call, and if so in what order (the evaluation order). The notion of reduction strategy is distinct, although some authors conflate the two terms and the definition of each term is not widely agreed upon. A programming language's evaluation strategy is part of its high-level semantics. Some languages, such as PureScript, have variants with different evaluation strategies. Some declarative languages, such as Datalog, support multiple evaluation strategies.
The calling convention consists of the low-level platform-specific details of parameter passing.
== Example ==
To illustrate, executing a function call f(a,b) may first evaluate the arguments a and b, store the results in references or memory locations ref_a and ref_b, then evaluate the function's body with those references passed in. This gives the function the ability to look up the original argument values passed in through dereferencing the parameters (some languages use specific operators to perform this), to modify them via assignment as if they were local variables, and to return values via the references. This is the call-by-reference evaluation strategy.
== Table ==
This is a table of evaluation strategies and representative languages by year introduced. The representative languages are listed in chronological order, starting with the language(s) that introduced the strategy and followed by prominent languages that use the strategy.: 434
== Evaluation orders ==
While the order of operations defines the abstract syntax tree of the expression, the evaluation order defines the order in which expressions are evaluated. For example, a Python program along the lines of the following sketch, where the tracing helper f prints its argument and returns it unchanged,
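def f(x):
    print(x, end="")
    return x

print(f(1) + f(2))    # prints 1, then 2, then the sum 3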
outputs 123 due to Python's left-to-right evaluation order, but a similar program in OCaml:
outputs 213 due to OCaml's right-to-left evaluation order.
The evaluation order is mainly visible in code with side effects, but it also affects the performance of the code because a rigid order inhibits instruction scheduling. For this reason language standards such as C++ traditionally left the order unspecified, although languages such as Java and C# define the evaluation order as left-to-right: 240–241 and the C++17 standard has added constraints on the evaluation order.
=== Strict evaluation ===
Applicative order is a family of evaluation orders in which a function's arguments are evaluated completely before the function is applied.
This has the effect of making the function strict, i.e. the function's result is undefined if any of the arguments are undefined, so applicative order evaluation is more commonly called strict evaluation. Furthermore, a function call is performed as soon as it is encountered in a procedure, so it is also called eager evaluation or greedy evaluation. Some authors refer to strict evaluation as "call by value" due to the call-by-value binding strategy requiring strict evaluation.
Common Lisp, Eiffel and Java evaluate function arguments left-to-right. C leaves the order undefined. Scheme requires the execution order to be the sequential execution of an unspecified permutation of the arguments. OCaml similarly leaves the order unspecified, but in practice evaluates arguments right-to-left due to the design of its abstract machine. All of these are strict evaluation.
=== Non-strict evaluation ===
A non-strict evaluation order is an evaluation order that is not strict, that is, a function may return a result before all of its arguments are fully evaluated.: 46–47 The prototypical example is normal order evaluation, which does not evaluate any of the arguments until they are needed in the body of the function. Normal order evaluation has the property that it terminates without error whenever any other evaluation order would have terminated without error. The name "normal order" comes from the lambda calculus, where normal order reduction will find a normal form if there is one (it is a "normalizing" reduction strategy). Lazy evaluation is classified in this article as a binding technique rather than an evaluation order. But this distinction is not always followed and some authors define lazy evaluation as normal order evaluation or vice-versa, or confuse non-strictness with lazy evaluation.: 43–44
Boolean expressions in many languages use a form of non-strict evaluation called short-circuit evaluation, in which the left operand is always evaluated but the right operand is skipped whenever the result is already determined: for example, when true is encountered in a disjunctive expression (OR), or when false is encountered in a conjunctive expression (AND). Conditional expressions similarly use non-strict evaluation: only one of the branches is evaluated.
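A Python illustration (not from the original article):

```python
def noisy(label, value):
    print("evaluated", label)
    return value

result = noisy("left", True) or noisy("right", True)
# prints only "evaluated left": True on the left of `or` already decides
# the result, so the right operand is never evaluated
```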
=== Comparison of applicative order and normal order evaluation ===
With normal order evaluation, expressions containing an expensive computation, an error, or an infinite loop will be ignored if not needed, allowing the specification of user-defined control flow constructs, a facility not available with applicative order evaluation. Normal order evaluation uses complex structures such as thunks for unevaluated expressions, compared to the call stack used in applicative order evaluation. Normal order evaluation has historically had a lack of usable debugging tools due to its complexity.
== Strict binding strategies ==
=== Call by value ===
In call by value (or pass by value), the evaluated value of the argument expression is bound to the corresponding variable in the function (frequently by copying the value into a new memory region). If the function or procedure is able to assign values to its parameters, only its local variable is assigned—that is, anything passed into a function call is unchanged in the caller's scope when the function returns. For example, in Pascal, passing an array by value will cause the entire array to be copied, and any mutations to this array will be invisible to the caller:
==== Semantic drift ====
Strictly speaking, under call by value, no operations performed by the called routine can be visible to the caller, other than as part of the return value. This implies a form of purely functional programming in the implementation semantics. However, the circumlocution "call by value where the value is a reference" has become common in some language communities, for example among Java programmers. Compared to traditional pass by value, the value which is passed is not a value in the ordinary sense, such as an integer that can be written as a literal, but an implementation-internal reference handle. Mutations to this reference handle are visible in the caller. Due to the visible mutation, this form of "call by value" is more properly referred to as call by sharing.
In purely functional languages, values and data structures are immutable, so there is no possibility for a function to modify any of its arguments. As such, there is typically no semantic difference between passing by value and passing by reference or a pointer to the data structure, and implementations frequently use call by reference internally for the efficiency benefits. Nonetheless, these languages are typically described as call by value languages.
=== Call by reference ===
Call by reference (or pass by reference) is an evaluation strategy where a parameter is bound to an implicit reference to the variable used as argument, rather than a copy of its value. This typically means that the function can modify (i.e., assign to) the variable used as argument—something that will be seen by its caller. Call by reference can therefore be used to provide an additional channel of communication between the called function and the calling function. Pass by reference can significantly improve performance: calling a function with a many-megabyte structure as an argument does not have to copy the large structure, only the reference to the structure (which is generally a machine word and only a few bytes). However, a call-by-reference language makes it more difficult for a programmer to track the effects of a function call, and may introduce subtle bugs.
Due to variation in syntax, the difference between call by reference (where the reference type is implicit) and call by sharing (where the reference type is explicit) is often unclear on first glance. A simple litmus test is if it's possible to write a traditional swap(a, b) function in the language. For example in Fortran:
Therefore, Fortran's inout intent implements call-by-reference; any variable can be implicitly converted to a reference handle. In contrast the closest one can get in Java is:
where an explicit Box type must be used to introduce a handle. Java is call-by-sharing but not call-by-reference.
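Python gives the same answer to this litmus test as Java; a sketch (illustrative, not from the original article):

```python
def swap(a, b):
    a, b = b, a              # rebinds the local names only

x, y = 1, 2
swap(x, y)
print(x, y)                  # 1 2 -- the caller's bindings are untouched

# The closest workaround mirrors the Java Box: an explicit mutable container.
def swap_boxed(a, b):
    a[0], b[0] = b[0], a[0]

bx, by = [1], [2]
swap_boxed(bx, by)
print(bx[0], by[0])          # 2 1
```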
=== Call by copy-restore ===
Call by copy-restore—also known as "copy-in copy-out", "call by value result", "call by value return" (as termed in the Fortran community)—is a variation of call by reference. With call by copy-restore, the contents of the argument are copied to a new variable local to the call invocation. The function may then modify this variable, similarly to call by reference, but as the variable is local, the modifications are not visible outside of the call invocation during the call. When the function call returns, the updated contents of this variable are copied back to overwrite the original argument ("restored").
The semantics of call by copy-restore is similar in many cases to call by reference, but differs when two or more function arguments alias one another (i.e., point to the same variable in the caller's environment). Under call by reference, writing to one argument will affect the other during the function's execution. Under call by copy-restore, writing to one argument will not affect the other during the function's execution, but at the end of the call, the values of the two arguments may differ, and it is unclear which argument is copied back first and therefore what value the caller's variable receives. For example, Ada specifies that the copy-out assignment for each in out or out parameter occurs in an arbitrary order. From the following program (illegal in Ada 2012) it can be seen that the behavior of GNAT is to copy in left-to-right order on return:
If the program returned 1 it would be copying right-to-left, and under call by reference semantics the program would return 3.
When the reference is passed to the caller uninitialized (for example an out parameter in Ada as opposed to an in out parameter), this evaluation strategy may be called "call by result".
This strategy has gained attention in multiprocessing and remote procedure calls, as unlike call-by-reference it does not require frequent communication between threads of execution for variable access.
=== Call by sharing ===
Call by sharing (also known as "pass by sharing", "call by object", or "call by object-sharing") is an evaluation strategy that is intermediate between call by value and call by reference. Rather than every variable being exposed as a reference, only a specific class of values, termed "references", "boxed types", or "objects", have reference semantics, and it is the addresses of these pointers that are passed into the function. Like call by value, the value of the address passed is a copy, and direct assignment to the parameter of the function overwrites the copy and is not visible to the calling function. Like call by reference, mutating the target of the pointer is visible to the calling function. Mutations of a mutable object within the function are visible to the caller because the object is not copied or cloned—it is shared, hence the name "call by sharing".
The technique was first noted by Barbara Liskov in 1974 for the CLU language. It is used by many modern languages such as Python (the shared values being called "objects"), Java (objects), Ruby (objects), JavaScript (objects), Scheme (data structures such as vectors), AppleScript (lists, records, dates, and script objects), OCaml and ML (references, records, arrays, objects, and other compound data types), Maple (rtables and tables), and Tcl (objects). The term "call by sharing" as used in this article is not in common use; the terminology is inconsistent across different sources. For example, in the Java community, they say that Java is call by value.
For immutable objects, there is no real difference between call by sharing and call by value, except if object identity is visible in the language. The use of call by sharing with mutable objects is an alternative to input/output parameters: the parameter is not assigned to (the argument is not overwritten and object identity is not changed), but the object (argument) is mutated.
For example, in Python, lists are mutable and passed with call by sharing, so:
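The snippet is not shown here; a reconstruction matching the description (variable names are guesses):

```python
def f(a_list):
    a_list.append(1)         # mutates the shared object

m = []
f(m)
print(m)   # [1]
```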
outputs [1] because the append method modifies the object on which it is called.
In contrast, assignments within a function are not noticeable to the caller. For example, this code binds the formal argument to a new object, but it is not visible to the caller because it does not mutate a_list:
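Again a reconstruction matching the description:

```python
def f(a_list):
    a_list = a_list + [1]    # rebinding: builds a new list, caller unaffected

m = []
f(m)
print(m)   # [] -- the assignment inside f is invisible to the caller
```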
=== Call by address ===
Call by address, pass by address, or call/pass by pointer is a parameter passing method where the address of the argument is passed as the formal parameter. Inside the function, the address (pointer) may be used to access or modify the value of the argument. For example, the swap operation can be implemented as follows in C:
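The C listing is not reproduced in this excerpt; the classic pointer-based swap it describes looks like this (a reconstruction, not necessarily the article's exact code):

```c
#include <stdio.h>

void swap(int *a, int *b) {
    int tmp = *a;   /* dereference the addresses to reach the caller's values */
    *a = *b;
    *b = tmp;
}

int main(void) {
    int x = 1, y = 2;
    swap(&x, &y);             /* pass the addresses of x and y */
    printf("%d %d\n", x, y);  /* prints "2 1" */
    return 0;
}
```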
Some authors treat & as part of the syntax of calling swap. Under this view, C supports the call-by-reference parameter passing strategy. Other authors take a differing view that the presented implementation of swap in C is only a simulation of call-by-reference using pointers. Under this "simulation" view, mutable variables in C are not first-class (that is, l-values are not expressions), rather pointer types are. In this view, the presented swap program is syntactic sugar for a program that uses pointers throughout, for example this program (read and assign have been added to highlight the similarities to the Java Box call-by-sharing program above):
Because in this program, swap operates on pointers and cannot change the pointers themselves, but only the values the pointers point to, this view holds that C's main evaluation strategy is more similar to call-by-sharing.
C++ confuses the issue further by allowing swap to be declared and used with a very lightweight "reference" syntax:
Semantically, this is equivalent to the C examples. As such, many authors consider call-by-address to be a unique parameter passing strategy distinct from call-by-value, call-by-reference, and call-by-sharing.
=== Call by unification ===
In logic programming, the evaluation of an expression may simply correspond to the unification of the terms involved combined with the application of some form of resolution. Unification must be classified as a strict binding strategy because it is fully performed. However, unification can also be performed on unbound variables, so calls do not necessarily commit to final values for all of their variables.
== Non-strict binding strategies ==
=== Call by name ===
Call by name is an evaluation strategy where the arguments to a function are not evaluated before the function is called—rather, they are substituted directly into the function body (using capture-avoiding substitution) and then left to be evaluated whenever they appear in the function. If an argument is not used in the function body, the argument is never evaluated; if it is used several times, it is re-evaluated each time it appears. (See Jensen's device for a programming technique that exploits this.)
Call-by-name evaluation is occasionally preferable to call-by-value evaluation. If a function's argument is not used in the function, call by name will save time by not evaluating the argument, whereas call by value will evaluate it regardless. If the argument is a non-terminating computation, the advantage is enormous. However, when the function argument is used, call by name is often slower, requiring a mechanism such as a thunk.
.NET languages can simulate call by name using delegates or Expression<T> parameters. The latter results in an abstract syntax tree being given to the function. Eiffel provides agents, which represent an operation to be evaluated when needed. Seed7 provides call by name with function parameters. Java programs can accomplish similar lazy evaluation using lambda expressions and the java.util.function.Supplier<T> interface.
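Python has no built-in call by name, but the same effect can be sketched with zero-argument lambdas (thunks) forced at each use site (illustrative, not from the original article):

```python
def if_then_else(cond, then_thunk, else_thunk):
    # Each "argument" is a thunk, evaluated only if and when it is forced.
    return then_thunk() if cond else else_thunk()

print(if_then_else(True, lambda: "taken", lambda: 1 / 0))
# prints "taken": the division by zero in the unused branch is never evaluated
```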
=== Call by need ===
Call by need is a memoized variant of call by name, where, if the function argument is evaluated, that value is stored for subsequent use. If the argument is pure (i.e., free of side effects), this produces the same results as call by name, saving the cost of recomputing the argument.
Haskell is a well-known language that uses call-by-need evaluation. Because evaluation of expressions may happen arbitrarily far into a computation, Haskell supports only side effects (such as mutation) via the use of monads. This eliminates any unexpected behavior from variables whose values change prior to their delayed evaluation.
In R's implementation of call by need, all arguments are passed, meaning that R allows arbitrary side effects.
Lazy evaluation is the most common implementation of call-by-need semantics, but variations like optimistic evaluation exist. .NET languages implement call by need using the type Lazy<T>.
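A minimal sketch of the memoized-thunk mechanism behind call by need, loosely mirroring .NET's Lazy<T> (the class and names here are illustrative):

```python
class Lazy:
    """A memoized thunk: the wrapped expression is evaluated at most once."""
    def __init__(self, thunk):
        self._thunk = thunk
        self._evaluated = False
        self._value = None

    def force(self):
        if not self._evaluated:
            self._value = self._thunk()
            self._evaluated = True
            self._thunk = None        # drop the closure once the value is cached
        return self._value

arg = Lazy(lambda: print("computing...") or 42)
print(arg.force())   # prints "computing..." then 42
print(arg.force())   # prints 42 only: the computation is not repeated
```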
Graph reduction is an efficient implementation of lazy evaluation.
=== Call by macro expansion ===
Call by macro expansion is similar to call by name, but uses textual substitution rather than capture-avoiding substitution. Macro substitution may therefore result in variable capture, leading to mistakes and undesired behavior. Hygienic macros avoid this problem by checking for and replacing shadowed variables that are not parameters.
=== Call by future ===
"Call by future", also known as "parallel call by name" or "lenient evaluation", is a concurrent evaluation strategy combining non-strict semantics with eager evaluation. The method requires fine-grained dynamic scheduling and synchronization but is suitable for massively parallel machines.
The strategy creates a future (promise) for the function's body and each of its arguments. These futures are computed concurrently with the flow of the rest of the program. When a future A requires the value of another future B that has not yet been computed, future A blocks until future B finishes computing and has a value. If future B has already finished computing the value is returned immediately. Conditionals block until their condition is evaluated, and lambdas do not create futures until they are fully applied.
If implemented with processes or threads, creating a future will spawn one or more new processes or threads (for the promises), accessing the value will synchronize these with the main thread, and terminating the computation of the future corresponds to killing the promises computing its value. If implemented with a coroutine, as in .NET async/await, creating a future calls a coroutine (an async function), which may yield to the caller, and in turn be yielded back to when the value is used, cooperatively multitasking.
The strategy is non-deterministic, as the evaluation can occur at any time between creation of the future (i.e., when the expression is given) and use of the future's value. The strategy is non-strict because the function body may return a value before the arguments are evaluated. However, in most implementations, execution may still get stuck evaluating an unneeded argument. For example, the program
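The program itself is not reproduced here; a loose Python model of the scenario using concurrent.futures (Python's futures store, rather than raise, the background error, so the faulting outcome is indicated only in comments):

```python
from concurrent.futures import ThreadPoolExecutor

def g(x):
    return 1          # g never forces its argument

with ThreadPoolExecutor() as pool:
    f = pool.submit(lambda: 1 / 0)   # a future for the faulting argument
    print(g(f))                      # g can finish first and print 1; in a
                                     # call-by-future runtime the 1/0 error
                                     # could instead surface and abort
```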
may either have g finish before f, and output 1, or may result in an error due to evaluating 1/0.
Call-by-future is similar to call by need in that values are computed only once. With careful handling of errors and nontermination, in particular terminating futures partway through if it is determined they will not be needed, call-by-future also has the same termination properties as call-by-need evaluation. However, call-by-future may perform unnecessary speculative work compared to call-by-need, such as deeply evaluating a lazy data structure. This can be avoided by using lazy futures that do not start computation until it is certain the value is needed.
=== Optimistic evaluation ===
Optimistic evaluation is a call-by-need variant where the function's argument is partly evaluated in a call-by-value style for some amount of time (which may be adjusted at runtime). After that time has passed, evaluation is aborted and the function is applied using call by need. This approach avoids some of call-by-need's runtime expenses while retaining desired termination characteristics.
== See also ==
Beta normal form
Comparison of programming languages
De re and de dicto
eval
Lambda calculus
Call-by-push-value
Partial evaluation
== References ==
== Further reading ==
Baker-Finch, Clem; King, David; Hall, Jon; Trinder, Phil (1999-03-10). "An Operational Semantics for Parallel Call-by-Need" (ps). Research Report. 99 (1). Faculty of Mathematics & Computing, The Open University.
Ennals, Robert; Peyton Jones, Simon (2003). Optimistic Evaluation: A Fast Evaluation Strategy for Non-Strict Programs (PDF). International Conference on Functional Programming. ACM Press.
Ludäscher, Bertram (2001-01-24). "CSE 130 lecture notes". CSE 130: Programming Languages: Principles & Paradigms.
Pierce, Benjamin C. (2002). Types and Programming Languages. MIT Press. ISBN 0-262-16209-1.
Sestoft, Peter (2002). Mogensen, T; Schmidt, D; Sudborough, I. H. (eds.). Demonstrating Lambda Calculus Reduction (PDF). Lecture Notes in Computer Science. Vol. 2566. Springer-Verlag. pp. 420–435. ISBN 3-540-00326-6. {{cite book}}: |work= ignored (help)
"Call by Value and Call by Reference in C Programming". Call by Value and Call by Reference in C Programming explained. Archived from the original on 2013-01-21.
== External links ==
The interactive on-line Geometry of Interaction visualiser, implementing a graph-based machine for several common evaluation strategies. | Wikipedia/Evaluation_strategy |
Applicative computing systems, or ACS, are systems of object calculi founded on combinatory logic and lambda calculus.
The only essential notion under consideration in these systems is the representation of an object. In combinatory logic the only metaoperator is application, in the sense of applying one object to another. In lambda calculus two metaoperators are used: application, the same as in combinatory logic, and functional abstraction, which binds a single variable within an object.
== Features ==
The objects generated in these systems are the functional entities with the following features:
the number of argument places, or object arity, is not fixed in advance but reveals itself step by step as the object interacts with other objects;
in the process of generating a compound object, one of its counterparts, the function, is applied to the other, the argument, but in other contexts they can exchange roles, i.e. functions and arguments are treated on an equal footing;
self-application of functions is allowed, i.e. any object can be applied to itself.
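A minimal Python sketch of these features, encoding the S and K combinators as curried one-argument functions (illustrative; the article itself gives no code):

```python
s = lambda x: lambda y: lambda z: x(z)(y(z))   # application is the only operation
k = lambda x: lambda y: x

# Arity is not fixed in advance: it emerges application by application.
i = s(k)(k)          # the identity, built purely by applying objects to objects
print(i(42))         # 42

# Self-application is allowed: any object can be applied to itself.
print(i(i)(42))      # 42 -- i applied to itself is again the identity
```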
ACS provide a sound foundation for the applicative approach to programming.
== Research challenge ==
Applicative computing systems' lack of storage and history sensitivity is the basic reason they have not provided a foundation for computer design. Moreover, most applicative systems employ the substitution operation of the lambda calculus as their basic operation. This operation is one of virtually unlimited power, but its complete and efficient realization presents great difficulties to the machine designer.
== See also ==
Applicative programming language
Categorical abstract machine
Combinatory logic
Functional programming
Lambda calculus
== References ==
== Further reading ==
Hindley, J. Roger; Seldin, Jonathan P., eds. (September 1980), To H. B. Curry: Essays on combinatory logic, lambda calculus and formalism, Boston, MA: Academic Press, ISBN 978-0-12-349050-6 [This volume reflects the research program and philosophy of H. Curry, one of the founders of computational models and the deductive framework for reasoning in terms of objects.]
Wolfengagen, V.E. (2003). Combinatory logic in programming. Computations with objects through examples and exercises (2nd ed.). JurInfoR. CiteSeerX 10.1.1.62.4421. ISBN 9785891581265. OCLC 491339472. | Wikipedia/Applicative_computing_systems |
A function pointer, also called a subroutine pointer or procedure pointer, is a pointer referencing executable code, rather than data. Dereferencing the function pointer yields the referenced function, which can be invoked and passed arguments just as in a normal function call. Such an invocation is also known as an "indirect" call, because the function is being invoked indirectly through a variable instead of directly through a fixed identifier or address.
Function pointers allow different code to be executed at runtime. They can also be passed to a function to enable callbacks.
Function pointers are supported by third-generation programming languages (such as PL/I, COBOL, Fortran, dBASE dBL, and C) and object-oriented programming languages (such as C++, C#, and D).
== Simple function pointers ==
The simplest implementation of a function (or subroutine) pointer is as a variable containing the address of the function within executable memory. Older third-generation languages such as PL/I and COBOL, as well as more modern languages such as Pascal and C generally implement function pointers in this manner.
=== Example in C ===
The following C program illustrates the use of two function pointers:
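The listing is not reproduced here; a reconstruction consistent with the description below (the converter's name and the printed formatting are guesses):

```c
#include <stdio.h>
#include <string.h>

double cm_to_inches(double cm) {
    return cm / 2.54;
}

int main(void) {
    double (*func1)(double) = cm_to_inches;        /* double -> double     */
    char *(*func2)(const char *, int) = strchr;    /* C string search      */
    printf("%f %s\n", func1(15.0), func2("Wikipedia", 'p'));
    return 0;
}
```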
func1 takes one double-precision (double) parameter and returns another double, and is assigned to a function which converts centimeters to inches.
func2 takes a pointer to a constant character array as well as an integer and returns a pointer to a character, and is assigned to a C string handling function which returns a pointer to the first occurrence of a given character in a character array.
The next program uses a function pointer to invoke one of two functions (sin or cos) indirectly from another function (compute_sum, computing an approximation of the function's Riemann integration). The program operates by having function main call function compute_sum twice, passing it a pointer to the library function sin the first time, and a pointer to function cos the second time. Function compute_sum in turn invokes one of the two functions indirectly by dereferencing its function pointer argument funcp multiple times, adding together the values that the invoked function returns and returning the resulting sum. The two sums are written to the standard output by main.
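That program is likewise not reproduced; a sketch of its structure under the stated description, with an invented interval and sample count:

```c
#include <math.h>
#include <stdio.h>

/* Approximate the integral of funcp over [a, b] with a Riemann sum. */
static double compute_sum(double (*funcp)(double), double a, double b) {
    double sum = 0.0;
    int n = 1000;
    for (int i = 0; i < n; i++) {
        double x = a + (i + 0.5) * (b - a) / n;
        sum += funcp(x);             /* indirect call through the pointer */
    }
    return sum * (b - a) / n;
}

int main(void) {
    printf("%f\n", compute_sum(sin, 0.0, 1.0));   /* ~0.459698 */
    printf("%f\n", compute_sum(cos, 0.0, 1.0));   /* ~0.841471 */
    return 0;
}
```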
== Functors ==
Functors, or function objects, are similar to function pointers, and can be used in similar ways. A functor is an object of a class type that implements the function-call operator, allowing the object to be used within expressions using the same syntax as a function call. Functors are more powerful than simple function pointers, being able to contain their own data values, and allowing the programmer to emulate closures. They are also used as callback functions if it is necessary to use a member function as a callback function.
Many "pure" object-oriented languages do not support function pointers. Something similar can be implemented in these kinds of languages, though, using references to interfaces that define a single method (member function). CLI languages such as C# and Visual Basic .NET implement type-safe function pointers with delegates.
In other languages that support first-class functions, functions are regarded as data, and can be passed, returned, and created dynamically directly by other functions, eliminating the need for function pointers.
Extensively using function pointers to call functions may produce a slow-down for the code on modern processors, because a branch predictor may not be able to figure out where to branch to (it depends on the value of the function pointer at run time) although this effect can be overstated as it is often amply compensated for by significantly reduced non-indexed table lookups.
== Method pointers ==
C++ includes support for object-oriented programming, so classes can have methods (usually referred to as member functions). Non-static member functions (instance methods) have an implicit parameter (the this pointer) which is the pointer to the object it is operating on, so the type of the object must be included as part of the type of the function pointer. The method is then used on an object of that class by using one of the "pointer-to-member" operators: .* or ->* (for an object or a pointer to object, respectively).
Although function pointers in C and C++ can be implemented as simple addresses, so that typically sizeof(Fx)==sizeof(void *), member pointers in C++ are sometimes implemented as "fat pointers", typically two or three times the size of a simple function pointer, in order to deal with virtual methods and virtual inheritance.
== In C++ ==
In C++, in addition to the method used in C, it is also possible to use the C++ standard library class template std::function, of which the instances are function objects:
=== Pointers to member functions in C++ ===
This is how C++ uses function pointers when dealing with member functions of classes or structs. These are invoked using an object pointer or a this call. They are type safe in that you can only call members of that class (or derivatives) using a pointer of that type. This example also demonstrates the use of a typedef for the pointer to member function added for simplicity. Function pointers to static member functions are done in the traditional 'C' style because there is no object pointer for this call required.
== Alternate C and C++ syntax ==
The C and C++ syntax given above is the canonical one used in all the textbooks, but it is difficult to read and explain. Even the above typedef examples use this syntax. However, every C and C++ compiler supports a clearer and more concise mechanism to declare function pointers: use typedef, but don't store the pointer as part of the definition. Note that the only way this kind of typedef can actually be used is with a pointer, but that highlights the pointer-ness of it.
=== C and C++ ===
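The examples themselves are not shown in this excerpt; a minimal sketch of the mechanism just described, with an invented signature for the Fn type used below:

```c
#include <stdio.h>

typedef double Fn(double);    /* a function TYPE; the pointer is not baked in */

double twice(double x) { return 2.0 * x; }

int main(void) {
    Fn *fp = twice;               /* the * at the use site shows the pointer-ness */
    printf("%f\n", fp(21.0));     /* prints 42.000000 */
    return 0;
}
```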
=== C++ ===
These examples use the above definitions. In particular, note that the above definition for Fn can be used in pointer-to-member-function definitions:
== PL/I ==
PL/I procedures can be nested, that is, procedure A may contain procedure B, which in turn may contain C. In addition to data declared in B, B can also reference any data declared in A, as long as it doesn’t override the definition. Likewise C can reference data in both A and B. Therefore, PL/I ENTRY variables need to contain context, to provide procedure C with the addresses of the values of data in B and A at the time C was called.
== See also ==
Delegation (computing)
Function object
Higher-order function
Procedural parameter
Closure
Anonymous functions
== References ==
== External links ==
FAQ on Function Pointers, things to avoid with function pointers, some information on using function objects
Function Pointer Tutorials Archived 2018-06-30 at the Wayback Machine, a guide to C/C++ function pointers, callbacks, and function objects (functors)
Member Function Pointers and the Fastest Possible C++ Delegates, CodeProject article by Don Clugston
Pointer Tutorials Archived 2009-04-05 at the Wayback Machine, C++ documentation and tutorials
C pointers explained Archived 2019-06-09 at the Wayback Machine a visual guide of pointers in C
Secure Function Pointer and Callbacks in Windows Programming, CodeProject article by R. Selvam
The C Book, Function Pointers in C by "The C Book"
Function Pointers in dBASE dBL, Function Pointer in dBASE dBL | Wikipedia/Function_pointer |
In rewriting, a reduction strategy or rewriting strategy is a relation specifying a rewrite for each object or term, compatible with a given reduction relation. Some authors use the term to refer to an evaluation strategy.
== Definitions ==
Formally, for an abstract rewriting system $(A, \to)$, a reduction strategy $\to_S$ is a binary relation on $A$ with $\to_S \subseteq \overset{+}{\to}$, where $\overset{+}{\to}$ is the transitive closure of $\to$ (but not the reflexive closure). In addition the normal forms of the strategy must be the same as the normal forms of the original rewriting system, i.e. for all $a$, there exists a $b$ with $a \to b$ iff $\exists b'.\, a \to_S b'$.
A one-step reduction strategy is one where $\to_S \subseteq \to$. Otherwise it is a many-step strategy.
A deterministic strategy is one where $\to_S$ is a partial function, i.e. for each $a \in A$ there is at most one $b$ such that $a \to_S b$. Otherwise it is a nondeterministic strategy.
== Term rewriting ==
In a term rewriting system a rewriting strategy specifies, out of all the reducible subterms (redexes), which one should be reduced (contracted) within a term.
One-step strategies for term rewriting include:
leftmost-innermost: in each step the leftmost of the innermost redexes is contracted, where an innermost redex is a redex not containing any redexes
leftmost-outermost: in each step the leftmost of the outermost redexes is contracted, where an outermost redex is a redex not contained in any redexes
rightmost-innermost, rightmost-outermost: similarly
Many-step strategies include:
parallel-innermost: reduces all innermost redexes simultaneously. This is well-defined because the redexes are pairwise disjoint.
parallel-outermost: similarly
Gross-Knuth reduction, also called full substitution or Kleene reduction: all redexes in the term are simultaneously reduced
Parallel outermost and Gross-Knuth reduction are hypernormalizing for all almost-orthogonal term rewriting systems, meaning that these strategies will eventually reach a normal form if it exists, even when performing (finitely many) arbitrary reductions between successive applications of the strategy.
Stratego is a domain-specific language designed specifically for programming term rewriting strategies.
== Lambda calculus ==
In the context of the lambda calculus, normal-order reduction refers to leftmost-outermost reduction in the sense given above. Normal-order reduction is normalizing, in the sense that if a term has a normal form, then normal‐order reduction will eventually reach it, hence the name normal. This is known as the standardization theorem.
Leftmost reduction is sometimes used to refer to normal order reduction, as with a pre-order traversal the notions coincide, and similarly the leftmost-outermost redex is the redex with leftmost starting character when the lambda term is considered as a string of characters. When "leftmost" is defined using an in-order traversal the notions are distinct. For example, in the term $(\lambda x.x\Omega)(\lambda y.I)$, where $\Omega = (\lambda w.ww)(\lambda w.ww)$ and $I = \lambda x.x$, the leftmost redex of the in-order traversal is $\Omega$ while the leftmost-outermost redex is the entire expression.
Applicative order reduction refers to leftmost-innermost reduction.
In contrast to normal order, applicative order reduction may not terminate, even when the term has a normal form. For example, using applicative order reduction, the following sequence of reductions is possible:
$$\begin{aligned}&(\lambda x.z)((\lambda w.www)(\lambda w.www))\\ \to{}&(\lambda x.z)((\lambda w.www)(\lambda w.www)(\lambda w.www))\\ \to{}&(\lambda x.z)((\lambda w.www)(\lambda w.www)(\lambda w.www)(\lambda w.www))\\ \to{}&(\lambda x.z)((\lambda w.www)(\lambda w.www)(\lambda w.www)(\lambda w.www)(\lambda w.www))\\ &\ldots\end{aligned}$$
But using normal-order reduction, the same starting point reduces quickly to normal form:
$$(\lambda x.z)((\lambda w.www)(\lambda w.www)) \to z$$
Full β-reduction refers to the nondeterministic one-step strategy that allows reducing any redex at each step. Takahashi's parallel β-reduction is the strategy that reduces all redexes in the term simultaneously.
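To make the contrast concrete, here is a compact sketch of one-step normal-order (leftmost-outermost) reduction over a tiny term representation; it assumes all bound-variable names are distinct, so substitution never needs to rename (illustrative, not from the original article):

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)

def subst(t, name, val):
    kind = t[0]
    if kind == 'var':
        return val if t[1] == name else t
    if kind == 'lam':
        # assumes distinct bound names, so no capture-avoiding renaming needed
        return t if t[1] == name else ('lam', t[1], subst(t[2], name, val))
    return ('app', subst(t[1], name, val), subst(t[2], name, val))

def step_normal(t):
    """Perform one leftmost-outermost step, or return None at normal form."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':                    # the outermost redex wins
            return subst(f[2], f[1], a)
        s = step_normal(f)                   # otherwise go left first...
        if s is not None:
            return ('app', s, a)
        s = step_normal(a)                   # ...then right
        if s is not None:
            return ('app', f, s)
    elif t[0] == 'lam':
        s = step_normal(t[2])                # normal order reduces under lambda
        if s is not None:
            return ('lam', t[1], s)
    return None

omega = ('app', ('lam', 'w', ('app', ('var', 'w'), ('var', 'w'))),
                ('lam', 'v', ('app', ('var', 'v'), ('var', 'v'))))
term = ('app', ('lam', 'x', ('var', 'z')), omega)
print(step_normal(term))   # ('var', 'z'): the diverging argument is discarded
```

Driving this step function repeatedly discards the diverging argument immediately, as in the normal-order sequence above, whereas an innermost-first variant would rewrite the argument forever.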
=== Weak reduction ===
Normal and applicative order reduction are strong in that they allow reduction under lambda abstractions. In contrast, weak reduction does not reduce under a lambda abstraction. Call-by-name reduction is the weak reduction strategy that reduces the leftmost outermost redex not inside a lambda abstraction, while call-by-value reduction is the weak reduction strategy that reduces the leftmost innermost redex not inside a lambda abstraction. These strategies were devised to reflect the call-by-name and call-by-value evaluation strategies. In fact, applicative order reduction was also originally introduced to model the call-by-value parameter passing technique found in Algol 60 and modern programming languages. When combined with the idea of weak reduction, the resulting call-by-value reduction is indeed a faithful approximation.
Unfortunately, weak reduction is not confluent, and the traditional reduction equations of the lambda calculus are useless, because they suggest relationships that violate the weak evaluation regime. However, it is possible to extend the system to be confluent by allowing a restricted form of reduction under an abstraction, in particular when the redex does not involve the variable bound by the abstraction. For example, λx.(λy.x)z is in normal form for a weak reduction strategy because the redex (λy.x)z is contained in a lambda abstraction. But the term λx.(λy.y)z can still be reduced under the extended weak reduction strategy, because the redex (λy.y)z does not refer to x.
=== Optimal reduction ===
Optimal reduction is motivated by the existence of lambda terms where there does not exist a sequence of reductions which reduces them without duplicating work. For example, consider
((λg.(g(g(λx.x))))
(λh.((λf.(f(f(λz.z))))
(λw.(h(w(λy.y)))))))
It is composed of three nested terms, x=((λg. ... ) (λh.y)), y=((λf. ...) (λw.z) ), and z=λw.(h(w(λy.y))). There are only two possible β-reductions to be done here, on x and on y. Reducing the outer x term first results in the inner y term being duplicated, and each copy will have to be reduced, but reducing the inner y term first will duplicate its argument z, which will cause work to be duplicated when the values of h and w are made known.
Optimal reduction is not a reduction strategy for the lambda calculus in a narrow sense because performing β-reduction loses the information about the substituted redexes being shared. Instead it is defined for the labelled lambda calculus, an annotated lambda calculus which captures a precise notion of the work that should be shared.: 113–114
Labels consist of a countably infinite set of atomic labels, and concatenations $ab$, overlinings $\overline{a}$ and underlinings $\underline{a}$ of labels. A labelled term is a lambda calculus term where each subterm has a label. The standard initial labeling of a lambda term gives each subterm a unique atomic label.: 132 Labelled β-reduction is given by:
$$((\lambda x.M)^{\alpha}\,N)^{\beta} \to \beta\,\overline{\alpha}\cdot M[x \mapsto \underline{\alpha}\cdot N]$$
where $\cdot$ concatenates labels, $\beta \cdot T^{\alpha} = T^{\beta\alpha}$, and substitution $M[x \mapsto N]$ is defined as follows (using the Barendregt convention):
$$\begin{aligned}x^{\alpha}[x\mapsto N]&=\alpha\cdot N &\qquad (\lambda y.M)^{\alpha}[x\mapsto N]&=(\lambda y.M[x\mapsto N])^{\alpha}\\ y^{\alpha}[x\mapsto N]&=y^{\alpha} &\qquad (MN)^{\alpha}[x\mapsto P]&=(M[x\mapsto P]\,N[x\mapsto P])^{\alpha}\end{aligned}$$
The system can be proven to be confluent. Optimal reduction is then defined to be normal order or leftmost-outermost reduction using reduction by families, i.e. the parallel reduction of all redexes with the same function part label. The strategy is optimal in the sense that it performs the optimal (minimal) number of family reduction steps.
A practical algorithm for optimal reduction was first described in 1989, more than a decade after optimal reduction was first defined in 1974. The Bologna Optimal Higher-order Machine (BOHM) is a prototype implementation of an extension of the technique to interaction nets.: 362 Lambdascope is a more recent implementation of optimal reduction, also using interaction nets.
=== Call by need reduction ===
Call by need reduction can be defined similarly to optimal reduction as weak leftmost-outermost reduction using parallel reduction of redexes with the same label, for a slightly different labelled lambda calculus. An alternate definition changes the beta rule to an operation that finds the next "needed" computation, evaluates it, and substitutes the result into all locations. This requires extending the beta rule to allow reducing terms that are not syntactically adjacent. As with call-by-name and call-by-value, call-by-need reduction was devised to mimic the behavior of the evaluation strategy known as "call-by-need" or lazy evaluation.
== See also ==
Reduction system
Reduction semantics
Thunk
== Notes ==
== References ==
== External links ==
Lambda calculus reduction workbench | Wikipedia/Reduction_strategy_(lambda_calculus) |
In computer science, graph reduction implements an efficient version of non-strict evaluation, an evaluation strategy where the arguments to a function are not immediately evaluated. This form of non-strict evaluation is also known as lazy evaluation and used in functional programming languages. The technique was first developed by Chris Wadsworth in 1971.
== Motivation ==
A simple example of evaluating an arithmetic expression follows:
$$\begin{aligned}((2+2)+(2+2))+(3+3) &= ((2+2)+(2+2))+6\\ &= ((2+2)+4)+6\\ &= (4+4)+6\\ &= 8+6\\ &= 14\end{aligned}$$
The above reduction sequence employs a strategy known as outermost tree reduction. The same expression can be evaluated using innermost tree reduction, yielding the reduction sequence:
$$\begin{aligned}((2+2)+(2+2))+(3+3) &= ((2+2)+4)+(3+3)\\ &= (4+4)+(3+3)\\ &= (4+4)+6\\ &= 8+6\\ &= 14\end{aligned}$$
Notice that the reduction order is made explicit by the addition of parentheses. This expression could also have been simply evaluated right to left, because addition is an associative operation.
Represented as a tree, the expression above looks like this:
This is where the term tree reduction comes from. When represented as a tree, we can think of innermost reduction as working from the bottom up, while outermost works from the top down.
The expression can also be represented as a directed acyclic graph, allowing sub-expressions to be shared:
As for trees, outermost and innermost reduction also applies to graphs. Hence we have graph reduction.
Now evaluation with outermost graph reduction can proceed as follows:
Notice that evaluation now only requires four steps. Outermost graph reduction is referred to as lazy evaluation and innermost graph reduction is referred to as eager evaluation.
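A small Python sketch of the sharing mechanism, assuming nodes that overwrite themselves with their value when reduced (illustrative; the article gives no code):

```python
class Node:
    def __init__(self, op=None, left=None, right=None, value=None):
        self.op, self.left, self.right, self.value = op, left, right, value

    def reduce(self):
        # Overwrite the node in place with its result; every parent that
        # shares this node now sees the already-evaluated value.
        if self.value is None:
            self.value = self.left.reduce() + self.right.reduce()
            self.op = self.left = self.right = None
        return self.value

two_plus_two = Node('+', Node(value=2), Node(value=2))
shared = Node('+', two_plus_two, two_plus_two)   # (2+2) appears once in memory
expr = Node('+', shared, Node('+', Node(value=3), Node(value=3)))
print(expr.reduce())   # 14, with the shared (2+2) reduced only once
```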
== Combinator graph reduction ==
Combinator graph reduction is a fundamental implementation technique for functional programming languages, in which a program is converted into a combinator representation which is mapped to a directed graph data structure in computer memory, and program execution then consists of rewriting parts of this graph ("reducing" it) so as to move towards useful results.
== History ==
The concept of a graph reduction that allows evaluated values to be shared was first developed by Chris Wadsworth in his 1971 Ph.D. dissertation. This dissertation was cited by Peter Henderson and James H. Morris Jr. in their 1976 paper, “A lazy evaluator”, which introduced the notion of lazy evaluation. In 1976 David Turner incorporated lazy evaluation into SASL using combinators.
SASL was an early functional programming language first developed by Turner in 1972.
== See also ==
Graph reduction machine
SECD machine
== Notes ==
== References ==
Bird, Richard (1998). Introduction to Functional Programming using Haskell. Prentice Hall. ISBN 0-13-484346-0.
== Further reading ==
Peyton Jones, Simon L. (1987). The Implementation of Functional Programming Languages. Prentice Hall. ISBN 013453333X. LCCN 86020535. Retrieved 2022-04-15. | Wikipedia/Graph_reduction |
In computer science and computer programming, a nondeterministic algorithm is an algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm.
Different models of computation give rise to different reasons that an algorithm may be non-deterministic, and different ways to evaluate its performance or correctness:
A concurrent algorithm can perform differently on different runs due to a race condition. This can happen even with a single-threaded algorithm when it interacts with resources external to it. In general, such an algorithm is considered to perform correctly only when all possible runs produce the desired results.
A probabilistic algorithm's behavior depends on a random number generator called by the algorithm. These are subdivided into Las Vegas algorithms, for which (like concurrent algorithms) all runs must produce correct output, and Monte Carlo algorithms which are allowed to fail or produce incorrect results with low probability. The performance of such an algorithm is often measured probabilistically, for instance using an analysis of its expected time.
In computational complexity theory, nondeterminism is often modeled using an explicit mechanism for making a nondeterministic choice, such as in a nondeterministic Turing machine. For these models, a nondeterministic algorithm is considered to perform correctly when, for each input, there exists a run that produces the desired result, even when other runs produce incorrect results. This existential power makes nondeterministic algorithms of this sort more efficient than known deterministic algorithms for many problems. The P versus NP problem encapsulates this conjectured greater efficiency available to nondeterministic algorithms. Algorithms of this sort are used to define complexity classes based on nondeterministic time and nondeterministic space complexity. They may be simulated using nondeterministic programming, a method for specifying nondeterministic algorithms and searching for the choices that lead to a correct run, often using a backtracking search.
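Such a backtracking search can be sketched in Python; the n-queens instance below is illustrative, not drawn from the article:

```python
def place(rows, n):
    """Guess a column for each row; fail and backtrack on any conflict."""
    if len(rows) == n:
        return rows                            # an accepting run
    for col in range(n):                       # the nondeterministic choice
        if all(col != c and abs(col - c) != len(rows) - r
               for r, c in enumerate(rows)):
            found = place(rows + [col], n)
            if found is not None:
                return found
    return None                                # every choice failed: backtrack

print(place([], 6))   # one accepting run, e.g. [1, 3, 5, 0, 2, 4]
```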
The notion of nondeterminism was introduced by Robert W. Floyd in 1967.
== References ==
== Further reading ==
Cormen, Thomas H. (2009). Introduction to Algorithms (3rd ed.). MIT Press. ISBN 978-0-262-03384-8.
"Nondeterministic algorithm". National Institute of Standards and Technology. Retrieved July 7, 2013.
"Non-deterministic Algorithms". New York University Computer Science. Retrieved July 7, 2013. | Wikipedia/Nondeterministic_algorithm |
Binary combinatory logic (BCL) is a computer programming language that provides a complete formulation of combinatory logic using only the symbols 0 and 1. Using the S and K combinators, complex boolean algebra functions can be expressed. BCL has applications in the theory of program-size complexity (Kolmogorov complexity).
== Definition ==
=== S-K Basis ===
Utilizing the K and S combinators of combinatory logic, logical functions can be represented as functions of combinators:
=== Syntax ===
Backus–Naur form:
⟨term⟩ ::= 00 | 01 | 1 ⟨term⟩ ⟨term⟩
=== Semantics ===
The denotational semantics of BCL may be specified as follows:
[ 00 ] == K
[ 01 ] == S
[ 1 <term1> <term2> ] == ( [<term1>] [<term2>] )
where "[...]" abbreviates "the meaning of ...". Here K and S are the KS-basis combinators, and ( ) is the application operation, of combinatory logic. (The prefix 1 corresponds to a left parenthesis, right parentheses being unnecessary for disambiguation.)
Thus there are four equivalent formulations of BCL, depending on the manner of encoding the triplet (K, S, left parenthesis). These are (00, 01, 1) (as in the present version), (01, 00, 1), (10, 11, 0), and (11, 10, 0).
The operational semantics of BCL, apart from eta-reduction (which is not required for Turing completeness), may be very compactly specified by the following rewriting rules for subterms of a given term, parsing from the left:
1100xy → x
11101xyz → 11xz1yz
where x, y, and z are arbitrary subterms. (Note, for example, that because parsing is from the left, 10000 is not a subterm of 11010000.)
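A small Python sketch of these rules, rewriting only at subterm boundaries (as the parenthetical note requires) in leftmost-outermost order; the reducer and the sample term are illustrative:

```python
def term_end(s, i):
    """Index just past the BCL term that starts at position i."""
    if s.startswith('00', i) or s.startswith('01', i):
        return i + 2
    assert s[i] == '1'                       # application: 1 <term> <term>
    return term_end(s, term_end(s, i + 1))

def step(s, i=0):
    """Rewrite the subterm at i (or one inside it), leftmost-outermost."""
    if s.startswith('1100', i):              # 1100xy -> x   (K rule)
        jx = term_end(s, i + 4); jy = term_end(s, jx)
        return s[:i] + s[i + 4:jx] + s[jy:]
    if s.startswith('11101', i):             # 11101xyz -> 11xz1yz   (S rule)
        jx = term_end(s, i + 5); jy = term_end(s, jx); jz = term_end(s, jy)
        x, y, z = s[i + 5:jx], s[jx:jy], s[jy:jz]
        return s[:i] + '11' + x + z + '1' + y + z + s[jz:]
    if s[i] == '1':                          # no redex here: recurse inside
        return step(s, i + 1) or step(s, term_end(s, i + 1))
    return None

t = '11101000000'      # ((S K) K) K, i.e. the identity S K K applied to K
while t is not None:
    print(t)           # 11101000000, then 11000010000, then 00 (which is K)
    t = step(t)
```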
BCL can be used to replicate algorithms such as Turing machines and cellular automata; BCL is Turing complete.
== See also ==
Iota and Jot
== References ==
== Further reading ==
Tromp, John (October 2007). "Binary Lambda Calculus and Combinatory Logic". Randomness and Complexity, from Leibniz to Chaitin: 237–260. doi:10.1142/9789812770837_0014. ISBN 978-981-277-082-0.
Tromp, John (April 2023). "Functional Bits: Lambda Calculus based Algorithmic Information Theory" (PDF). tromp.github.io.
== External links ==
John's Lambda Calculus and Combinatory Logic Playground
A minimal implementation in C
Lambda Calculus in 383 Bytes
Brauner, Paul (10 January 2018). "Lambda Diagrams YouTube Playlist". YouTube. Archived from the original on 2021-12-21. | Wikipedia/Binary_lambda_calculus |
In programming languages, name resolution is the resolution of the tokens within program expressions to the intended program components.
== Overview ==
Expressions in computer programs reference variables, data types, functions, classes, objects, libraries, packages and other entities by name. In that context, name resolution refers to the association of those not-necessarily-unique names with the intended program entities. The algorithms that determine what those identifiers refer to in specific contexts are part of the language definition.
The complexity of these algorithms is influenced by the sophistication of the language. For example, name resolution in assembly language usually involves only a single simple table lookup, while name resolution in C++ is extremely complicated as it involves:
namespaces, which make it possible for an identifier to have different meanings depending on its associated namespace;
scopes, which make it possible for an identifier to have different meanings at different scope levels, and which involves various scope overriding and hiding rules. At the most basic level name resolution usually attempts to find the binding in the smallest enclosing scope, so that for example local variables supersede global variables; this is called shadowing.
visibility rules, which determine whether identifiers from specific namespaces or scopes are visible from the current context;
overloading, which makes it possible for an identifier to have different meanings depending on how it is used, even in a single namespace or scope;
accessibility, which determines whether identifiers from an otherwise visible scope are actually accessible and participate in the name resolution process.
== Static versus dynamic ==
In programming languages, name resolution can be performed either at compile time or at runtime. The former is called static name resolution, the latter dynamic name resolution.
A somewhat common misconception is that dynamic typing implies dynamic name resolution. For example, Erlang is dynamically typed but has static name resolution. However, static typing does imply static name resolution.
Static name resolution catches, at compile time, use of variables that are not in scope; preventing programmer errors. Languages with dynamic scope resolution sacrifice this safety for more flexibility; they can typically set and get variables in the same scope at runtime.
For example, in the Python interactive REPL:
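The transcript is not reproduced here; an illustrative session in the same spirit (not necessarily the article's original):

```python
>>> globals()['y'] = 1     # create a binding by computed name, at runtime
>>> y
1
>>> name = 'y'
>>> globals()[name]        # ...and look the same binding up by name
1
```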
However, relying on dynamic name resolution in code is discouraged by the Python community. The feature also may be removed in a later version of Python.
Examples of languages that use static name resolution include C, C++, E, Erlang, Haskell, Java, Pascal, Scheme, and Smalltalk. Examples of languages that use dynamic name resolution include some Lisp dialects, Perl, PHP, Python, Rebol, and Tcl.
== Name masking ==
Masking occurs when the same identifier is used for different entities in overlapping lexical scopes. At the level of variables (rather than names), this is known as variable shadowing. An identifier I' (for variable X') masks an identifier I (for variable X) when two conditions are met
I' has the same name as I
I' is defined in a scope which is a subset of the scope of I
The outer variable X is said to be shadowed by the inner variable X'.
For example, the parameter "foo" shadows the local variable "foo" in this common pattern:
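The snippet is missing from this excerpt; an illustrative sketch of the pattern (not necessarily the article's original):

```python
def outer():
    foo = "outer binding"

    def inner(foo):          # this parameter shadows the enclosing foo
        print(foo)           # always refers to the parameter

    inner("parameter")       # prints "parameter"
    print(foo)               # prints "outer binding": unchanged

outer()
```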
Name masking can cause complications in function overloading, due to overloading not happening across scopes in some languages, notably C++, thus requiring all overloaded functions to be redeclared or explicitly imported into a given namespace.
== Alpha renaming to make name resolution trivial ==
In programming languages with lexical scoping that do not reflect over variable names, α-conversion (or α-renaming) can be used to make name resolution easy by finding a substitution that makes sure that no variable name masks another name in a containing scope. Alpha-renaming can make static code analysis easier since only the alpha renamer needs to understand the language's scoping rules.
For example, in this code:
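The code is not shown in this excerpt; a reconstruction matching the description:

```python
class Point:
    def __init__(self, x, y):
        self.x = x       # the parameters x and y mask the attribute names
        self.y = y
```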
within the Point constructor, the instance variables x and y are shadowed by local variables of the same name. This might be alpha-renamed to:
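A possible alpha-renamed version:

```python
class Point:
    def __init__(self, a, b):
        self.x = a       # after alpha-renaming, no parameter masks a field name
        self.y = b
```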
In the new version, there is no masking, so it is immediately obvious which uses correspond to which declarations.
== See also ==
Namespace (programming)
Scope (programming)
Naming collision
== References == | Wikipedia/Name_resolution_(programming_languages) |
In mathematics, a fundamental solution for a linear partial differential operator L is a formulation in the language of distribution theory of the older idea of a Green's function (although unlike Green's functions, fundamental solutions do not address boundary conditions).
In terms of the Dirac delta "function" δ(x), a fundamental solution F is a solution of the inhomogeneous equation $LF = \delta(x)$.
Here F is a priori only assumed to be a distribution.
This concept has long been utilized for the Laplacian in two and three dimensions. It was investigated for all dimensions for the Laplacian by Marcel Riesz.
The existence of a fundamental solution for any operator with constant coefficients — the most important case, directly linked to the possibility of using convolution to solve an arbitrary right hand side — was shown by Bernard Malgrange and Leon Ehrenpreis, and a proof is available in Joel Smoller (1994). In the context of functional analysis, fundamental solutions are usually developed via the Fredholm alternative and explored in Fredholm theory.
== Example ==
Consider the following differential equation $Lf = \sin(x)$ with
$$L = \frac{d^2}{dx^2}\,.$$
The fundamental solutions can be obtained by solving $LF = \delta(x)$; explicitly,
$$\frac{d^2}{dx^2}F(x) = \delta(x)\,.$$
Since for the unit step function (also known as the Heaviside function) $H$ we have
$$\frac{d}{dx}H(x) = \delta(x)\,,$$
there is a solution
$$\frac{d}{dx}F(x) = H(x) + C\,.$$
Here C is an arbitrary constant introduced by the integration. For convenience, set C = −1/2.
After integrating $\frac{dF}{dx}$ and choosing the new integration constant as zero, one has
$$F(x) = xH(x) - \frac{1}{2}x = \frac{1}{2}|x|~.$$
== Motivation ==
Once the fundamental solution is found, it is straightforward to find a solution of the original equation, through convolution of the fundamental solution and the desired right hand side.
Fundamental solutions also play an important role in the numerical solution of partial differential equations by the boundary element method.
=== Application to the example ===
Consider the operator $L$ and the differential equation mentioned in the example,
$$\frac{d^2}{dx^2}f(x) = \sin(x)\,.$$
We can find the solution $f(x)$ of the original equation by convolution (denoted by an asterisk) of the right-hand side $\sin(x)$ with the fundamental solution $F(x) = \frac{1}{2}|x|$:
$$f(x) = (F * \sin)(x) := \int_{-\infty}^{\infty} \frac{1}{2}|x-y|\sin(y)\,dy\,.$$
This shows that some care must be taken when working with functions which do not have enough regularity (e.g. compact support, L1 integrability) since, we know that the desired solution is f(x) = −sin(x), while the above integral diverges for all x. The two expressions for f are, however, equal as distributions.
=== An example that more clearly works ===
$$\frac{d^2}{dx^2}f(x) = I(x)\,,$$
where I is the characteristic (indicator) function of the unit interval [0,1]. In that case, it can be verified that the convolution of I with F(x) = |x|/2 is
{\displaystyle (I*F)(x)={\begin{cases}{\frac {1}{2}}x^{2}-{\frac {1}{2}}x+{\frac {1}{4}},&0\leq x\leq 1\\|{\frac {1}{2}}x-{\frac {1}{4}}|,&{\text{otherwise}}\end{cases}}}
which is a solution, i.e., has second derivative equal to I.
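As a hedged numerical cross-check of this example, one can convolve the indicator with F on a grid and differentiate twice; the grid, spacing, and sample points below are our own choices.

```python
import numpy as np

# Sketch: discrete convolution of the indicator I of [0, 1] with F = |x|/2,
# followed by a second difference, should recover I away from its jumps.
x = np.linspace(-5.0, 5.0, 2001)             # symmetric grid (our choice)
h = x[1] - x[0]
I = ((x >= 0) & (x <= 1)).astype(float)
F = np.abs(x) / 2

u = np.convolve(I, F, mode='same') * h       # (I * F)(x), rectangle rule
u_xx = np.gradient(np.gradient(u, h), h)

at = lambda v: np.argmin(np.abs(x - v))
print(round(u[at(0.5)], 3))                  # ~0.125 = 1/8, the closed form
print(round(u_xx[at(0.5)], 2), round(u_xx[at(-1.0)], 2))   # ~1.0 and ~0.0
```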
=== Proof that the convolution is a solution ===
Denote the convolution of functions F and g as F ∗ g. Say we are trying to find the solution of Lf = g(x). We want to prove that F ∗ g is a solution of the previous equation, i.e. we want to prove that L(F ∗ g) = g. When applying the differential operator with constant coefficients, L, to the convolution, it is known that
{\displaystyle L(F*g)=(LF)*g\,,}
provided L has constant coefficients.
If F is the fundamental solution, the right side of the equation reduces to
{\displaystyle \delta *g~.}
But since the delta function is an identity element for convolution, this is simply g(x). Summing up,
{\displaystyle L(F*g)=(LF)*g=\delta (x)*g(x)=\int _{-\infty }^{\infty }\delta (x-y)g(y)\,dy=g(x)\,.}
Therefore, if F is the fundamental solution, the convolution F ∗ g is one solution of Lf = g(x). This does not mean that it is the only solution; other solutions, corresponding to different initial or boundary conditions, may exist.
== Fundamental solutions for some partial differential equations ==
The following can be obtained by means of Fourier transform:
=== Laplace equation ===
For the Laplace equation,
{\displaystyle [-\Delta ]\Phi (\mathbf {x} ,\mathbf {x} ')=\delta (\mathbf {x} -\mathbf {x} ')}
the fundamental solutions in two and three dimensions, respectively, are
{\displaystyle \Phi _{\textrm {2D}}(\mathbf {x} ,\mathbf {x} ')=-{\frac {1}{2\pi }}\ln |\mathbf {x} -\mathbf {x} '|,\qquad \Phi _{\textrm {3D}}(\mathbf {x} ,\mathbf {x} ')={\frac {1}{4\pi |\mathbf {x} -\mathbf {x} '|}}~.}
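A quick symbolic check, away from the singular point, confirms that the three-dimensional expression is harmonic where the delta vanishes. The snippet below is a sketch and takes x′ = 0 for simplicity.

```python
import sympy as sp

# Sketch: Phi_3D = 1/(4*pi*r) with x' = 0 is harmonic away from the origin;
# the delta source lives only at r = 0, which this check avoids.
x, y, z = sp.symbols('x y z', real=True, positive=True)
Phi = 1 / (4 * sp.pi * sp.sqrt(x**2 + y**2 + z**2))

laplacian = sp.diff(Phi, x, 2) + sp.diff(Phi, y, 2) + sp.diff(Phi, z, 2)
print(sp.simplify(laplacian))                # 0
```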
=== Screened Poisson equation ===
For the screened Poisson equation,
{\displaystyle [-\Delta +k^{2}]\Phi (\mathbf {x} ,\mathbf {x} ')=\delta (\mathbf {x} -\mathbf {x} '),\quad k\in \mathbb {R} ,}
the fundamental solutions are
{\displaystyle \Phi _{\textrm {2D}}(\mathbf {x} ,\mathbf {x} ')={\frac {1}{2\pi }}K_{0}(k|\mathbf {x} -\mathbf {x} '|),\qquad \Phi _{\textrm {3D}}(\mathbf {x} ,\mathbf {x} ')={\frac {\exp(-k|\mathbf {x} -\mathbf {x} '|)}{4\pi |\mathbf {x} -\mathbf {x} '|}},}
where {\displaystyle K_{0}} is a modified Bessel function of the second kind.
In higher dimensions the fundamental solution of the screened Poisson equation is given by the Bessel potential.
=== Biharmonic equation ===
For the biharmonic equation,
{\displaystyle [-\Delta ^{2}]\Phi (\mathbf {x} ,\mathbf {x} ')=\delta (\mathbf {x} -\mathbf {x} ')}
the fundamental solutions are
{\displaystyle \Phi _{\textrm {2D}}(\mathbf {x} ,\mathbf {x} ')=-{\frac {|\mathbf {x} -\mathbf {x} '|^{2}}{8\pi }}\ln |\mathbf {x} -\mathbf {x} '|,\qquad \Phi _{\textrm {3D}}(\mathbf {x} ,\mathbf {x} ')={\frac {|\mathbf {x} -\mathbf {x} '|}{8\pi }}~.}
== Signal processing ==
In signal processing, the analog of the fundamental solution of a differential equation is called the impulse response of a filter.
== See also ==
Green's function
Impulse response
Parametrix
== References ==
"Fundamental solution", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
For adjustment to Green's function on the boundary see Shijue Wu notes. | Wikipedia/Fundamental_solution |
In mathematics, a hyperbolic partial differential equation of order {\displaystyle n} is a partial differential equation (PDE) that, roughly speaking, has a well-posed initial value problem for the first {\displaystyle n-1} derivatives. More precisely, the Cauchy problem can be locally solved for arbitrary initial data along any non-characteristic hypersurface. Many of the equations of mechanics are hyperbolic, and so the study of hyperbolic equations is of substantial contemporary interest. The model hyperbolic equation is the wave equation. In one spatial dimension, this is
{\displaystyle {\frac {\partial ^{2}u}{\partial t^{2}}}=c^{2}{\frac {\partial ^{2}u}{\partial x^{2}}}}
The equation has the property that, if u and its first time derivative are arbitrarily specified initial data on the line t = 0 (with sufficient smoothness properties), then there exists a solution for all time t.
The solutions of hyperbolic equations are "wave-like". If a disturbance is made in the initial data of a hyperbolic differential equation, then not every point of space feels the disturbance at once. Relative to a fixed time coordinate, disturbances have a finite propagation speed. They travel along the characteristics of the equation. This feature qualitatively distinguishes hyperbolic equations from elliptic partial differential equations and parabolic partial differential equations. A perturbation of the initial (or boundary) data of an elliptic or parabolic equation is felt at once by essentially all points in the domain.
Although the definition of hyperbolicity is fundamentally a qualitative one, there are precise criteria that depend on the particular kind of differential equation under consideration. There is a well-developed theory for linear differential operators, due to Lars Gårding, in the context of microlocal analysis. Nonlinear differential equations are hyperbolic if their linearizations are hyperbolic in the sense of Gårding. There is a somewhat different theory for first order systems of equations coming from systems of conservation laws.
== Definition ==
A partial differential equation is hyperbolic at a point {\displaystyle P} provided that the Cauchy problem is uniquely solvable in a neighborhood of {\displaystyle P} for any initial data given on a non-characteristic hypersurface passing through {\displaystyle P}. Here the prescribed initial data consist of all (transverse) derivatives of the function on the surface up to one less than the order of the differential equation.
== Examples ==
By a linear change of variables, any equation of the form
{\displaystyle A{\frac {\partial ^{2}u}{\partial x^{2}}}+2B{\frac {\partial ^{2}u}{\partial x\partial y}}+C{\frac {\partial ^{2}u}{\partial y^{2}}}+{\text{(lower order derivative terms)}}=0}
with {\displaystyle B^{2}-AC>0} can be transformed to the wave equation, apart from lower order terms which are inessential for the qualitative understanding of the equation. This definition is analogous to the definition of a planar hyperbola.
The one-dimensional wave equation:
{\displaystyle {\frac {\partial ^{2}u}{\partial t^{2}}}-c^{2}{\frac {\partial ^{2}u}{\partial x^{2}}}=0}
is an example of a hyperbolic equation. The two-dimensional and three-dimensional wave equations also fall into the category of hyperbolic PDE. This type of second-order hyperbolic partial differential equation may be transformed to a hyperbolic system of first-order differential equations.
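The wave-like character is visible in d'Alembert's form of the general solution; the sketch below verifies symbolically that any right- and left-moving profile solves the one-dimensional equation (the abstract profiles g and h are our own choices).

```python
import sympy as sp

# Sketch: u = g(x - c*t) + h(x + c*t) solves u_tt - c^2 u_xx = 0 for any
# twice-differentiable profiles g, h (kept here as abstract functions).
x, t, c = sp.symbols('x t c', real=True)
g, h = sp.Function('g'), sp.Function('h')
u = g(x - c*t) + h(x + c*t)

residual = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)
print(sp.simplify(residual))                 # 0
```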
== Hyperbolic systems of first-order equations ==
The following is a system of first-order partial differential equations for {\displaystyle s} unknown functions {\displaystyle {\vec {u}}=(u_{1},\ldots ,u_{s})}, {\displaystyle {\vec {u}}={\vec {u}}({\vec {x}},t)}, where {\displaystyle {\vec {x}}\in \mathbb {R} ^{d}}:
{\displaystyle {\frac {\partial {\vec {u}}}{\partial t}}+\sum _{j=1}^{d}{\frac {\partial }{\partial x_{j}}}{\vec {f}}^{j}({\vec {u}})={\vec {0}},\qquad (*)}
where {\displaystyle {\vec {f}}^{j}\in C^{1}(\mathbb {R} ^{s},\mathbb {R} ^{s})} are once continuously differentiable functions, nonlinear in general.
Next, for each {\displaystyle {\vec {f}}^{j}} define the {\displaystyle s\times s} Jacobian matrix
{\displaystyle A^{j}:={\begin{pmatrix}{\frac {\partial f_{1}^{j}}{\partial u_{1}}}&\cdots &{\frac {\partial f_{1}^{j}}{\partial u_{s}}}\\\vdots &\ddots &\vdots \\{\frac {\partial f_{s}^{j}}{\partial u_{1}}}&\cdots &{\frac {\partial f_{s}^{j}}{\partial u_{s}}}\end{pmatrix}},{\text{ for }}j=1,\ldots ,d.}
The system (∗) is hyperbolic if for all {\displaystyle \alpha _{1},\ldots ,\alpha _{d}\in \mathbb {R} } the matrix
{\displaystyle A:=\alpha _{1}A^{1}+\cdots +\alpha _{d}A^{d}}
has only real eigenvalues and is diagonalizable.
If the matrix {\displaystyle A} has s distinct real eigenvalues, it follows that it is diagonalizable. In this case the system (∗) is called strictly hyperbolic.
If the matrix {\displaystyle A} is symmetric, it follows that it is diagonalizable and the eigenvalues are real. In this case the system (∗) is called symmetric hyperbolic.
=== Hyperbolic system and conservation laws ===
There is a connection between a hyperbolic system and a conservation law. Consider a hyperbolic system of one partial differential equation for one unknown function {\displaystyle u=u({\vec {x}},t)}. Then the system (∗) has the form
{\displaystyle {\frac {\partial u}{\partial t}}+\nabla \cdot {\vec {f}}(u)=0.\qquad (**)}
Here, {\displaystyle u} can be interpreted as a quantity that moves around according to the flux given by {\displaystyle {\vec {f}}=(f^{1},\ldots ,f^{d})}. To see that the quantity {\displaystyle u} is conserved, integrate (∗∗) over a domain {\displaystyle \Omega }:
{\displaystyle \int _{\Omega }{\frac {\partial u}{\partial t}}\,d\Omega +\int _{\Omega }\nabla \cdot {\vec {f}}(u)\,d\Omega =0.}
If {\displaystyle u} and {\displaystyle {\vec {f}}} are sufficiently smooth functions, we can use the divergence theorem and change the order of the integration and {\displaystyle \partial /\partial t} to get a conservation law for the quantity {\displaystyle u} in the general form
{\displaystyle {\frac {d}{dt}}\int _{\Omega }u\,d\Omega +\int _{\partial \Omega }{\vec {f}}(u)\cdot {\vec {n}}\,d\Gamma =0,}
which means that the time rate of change of {\displaystyle u} in the domain {\displaystyle \Omega } is equal to the net flux of {\displaystyle u} through its boundary {\displaystyle \partial \Omega }. Since this is an equality, it can be concluded that {\displaystyle u} is conserved within {\displaystyle \Omega }.
== See also ==
Elliptic partial differential equation
Hypoelliptic operator
Parabolic partial differential equation
== References ==
== Further reading ==
A. D. Polyanin, Handbook of Linear Partial Differential Equations for Engineers and Scientists, Chapman & Hall/CRC Press, Boca Raton, 2002. ISBN 1-58488-299-9
== External links ==
"Hyperbolic partial differential equation, numerical methods", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Linear Hyperbolic Equations at EqWorld: The World of Mathematical Equations.
Nonlinear Hyperbolic Equations at EqWorld: The World of Mathematical Equations. | Wikipedia/Hyperbolic_partial_differential_equation |
In mathematics, Schwartz space {\displaystyle {\mathcal {S}}} is the function space of all functions whose derivatives are rapidly decreasing. This space has the important property that the Fourier transform is an automorphism on this space. This property enables one, by duality, to define the Fourier transform for elements in the dual space {\displaystyle {\mathcal {S}}^{*}} of {\displaystyle {\mathcal {S}}}, that is, for tempered distributions. A function in the Schwartz space is sometimes called a Schwartz function.
Schwartz space is named after French mathematician Laurent Schwartz.
== Definition ==
Let {\displaystyle \mathbb {N} } be the set of non-negative integers, and for any {\displaystyle n\in \mathbb {N} }, let {\displaystyle \mathbb {N} ^{n}:=\underbrace {\mathbb {N} \times \dots \times \mathbb {N} } _{n{\text{ times}}}} be the n-fold Cartesian product.
The Schwartz space or space of rapidly decreasing functions on {\displaystyle \mathbb {R} ^{n}} is the function space
{\displaystyle {\mathcal {S}}\left(\mathbb {R} ^{n},\mathbb {C} \right):=\left\{f\in C^{\infty }(\mathbb {R} ^{n},\mathbb {C} )\mid \forall {\boldsymbol {\alpha }},{\boldsymbol {\beta }}\in \mathbb {N} ^{n},\|f\|_{{\boldsymbol {\alpha }},{\boldsymbol {\beta }}}<\infty \right\},}
where {\displaystyle C^{\infty }(\mathbb {R} ^{n},\mathbb {C} )} is the function space of smooth functions from {\displaystyle \mathbb {R} ^{n}} into {\displaystyle \mathbb {C} }, and
{\displaystyle \|f\|_{{\boldsymbol {\alpha }},{\boldsymbol {\beta }}}:=\sup _{{\boldsymbol {x}}\in \mathbb {R} ^{n}}\left|{\boldsymbol {x}}^{\boldsymbol {\alpha }}({\boldsymbol {D}}^{\boldsymbol {\beta }}f)({\boldsymbol {x}})\right|.}
Here, {\displaystyle \sup } denotes the supremum, and we used multi-index notation, i.e. {\displaystyle {\boldsymbol {x}}^{\boldsymbol {\alpha }}:=x_{1}^{\alpha _{1}}x_{2}^{\alpha _{2}}\ldots x_{n}^{\alpha _{n}}} and {\displaystyle D^{\boldsymbol {\beta }}:=\partial _{1}^{\beta _{1}}\partial _{2}^{\beta _{2}}\ldots \partial _{n}^{\beta _{n}}}.
To put common language to this definition, one could consider a rapidly decreasing function as essentially a function f(x) such that f(x), f ′(x), f ′′(x), ... all exist everywhere on R and go to zero as x→ ±∞ faster than any reciprocal power of x. In particular, 𝒮(Rn, C) is a subspace of the function space C∞(Rn, C) of smooth functions from Rn into C.
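To make the seminorms concrete, the sketch below samples a few of them for a Gaussian on the real line; the indices, grid, and numerical differentiation are our own choices, and every printed value is finite, as membership in 𝒮 requires.

```python
import numpy as np

# Sketch: estimate the seminorms  sup_x |x**a * f^(b)(x)|  for the Gaussian
# f(x) = exp(-x**2) with a, b in {0, 1, 2}; each should be finite.
x = np.linspace(-10.0, 10.0, 200_001)
deriv = np.exp(-x**2)                        # start with f itself

for b in range(3):                           # f, f', f''
    for a in range(3):                       # weights 1, x, x**2
        print(f"a={a}, b={b}: sup ~ {np.max(np.abs(x**a * deriv)):.4f}")
    deriv = np.gradient(deriv, x)            # next derivative, numerically
```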
== Examples of functions in the Schwartz space ==
If {\displaystyle {\boldsymbol {\alpha }}} is a multi-index, and a is a positive real number, then
{\displaystyle {\boldsymbol {x}}^{\boldsymbol {\alpha }}e^{-a|{\boldsymbol {x}}|^{2}}\in {\mathcal {S}}(\mathbb {R} ^{n}).}
Any smooth function f with compact support is in 𝒮(Rn). This is clear since any derivative of f is continuous and supported in the support of f, so {\displaystyle ({\boldsymbol {x}}^{\boldsymbol {\alpha }}{\boldsymbol {D}}^{\boldsymbol {\beta }})f} has a maximum in Rn by the extreme value theorem.
Because the Schwartz space is a vector space, any polynomial {\displaystyle \phi ({\boldsymbol {x}})} can be multiplied by a factor {\displaystyle e^{-a\vert {\boldsymbol {x}}\vert ^{2}}} for {\displaystyle a>0} a real constant, to give an element of the Schwartz space. In particular, there is an embedding of polynomials into a Schwartz space.
== Properties ==
=== Analytic properties ===
From Leibniz's rule, it follows that 𝒮(Rn) is also closed under pointwise multiplication:
If f, g ∈ 𝒮(Rn) then the product fg ∈ 𝒮(Rn).
In particular, this implies that 𝒮(Rn) is an R-algebra. More generally, if f ∈ 𝒮(R) and H is a bounded smooth function with bounded derivatives of all orders, then fH ∈ 𝒮(R).
The Fourier transform is a linear isomorphism F:𝒮(Rn) → 𝒮(Rn).
If f ∈ 𝒮(Rn) then f is Lipschitz continuous and hence uniformly continuous on Rn.
𝒮(Rn) is a distinguished locally convex Fréchet Schwartz TVS over the complex numbers.
Both 𝒮(Rn) and its strong dual space are also:
complete Hausdorff locally convex spaces,
nuclear Montel spaces,
ultrabornological spaces,
reflexive barrelled Mackey spaces.
=== Relation of Schwartz spaces with other topological vector spaces ===
If 1 ≤ p ≤ ∞, then 𝒮(Rn) ⊂ Lp(Rn).
If 1 ≤ p < ∞, then 𝒮(Rn) is dense in Lp(Rn).
The space of all bump functions, C∞c(Rn), is included in 𝒮(Rn).
== See also ==
Bump function
Schwartz–Bruhat function
Nuclear space
== References ==
=== Sources ===
Hörmander, L. (1990). The Analysis of Linear Partial Differential Operators I, (Distribution theory and Fourier Analysis) (2nd ed.). Berlin: Springer-Verlag. ISBN 3-540-52343-X.
Reed, M.; Simon, B. (1980). Methods of Modern Mathematical Physics: Functional Analysis I (Revised and enlarged ed.). San Diego: Academic Press. ISBN 0-12-585050-6.
Stein, Elias M.; Shakarchi, Rami (2003). Fourier Analysis: An Introduction (Princeton Lectures in Analysis I). Princeton: Princeton University Press. ISBN 0-691-11384-X.
Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
This article incorporates material from Space of rapidly decreasing functions on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. | Wikipedia/Schwartz_function |
A parabolic partial differential equation is a type of partial differential equation (PDE). Parabolic PDEs are used to describe a wide variety of time-dependent phenomena in, for example, engineering science, quantum mechanics and financial mathematics. Examples include the heat equation, time-dependent Schrödinger equation and the Black–Scholes equation.
== Definition ==
To define the simplest kind of parabolic PDE, consider a real-valued function {\displaystyle u(x,y)} of two independent real variables, {\displaystyle x} and {\displaystyle y}. A second-order, linear, constant-coefficient PDE for {\displaystyle u} takes the form
{\displaystyle Au_{xx}+2Bu_{xy}+Cu_{yy}+Du_{x}+Eu_{y}+F=0,}
where the subscripts denote the first- and second-order partial derivatives with respect to {\displaystyle x} and {\displaystyle y}. The PDE is classified as parabolic if the coefficients of the principal part (i.e. the terms containing the second derivatives of {\displaystyle u}) satisfy the condition
{\displaystyle B^{2}-AC=0.}
Usually {\displaystyle x} represents one-dimensional position and {\displaystyle y} represents time, and the PDE is solved subject to prescribed initial and boundary conditions. Equations with {\displaystyle B^{2}-AC<0} are termed elliptic while those with {\displaystyle B^{2}-AC>0} are hyperbolic. The name "parabolic" is used because the assumption on the coefficients is the same as the condition for the analytic geometry equation {\displaystyle Ax^{2}+2Bxy+Cy^{2}+Dx+Ey+F=0} to define a planar parabola.
The basic example of a parabolic PDE is the one-dimensional heat equation
{\displaystyle u_{t}=\alpha \,u_{xx},}
where {\displaystyle u(x,t)} is the temperature at position {\displaystyle x} along a thin rod at time {\displaystyle t} and {\displaystyle \alpha } is a positive constant called the thermal diffusivity.
The heat equation says, roughly, that temperature at a given time and point rises or falls at a rate proportional to the difference between the temperature at that point and the average temperature near that point. The quantity {\displaystyle u_{xx}} measures how far off the temperature is from satisfying the mean value property of harmonic functions.
The concept of a parabolic PDE can be generalized in several ways.
For instance, the flow of heat through a material body is governed by the three-dimensional heat equation
{\displaystyle u_{t}=\alpha \,\Delta u,}
where {\displaystyle \Delta u:={\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}} denotes the Laplace operator acting on {\displaystyle u}. This equation is the prototype of a multi-dimensional parabolic PDE.
Noting that {\displaystyle -\Delta } is an elliptic operator suggests a broader definition of a parabolic PDE:
{\displaystyle u_{t}=-Lu,}
where {\displaystyle L} is a second-order elliptic operator (implying that {\displaystyle L} must be positive; a case where {\displaystyle u_{t}=+Lu} is considered below).
A system of partial differential equations for a vector {\displaystyle u} can also be parabolic. For example, such a system is hidden in an equation of the form
{\displaystyle \nabla \cdot (a(x)\nabla u(x))+b(x)^{\text{T}}\nabla u(x)+cu(x)=f(x)}
if the matrix-valued function {\displaystyle a(x)} has a kernel of dimension 1.
== Solution ==
Under broad assumptions, an initial/boundary-value problem for a linear parabolic PDE has a solution for all time. The solution {\displaystyle u(x,t)}, as a function of {\displaystyle x} for a fixed time {\displaystyle t>0}, is generally smoother than the initial data {\displaystyle u(x,0)=u_{0}(x)}, according to parabolic regularity theory.
For a nonlinear parabolic PDE, a solution of an initial/boundary-value problem might blow up in a singularity within a finite amount of time. It can be difficult to determine whether a solution exists for all time, or to understand the singularities that do arise. Such interesting questions arise in the solution of the Poincaré conjecture via Ricci flow.
== Backward parabolic equation ==
One occasionally encounters a so-called backward parabolic PDE, which takes the form {\displaystyle u_{t}=Lu} (note the absence of a minus sign).
An initial-value problem for the backward heat equation,
{\displaystyle {\begin{cases}u_{t}=-\Delta u&{\textrm {on}}\ \ \Omega \times (0,T),\\u=0&{\textrm {on}}\ \ \partial \Omega \times (0,T),\\u=f&{\textrm {on}}\ \ \Omega \times \left\{0\right\}.\end{cases}}}
is equivalent to a final-value problem for the ordinary heat equation,
{\displaystyle {\begin{cases}u_{t}=\Delta u&{\textrm {on}}\ \ \Omega \times (0,T),\\u=0&{\textrm {on}}\ \ \partial \Omega \times (0,T),\\u=f&{\textrm {on}}\ \ \Omega \times \left\{T\right\}.\end{cases}}}
Similarly to a final-value problem for a parabolic PDE, an initial-value problem for a backward parabolic PDE is usually not well-posed (solutions often grow unbounded in finite time, or even fail to exist). Nonetheless, these problems are important for the study of the reflection of singularities of solutions to various other PDEs.
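The ill-posedness can be seen directly in Fourier space: running the heat equation backward multiplies the mode sin(kx) by exp(k²t), so high frequencies explode. The sketch below prints these growth factors for a few mode numbers of our own choosing.

```python
import numpy as np

# Sketch: backward-heat growth factor exp(k**2 * t) for a mode sin(k*x).
# A tiny perturbation at k = 40 is amplified by ~1e69 after t = 0.1.
t = 0.1
for k in (1, 10, 40):
    print(k, np.exp(k**2 * t))               # ~1.105, ~2.2e4, ~3.1e69
```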
== See also ==
Elliptic partial differential equation
Hyperbolic partial differential equation
== Notes ==
== References ==
Taylor, Michael E. (1975). "Reflection of singularities of solutions to systems of differential equations". Communications on Pure and Applied Mathematics. 28 (4): 457–478. CiteSeerX 10.1.1.697.9255. doi:10.1002/cpa.3160280403. ISSN 0010-3640.
Zauderer, Erich (2006). Partial Differential Equations of Applied Mathematics. Hoboken, N.J: Wiley-Interscience. ISBN 978-0-471-69073-3. OCLC 70158521.
== Further reading ==
Perthame, Benoît (2015), Parabolic Equations in Biology : Growth, Reaction, Movement and Diffusion, Springer, ISBN 978-3-319-19499-8
Evans, Lawrence C. (2010) [1998], Partial differential equations, Graduate Studies in Mathematics, vol. 19 (2nd ed.), Providence, R.I.: American Mathematical Society, doi:10.1090/gsm/019, ISBN 978-0-8218-4974-3, MR 2597943
"Parabolic partial differential equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
"Parabolic partial differential equation, numerical methods", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Parabolic_partial_differential_equation |
In mathematics, a homogeneous function is a function of several variables such that the following holds: If each of the function's arguments is multiplied by the same scalar, then the function's value is multiplied by some power of this scalar; the power is called the degree of homogeneity, or simply the degree. That is, if k is an integer, a function f of n variables is homogeneous of degree k if
{\displaystyle f(sx_{1},\ldots ,sx_{n})=s^{k}f(x_{1},\ldots ,x_{n})}
for every {\displaystyle x_{1},\ldots ,x_{n},} and {\displaystyle s\neq 0.}
This is also referred to as a kth-degree or kth-order homogeneous function.
For example, a homogeneous polynomial of degree k defines a homogeneous function of degree k.
The above definition extends to functions whose domain and codomain are vector spaces over a field F: a function {\displaystyle f:V\to W} between two F-vector spaces is homogeneous of degree {\displaystyle k} if {\displaystyle f(s\mathbf {v} )=s^{k}f(\mathbf {v} )} for all nonzero {\displaystyle s\in F} and {\displaystyle v\in V.}
This definition is often further generalized to functions whose domain is not V, but a cone in V, that is, a subset C of V such that {\displaystyle \mathbf {v} \in C} implies {\displaystyle s\mathbf {v} \in C} for every nonzero scalar s.
In the case of functions of several real variables and real vector spaces, a slightly more general form of homogeneity called positive homogeneity is often considered, by requiring only that the above identities hold for {\displaystyle s>0,} and allowing any real number k as a degree of homogeneity. Every homogeneous real function is positively homogeneous. The converse is not true, but is locally true in the sense that (for integer degrees) the two kinds of homogeneity cannot be distinguished by considering the behavior of a function near a given point.
A norm over a real vector space is an example of a positively homogeneous function that is not homogeneous. A special case is the absolute value of real numbers. The quotient of two homogeneous polynomials of the same degree gives an example of a homogeneous function of degree zero. This example is fundamental in the definition of projective schemes.
== Definitions ==
The concept of a homogeneous function was originally introduced for functions of several real variables. With the definition of vector spaces at the end of 19th century, the concept has been naturally extended to functions between vector spaces, since a tuple of variable values can be considered as a coordinate vector. It is this more general point of view that is described in this article.
There are two commonly used definitions. The general one works for vector spaces over arbitrary fields, and is restricted to degrees of homogeneity that are integers.
The second one supposes to work over the field of real numbers, or, more generally, over an ordered field. This definition restricts to positive values the scaling factor that occurs in the definition, and is therefore called positive homogeneity, the qualificative positive being often omitted when there is no risk of confusion. Positive homogeneity leads to considering more functions as homogeneous. For example, the absolute value and all norms are positively homogeneous functions that are not homogeneous.
The restriction of the scaling factor to real positive values allows also considering homogeneous functions whose degree of homogeneity is any real number.
=== General homogeneity ===
Let V and W be two vector spaces over a field F. A linear cone in V is a subset C of V such that {\displaystyle sx\in C} for all {\displaystyle x\in C} and all nonzero {\displaystyle s\in F.}
A homogeneous function f from V to W is a partial function from V to W that has a linear cone C as its domain, and satisfies {\displaystyle f(sx)=s^{k}f(x)} for some integer k, every {\displaystyle x\in C,} and every nonzero {\displaystyle s\in F.}
The integer k is called the degree of homogeneity, or simply the degree of f.
A typical example of a homogeneous function of degree k is the function defined by a homogeneous polynomial of degree k. The rational function defined by the quotient of two homogeneous polynomials is a homogeneous function; its degree is the difference of the degrees of the numerator and the denominator; its cone of definition is the linear cone of the points where the value of denominator is not zero.
Homogeneous functions play a fundamental role in projective geometry since any homogeneous function f from V to W defines a well-defined function between the projectivizations of V and W. The homogeneous rational functions of degree zero (those defined by the quotient of two homogeneous polynomial of the same degree) play an essential role in the Proj construction of projective schemes.
=== Positive homogeneity ===
When working over the real numbers, or more generally over an ordered field, it is commonly convenient to consider positive homogeneity, the definition being exactly the same as that in the preceding section, with "nonzero s" replaced by "s > 0" in the definitions of a linear cone and a homogeneous function.
This change allow considering (positively) homogeneous functions with any real number as their degrees, since exponentiation with a positive real base is well defined.
Even in the case of integer degrees, there are many useful functions that are positively homogeneous without being homogeneous. This is, in particular, the case of the absolute value function and norms, which are all positively homogeneous of degree 1. They are not homogeneous since {\displaystyle |-x|=|x|\neq -|x|} if {\displaystyle x\neq 0.} This remains true in the complex case, since the field of the complex numbers {\displaystyle \mathbb {C} } and every complex vector space can be considered as real vector spaces.
Euler's homogeneous function theorem is a characterization of positively homogeneous differentiable functions, which may be considered as the fundamental theorem on homogeneous functions.
== Examples ==
=== Simple example ===
The function {\displaystyle f(x,y)=x^{2}+y^{2}} is homogeneous of degree 2:
{\displaystyle f(tx,ty)=(tx)^{2}+(ty)^{2}=t^{2}\left(x^{2}+y^{2}\right)=t^{2}f(x,y).}
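The same check is immediate symbolically; the sketch below verifies the degree-2 identity for all t at once.

```python
import sympy as sp

# Sketch: f(t*x, t*y) - t**2 * f(x, y) vanishes identically, confirming
# homogeneity of degree 2 for f(x, y) = x**2 + y**2.
x, y, t = sp.symbols('x y t')
f = lambda a, b: a**2 + b**2
print(sp.expand(f(t*x, t*y) - t**2 * f(x, y)))   # 0
```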
=== Absolute value and norms ===
The absolute value of a real number is a positively homogeneous function of degree 1, which is not homogeneous, since {\displaystyle |sx|=s|x|} if {\displaystyle s>0,} and {\displaystyle |sx|=-s|x|} if {\displaystyle s<0.}
The absolute value of a complex number is a positively homogeneous function of degree {\displaystyle 1} over the real numbers (that is, when considering the complex numbers as a vector space over the real numbers). It is not homogeneous over the real numbers, nor over the complex numbers.
More generally, every norm and seminorm is a positively homogeneous function of degree 1 which is not a homogeneous function. As for the absolute value, if the norm or semi-norm is defined on a vector space over the complex numbers, this vector space has to be considered as a vector space over the real numbers for applying the definition of a positively homogeneous function.
=== Linear Maps ===
Any linear map {\displaystyle f:V\to W} between vector spaces over a field F is homogeneous of degree 1, by the definition of linearity: {\displaystyle f(\alpha \mathbf {v} )=\alpha f(\mathbf {v} )} for all {\displaystyle \alpha \in {F}} and {\displaystyle v\in V.}
Similarly, any multilinear function {\displaystyle f:V_{1}\times V_{2}\times \cdots V_{n}\to W} is homogeneous of degree {\displaystyle n,} by the definition of multilinearity: {\displaystyle f\left(\alpha \mathbf {v} _{1},\ldots ,\alpha \mathbf {v} _{n}\right)=\alpha ^{n}f(\mathbf {v} _{1},\ldots ,\mathbf {v} _{n})} for all {\displaystyle \alpha \in {F}} and {\displaystyle v_{1}\in V_{1},v_{2}\in V_{2},\ldots ,v_{n}\in V_{n}.}
=== Homogeneous polynomials ===
Monomials in {\displaystyle n} variables define homogeneous functions {\displaystyle f:\mathbb {F} ^{n}\to \mathbb {F} .}
For example, {\displaystyle f(x,y,z)=x^{5}y^{2}z^{3}\,} is homogeneous of degree 10 since
{\displaystyle f(\alpha x,\alpha y,\alpha z)=(\alpha x)^{5}(\alpha y)^{2}(\alpha z)^{3}=\alpha ^{10}x^{5}y^{2}z^{3}=\alpha ^{10}f(x,y,z).\,}
The degree is the sum of the exponents on the variables; in this example, {\displaystyle 10=5+2+3.}
A homogeneous polynomial is a polynomial made up of a sum of monomials of the same degree. For example, {\displaystyle x^{5}+2x^{3}y^{2}+9xy^{4}} is a homogeneous polynomial of degree 5. Homogeneous polynomials also define homogeneous functions.
Given a homogeneous polynomial of degree {\displaystyle k} with real coefficients that takes only positive values, one gets a positively homogeneous function of degree {\displaystyle k/d} by raising it to the power {\displaystyle 1/d.} So for example, the following function is positively homogeneous of degree 1 but not homogeneous:
{\displaystyle \left(x^{2}+y^{2}+z^{2}\right)^{\frac {1}{2}}.}
=== Min/max ===
For every set of weights {\displaystyle w_{1},\dots ,w_{n},} the following functions are positively homogeneous of degree 1, but not homogeneous:
{\displaystyle \min \left({\frac {x_{1}}{w_{1}}},\dots ,{\frac {x_{n}}{w_{n}}}\right)} (Leontief utilities)
{\displaystyle \max \left({\frac {x_{1}}{w_{1}}},\dots ,{\frac {x_{n}}{w_{n}}}\right)}
=== Rational functions ===
Rational functions formed as the ratio of two homogeneous polynomials are homogeneous functions in their domain, that is, off of the linear cone formed by the zeros of the denominator. Thus, if {\displaystyle f} is homogeneous of degree {\displaystyle m} and {\displaystyle g} is homogeneous of degree {\displaystyle n,} then {\displaystyle f/g} is homogeneous of degree {\displaystyle m-n} away from the zeros of {\displaystyle g.}
=== Non-examples ===
The homogeneous real functions of a single variable have the form {\displaystyle x\mapsto cx^{k}} for some constant c. So, the affine function {\displaystyle x\mapsto x+5,} the natural logarithm {\displaystyle x\mapsto \ln(x),} and the exponential function {\displaystyle x\mapsto e^{x}} are not homogeneous.
== Euler's theorem ==
Roughly speaking, Euler's homogeneous function theorem asserts that the positively homogeneous functions of a given degree are exactly the solutions of a specific partial differential equation. More precisely, a continuously differentiable function {\displaystyle f} on a cone in {\displaystyle \mathbb {R} ^{n}} is positively homogeneous of degree {\displaystyle k} if and only if it satisfies Euler's identity
{\displaystyle k\,f(x_{1},\ldots ,x_{n})=\sum _{i=1}^{n}x_{i}{\frac {\partial f}{\partial x_{i}}}(x_{1},\ldots ,x_{n}).}
As a consequence, if {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } is continuously differentiable and homogeneous of degree {\displaystyle k,} its first-order partial derivatives {\displaystyle \partial f/\partial x_{i}} are homogeneous of degree {\displaystyle k-1.} This results from Euler's theorem by differentiating the partial differential equation with respect to one variable.
In the case of a function of a single real variable ({\displaystyle n=1}), the theorem implies that a continuously differentiable and positively homogeneous function of degree k has the form {\displaystyle f(x)=c_{+}x^{k}} for {\displaystyle x>0} and {\displaystyle f(x)=c_{-}x^{k}} for {\displaystyle x<0.} The constants {\displaystyle c_{+}} and {\displaystyle c_{-}} are not necessarily the same, as is the case for the absolute value.
== Application to differential equations ==
The substitution {\displaystyle v=y/x} converts the ordinary differential equation
{\displaystyle I(x,y){\frac {\mathrm {d} y}{\mathrm {d} x}}+J(x,y)=0,}
where {\displaystyle I} and {\displaystyle J} are homogeneous functions of the same degree, into the separable differential equation
{\displaystyle x{\frac {\mathrm {d} v}{\mathrm {d} x}}=-{\frac {J(1,v)}{I(1,v)}}-v.}
== Generalizations ==
=== Homogeneity under a monoid action ===
The definitions given above are all specialized cases of the following more general notion of homogeneity in which {\displaystyle X} can be any set (rather than a vector space) and the real numbers can be replaced by the more general notion of a monoid.
Let {\displaystyle M} be a monoid with identity element {\displaystyle 1\in M,} let {\displaystyle X} and {\displaystyle Y} be sets, and suppose that on both {\displaystyle X} and {\displaystyle Y} there are defined monoid actions of {\displaystyle M.} Let {\displaystyle k} be a non-negative integer and let {\displaystyle f:X\to Y} be a map. Then {\displaystyle f} is said to be homogeneous of degree {\displaystyle k} over {\displaystyle M} if for every {\displaystyle x\in X} and {\displaystyle m\in M,}
{\displaystyle f(mx)=m^{k}f(x).}
If in addition there is a function {\displaystyle M\to M,} denoted by {\displaystyle m\mapsto |m|,} called an absolute value, then {\displaystyle f} is said to be absolutely homogeneous of degree {\displaystyle k} over {\displaystyle M} if for every {\displaystyle x\in X} and {\displaystyle m\in M,}
{\displaystyle f(mx)=|m|^{k}f(x).}
A function is homogeneous over {\displaystyle M} (resp. absolutely homogeneous over {\displaystyle M}) if it is homogeneous of degree {\displaystyle 1} over {\displaystyle M} (resp. absolutely homogeneous of degree {\displaystyle 1} over {\displaystyle M}).
More generally, it is possible for the symbols {\displaystyle m^{k}} to be defined for {\displaystyle m\in M} with {\displaystyle k} being something other than an integer (for example, if {\displaystyle M} is the real numbers and {\displaystyle k} is a non-zero real number then {\displaystyle m^{k}} is defined even though {\displaystyle k} is not an integer). If this is the case then {\displaystyle f} will be called homogeneous of degree {\displaystyle k} over {\displaystyle M} if the same equality holds:
{\displaystyle f(mx)=m^{k}f(x)\quad {\text{ for every }}x\in X{\text{ and }}m\in M.}
The notion of being absolutely homogeneous of degree {\displaystyle k} over {\displaystyle M} is generalized similarly.
=== Distributions (generalized functions) ===
A continuous function {\displaystyle f} on {\displaystyle \mathbb {R} ^{n}} is homogeneous of degree {\displaystyle k} if and only if
{\displaystyle \int _{\mathbb {R} ^{n}}f(tx)\varphi (x)\,dx=t^{k}\int _{\mathbb {R} ^{n}}f(x)\varphi (x)\,dx}
for all compactly supported test functions {\displaystyle \varphi } and all nonzero real {\displaystyle t.}
Equivalently, making a change of variable {\displaystyle y=tx,} {\displaystyle f} is homogeneous of degree {\displaystyle k} if and only if
{\displaystyle t^{-n}\int _{\mathbb {R} ^{n}}f(y)\varphi \left({\frac {y}{t}}\right)\,dy=t^{k}\int _{\mathbb {R} ^{n}}f(y)\varphi (y)\,dy}
for all {\displaystyle t} and all test functions {\displaystyle \varphi .}
The last display makes it possible to define homogeneity of distributions. A distribution {\displaystyle S} is homogeneous of degree {\displaystyle k} if
{\displaystyle t^{-n}\langle S,\varphi \circ \mu _{t}\rangle =t^{k}\langle S,\varphi \rangle }
for all nonzero real {\displaystyle t} and all test functions {\displaystyle \varphi .} Here the angle brackets denote the pairing between distributions and test functions, and {\displaystyle \mu _{t}:\mathbb {R} ^{n}\to \mathbb {R} ^{n}} is the mapping of scalar division by the real number {\displaystyle t.}
== Glossary of name variants ==
Let {\displaystyle f:X\to Y} be a map between two vector spaces over a field {\displaystyle \mathbb {F} } (usually the real numbers {\displaystyle \mathbb {R} } or complex numbers {\displaystyle \mathbb {C} }). If {\displaystyle S} is a set of scalars, such as {\displaystyle \mathbb {Z} ,} {\displaystyle [0,\infty ),} or {\displaystyle \mathbb {R} } for example, then {\displaystyle f} is said to be homogeneous over {\displaystyle S} if {\textstyle f(sx)=sf(x)} for every {\displaystyle x\in X} and scalar {\displaystyle s\in S.} For instance, every additive map between vector spaces is homogeneous over the rational numbers {\displaystyle S:=\mathbb {Q} } although it might not be homogeneous over the real numbers {\displaystyle S:=\mathbb {R} .}
The following commonly encountered special cases and variations of this definition have their own terminology:
(Strict) Positive homogeneity: {\displaystyle f(rx)=rf(x)} for all {\displaystyle x\in X} and all positive real {\displaystyle r>0.}
When the function {\displaystyle f} is valued in a vector space or field, then this property is logically equivalent to nonnegative homogeneity, which by definition means: {\displaystyle f(rx)=rf(x)} for all {\displaystyle x\in X} and all non-negative real {\displaystyle r\geq 0.}
It is for this reason that positive homogeneity is often also called nonnegative homogeneity. However, for functions valued in the extended real numbers {\displaystyle [-\infty ,\infty ]=\mathbb {R} \cup \{\pm \infty \},} which appear in fields like convex analysis, the multiplication {\displaystyle 0\cdot f(x)} will be undefined whenever {\displaystyle f(x)=\pm \infty } and so these statements are not necessarily always interchangeable. This property is used in the definition of a sublinear function. Minkowski functionals are exactly those non-negative extended real-valued functions with this property.
Real homogeneity: {\displaystyle f(rx)=rf(x)} for all {\displaystyle x\in X} and all real {\displaystyle r.} This property is used in the definition of a real linear functional.
Homogeneity: {\displaystyle f(sx)=sf(x)} for all {\displaystyle x\in X} and all scalars {\displaystyle s\in \mathbb {F} .} It is emphasized that this definition depends on the scalar field {\displaystyle \mathbb {F} } underlying the domain {\displaystyle X.} This property is used in the definition of linear functionals and linear maps.
Conjugate homogeneity: {\displaystyle f(sx)={\overline {s}}f(x)} for all {\displaystyle x\in X} and all scalars {\displaystyle s\in \mathbb {F} .} If {\displaystyle \mathbb {F} =\mathbb {C} } then {\displaystyle {\overline {s}}} typically denotes the complex conjugate of {\displaystyle s}. But more generally, as with semilinear maps for example, {\displaystyle {\overline {s}}} could be the image of {\displaystyle s} under some distinguished automorphism of {\displaystyle \mathbb {F} .} Along with additivity, this property is assumed in the definition of an antilinear map. It is also assumed that one of the two coordinates of a sesquilinear form has this property (such as the inner product of a Hilbert space).
All of the above definitions can be generalized by replacing the condition {\displaystyle f(rx)=rf(x)} with {\displaystyle f(rx)=|r|f(x),} in which case that definition is prefixed with the word "absolute" or "absolutely." For example,
Absolute homogeneity: {\displaystyle f(sx)=|s|f(x)} for all {\displaystyle x\in X} and all scalars {\displaystyle s\in \mathbb {F} .} This property is used in the definition of a seminorm and a norm.
If {\displaystyle k} is a fixed real number then the above definitions can be further generalized by replacing the condition {\displaystyle f(rx)=rf(x)} with {\displaystyle f(rx)=r^{k}f(x)} (and similarly, by replacing {\displaystyle f(rx)=|r|f(x)} with {\displaystyle f(rx)=|r|^{k}f(x)} for conditions using the absolute value, etc.), in which case the homogeneity is said to be "of degree {\displaystyle k}" (where in particular, all of the above definitions are "of degree {\displaystyle 1}"). For instance,
Real homogeneity of degree {\displaystyle k}: {\displaystyle f(rx)=r^{k}f(x)} for all {\displaystyle x\in X} and all real {\displaystyle r.}
Homogeneity of degree {\displaystyle k}: {\displaystyle f(sx)=s^{k}f(x)} for all {\displaystyle x\in X} and all scalars {\displaystyle s\in \mathbb {F} .}
Absolute real homogeneity of degree {\displaystyle k}: {\displaystyle f(rx)=|r|^{k}f(x)} for all {\displaystyle x\in X} and all real {\displaystyle r.}
Absolute homogeneity of degree {\displaystyle k}: {\displaystyle f(sx)=|s|^{k}f(x)} for all {\displaystyle x\in X} and all scalars {\displaystyle s\in \mathbb {F} .}
A nonzero continuous function that is homogeneous of degree {\displaystyle k} on {\displaystyle \mathbb {R} ^{n}\backslash \lbrace 0\rbrace } extends continuously to {\displaystyle \mathbb {R} ^{n}} if and only if {\displaystyle k>0.}
== See also ==
Homogeneous space
Triangle center function – Point in a triangle that can be seen as its middle under some criteria
== Notes ==
== References ==
== Sources ==
Blatter, Christian (1979). "20. Mehrdimensionale Differentialrechnung, Aufgaben, 1.". Analysis II (in German) (2nd ed.). Springer Verlag. p. 188. ISBN 3-540-09484-9.
Kubrusly, Carlos S. (2011). The Elements of Operator Theory (Second ed.). Boston: Birkhäuser. ISBN 978-0-8176-4998-2. OCLC 710154895.
Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365.
== External links ==
"Homogeneous function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Eric Weisstein. "Euler's Homogeneous Function Theorem". MathWorld. | Wikipedia/Euler's_homogeneous_function_theorem |
In mathematics, an eigenfunction of a linear operator D defined on some function space is any non-zero function {\displaystyle f} in that space that, when acted upon by D, is only multiplied by some scaling factor called an eigenvalue. As an equation, this condition can be written as {\displaystyle Df=\lambda f} for some scalar eigenvalue {\displaystyle \lambda .} The solutions to this equation may also be subject to boundary conditions that limit the allowable eigenvalues and eigenfunctions.
An eigenfunction is a type of eigenvector.
== Eigenfunctions ==
In general, an eigenvector of a linear operator D defined on some vector space is a nonzero vector in the domain of D that, when D acts upon it, is simply scaled by some scalar value called an eigenvalue. In the special case where D is defined on a function space, the eigenvectors are referred to as eigenfunctions. That is, a function f is an eigenfunction of D if it satisfies the equation
{\displaystyle Df=\lambda f,\qquad (1)}
where λ is a scalar. The solutions to Equation (1) may also be subject to boundary conditions. Because of the boundary conditions, the possible values of λ are generally limited, for example to a discrete set λ1, λ2, … or to a continuous set over some range. The set of all possible eigenvalues of D is sometimes called its spectrum, which may be discrete, continuous, or a combination of both.
Each value of λ corresponds to one or more eigenfunctions. If multiple linearly independent eigenfunctions have the same eigenvalue, the eigenvalue is said to be degenerate and the maximum number of linearly independent eigenfunctions associated with the same eigenvalue is the eigenvalue's degree of degeneracy or geometric multiplicity.
=== Derivative example ===
A widely used class of linear operators acting on infinite dimensional spaces are differential operators on the space C∞ of infinitely differentiable real or complex functions of a real or complex argument t. For example, consider the derivative operator
{\textstyle {\frac {d}{dt}}} with eigenvalue equation
{\displaystyle {\frac {d}{dt}}f(t)=\lambda f(t).}
This differential equation can be solved by multiplying both sides by {\textstyle {\frac {dt}{f(t)}}} and integrating. Its solution, the exponential function
{\displaystyle f(t)=f_{0}e^{\lambda t},}
is the eigenfunction of the derivative operator, where f0 is a parameter that depends on the boundary conditions. Note that in this case the eigenfunction is itself a function of its associated eigenvalue λ, which can take any real or complex value. In particular, note that for λ = 0 the eigenfunction f(t) is a constant.
Suppose in the example that f(t) is subject to the boundary conditions f(0) = 1 and {\textstyle \left.{\frac {df}{dt}}\right|_{t=0}=2}. We then find that
. We then find that
{\displaystyle f(t)=e^{2t},}
where λ = 2 is the only eigenvalue of the differential equation that also satisfies the boundary condition.
=== Link to eigenvalues and eigenvectors of matrices ===
Eigenfunctions can be expressed as column vectors and linear operators can be expressed as matrices, although they may have infinite dimensions. As a result, many of the concepts related to eigenvectors of matrices carry over to the study of eigenfunctions.
Define the inner product in the function space on which D is defined as
{\displaystyle \langle f,g\rangle =\int _{\Omega }\ f^{*}(t)g(t)dt,}
integrated over some range of interest for t called Ω. The * denotes the complex conjugate.
Suppose the function space has an orthonormal basis given by the set of functions {u1(t), u2(t), …, un(t)}, where n may be infinite. For the orthonormal basis,
{\displaystyle \langle u_{i},u_{j}\rangle =\int _{\Omega }\ u_{i}^{*}(t)u_{j}(t)dt=\delta _{ij}={\begin{cases}1&i=j\\0&i\neq j\end{cases}},}
where δij is the Kronecker delta and can be thought of as the elements of the identity matrix.
Functions can be written as a linear combination of the basis functions,
{\displaystyle f(t)=\sum _{j=1}^{n}b_{j}u_{j}(t),}
for example through a Fourier expansion of f(t). The coefficients bj can be stacked into an n by 1 column vector b = [b1 b2 … bn]T. In some special cases, such as the coefficients of the Fourier series of a sinusoidal function, this column vector has finite dimension.
Additionally, define a matrix representation of the linear operator D with elements
{\displaystyle A_{ij}=\langle u_{i},Du_{j}\rangle =\int _{\Omega }\ u_{i}^{*}(t)Du_{j}(t)dt.}
We can write the function Df(t) either as a linear combination of the basis functions or as D acting upon the expansion of f(t),
{\displaystyle Df(t)=\sum _{j=1}^{n}c_{j}u_{j}(t)=\sum _{j=1}^{n}b_{j}Du_{j}(t).}
Taking the inner product of each side of this equation with an arbitrary basis function ui(t),
{\displaystyle {\begin{aligned}\sum _{j=1}^{n}c_{j}\int _{\Omega }\ u_{i}^{*}(t)u_{j}(t)dt&=\sum _{j=1}^{n}b_{j}\int _{\Omega }\ u_{i}^{*}(t)Du_{j}(t)dt,\\c_{i}&=\sum _{j=1}^{n}b_{j}A_{ij}.\end{aligned}}}
This is the matrix multiplication Ab = c written in summation notation and is a matrix equivalent of the operator D acting upon the function f(t) expressed in the orthonormal basis. If f(t) is an eigenfunction of D with eigenvalue λ, then Ab = λb.
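The sketch below builds such a matrix numerically for D = d/dt in a truncated orthonormal Fourier basis on [0, 2π); since each basis function is itself an eigenfunction of D, the matrix should come out (approximately) diagonal. The basis range and grid resolution are our own choices.

```python
import numpy as np

# Sketch: A[i, j] = <u_i, D u_j> for D = d/dt and the orthonormal basis
# u_k(t) = exp(1j*k*t)/sqrt(2*pi), k = -2..2, on [0, 2*pi).  Because
# D u_k = (1j*k) u_k, A should be ~diag(-2j, -1j, 0, 1j, 2j).
t = np.linspace(0.0, 2*np.pi, 4096, endpoint=False)
dt = t[1] - t[0]
ks = np.arange(-2, 3)
U = np.exp(1j * np.outer(ks, t)) / np.sqrt(2*np.pi)   # rows are the u_k
DU = np.gradient(U, dt, axis=1)                       # D applied to each row

A = np.conj(U) @ DU.T * dt                            # inner products
print(np.round(A, 3))
```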
=== Eigenvalues and eigenfunctions of Hermitian operators ===
Many of the operators encountered in physics are Hermitian. Suppose the linear operator D acts on a function space that is a Hilbert space with an orthonormal basis given by the set of functions {u1(t), u2(t), …, un(t)}, where n may be infinite. In this basis, the operator D has a matrix representation A with elements
{\displaystyle A_{ij}=\langle u_{i},Du_{j}\rangle =\int _{\Omega }dt\ u_{i}^{*}(t)Du_{j}(t).}
integrated over some range of interest for t denoted Ω.
By analogy with Hermitian matrices, D is a Hermitian operator if Aij = Aji*, or:
{\displaystyle {\begin{aligned}\langle u_{i},Du_{j}\rangle &=\langle Du_{i},u_{j}\rangle ,\\[-1pt]\int _{\Omega }dt\ u_{i}^{*}(t)Du_{j}(t)&=\int _{\Omega }dt\ u_{j}(t)[Du_{i}(t)]^{*}.\end{aligned}}}
Consider the Hermitian operator D with eigenvalues λ1, λ2, … and corresponding eigenfunctions f1(t), f2(t), …. This Hermitian operator has the following properties:
Its eigenvalues are real, λi = λi*
Its eigenfunctions obey an orthogonality condition,
{\displaystyle \langle f_{i},f_{j}\rangle =0} if i ≠ j
The second condition always holds for λi ≠ λj. For degenerate eigenfunctions with the same eigenvalue λi, orthogonal eigenfunctions can always be chosen that span the eigenspace associated with λi, for example by using the Gram-Schmidt process. Depending on whether the spectrum is discrete or continuous, the eigenfunctions can be normalized by setting the inner product of the eigenfunctions equal to either a Kronecker delta or a Dirac delta function, respectively.
For many Hermitian operators, notably Sturm–Liouville operators, a third property is
Its eigenfunctions form a basis of the function space on which the operator is defined
As a consequence, in many important cases, the eigenfunctions of the Hermitian operator form an orthonormal basis. In these cases, an arbitrary function can be expressed as a linear combination of the eigenfunctions of the Hermitian operator.
== Applications ==
=== Vibrating strings ===
Let h(x, t) denote the transverse displacement of a stressed elastic chord, such as the vibrating strings of a string instrument, as a function of the position x along the string and of time t. Applying the laws of mechanics to infinitesimal portions of the string, the function h satisfies the partial differential equation
{\displaystyle {\frac {\partial ^{2}h}{\partial t^{2}}}=c^{2}{\frac {\partial ^{2}h}{\partial x^{2}}},}
which is called the (one-dimensional) wave equation. Here c is a constant speed that depends on the tension and mass of the string.
This problem is amenable to the method of separation of variables. If we assume that h(x, t) can be written as the product of the form X(x)T(t), we can form a pair of ordinary differential equations:
{\displaystyle {\frac {d^{2}}{dx^{2}}}X=-{\frac {\omega ^{2}}{c^{2}}}X,\qquad {\frac {d^{2}}{dt^{2}}}T=-\omega ^{2}T.}
Each of these is an eigenvalue equation with eigenvalues {\textstyle -{\frac {\omega ^{2}}{c^{2}}}} and −ω2, respectively. For any values of ω and c, the equations are satisfied by the functions
{\displaystyle X(x)=\sin \left({\frac {\omega x}{c}}+\varphi \right),\qquad T(t)=\sin(\omega t+\psi ),}
where the phase angles φ and ψ are arbitrary real constants.
If we impose boundary conditions, for example that the ends of the string are fixed at x = 0 and x = L, namely X(0) = X(L) = 0, and that T(0) = 0, we constrain the eigenvalues. For these boundary conditions, sin(φ) = 0 and sin(ψ) = 0, so the phase angles φ = ψ = 0, and
{\displaystyle \sin \left({\frac {\omega L}{c}}\right)=0.}
This last boundary condition constrains ω to take a value ωn = ncπ/L, where n is any integer. Thus, the clamped string supports a family of standing waves of the form
{\displaystyle h(x,t)=\sin \left({\frac {n\pi x}{L}}\right)\sin(\omega _{n}t).}
In the example of a string instrument, the frequency ωn is the frequency of the n-th harmonic, which is called the (n − 1)-th overtone.
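As a quick numerical cross-check (an illustration; the string length and grid size are assumptions), the spatial eigenvalue problem X'' = −(ω/c)²X with X(0) = X(L) = 0 can be discretized and its lowest eigenvalues compared against the predicted values (nπ/L)²:

```python
import numpy as np

L_len, n = 1.0, 400                     # assumed string length and grid size
h = L_len / (n + 1)
# Discrete -d^2/dx^2 with Dirichlet conditions X(0) = X(L) = 0
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
eigs = np.sort(np.linalg.eigvalsh(A))[:5]

exact = np.array([(k * np.pi / L_len) ** 2 for k in range(1, 6)])
print(np.max(np.abs(eigs - exact) / exact))   # small discretization error
```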
=== Schrödinger equation ===
In quantum mechanics, the Schrödinger equation
{\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,t)=H\Psi (\mathbf {r} ,t)}
with the Hamiltonian operator
{\displaystyle H=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V(\mathbf {r} ,t)}
can be solved by separation of variables if the Hamiltonian does not depend explicitly on time. In that case, the wave function Ψ(r,t) = φ(r)T(t) leads to the two differential equations
{\displaystyle H\varphi (\mathbf {r} )=E\varphi (\mathbf {r} ),\qquad \qquad (2)}
{\displaystyle i\hbar {\frac {dT(t)}{dt}}=ET(t).\qquad \qquad (3)}
Both of these differential equations are eigenvalue equations with eigenvalue E. As shown in an earlier example, the solution of Equation (3) is the exponential
{\displaystyle T(t)=e^{{-iEt}/{\hbar }}.}
Equation (2) is the time-independent Schrödinger equation. The eigenfunctions φk of the Hamiltonian operator are stationary states of the quantum mechanical system, each with a corresponding energy Ek. They represent allowable energy states of the system and may be constrained by boundary conditions.
The Hamiltonian operator H is an example of a Hermitian operator whose eigenfunctions form an orthonormal basis. When the Hamiltonian does not depend explicitly on time, general solutions of the Schrödinger equation are linear combinations of the stationary states multiplied by the oscillatory T(t),
{\textstyle \Psi (\mathbf {r} ,t)=\sum _{k}c_{k}\varphi _{k}(\mathbf {r} )e^{{-iE_{k}t}/{\hbar }}}
or, for a system with a continuous spectrum,
{\displaystyle \Psi (\mathbf {r} ,t)=\int dE\,c_{E}\varphi _{E}(\mathbf {r} )e^{{-iEt}/{\hbar }}.}
The success of the Schrödinger equation in explaining the spectral characteristics of hydrogen is considered one of the greatest triumphs of 20th century physics.
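When the Hamiltonian has an orthonormal eigenbasis, time evolution amounts to rotating the phase of each expansion coefficient, as in the sum above. A small numerical sketch can make this concrete; here the units ħ = m = 1, the harmonic potential, and the initial Gaussian state are assumptions for illustration.

```python
import numpy as np

# Units with hbar = m = 1 (an assumption of this sketch).
n = 500
x = np.linspace(-8, 8, n)
dx = x[1] - x[0]
T = (np.diag(np.full(n, 1.0)) - 0.5 * np.diag(np.ones(n - 1), 1)
     - 0.5 * np.diag(np.ones(n - 1), -1)) / dx**2   # -1/2 d^2/dx^2
H = T + np.diag(0.5 * x**2)                          # assumed harmonic potential

E, phi = np.linalg.eigh(H)                           # stationary states phi_k
psi0 = np.exp(-(x - 1.0) ** 2)                       # arbitrary initial state
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

c = phi.T @ psi0                                     # c_k = <phi_k, psi0>
t = 0.7
psi_t = phi @ (c * np.exp(-1j * E * t))              # sum_k c_k phi_k e^{-iE_k t}
print(np.sum(np.abs(psi_t) ** 2) * dx)               # norm stays 1
```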
=== Signals and systems ===
In the study of signals and systems, an eigenfunction of a system is a signal f(t) that, when input into the system, produces a response y(t) = λf(t), where λ is a complex scalar eigenvalue.
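For a linear time-invariant system, complex exponentials are the classic eigenfunctions and the eigenvalue is the transfer function evaluated at the input's frequency. A discrete-time sketch (the filter coefficients and the input z are arbitrary assumptions) illustrates this:

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])               # assumed impulse response of an LTI system
z = np.exp(0.1 + 0.4j)                      # input signal f[n] = z^n
n = np.arange(40)
f = z ** n

y = np.convolve(f, h)[: len(f)]             # causal convolution: system response
Hz = np.sum(h * z ** (-np.arange(len(h))))  # eigenvalue: transfer function H(z)
# Away from the start-up transient, y[n] = H(z) * f[n]
print(np.allclose(y[5:], Hz * f[5:]))       # True
```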
== See also ==
Eigenvalues and eigenvectors
Hilbert–Schmidt theorem
Spectral theory of ordinary differential equations
Fixed point combinator
Fourier transform eigenfunctions
== Notes ==
=== Citations ===
== Works cited ==
== External links ==
More images (non-GPL) at Atom in a Box | Wikipedia/Eigenfunctions |
In mathematics and theoretical physics, an invariant differential operator is a kind of mathematical map from some objects to an object of similar type. These objects are typically functions on
{\displaystyle \mathbb {R} ^{n}}
, functions on a manifold, vector valued functions, vector fields, or, more generally, sections of a vector bundle.
In an invariant differential operator
{\displaystyle D}
, the term differential operator indicates that the value
{\displaystyle Df}
of the map depends only on
{\displaystyle f(x)}
and the derivatives of
{\displaystyle f}
in
{\displaystyle x}
. The word invariant indicates that the operator contains some symmetry. This means that there is a group
{\displaystyle G}
with a group action on the functions (or other objects in question) and this action is preserved by the operator:
{\displaystyle D(g\cdot f)=g\cdot (Df).}
Usually, the action of the group has the meaning of a change of coordinates (change of observer) and the invariance means that the operator has the same expression in all admissible coordinates.
== Invariance on homogeneous spaces ==
Let M = G/H be a homogeneous space for a Lie group G and a Lie subgroup H. Every representation
{\displaystyle \rho :H\rightarrow \mathrm {Aut} (\mathbb {V} )}
gives rise to a vector bundle
{\displaystyle V=G\times _{H}\mathbb {V} \;{\text{where}}\;(gh,v)\sim (g,\rho (h)v)\;\forall \;g\in G,\;h\in H\;{\text{and}}\;v\in \mathbb {V} .}
Sections
{\displaystyle \varphi \in \Gamma (V)}
can be identified with
{\displaystyle \Gamma (V)=\{\varphi :G\rightarrow \mathbb {V} \;:\;\varphi (gh)=\rho (h^{-1})\varphi (g)\;\forall \;g\in G,\;h\in H\}.}
In this form the group G acts on sections via
{\displaystyle (\ell _{g}\varphi )(g')=\varphi (g^{-1}g').}
Now let V and W be two vector bundles over M. Then a differential operator
{\displaystyle d:\Gamma (V)\rightarrow \Gamma (W)}
that maps sections of V to sections of W is called invariant if
{\displaystyle d(\ell _{g}\varphi )=\ell _{g}(d\varphi )}
for all sections
{\displaystyle \varphi }
in
{\displaystyle \Gamma (V)}
and elements g in G. All linear invariant differential operators on homogeneous parabolic geometries, i.e. when G is semi-simple and H is a parabolic subgroup, are given dually by homomorphisms of generalized Verma modules.
== Invariance in terms of abstract indices ==
Given two connections
{\displaystyle \nabla }
and
{\displaystyle {\hat {\nabla }}}
and a one form
{\displaystyle \omega }
, we have
{\displaystyle \nabla _{a}\omega _{b}={\hat {\nabla }}_{a}\omega _{b}-Q_{ab}{}^{c}\omega _{c}}
for some tensor
{\displaystyle Q_{ab}{}^{c}}
. Given an equivalence class of connections
{\displaystyle [\nabla ]}
, we say that an operator is invariant if the form of the operator does not change when we change from one connection in the equivalence class to another. For example, if we consider the equivalence class of all torsion free connections, then the tensor Q is symmetric in its lower indices, i.e.
{\displaystyle Q_{ab}{}^{c}=Q_{(ab)}{}^{c}}
. Therefore we can compute
{\displaystyle \nabla _{[a}\omega _{b]}={\hat {\nabla }}_{[a}\omega _{b]},}
where brackets denote skew symmetrization. This shows the invariance of the exterior derivative when acting on one forms.
Equivalence classes of connections arise naturally in differential geometry, for example:
in conformal geometry an equivalence class of connections is given by the Levi-Civita connections of all metrics in the conformal class;
in projective geometry an equivalence class of connections is given by all connections that have the same geodesics;
in CR geometry an equivalence class of connections is given by the Tanaka–Webster connections for each choice of pseudohermitian structure.
== Examples ==
The usual gradient operator
{\displaystyle \nabla }
acting on real valued functions on Euclidean space is invariant with respect to all Euclidean transformations.
The differential acting on functions on a manifold with values in 1-forms (its expression is
{\displaystyle d=\sum _{j}\partial _{j}\,dx_{j}}
in any local coordinates) is invariant with respect to all smooth transformations of the manifold (the action of the transformation on differential forms is just the pullback).
More generally, the exterior derivative
{\displaystyle d:\Omega ^{n}(M)\rightarrow \Omega ^{n+1}(M)}
that acts on n-forms of any smooth manifold M is invariant with respect to all smooth transformations. It can be shown that the exterior derivative is the only linear invariant differential operator between those bundles.
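A coordinate check of this invariance can be done symbolically by verifying that the exterior derivative commutes with pullback, d(φ*ω) = φ*(dω). The sketch below is an illustration only; the particular map φ(u, v) and 1-form ω are arbitrary assumptions.

```python
import sympy as sp

u, v, x, y = sp.symbols('u v x y')
P = x**2 * y                    # omega = P dx + Q dy, an arbitrary 1-form
Q = sp.sin(x) + y
phi = {x: u**2 - v, y: u * v}   # an arbitrary smooth map phi(u, v)

# Pullback of omega: coefficients of du and dv
a = P.subs(phi) * sp.diff(u**2 - v, u) + Q.subs(phi) * sp.diff(u * v, u)
b = P.subs(phi) * sp.diff(u**2 - v, v) + Q.subs(phi) * sp.diff(u * v, v)
d_pullback = sp.simplify(sp.diff(b, u) - sp.diff(a, v))   # d(phi^* omega)

# Pullback of d(omega) = (Q_x - P_y) dx ^ dy picks up the Jacobian of phi
jac = (sp.diff(u**2 - v, u) * sp.diff(u * v, v)
       - sp.diff(u**2 - v, v) * sp.diff(u * v, u))
pullback_d = sp.simplify((sp.diff(Q, x) - sp.diff(P, y)).subs(phi) * jac)

print(sp.simplify(d_pullback - pullback_d))   # 0: d commutes with pullback
```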
The Dirac operator in physics is invariant with respect to the Poincaré group, provided the proper action of the Poincaré group on spinor-valued functions is chosen. This is, however, a subtle point; to make it mathematically rigorous, one should say that it is invariant with respect to a group that is a double cover of the Poincaré group.
The conformal Killing equation
{\displaystyle X^{a}\mapsto \nabla _{(a}X_{b)}-{\frac {1}{n}}\nabla _{c}X^{c}g_{ab}}
is a conformally invariant linear differential operator between vector fields and symmetric trace-free tensors.
== Conformal invariance ==
Given a metric
{\displaystyle g(x,y)=x_{1}y_{n+2}+x_{n+2}y_{1}+\sum _{i=2}^{n+1}x_{i}y_{i}}
on
{\displaystyle \mathbb {R} ^{n+2}}
, we can write the sphere
{\displaystyle S^{n}}
as the space of generators of the nil cone
{\displaystyle S^{n}=\{[x]\in \mathbb {RP} ^{n+1}\;:\;g(x,x)=0\}.}
In this way, the flat model of conformal geometry is the sphere
{\displaystyle S^{n}=G/P}
with
{\displaystyle G=SO_{0}(n+1,1)}
and P the stabilizer of a point in
{\displaystyle \mathbb {R} ^{n+2}}
. A classification of all linear conformally invariant differential operators on the sphere is known (Eastwood and Rice, 1987).
== See also ==
Differential operators
Laplace invariant
Invariant factorization of LPDOs
== Notes ==
== References ==
Slovák, Jan (1993). Invariant Operators on Conformal Manifolds. Research Lecture Notes, University of Vienna (Dissertation).
Kolář, Ivan; Michor, Peter; Slovák, Jan (1993). Natural operators in differential geometry (PDF). Springer-Verlag, Berlin, Heidelberg, New York. Archived from the original (PDF) on 2017-03-30. Retrieved 2011-01-05.
Eastwood, M. G.; Rice, J. W. (1987). "Conformally invariant differential operators on Minkowski space and their curved analogues". Commun. Math. Phys. 109 (2): 207–228. Bibcode:1987CMaPh.109..207E. doi:10.1007/BF01215221. S2CID 121161256.
Kroeske, Jens (2008). "Invariant bilinear differential pairings on parabolic geometries". PhD Thesis from the University of Adelaide. arXiv:0904.3311. Bibcode:2009PhDT.......274K. | Wikipedia/Invariant_differential_operator |
In mathematical analysis a pseudo-differential operator is an extension of the concept of differential operator. Pseudo-differential operators are used extensively in the theory of partial differential equations and quantum field theory, e.g. in mathematical models that include ultrametric pseudo-differential equations in a non-Archimedean space.
== History ==
The study of pseudo-differential operators began in the mid 1960s with the work of Kohn, Nirenberg, Hörmander, Unterberger and Bokobza.
They played an influential role in the second proof of the Atiyah–Singer index theorem via K-theory. Atiyah and Singer thanked Hörmander for assistance with understanding the theory of pseudo-differential operators.
== Motivation ==
=== Linear differential operators with constant coefficients ===
Consider a linear differential operator with constant coefficients,
{\displaystyle P(D):=\sum _{\alpha }a_{\alpha }\,D^{\alpha }}
which acts on smooth functions
{\displaystyle u}
with compact support in Rn.
This operator can be written as a composition of a Fourier transform, a simple multiplication by the
polynomial function (called the symbol)
{\displaystyle P(\xi )=\sum _{\alpha }a_{\alpha }\,\xi ^{\alpha },}
and an inverse Fourier transform, in the form:
{\displaystyle P(D)u(x)={\frac {1}{(2\pi )^{n}}}\int _{\mathbb {R} ^{n}}e^{ix\xi }P(\xi )\,{\hat {u}}(\xi )\,d\xi \qquad (1)}
Here,
{\displaystyle \alpha =(\alpha _{1},\ldots ,\alpha _{n})}
is a multi-index,
{\displaystyle a_{\alpha }}
are complex numbers, and
{\displaystyle D^{\alpha }=(-i\partial _{1})^{\alpha _{1}}\cdots (-i\partial _{n})^{\alpha _{n}}}
is an iterated partial derivative, where ∂j means differentiation with respect to the j-th variable. We introduce the constants
{\displaystyle -i}
to facilitate the calculation of Fourier transforms.
Derivation of formula (1)
The Fourier transform of a smooth function u, compactly supported in Rn, is
{\displaystyle {\hat {u}}(\xi ):=\int e^{-iy\xi }u(y)\,dy}
and Fourier's inversion formula gives
{\displaystyle u(x)={\frac {1}{(2\pi )^{n}}}\int e^{ix\xi }{\hat {u}}(\xi )d\xi ={\frac {1}{(2\pi )^{n}}}\iint e^{i(x-y)\xi }u(y)\,dy\,d\xi }
By applying P(D) to this representation of u and using
{\displaystyle P(D_{x})\,e^{i(x-y)\xi }=e^{i(x-y)\xi }\,P(\xi )}
one obtains formula (1).
=== Representation of solutions to partial differential equations ===
To solve the partial differential equation
{\displaystyle P(D)\,u=f}
we (formally) apply the Fourier transform on both sides and obtain the algebraic equation
{\displaystyle P(\xi )\,{\hat {u}}(\xi )={\hat {f}}(\xi ).}
If the symbol P(ξ) is never zero when ξ ∈ Rn, then it is possible to divide by P(ξ):
{\displaystyle {\hat {u}}(\xi )={\frac {1}{P(\xi )}}{\hat {f}}(\xi )}
By Fourier's inversion formula, a solution is
{\displaystyle u(x)={\frac {1}{(2\pi )^{n}}}\int e^{ix\xi }{\frac {1}{P(\xi )}}{\hat {f}}(\xi )\,d\xi .}
Here it is assumed that:
P(D) is a linear differential operator with constant coefficients,
its symbol P(ξ) is never zero,
both u and ƒ have a well defined Fourier transform.
The last assumption can be weakened by using the theory of distributions.
The first two assumptions can be weakened as follows.
In the last formula, write out the Fourier transform of ƒ to obtain
{\displaystyle u(x)={\frac {1}{(2\pi )^{n}}}\iint e^{i(x-y)\xi }{\frac {1}{P(\xi )}}f(y)\,dy\,d\xi .}
This is similar to formula (1), except that 1/P(ξ) is not a polynomial function, but a function of a more general kind.
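This Fourier-multiplier recipe is easy to try numerically. The sketch below is an illustration under assumptions: a 2π-periodic grid stands in for Rn, and the operator P(D) = −Δ + 1 is chosen so that its symbol |ξ|² + 1 never vanishes. It solves P(D)u = f by dividing by the symbol in Fourier space.

```python
import numpy as np

# Solve (-Laplacian + 1) u = f on a 2*pi-periodic grid via the symbol.
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.exp(np.cos(x)) * np.sin(2 * x)         # arbitrary smooth right-hand side

xi = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi   # integer frequencies
symbol = xi**2 + 1.0                          # P(xi) = |xi|^2 + 1, never zero

u = np.real(np.fft.ifft(np.fft.fft(f) / symbol))

# Check: applying the symbol back should reproduce f
residual = np.real(np.fft.ifft(symbol * np.fft.fft(u))) - f
print(np.max(np.abs(residual)))               # ~ machine precision
```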
== Definition of pseudo-differential operators ==
Here we view pseudo-differential operators as a generalization of differential operators.
We extend formula (1) as follows. A pseudo-differential operator P(x,D) on Rn is an operator whose value on the function u(x) is the function of x:
{\displaystyle P(x,D)u(x)={\frac {1}{(2\pi )^{n}}}\int _{\mathbb {R} ^{n}}e^{ix\xi }P(x,\xi )\,{\hat {u}}(\xi )\,d\xi }
where
{\displaystyle {\hat {u}}(\xi )}
is the Fourier transform of u and the symbol P(x,ξ) in the integrand belongs to a certain symbol class.
For instance, if P(x,ξ) is an infinitely differentiable function on Rn × Rn with the property
{\displaystyle |\partial _{\xi }^{\alpha }\partial _{x}^{\beta }P(x,\xi )|\leq C_{\alpha ,\beta }\,(1+|\xi |)^{m-|\alpha |}}
for all x,ξ ∈Rn, all multiindices α,β, some constants Cα, β and some real number m, then P belongs to the symbol class
{\displaystyle \scriptstyle {S_{1,0}^{m}}}
of Hörmander. The corresponding operator P(x,D) is called a pseudo-differential operator of order m and belongs to the class
{\displaystyle \Psi _{1,0}^{m}.}
== Properties ==
Linear differential operators of order m with smooth bounded coefficients are pseudo-differential
operators of order m.
The composition PQ of two pseudo-differential operators P, Q is again a pseudo-differential operator and the symbol of PQ can be calculated by using the symbols of P and Q. The adjoint and transpose of a pseudo-differential operator is a pseudo-differential operator.
If a differential operator of order m is (uniformly) elliptic (of order m)
and invertible, then its inverse is a pseudo-differential operator of order −m, and its symbol can be calculated. This means that one can solve linear elliptic differential equations more or less explicitly
by using the theory of pseudo-differential operators.
Differential operators are local in the sense that one only needs the value of a function in a neighbourhood of a point to determine the effect of the operator. Pseudo-differential operators are pseudo-local, which means informally that when applied to a distribution they do not create a singularity at points where the distribution was already smooth.
Just as a differential operator can be expressed in terms of D = −id/dx in the form
{\displaystyle p(x,D)\,}
for a polynomial p in D (which is called the symbol), a pseudo-differential operator has a symbol in a more general class of functions. Often one can reduce a problem in analysis of pseudo-differential operators to a sequence of algebraic problems involving their symbols, and this is the essence of microlocal analysis.
== Kernel of pseudo-differential operator ==
Pseudo-differential operators can be represented by kernels. The singularity of the kernel on the diagonal depends on the degree of the corresponding operator. In fact, if the symbol satisfies the above differential inequalities with m ≤ 0, it can be shown that the kernel is a singular integral kernel.
== See also ==
Differential algebra for a definition of pseudo-differential operators in the context of differential algebras and differential rings.
Fourier transform
Fourier integral operator
Oscillatory integral operator
Sato's fundamental theorem
Operational calculus
== Footnotes ==
== References ==
Stein, Elias (1993), Harmonic Analysis: Real-Variable Methods, Orthogonality and Oscillatory Integrals, Princeton University Press.
Atiyah, Michael F.; Singer, Isadore M. (1968), "The Index of Elliptic Operators I", Annals of Mathematics, 87 (3): 484–530, doi:10.2307/1970715, JSTOR 1970715
== Further reading ==
Nicolas Lerner, Metrics on the phase space and non-selfadjoint pseudo-differential operators. Pseudo-Differential Operators. Theory and Applications, 3. Birkhäuser Verlag, Basel, 2010.
Michael E. Taylor, Pseudodifferential Operators, Princeton Univ. Press 1981. ISBN 0-691-08282-0
M. A. Shubin, Pseudodifferential Operators and Spectral Theory, Springer-Verlag 2001. ISBN 3-540-41195-X
Francois Treves, Introduction to Pseudo Differential and Fourier Integral Operators, (University Series in Mathematics), Plenum Publ. Co. 1981. ISBN 0-306-40404-4
F. G. Friedlander and M. Joshi, Introduction to the Theory of Distributions, Cambridge University Press 1999. ISBN 0-521-64971-4
Hörmander, Lars (1987). The Analysis of Linear Partial Differential Operators III: Pseudo-Differential Operators. Springer. ISBN 3-540-49937-7.
André Unterberger, Pseudo-differential operators and applications: an introduction. Lecture Notes Series, 46. Aarhus Universitet, Matematisk Institut, Aarhus, 1976.
== External links ==
Lectures on Pseudo-differential Operators by Mark S. Joshi on arxiv.org.
"Pseudo-differential operator", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Pseudo-differential_operator |
In quantum mechanics, energy is defined in terms of the energy operator, acting on the wave function of the system as a consequence of time translation symmetry.
== Definition ==
It is given by:
{\displaystyle {\hat {E}}=i\hbar {\frac {\partial }{\partial t}}}
It acts on the wave function (the probability amplitude for different configurations of the system)
{\displaystyle \Psi \left(\mathbf {r} ,t\right)}.
== Application ==
The energy operator corresponds to the full energy of a system. The Schrödinger equation describes the space- and time-dependence of the slow changing (non-relativistic) wave function of a quantum system. The solution of the Schrödinger equation for a bound system is discrete (a set of permitted states, each characterized by an energy level) which results in the concept of quanta.
=== Schrödinger equation ===
Using the energy operator in the Schrödinger equation:
{\displaystyle i\hbar {\frac {\partial }{\partial t}}\Psi (\mathbf {r} ,\,t)={\hat {H}}\Psi (\mathbf {r} ,t)}
one obtains:
{\displaystyle {\hat {E}}\Psi (\mathbf {r} ,t)={\hat {H}}\Psi (\mathbf {r} ,t)}
where i is the imaginary unit, ħ is the reduced Planck constant, and
{\displaystyle {\hat {H}}}
is the Hamiltonian operator expressed as:
{\displaystyle {\hat {H}}=-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V(x).}
From the equation, the equality can be made:
{\textstyle \langle E\rangle =\langle {\hat {H}}\rangle }
, where
{\textstyle \langle E\rangle }
is the expectation value of energy.
==== Properties ====
It can be shown that the expectation value of energy will always be greater than or equal to the minimum potential of the system.
Consider computing the expectation value of kinetic energy:
{\displaystyle {\begin{aligned}KE&=-{\frac {\hbar ^{2}}{2m}}\int _{-\infty }^{+\infty }\psi ^{*}\left({\frac {d^{2}\psi }{dx^{2}}}\right)\,dx\\&=-{\frac {\hbar ^{2}}{2m}}\left({\left[\psi '(x)\psi ^{*}(x)\right]_{-\infty }^{+\infty }}-\int _{-\infty }^{+\infty }\left({\frac {d\psi }{dx}}\right)\left({\frac {d\psi }{dx}}\right)^{*}\,dx\right)\\&={\frac {\hbar ^{2}}{2m}}\int _{-\infty }^{+\infty }\left|{\frac {d\psi }{dx}}\right|^{2}\,dx\geq 0\end{aligned}}}
Hence the expectation value of kinetic energy is always non-negative. This result can be used with the linearity condition to calculate the expectation value of the total energy which is given for a normalized wavefunction as:
{\displaystyle E=KE+\langle V(x)\rangle =KE+\int _{-\infty }^{+\infty }V(x)|\psi (x)|^{2}\,dx\geq V_{\text{min}}(x)\int _{-\infty }^{+\infty }|\psi (x)|^{2}\,dx\geq V_{\text{min}}(x)}
which completes the proof. The same argument generalizes to higher dimensions.
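A numerical illustration of this bound, with assumed units ħ = m = 1, an arbitrary trial wavefunction, and an assumed potential (none of which come from the text above), is straightforward:

```python
import numpy as np

# hbar = m = 1 (assumed units); arbitrary normalized trial state.
x = np.linspace(-6, 6, 2000)
dx = x[1] - x[0]
psi = (1 + 0.5 * x) * np.exp(-(x - 0.3) ** 2)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

V = 0.25 * x**4 - x**2                      # assumed potential with V_min < 0
dpsi = np.gradient(psi, dx)

KE = 0.5 * np.sum(np.abs(dpsi) ** 2) * dx   # (hbar^2/2m) * int |dpsi/dx|^2 dx
E = KE + np.sum(V * np.abs(psi) ** 2) * dx

print(KE >= 0.0, E >= V.min())              # True True
```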
==== Constant energy ====
Working from the definition, a partial solution for a wavefunction of a particle with a constant energy can be constructed. If the wavefunction is assumed to be separable, then the time dependence can be stated as
{\displaystyle e^{-iEt/\hbar }}
, where E is the constant energy. In full,
{\displaystyle \Psi (\mathbf {r} ,t)=\psi (\mathbf {r} )e^{-iEt/\hbar }}
where
{\displaystyle \psi (\mathbf {r} )}
is the partial solution of the wavefunction dependent on position. Applying the energy operator, we have
{\displaystyle {\hat {E}}\Psi (\mathbf {r} ,t)=i\hbar {\frac {\partial }{\partial t}}\psi (\mathbf {r} )e^{-iEt/\hbar }=i\hbar \left({\frac {-iE}{\hbar }}\right)\psi (\mathbf {r} )e^{-iEt/\hbar }=E\psi (\mathbf {r} )e^{-iEt/\hbar }=E\Psi (\mathbf {r} ,t).}
This is also known as the stationary state, and can be used to analyse the time-independent Schrödinger equation:
{\displaystyle E\Psi (\mathbf {r} ,t)={\hat {H}}\Psi (\mathbf {r} ,t)}
where E is an eigenvalue of energy.
=== Klein–Gordon equation ===
The relativistic mass-energy relation:
{\displaystyle E^{2}=(pc)^{2}+(mc^{2})^{2}}
where again E = total energy, p = total 3-momentum of the particle, m = invariant mass, and c = speed of light, can similarly yield the Klein–Gordon equation:
{\displaystyle {\begin{aligned}&{\hat {E}}^{2}=c^{2}{\hat {p}}^{2}+(mc^{2})^{2}\\&{\hat {E}}^{2}\Psi =c^{2}{\hat {p}}^{2}\Psi +(mc^{2})^{2}\Psi \\\end{aligned}}}
where
{\displaystyle {\hat {p}}}
is the momentum operator. That is:
{\displaystyle {\frac {\partial ^{2}\Psi }{\partial t^{2}}}=c^{2}\nabla ^{2}\Psi -\left({\frac {mc^{2}}{\hbar }}\right)^{2}\Psi }
== Derivation ==
The energy operator is easily derived by using the free-particle wave function (the plane-wave solution to Schrödinger's equation). Starting in one dimension, the wave function is
{\displaystyle \Psi =e^{i(kx-\omega t)}}
The time derivative of Ψ is
{\displaystyle {\frac {\partial \Psi }{\partial t}}=-i\omega e^{i(kx-\omega t)}=-i\omega \Psi .}
By the De Broglie relation:
{\displaystyle E=\hbar \omega ,}
we have
{\displaystyle {\frac {\partial \Psi }{\partial t}}=-i{\frac {E}{\hbar }}\Psi .}
Re-arranging the equation leads to
{\displaystyle E\Psi =i\hbar {\frac {\partial \Psi }{\partial t}},}
where the energy factor E is a scalar value, the energy the particle has and the value that is measured. The partial derivative is a linear operator so this expression is the operator for energy:
{\displaystyle {\hat {E}}=i\hbar {\frac {\partial }{\partial t}}.}
It can be concluded that the scalar E is the eigenvalue of the operator, while
{\displaystyle {\hat {E}}}
is the operator. Summarizing these results:
{\displaystyle {\hat {E}}\Psi =i\hbar {\frac {\partial }{\partial t}}\Psi =E\Psi }
For a 3-d plane wave
{\displaystyle \Psi =e^{i(\mathbf {k} \cdot \mathbf {r} -\omega t)}}
the derivation is exactly identical, since no change is made to the term containing time, and hence none to the time derivative. Since the operator is linear, these results are valid for any linear combination of plane waves, and so they can act on any wave function without affecting its properties or those of the operators. Hence this must be true for any wave function. It turns out to work even in relativistic quantum mechanics, such as in the Klein–Gordon equation above.
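The eigenvalue relation above can be verified directly with a short symbolic sketch; ħ is kept symbolic, and the wavenumber and frequency are arbitrary:

```python
import sympy as sp

x, t, k, omega, hbar = sp.symbols('x t k omega hbar', real=True, positive=True)
Psi = sp.exp(sp.I * (k * x - omega * t))     # free-particle plane wave

E_applied = sp.I * hbar * sp.diff(Psi, t)    # apply E_hat = i hbar d/dt
print(sp.simplify(E_applied / Psi))          # hbar*omega, the eigenvalue E
```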
== See also ==
Time translation symmetry
Planck constant
Schrödinger equation
Momentum operator
Hamiltonian (quantum mechanics)
Conservation of energy
Complex number
Stationary state
== References == | Wikipedia/Energy_operator |
In mathematical finance, the Black–Scholes equation, also called the Black–Scholes–Merton equation, is a partial differential equation (PDE) governing the price evolution of derivatives under the Black–Scholes model. Broadly speaking, the term may refer to a similar PDE that can be derived for a variety of options, or more generally, derivatives.
Consider a stock paying no dividends. Now construct any derivative that has a fixed maturation time
{\displaystyle T}
in the future, and at maturation, it has payoff
{\displaystyle K(S_{T})}
that depends on the values taken by the stock at that moment (such as European call or put options). Then the price of the derivative satisfies
{\displaystyle {\begin{cases}{\frac {\partial V}{\partial t}}+{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}+rS{\frac {\partial V}{\partial S}}-rV=0\\V(T,s)=K(s)\quad \forall s\end{cases}}}
where
{\displaystyle V(t,S)}
is the price of the option as a function of stock price S and time t, r is the risk-free interest rate, and
{\displaystyle \sigma }
is the volatility of the stock.
The key financial insight behind the equation is that, under the model assumption of a frictionless market, one can perfectly hedge the option by buying and selling the underlying asset in just the right way and consequently "eliminate risk". This hedge, in turn, implies that there is only one right price for the option, as returned by the Black–Scholes formula.
== Financial interpretation ==
The equation has a concrete interpretation that is often used by practitioners and is the basis for the common derivation given in the next subsection. The equation can be rewritten in the form:
{\displaystyle {\frac {\partial V}{\partial t}}+{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}=rV-rS{\frac {\partial V}{\partial S}}}
The left-hand side consists of a "time decay" term, the change in derivative value with respect to time, called theta, and a term involving the second spatial derivative gamma, the convexity of the derivative value with respect to the underlying value. The right-hand side is the riskless return from a long position in the derivative and a short position consisting of
{\textstyle {\partial V}/{\partial S}}
shares of the underlying asset.
Black and Scholes' insight was that the portfolio represented by the right-hand side is riskless: thus the equation says that the riskless return over any infinitesimal time interval can be expressed as the sum of theta and a term incorporating gamma. For an option, theta is typically negative, reflecting the loss in value due to having less time for exercising the option (for a European call on an underlying without dividends, it is always negative). Gamma is typically positive and so the gamma term reflects the gains in holding the option. The equation states that over any infinitesimal time interval the loss from theta and the gain from the gamma term must offset each other so that the result is a return at the riskless rate.
From the viewpoint of the option issuer, e.g. an investment bank, the gamma term is the cost of hedging the option. (Since gamma is the greatest when the spot price of the underlying is near the strike price of the option, the seller's hedging costs are the greatest in that circumstance.)
== Derivation ==
Per the model assumptions above, the price of the underlying asset (typically a stock) follows a geometric Brownian motion. That is
{\displaystyle dS=\mu S\,dt+\sigma S\,dW\,}
where W is a stochastic variable (Brownian motion). Note that W, and consequently its infinitesimal increment dW, represents the only source of uncertainty in the price history of the stock. Intuitively, W(t) is a process that "wiggles up and down" in such a random way that its expected change over any time interval is 0. (In addition, its variance over time T is equal to T; see Wiener process § Basic properties); a good discrete analogue for W is a simple random walk. Thus the above equation states that the infinitesimal rate of return on the stock has an expected value of μ dt and a variance of
{\displaystyle \sigma ^{2}dt}
.
The payoff of an option (or any derivative contingent to stock S)
{\displaystyle V(S,T)}
at maturity is known. To find its value at an earlier time we need to know how
{\displaystyle V}
evolves as a function of
{\displaystyle S}
and
{\displaystyle t}
. By Itô's lemma for two variables we have
{\displaystyle dV=\left(\mu S{\frac {\partial V}{\partial S}}+{\frac {\partial V}{\partial t}}+{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}\right)dt+\sigma S{\frac {\partial V}{\partial S}}\,dW}
Now consider a portfolio
{\displaystyle \Pi }
consisting of a short option and
{\textstyle {\partial V}/{\partial S}}
long shares at time
{\displaystyle t}
. The value of these holdings is
{\displaystyle \Pi =-V+{\frac {\partial V}{\partial S}}S}
As
{\displaystyle {\frac {\partial V}{\partial S}}}
changes with time, the position in
{\displaystyle S}
is continually updated. We implicitly assume that the portfolio contains a cash account to accommodate buying and selling shares
{\displaystyle S}
, making the portfolio self-financing. Therefore, we only need to consider the total profit or loss from changes in the values of the holdings:
{\displaystyle d\Pi =-dV+{\frac {\partial V}{\partial S}}dS}
Substituting
{\displaystyle dS}
and
{\displaystyle dV}
into the expression for
{\displaystyle d\Pi }
:
{\displaystyle d\Pi =\left(-{\frac {\partial V}{\partial t}}-{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}\right)dt}
Over a time period
{\displaystyle [t,t+\Delta t]}
, for
{\displaystyle \Delta t}
small enough, we see that
{\displaystyle \Delta \Pi =\left(-{\frac {\partial V}{\partial t}}-{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}\right)\Delta t}
Note that the
{\displaystyle dW}
terms have vanished. Thus uncertainty has been eliminated and the portfolio is effectively riskless, i.e. a delta-hedge. The rate of return on this portfolio must be equal to the rate of return on any other riskless instrument; otherwise, there would be opportunities for arbitrage. Now assuming the risk-free rate of return is
{\displaystyle r}
we must have over the time period
{\displaystyle [t,t+\Delta t]}
:
{\displaystyle \Delta \Pi =r\Pi \,\Delta t}
If we now substitute our formulas for
{\displaystyle \Delta \Pi }
and
{\displaystyle \Pi }
we obtain:
{\displaystyle \left(-{\frac {\partial V}{\partial t}}-{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}\right)\Delta t=r\left(-V+S{\frac {\partial V}{\partial S}}\right)\Delta t}
Simplifying, we arrive at the Black–Scholes partial differential equation:
{\displaystyle {\frac {\partial V}{\partial t}}+rS{\frac {\partial V}{\partial S}}+{\frac {1}{2}}\sigma ^{2}S^{2}{\frac {\partial ^{2}V}{\partial S^{2}}}=rV}
With the assumptions of the Black–Scholes model, this second order partial differential equation holds for any type of option as long as its price function
{\displaystyle V}
is twice differentiable with respect to
{\displaystyle S}
and once with respect to
{\displaystyle t}
.
=== Alternative derivation ===
Here is an alternative derivation that can be utilized in situations where it is initially unclear what the hedging portfolio should be. (For a reference, see 6.4 of Shreve vol II).
In the Black–Scholes model, assuming we have picked the risk-neutral probability measure, the underlying stock price S(t) is assumed to evolve as a geometric Brownian motion:
{\displaystyle {\frac {dS(t)}{S(t)}}=r\ dt+\sigma dW(t)}
Since this stochastic differential equation (SDE) shows the stock price evolution is Markovian, any derivative on this underlying is a function of time t and the stock price at the current time, S(t). Then an application of Itô's lemma gives an SDE for the discounted derivative process
{\displaystyle e^{-rt}V(t,S(t))}
, which should be a martingale. In order for that to hold, the drift term must be zero, which implies the Black–Scholes PDE.
This derivation is basically an application of the Feynman–Kac formula and can be attempted whenever the underlying asset(s) evolve according to given SDE(s).
== Solving methods ==
Once the Black–Scholes PDE, with boundary and terminal conditions, is derived for a derivative, the PDE can be solved numerically using standard methods of numerical analysis, such as a type of finite difference method. In certain cases, it is possible to solve for an exact formula, such as in the case of a European call, which was done by Black and Scholes.
The solution is conceptually simple. Since in the Black–Scholes model, the underlying stock price
{\displaystyle S_{t}}
follows a geometric Brownian motion, the distribution of
{\displaystyle S_{T}}
, conditional on its price
{\displaystyle S_{t}}
at time
{\displaystyle t}
, is a log-normal distribution. Then the price of the derivative is just discounted expected payoff
{\displaystyle E[e^{-r(T-t)}K(S_{T})|S_{t}]}
, which may be computed analytically when the payoff function
{\displaystyle K}
is analytically tractable, or numerically if not.
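A sketch of this risk-neutral pricing recipe (an illustration; the parameter values are arbitrary assumptions) prices a European call by Monte Carlo over the lognormal S_T and compares the result with the Black–Scholes closed form:

```python
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0   # assumed parameters

# Monte Carlo: S_T is lognormal under the risk-neutral measure
rng = np.random.default_rng(0)
Z = rng.standard_normal(2_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
mc_price = np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

# Black-Scholes closed form for the same call
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(mc_price, bs_price)   # the two prices agree to Monte Carlo error
```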
To do this for a call option, recall the PDE above has boundary conditions
{\displaystyle {\begin{aligned}C(0,t)&=0{\text{ for all }}t\\C(S,t)&\sim S-Ke^{-r(T-t)}{\text{ as }}S\rightarrow \infty \\C(S,T)&=\max\{S-K,0\}\end{aligned}}}
The last condition gives the value of the option at the time that the option matures. Other conditions are possible as S goes to 0 or infinity. For example, common conditions utilized in other situations are to choose delta to vanish as S goes to 0 and gamma to vanish as S goes to infinity; these will give the same formula as the conditions above (in general, differing boundary conditions will give different solutions, so some financial insight should be utilized to pick suitable conditions for the situation at hand).
The solution of the PDE gives the value of the option at any earlier time,
{\displaystyle \mathbb {E} \left[\max\{S-K,0\}\right]}
. To solve the PDE we recognize that it is a Cauchy–Euler equation which can be transformed into a diffusion equation by introducing the change-of-variable transformation
{\displaystyle {\begin{aligned}\tau &=T-t\\u&=Ce^{r\tau }\\x&=\ln \left({\frac {S}{K}}\right)+\left(r-{\frac {1}{2}}\sigma ^{2}\right)\tau \end{aligned}}}
Then the Black–Scholes PDE becomes a diffusion equation
{\displaystyle {\frac {\partial u}{\partial \tau }}={\frac {1}{2}}\sigma ^{2}{\frac {\partial ^{2}u}{\partial x^{2}}}}
The terminal condition
{\displaystyle C(S,T)=\max\{S-K,0\}}
now becomes an initial condition
{\displaystyle u(x,0)=u_{0}(x):=K(e^{\max\{x,0\}}-1)=K\left(e^{x}-1\right)H(x),}
where H(x) is the Heaviside step function. The Heaviside function corresponds to enforcement of the boundary data in the S, t coordinate system that requires when t = T,
{\displaystyle C(S,\,T)=0\quad \forall \;S<K,}
assuming both S, K > 0. With this assumption, it is equivalent to the max function over all x in the real numbers, with the exception of x = 0. The equality above between the max function and the Heaviside function is in the sense of distributions because it does not hold for x = 0. Though subtle, this is important because the Heaviside function need not be finite at x = 0, or even defined for that matter. For more on the value of the Heaviside function at x = 0, see the section "Zero Argument" in the article Heaviside step function.
Using the standard convolution method for solving a diffusion equation given an initial value function, u(x, 0), we have
{\displaystyle u(x,\tau )={\frac {1}{\sigma {\sqrt {2\pi \tau }}}}\int _{-\infty }^{\infty }{u_{0}(y)\exp {\left[-{\frac {(x-y)^{2}}{2\sigma ^{2}\tau }}\right]}}dy,}
which, after some manipulation, yields
{\displaystyle u(x,\tau )=Ke^{x+{\frac {1}{2}}\sigma ^{2}\tau }N(d_{+})-KN(d_{-}),}
where
{\displaystyle N(\cdot )}
is the standard normal cumulative distribution function and
{\displaystyle {\begin{aligned}d_{+}&={\frac {1}{\sigma {\sqrt {\tau }}}}\left[\left(x+{\frac {1}{2}}\sigma ^{2}\tau \right)+{\frac {1}{2}}\sigma ^{2}\tau \right]\\d_{-}&={\frac {1}{\sigma {\sqrt {\tau }}}}\left[\left(x+{\frac {1}{2}}\sigma ^{2}\tau \right)-{\frac {1}{2}}\sigma ^{2}\tau \right].\end{aligned}}}
These are the same solutions (up to time translation) that were obtained by Fischer Black in 1976.
Reverting
{\displaystyle u,x,\tau }
to the original set of variables yields the above stated solution to the Black–Scholes equation.
The asymptotic condition can now be realized.
{\displaystyle u(x,\,\tau ){\overset {x\rightsquigarrow \infty }{\asymp }}Ke^{x},}
which gives simply S when reverting to the original coordinates.
{\displaystyle \lim _{x\to \infty }N(x)=1.}
== See also ==
Bachelier model - uses arithmetic Brownian motion instead of geometric
== References == | Wikipedia/Black–Scholes_equation |
Regularity is a topic of the mathematical study of partial differential equations (PDE) such as Laplace's equation, about the integrability and differentiability of weak solutions. Hilbert's nineteenth problem was concerned with this concept.
The motivation for this study is as follows. It is often difficult to construct a classical solution satisfying the PDE in regular sense, so we search for a weak solution at first, and then find out whether the weak solution is smooth enough to be qualified as a classical solution.
Several theorems have been proposed for different types of PDEs.
== Elliptic regularity theory ==
Let
{\displaystyle U}
be an open, bounded subset of
{\displaystyle \mathbb {R} ^{n}}
, denote its boundary as
{\displaystyle \partial U}
and the variables as
{\displaystyle x=(x_{1},...,x_{n})}
. Representing the PDE as a partial differential operator
{\displaystyle L}
acting on an unknown function
{\displaystyle u=u(x)}
of
{\displaystyle x\in U}
results in a BVP of the form
{\displaystyle \left\{{\begin{aligned}Lu&=f&&{\text{in }}U\\u&=0&&{\text{on }}\partial U,\end{aligned}}\right.}
where
{\displaystyle f:U\rightarrow \mathbb {R} }
is a given function
{\displaystyle f=f(x)}
and
{\displaystyle u:U\cup \partial U\rightarrow \mathbb {R} }
and the elliptic operator
{\displaystyle L}
is of the divergence form:
{\displaystyle Lu(x)=-\sum _{i,j=1}^{n}(a_{ij}(x)u_{x_{i}})_{x_{j}}+\sum _{i=1}^{n}b_{i}(x)u_{x_{i}}(x)+c(x)u(x),}
then
Interior regularity: If m is a natural number,
{\displaystyle a^{ij},b^{j},c\in C^{m+1}(U),\ f\in H^{m}(U)\qquad (2)}
and
{\displaystyle u\in H_{0}^{1}(U)}
is a weak solution, then for any open set V in U with compact closure,
{\displaystyle \|u\|_{H^{m+2}(V)}\leq C(\|f\|_{H^{m}(U)}+\|u\|_{L^{2}(U)})\qquad (3)}
where C depends on U, V, L and m. In particular,
{\displaystyle u\in H_{loc}^{m+2}(U)}
, which also holds if m is infinity by the Sobolev embedding theorem.
Boundary regularity: (2) together with the assumption that
{\displaystyle \partial U}
is
{\displaystyle C^{m+2}}
indicates that (3) still holds after replacing V with U, i.e.
{\displaystyle u\in H^{m+2}(U)}
, which also holds if m is infinity.
== Parabolic and Hyperbolic regularity theory ==
Parabolic and hyperbolic PDEs describe the time evolution of a quantity u governed by an elliptic operator L and an external force f over a space
{\displaystyle U\subset \mathbb {R} ^{n}}
. We assume the boundary of U to be smooth, and the elliptic operator to be independent of time, with smooth coefficients, i.e.
{\displaystyle Lu(t,x)=-\sum _{i,j=1}^{n}{\big (}a_{ij}(x)u_{x_{i}}(t,x){\big )}_{x_{j}}+\sum _{i=1}^{n}b_{i}(x)u_{x_{i}}(t,x)+c(x)u(t,x).}
In addition, we prescribe the boundary value of u to be 0.
Then the regularity of the solution is given by the following table,
where m is a natural number,
{\displaystyle x\in U}
denotes the space variable, t denotes the time variable, Hs is a Sobolev space of functions with square-integrable weak derivatives, and LtpX is the Bochner space of integrable X-valued functions.
== Counterexamples ==
Not every weak solution is smooth; for example, there may be discontinuities in the weak solutions of conservation laws called shock waves.
== References == | Wikipedia/Regularity_theory |
The homotopy analysis method (HAM) is a semi-analytical technique to solve nonlinear ordinary/partial differential equations. The homotopy analysis method employs the concept of the homotopy from topology to generate a convergent series solution for nonlinear systems. This is enabled by utilizing a homotopy-Maclaurin series to deal with the nonlinearities in the system.
The HAM was first devised in 1992 by Liao Shijun of Shanghai Jiaotong University in his PhD dissertation and further modified in 1997 to introduce a non-zero auxiliary parameter, referred to as the convergence-control parameter, c0, to construct a homotopy on a differential system in general form. The convergence-control parameter is a non-physical variable that provides a simple way to verify and enforce convergence of a solution series. The capability of the HAM to naturally show convergence of the series solution is unusual in analytical and semi-analytic approaches to nonlinear partial differential equations.
== Characteristics ==
The HAM distinguishes itself from various other analytical methods in four important aspects. First, it is a series expansion method that is not directly dependent on small or large physical parameters. Thus, it is applicable for not only weakly but also strongly nonlinear problems, going beyond some of the inherent limitations of the standard perturbation methods. Second, the HAM is a unified method for the Lyapunov artificial small parameter method, the delta expansion method, the Adomian decomposition method, and the homotopy perturbation method. The greater generality of the method often allows for strong convergence of the solution over larger spatial and parameter domains. Third, the HAM gives excellent flexibility in the expression of the solution and how the solution is explicitly obtained. It provides great freedom to choose the basis functions of the desired solution and the corresponding auxiliary linear operator of the homotopy. Finally, unlike the other analytic approximation techniques, the HAM provides a simple way to ensure the convergence of the solution series.
The homotopy analysis method is also able to combine with other techniques employed in nonlinear differential equations such as spectral methods and Padé approximants. It may further be combined with computational methods, such as the boundary element method to allow the linear method to solve nonlinear systems. Different from the numerical technique of homotopy continuation, the homotopy analysis method is an analytic approximation method as opposed to a discrete computational method. Further, the HAM uses the homotopy parameter only on a theoretical level to demonstrate that a nonlinear system may be split into an infinite set of linear systems which are solved analytically, while the continuation methods require solving a discrete linear system as the homotopy parameter is varied to solve the nonlinear system.
== Applications ==
In the last twenty years, the HAM has been applied to solve a growing number of nonlinear ordinary/partial differential equations in science, finance, and engineering.
For example, multiple steady-state resonant waves in deep and finite water depth were found with the wave resonance criterion of arbitrary number of traveling gravity waves; this agreed with Phillips' criterion for four waves with small amplitude. Further, a unified wave model applied with the HAM, admits not only the traditional smooth progressive periodic/solitary waves, but also the progressive solitary waves with peaked crest in finite water depth. This model shows peaked solitary waves are consistent solutions along with the known smooth ones. Additionally, the HAM has been applied to many other nonlinear problems such as nonlinear heat transfer, the limit cycle of nonlinear dynamic systems, the American put option, the exact Navier–Stokes equation, the option pricing under stochastic volatility, the electrohydrodynamic flows, the Poisson–Boltzmann equation for semiconductor devices, and others.
== Brief mathematical description ==
Consider a general nonlinear differential equation
{\displaystyle {\mathcal {N}}[u(x)]=0,}
where
{\displaystyle {\mathcal {N}}}
is a nonlinear operator. Let
{\displaystyle {\mathcal {L}}}
denote an auxiliary linear operator, u0(x) an initial guess of u(x), and c0 a constant (called the convergence-control parameter), respectively. Using the embedding parameter q ∈ [0,1] from homotopy theory, one may construct a family of equations,
{\displaystyle (1-q){\mathcal {L}}[U(x;q)-u_{0}(x)]=c_{0}\,q\,{\mathcal {N}}[U(x;q)],}
called the zeroth-order deformation equation, whose solution varies continuously with respect to the embedding parameter q ∈ [0,1]. This is the linear equation
{\displaystyle {\mathcal {L}}[U(x;q)-u_{0}(x)]=0,}
with known initial guess U(x; 0) = u0(x) when q = 0, but is equivalent to the original nonlinear equation
{\displaystyle {\mathcal {N}}[u(x)]=0}
when q = 1, i.e. U(x; 1) = u(x).
Expanding U(x; q) in a Taylor series about q = 0, we have the homotopy-Maclaurin series
{\displaystyle U(x;q)=u_{0}(x)+\sum _{m=1}^{\infty }u_{m}(x)\,q^{m}.}
Assuming that the so-called convergence-control parameter c0 of the zeroth-order deformation equation is properly chosen so that the above series is convergent at q = 1, we have the homotopy-series solution
{\displaystyle u(x)=u_{0}(x)+\sum _{m=1}^{\infty }u_{m}(x).}
From the zeroth-order deformation equation, one can directly derive the governing equation of um(x)
{\displaystyle {\mathcal {L}}[u_{m}(x)-\chi _{m}u_{m-1}(x)]=c_{0}\,R_{m}[u_{0},u_{1},\ldots ,u_{m-1}],}
called the mth-order deformation equation, where
{\displaystyle \chi _{1}=0}
and
{\displaystyle \chi _{k}=1}
for k > 1, and the right-hand side Rm is dependent only upon the known results u0, u1, ..., um − 1 and can be obtained easily using computer algebra software. In this way, the original nonlinear equation is transferred into an infinite number of linear ones, but without the assumption of any small/large physical parameters.
Since the HAM is based on a homotopy, one has great freedom to choose the initial guess u0(x), the auxiliary linear operator
{\displaystyle {\mathcal {L}}}
, and the convergence-control parameter c0 in the zeroth-order deformation equation. Thus, the HAM provides the mathematician freedom to choose the equation-type of the high-order deformation equation and the base functions of its solution. The optimal value of the convergence-control parameter c0 is determined by the minimum of the squared residual error of governing equations and/or boundary conditions after the general form has been solved for the chosen initial guess and linear operator. Thus, the convergence-control parameter c0 is a simple way to guarantee the convergence of the homotopy series solution and differentiates the HAM from other analytic approximation methods. The method overall gives a useful generalization of the concept of homotopy.
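A minimal symbolic sketch of this recursion is given below. Everything in it is an illustrative assumption rather than a canonical choice: the equation N[u] = u' + u² − 1 = 0 with u(0) = 0 (exact solution tanh t), the initial guess u0 = 0, the linear operator L = d/dt (inverted by integrating from 0 to t so that u_m(0) = 0), and the value c0 = −1.

```python
import sympy as sp

t, q, c0 = sp.symbols('t q c0')
M = 6                                   # truncation order of the homotopy series

def N(expr):                            # N[u] = u' + u^2 - 1 = 0, u(0) = 0
    return sp.diff(expr, t) + expr**2 - 1

u = [sp.Integer(0)]                     # initial guess u0(t) = 0
for m in range(1, M + 1):
    U = sum(u[k] * q**k for k in range(len(u)))
    # R_m = (1/(m-1)!) d^{m-1}/dq^{m-1} N[U] at q = 0
    Rm = sp.diff(N(U), q, m - 1).subs(q, 0) / sp.factorial(m - 1)
    chi = 0 if m == 1 else 1
    # mth-order deformation equation with L = d/dt, u_m(0) = 0
    um = chi * u[m - 1] + c0 * sp.integrate(Rm, (t, 0, t))
    u.append(sp.expand(um))

print(sp.expand(sum(u).subs(c0, -1)))   # t - t**3/3 + 2*t**5/15 - ...: tanh(t)
```

For this simple problem the choice c0 = −1 already reproduces the Maclaurin series of tanh t; for stiffer problems one would instead tune c0 to minimize the squared residual, as described above.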
== The HAM and computer algebra ==
The HAM is an analytic approximation method designed for the computer era with the goal of "computing with functions instead of numbers." In conjunction with a computer algebra system such as Mathematica or Maple, one can gain analytic approximations of a highly nonlinear problem to arbitrarily high order by means of the HAM in only a few seconds. Inspired by the recent successful applications of the HAM in different fields, a Mathematica package based on the HAM, called BVPh, has been made available online for solving nonlinear boundary-value problems [4]. BVPh is a solver package for highly nonlinear ODEs with singularities, multiple solutions, and multipoint boundary conditions in either a finite or an infinite interval, and includes support for certain types of nonlinear PDEs. Another HAM-based Mathematica code, APOh, has been produced to solve for an explicit analytic approximation of the optimal exercise boundary of American put option, which is also available online [5].
== Frequency response analysis for nonlinear oscillators ==
The HAM has recently been reported to be useful for obtaining analytical solutions for nonlinear frequency response equations. Such solutions are able to capture various nonlinear behaviors such as hardening-type, softening-type or mixed behaviors of the oscillator. These analytical equations are also useful in prediction of chaos in nonlinear systems.
== References ==
== External links ==
http://numericaltank.sjtu.edu.cn/BVPh.htm
http://numericaltank.sjtu.edu.cn/APO.htm | Wikipedia/Homotopy_analysis_method |
The Adomian decomposition method (ADM) is a semi-analytical method for solving ordinary and partial nonlinear differential equations. The method was developed from the 1970s to the 1990s by George Adomian, chair of the Center for Applied Mathematics at the University of Georgia.
It is further extensible to stochastic systems by using the Ito integral.
The aim of this method is towards a unified theory for the solution of partial differential equations (PDE); an aim which has been superseded by the more general theory of the homotopy analysis method.
The crucial aspect of the method is employment of the "Adomian polynomials" which allow for solution convergence of the nonlinear portion of the equation, without simply linearizing the system. These polynomials mathematically generalize to a Maclaurin series about an arbitrary external parameter; which gives the solution method more flexibility than direct Taylor series expansion.
== Ordinary differential equations ==
Adomian method is well suited to solve Cauchy problems, an important class of problems which include initial conditions problems.
=== Application to a first order nonlinear system ===
An example of initial condition problem for an ordinary differential equation is the following:
{\displaystyle y^{\prime }(t)+y^{2}(t)=-1,}
{\displaystyle y(0)=0.}
To solve the problem, the highest degree differential operator (written here as L) is put on the left side, in the following way:
{\displaystyle Ly=-1-y^{2},}
with L = d/dt and
{\displaystyle L^{-1}=\int _{0}^{t}()}
. Now the solution is assumed to be an infinite series of contributions:
{\displaystyle y=y_{0}+y_{1}+y_{2}+y_{3}+\cdots .}
Replacing in the previous expression, we obtain:
{\displaystyle (y_{0}+y_{1}+y_{2}+y_{3}+\cdots )=y(0)+L^{-1}[-1-(y_{0}+y_{1}+y_{2}+y_{3}+\cdots )^{2}].}
Now we identify y0 with some explicit expression on the right, and yi, i = 1, 2, 3, ..., with some expression on the right containing terms of lower order than i. For instance:
{\displaystyle {\begin{aligned}&y_{0}&=&\ y(0)+L^{-1}(-1)&=&-t\\&y_{1}&=&-L^{-1}(y_{0}^{2})=-L^{-1}(t^{2})&=&-t^{3}/3\\&y_{2}&=&-L^{-1}(2y_{0}y_{1})&=&-2t^{5}/15\\&y_{3}&=&-L^{-1}(y_{1}^{2}+2y_{0}y_{2})&=&-17t^{7}/315.\end{aligned}}}
In this way, any contribution can be explicitly calculated at any order. If we settle for the four first terms, the approximant is the following:
{\displaystyle {\begin{aligned}y&=y_{0}+y_{1}+y_{2}+y_{3}+\cdots \\&=-\left[t+{\frac {1}{3}}t^{3}+{\frac {2}{15}}t^{5}+{\frac {17}{315}}t^{7}+\cdots \right]\end{aligned}}}
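The recursion above is mechanical enough to automate. The following sketch (an illustration; the truncation at four terms is an assumption) reproduces the series term by term with sympy, using the fact that for the nonlinearity y² the Adomian polynomial A_{m−1} is the sum of products y_i y_j with i + j = m − 1, and compares with the exact solution y(t) = −tan(t):

```python
import sympy as sp

t = sp.symbols('t')
M = 4                                    # number of terms to compute

# y' + y^2 = -1 with y(0) = 0;  L = d/dt,  L^{-1} = integral from 0 to t
y = [sp.integrate(sp.Integer(-1), (t, 0, t))]        # y0 = y(0) + L^{-1}(-1) = -t
for m in range(1, M):
    A = sum(y[i] * y[m - 1 - i] for i in range(m))   # Adomian polynomial of y^2
    y.append(sp.expand(-sp.integrate(A, (t, 0, t)))) # y_m = -L^{-1} A_{m-1}

print(sp.expand(sum(y)))                 # -t - t**3/3 - 2*t**5/15 - 17*t**7/315
print(sp.series(-sp.tan(t), t, 0, 8))    # exact solution y(t) = -tan(t)
```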
=== Application to Blasius equation ===
A second example, with more complex boundary conditions is the Blasius equation for a flow in a boundary layer:
{\displaystyle {\frac {\mathrm {d} ^{3}u}{\mathrm {d} x^{3}}}+{\frac {1}{2}}u{\frac {\mathrm {d} ^{2}u}{\mathrm {d} x^{2}}}=0}
With the following conditions at the boundaries:
{\displaystyle {\begin{aligned}u(0)&=0\\u^{\prime }(0)&=0\\u^{\prime }(x)&\to 1,\qquad x\to \infty \end{aligned}}}
Linear and non-linear operators are now called {\displaystyle L={\frac {\mathrm {d} ^{3}}{\mathrm {d} x^{3}}}} and {\displaystyle N={\frac {1}{2}}u{\frac {\mathrm {d} ^{2}}{\mathrm {d} x^{2}}}}, respectively. Then the expression becomes:
{\displaystyle Lu+Nu=0}
and the solution may be expressed, in this case, in the following simple way:
{\displaystyle u=\alpha +\beta x+\gamma x^{2}/2-L^{-1}Nu}
where:
{\displaystyle L^{-1}\xi (x)=\int \mathrm {d} x\int \mathrm {d} x\int \mathrm {d} x\;\;\xi (x)}
If:
{\displaystyle {\begin{aligned}u&=u^{0}+u^{1}+u^{2}+\cdots +u^{N}\\&=\alpha +\beta x+\gamma x^{2}/2-{\frac {1}{2}}L^{-1}(u^{0}+u^{1}+u^{2}+\cdots +u^{N}){\frac {\mathrm {d} ^{2}}{\mathrm {d} x^{2}}}(u^{0}+u^{1}+u^{2}+\cdots +u^{N})\end{aligned}}}
and:
{\displaystyle {\begin{aligned}u^{0}&={}\alpha +\beta x+\gamma x^{2}/2\\u^{1}&=-{\frac {1}{2}}L^{-1}(u^{0}u^{0''})&=&-L^{-1}A_{0}\\u^{2}&=-{\frac {1}{2}}L^{-1}(u^{1}u^{0''}+u^{0}u^{1''})&=&-L^{-1}A_{1}\\u^{3}&=-{\frac {1}{2}}L^{-1}(u^{2}u^{0''}+u^{1}u^{1''}+u^{0}u^{2''})&=&-L^{-1}A_{2}\\&\cdots \end{aligned}}}
The Adomian polynomials, which linearize the non-linear term, can be obtained systematically by using the following rule:
{\displaystyle A_{n}={\frac {1}{n!}}{\frac {\mathrm {d} ^{n}}{\mathrm {d} \lambda ^{n}}}f(u(\lambda ))\mid _{\lambda =0},}
where:
{\displaystyle {\frac {\mathrm {d} ^{n}}{\mathrm {d} \lambda ^{n}}}u(\lambda )\mid _{\lambda =0}=n!u_{n}}
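This rule is straightforward to implement symbolically. A minimal sketch (SymPy assumed; the name adomian_polynomials is illustrative, not standard):

```python
import sympy as sp

def adomian_polynomials(f, u_terms):
    """Adomian polynomials A_0 ... A_{N-1} for a nonlinearity f(u),
    given the solution components u_terms = [u0, u1, ...]."""
    lam = sp.Dummy('lambda')
    u_lam = sum(uk * lam**k for k, uk in enumerate(u_terms))
    return [sp.expand(sp.diff(f(u_lam), lam, n).subs(lam, 0) / sp.factorial(n))
            for n in range(len(u_terms))]

u0, u1, u2 = sp.symbols('u0 u1 u2')
print(adomian_polynomials(lambda u: u**2, [u0, u1, u2]))
# [u0**2, 2*u0*u1, 2*u0*u2 + u1**2]
```

For f(u) = u² this reproduces A0 = u0², A1 = 2u0u1, A2 = u1² + 2u0u2, matching the polynomials listed in the partial-differential example below.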
Boundary conditions must be applied, in general, at the end of each approximation. In this case, the integration constants must be grouped into three final independent constants. However, in our example, the three constants appear grouped from the beginning in the form shown in the formal solution above. After applying the first two boundary conditions we obtain the so-called Blasius series:
{\displaystyle u={\frac {\gamma }{2}}x^{2}-{\frac {\gamma ^{2}}{2}}\left({\frac {x^{5}}{5!}}\right)+{\frac {11\gamma ^{3}}{4}}\left({\frac {x^{8}}{8!}}\right)-{\frac {375\gamma ^{4}}{8}}\left({\frac {x^{11}}{11!}}\right)+\cdots }
To obtain γ we have to apply boundary conditions at ∞, which may be done by writing the series as a Padé approximant:
{\displaystyle f(z)=\sum _{n=0}^{L+M}c_{n}z^{n}={\frac {a_{0}+a_{1}z+\cdots +a_{L}z^{L}}{b_{0}+b_{1}z+\cdots +b_{M}z^{M}}}}
where L = M. The limit of this expression at ∞ is aL/bM.
If we choose b0 = 1, M linear equations for the b coefficients are obtained:
{\displaystyle \left[{\begin{array}{cccc}c_{L-M+1}&c_{L-M+2}&\cdots &c_{L}\\c_{L-M+2}&c_{L-M+3}&\cdots &c_{L+1}\\\vdots &\vdots &&\vdots \\c_{L}&c_{L+1}&\cdots &c_{L+M-1}\end{array}}\right]\left[{\begin{array}{c}b_{M}\\b_{M-1}\\\vdots \\b_{1}\end{array}}\right]=-\left[{\begin{array}{c}c_{L+1}\\c_{L+2}\\\vdots \\c_{L+M}\end{array}}\right]}
Then, we obtain the a coefficients by means of the following sequence:
{\displaystyle {\begin{aligned}a_{0}&=c_{0}\\a_{1}&=c_{1}+b_{1}c_{0}\\a_{2}&=c_{2}+b_{1}c_{1}+b_{2}c_{0}\\&\cdots \\a_{L}&=c_{L}+\sum _{i=1}^{\min(L,M)}b_{i}c_{L-i}.\end{aligned}}}
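These two steps translate directly into a short numerical routine. A minimal sketch (NumPy assumed; pade_coeffs is an illustrative name) that solves the linear system above for the b coefficients and then accumulates the a coefficients:

```python
import numpy as np

def pade_coeffs(c, L, M):
    """[L/M] Pade approximant of sum_n c[n] z^n with b0 = 1.
    Returns (a, b), the numerator and denominator coefficients."""
    assert len(c) >= L + M + 1
    # The Toeplitz-like system shown above, unknowns ordered [b_M, ..., b_1].
    C = np.array([[c[L - M + 1 + i + j] for j in range(M)] for i in range(M)], float)
    rhs = -np.array([c[L + 1 + i] for i in range(M)], float)
    b_rev = np.linalg.solve(C, rhs)
    b = np.concatenate(([1.0], b_rev[::-1]))          # [b_0, b_1, ..., b_M]
    a = [sum(b[i] * c[k - i] for i in range(min(k, M) + 1)) for k in range(L + 1)]
    return np.array(a), b
```

For the Blasius problem one would expand u′(x) to order L + M with L = M, form the approximant, and choose γ so that the limit aL/bM at infinity equals 1, as required by the third boundary condition.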
In our example:
{\displaystyle u'(x)=\gamma x-{\frac {\gamma ^{2}}{2}}\left({\frac {x^{4}}{4!}}\right)+{\frac {11\gamma ^{3}}{4}}\left({\frac {x^{7}}{7!}}\right)-{\frac {375\gamma ^{4}}{8}}\left({\frac {x^{10}}{10!}}\right)}
which, when γ = 0.0408, becomes:
{\displaystyle u'(x)={\frac {0.0204+0.0379\,z-0.0059\,z^{2}-0.00004575\,z^{3}+6.357\cdot 10^{-6}z^{4}-1.291\cdot 10^{-6}z^{5}}{1-0.1429\,z-0.0000232\,z^{2}+0.0008375\,z^{3}-0.0001558\,z^{4}-1.2849\cdot 10^{-6}z^{5}}},}
with the limit:
{\displaystyle \lim _{x\to \infty }u'(x)=1.004.}
This is approximately equal to 1 (the value required by the third boundary condition) with an accuracy of 4/1000.
== Partial differential equations ==
=== Application to a rectangular system with nonlinearity ===
One of the most frequent problems in physical sciences is to obtain the solution of a (linear or nonlinear) partial differential equation which satisfies a set of functional values on a rectangular boundary. An example is the following problem:
{\displaystyle {\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}-b{\frac {\partial u^{2}}{\partial x}}=\rho (x,y)\qquad (1)}
with the following boundary conditions defined on a rectangle:
{\displaystyle u(x=0)=f_{1}(y)\quad {\text{and}}\quad u(x=x_{l})=f_{2}(y)\qquad {\text{(1-a)}}}
{\displaystyle u(y=-y_{l})=g_{1}(x)\quad {\text{and}}\quad u(y=y_{l})=g_{2}(x)\qquad {\text{(1-b)}}}
This kind of partial differential equation appears frequently coupled with others in science and engineering. For instance, in the incompressible fluid flow problem, the Navier–Stokes equations must be solved in parallel with a Poisson equation for the pressure.
==== Decomposition of the system ====
Let us use the following notation for the problem (1):
{\displaystyle L_{x}u+L_{y}u+Nu=\rho (x,y)\qquad (2)}
where Lx, Ly are the second-derivative operators in x and y, and N is a non-linear operator.
The formal solution of (2) is:
{\displaystyle u=a(y)+b(y)x+L_{x}^{-1}\rho (x,y)-L_{x}^{-1}L_{y}u-L_{x}^{-1}Nu\qquad (3)}
Expanding now u as a set of contributions to the solution we have:
{\displaystyle u=u_{0}+u_{1}+u_{2}+u_{3}+\cdots }
By substitution in (3) and making a one-to-one correspondence between the contributions on the left side and the terms on the right side we obtain the following iterative scheme:
{\displaystyle {\begin{aligned}u_{0}&=a_{0}(y)+b_{0}(y)x+L_{x}^{-1}\rho (x,y)\\u_{1}&=a_{1}(y)+b_{1}(y)x-L_{x}^{-1}L_{y}u_{0}+b\int dxA_{0}\\&\cdots \\u_{n}&=a_{n}(y)+b_{n}(y)x-L_{x}^{-1}L_{y}u_{n-1}+b\int dxA_{n-1}\quad 0<n<\infty \end{aligned}}}
where the couple {an(y), bn(y)} is the solution of the following system of equations:
{\displaystyle {\begin{aligned}\varphi ^{n}(x=0)&=f_{1}(y)\\\varphi ^{n}(x=x_{l})&=f_{2}(y),\end{aligned}}}
here {\displaystyle \varphi ^{n}\equiv \sum _{i=0}^{n}u_{i}} is the nth-order approximant to the solution, and Nu has been consistently expanded in Adomian polynomials:
{\displaystyle {\begin{aligned}Nu&=-b\partial _{x}u^{2}=-b\partial _{x}(u_{0}+u_{1}+u_{2}+u_{3}+\cdots )(u_{0}+u_{1}+u_{2}+u_{3}+\cdots )\\&=-b\partial _{x}(u_{0}u_{0}+2u_{0}u_{1}+u_{1}u_{1}+2u_{0}u_{2}+\cdots )\\&=-b\partial _{x}\sum _{n=1}^{\infty }A(n-1),\end{aligned}}}
where {\displaystyle A_{n}=\sum _{\nu =1}^{n}C(\nu ,n)f^{(\nu )}(u_{0})} and f(u) = u2 in the example (1).
Here C(ν, n) are products (or sums of products) of ν components of u whose subscripts sum to n, divided by the factorial of the number of repeated subscripts. This is simply a rule of thumb for ordering the decomposition systematically, so as to be sure that all the combinations appearing are used sooner or later.
The sum {\displaystyle \sum _{n=0}^{\infty }A_{n}} is equal to the sum of a generalized Taylor series about u0.
For the example (1) the Adomian polynomials are:
{\displaystyle {\begin{aligned}A_{0}&=u_{0}^{2}\\A_{1}&=2u_{0}u_{1}\\A_{2}&=u_{1}^{2}+2u_{0}u_{2}\\A_{3}&=2u_{1}u_{2}+2u_{0}u_{3}\\&\cdots \end{aligned}}}
Other choices are also possible for the expression of An.
==== Series solutions ====
Cherruault established that the series terms obtained by Adomian's method approach zero as 1/(mn)! if m is the order of the highest linear differential operator and that
{\displaystyle \lim _{n\to \infty }\varphi ^{n}=u}. With this method the solution can be found by systematically integrating along either of the two directions: in the x-direction we would use expression (3); in the alternative y-direction we would use the following expression:
{\displaystyle u=c(x)+d(x)y+L_{y}^{-1}\rho (x,y)-L_{y}^{-1}L_{x}u-L_{y}^{-1}Nu}
where c(x) and d(x) are obtained from the boundary conditions at y = −yl and y = yl:
{\displaystyle {\begin{aligned}u(y=-y_{l})&=g_{1}(x)\\u(y=y_{l})&=g_{2}(x)\end{aligned}}}
If we call the two respective solutions x-partial solution and y-partial solution, one of the most interesting consequences of the method is that the x-partial solution uses only the two boundary conditions (1-a) and the y-partial solution uses only the conditions (1-b).
Thus, one of the two sets of boundary functions {f1, f2} or {g1, g2} is redundant, and this implies that a partial differential equation with boundary conditions on a rectangle cannot have arbitrary boundary conditions on all its borders, since the conditions at x = 0 and x = xl must be consistent with those imposed at y = −yl and y = yl.
An example to clarify this point is the solution of the Poisson problem with the following boundary conditions:
{\displaystyle {\begin{aligned}u(x=0)&=f_{1}(y)=0\\u(x=x_{l})&=f_{2}(y)=0\end{aligned}}}
By using Adomian's method and a symbolic processor (such as Mathematica or Maple) it is easy to obtain the third-order approximant to the solution. This approximant has an error lower than 5×10−16 at any point, as can be verified by substituting it back into the initial problem and displaying the absolute value of the residual as a function of (x, y).
The solution at y = -0.25 and y = 0.25 is given by specific functions that in this case are:
{\displaystyle g_{1}(x)=0.0520833\,x-0.347222\,x^{3}+9.25186\times 10^{-17}x^{4}+0.833333\,x^{5}-0.555556\,x^{6}}
and g2(x) = g1(x) respectively.
If a (double) integration is now performed in the y-direction using these two boundary functions, the same solution is obtained; it satisfies u(x=0, y) = 0 and u(x=0.5, y) = 0 and cannot satisfy any other condition on these borders.
Some people are surprised by these results; it seems strange that not all initial-boundary conditions need to be explicitly used to solve a differential system. However, it is a well-established fact that any elliptic equation has one and only one solution for any functional conditions on the four sides of a rectangle, provided there is no discontinuity on the edges.
The cause of the misconception is that scientists and engineers normally think of a boundary condition in terms of weak convergence in a Hilbert space (the distance to the boundary function is small enough for practical purposes). In contrast, Cauchy problems impose point-to-point convergence to a given boundary function and to all its derivatives (and this is a quite strong condition!).
For the former, a function satisfies a boundary condition when the area (or another functional distance) between it and the true function imposed on the boundary is as small as desired; for the latter, however, the function must tend to the true function at every point of the interval.
The Poisson problem discussed above does not have a solution for arbitrary functional boundary conditions f1, f2, g1, g2; however, given f1, f2 it is always possible to find boundary functions g1*, g2* as close to g1, g2 as desired (in the weak-convergence sense) for which the problem has a solution. This property makes it possible to solve Poisson's and many other problems with arbitrary boundary conditions, but never for analytic functions exactly specified on the boundaries.
The reader can verify the high sensitivity of PDE solutions to small changes in the boundary conditions by solving this problem, integrating along the x-direction, with boundary functions that differ slightly even though they are visually indistinguishable. For instance, the solution with the boundary conditions:
{\displaystyle f_{1,2}(y)=0.00413682-0.0813801\,y^{2}+0.260416\,y^{4}-0.277778\,y^{6}}
at x = 0 and x = 0.5, and the solution with the boundary conditions:
{\displaystyle {\begin{aligned}f_{1,2}(y)=0.00413683&-0.00040048\,y-0.0813802\,y^{2}+0.0101279\,y^{3}+0.260417\,y^{4}\\&-0.0694455\,y^{5}-0.277778\,y^{6}+0.15873\,y^{7}+\cdots \end{aligned}}}
at x = 0 and x = 0.5, produce lateral functions whose convexity differs in sign, even though both boundary functions are visually indistinguishable.
Solutions of elliptic problems and other partial differential equations are highly sensitive to small changes in the boundary function imposed when only two sides are used. And this sensitivity is not easily compatible with models that are supposed to represent real systems, which are described by means of measurements containing experimental errors and are normally expressed as initial-boundary value problems in a Hilbert space.
==== Improvements to the decomposition method ====
At least three methods have been reported to obtain the boundary functions g1*, g2* that are compatible with any lateral set of conditions {f1, f2} imposed. This makes it possible to find the analytical solution of any PDE boundary problem on a closed rectangle with the required accuracy, and so to solve a wide range of problems that the standard Adomian method was not able to address.
The first method perturbs the two boundary functions imposed at x = 0 and x = xl (condition 1-a) with an Nth-order polynomial in y: p1, p2, in such a way that f1' = f1 + p1, f2' = f2 + p2, where the norm of each perturbation function is smaller than the accuracy needed at the boundaries. These p1, p2 depend on a set of polynomial coefficients ci, i = 1, ..., N. The Adomian method is then applied, and functions are obtained at the four boundaries which depend on the set of ci, i = 1, ..., N. Finally, a boundary function F(c1, c2, ..., cN) is defined as the sum of these four functions, and the distance between F(c1, c2, ..., cN) and the real boundary functions ((1-a) and (1-b)) is minimized. The problem has thus been reduced to the global minimization of the function F(c1, c2, ..., cN), which has a global minimum for some combination of the parameters ci, i = 1, ..., N. This minimum may be found by means of a genetic algorithm or by some other optimization method, such as the one proposed by Cherruault (1999).
A second method to obtain analytic approximants of initial-boundary problems is to combine Adomian decomposition with spectral methods.
Finally, the third method proposed by García-Olivares is based on imposing analytic solutions at the four boundaries, but modifying the original differential operator in such a way that it is different from the original one only in a narrow region close to the boundaries, and it forces the solution to satisfy exactly analytic conditions at the four boundaries.
== Integral equations ==
The Adomian decomposition method may also be applied to linear and nonlinear integral equations to obtain solutions. This corresponds to the fact that many differential equations can be converted into integral equations.
=== Adomian decomposition method ===
The Adomian decomposition method for the nonhomogeneous Fredholm integral equation of the second kind goes as follows:
Given an integral equation of the form:
{\displaystyle u(x)=f(x)+\lambda \int _{a}^{b}K(x,t)u(t)dt}
We assume we may express the solution in series form:
{\displaystyle u(x)=\sum _{n=0}^{\infty }u_{n}(x)}
Plugging the series form into the integral equation then yields:
{\displaystyle \sum _{n=0}^{\infty }u_{n}(x)=f(x)+\lambda \int _{a}^{b}K(x,t)(\sum _{n=0}^{\infty }u_{n}(t))dt}
Assuming that the sum converges absolutely to {\displaystyle u(x)}, we may interchange the sum and the integral as follows:
{\displaystyle \sum _{n=0}^{\infty }u_{n}(x)=f(x)+\lambda \int _{a}^{b}\sum _{n=0}^{\infty }K(x,t)u_{n}(t)dt}
{\displaystyle \sum _{n=0}^{\infty }u_{n}(x)=f(x)+\lambda \sum _{n=0}^{\infty }\int _{a}^{b}K(x,t)u_{n}(t)dt}
Expanding the sum on both sides yields:
{\displaystyle u_{0}(x)+u_{1}(x)+u_{2}(x)+\cdots =f(x)+\lambda \int _{a}^{b}K(x,t)u_{0}(t)dt+\lambda \int _{a}^{b}K(x,t)u_{1}(t)dt+\lambda \int _{a}^{b}K(x,t)u_{2}(t)dt+\cdots }
Hence we may identify each {\displaystyle u_{i}(x)} in the following recurrent manner:
{\displaystyle u_{0}(x)=f(x)}
{\displaystyle u_{i}(x)=\lambda \int _{a}^{b}K(x,t)u_{i-1}(t)\,dt,\qquad i\geq 1}
which gives the solution {\displaystyle u(x)} in the series form above.
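For this linear equation the Adomian recursion coincides with successive substitution, so summing the u_i converges only when the corresponding Neumann-type series does (roughly, when |λ| times the size of the kernel is small enough); the worked example below instead relies on a telescoping cancellation. A numerical sketch of the recursion in the convergent regime (NumPy assumed; names are illustrative):

```python
import numpy as np

def adm_fredholm(f, K, a, b, lam=1.0, n_terms=25, n_quad=400):
    """Sum the iterates u_0 = f, u_i = lam * int_a^b K(x,t) u_{i-1}(t) dt,
    evaluated on a quadrature grid (trapezoidal rule)."""
    x = np.linspace(a, b, n_quad)
    w = np.full(n_quad, (b - a) / (n_quad - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    Kmat = K(x[:, None], x[None, :])     # kernel matrix K(x_i, t_j)
    u_i = f(x)
    u = u_i.copy()
    for _ in range(n_terms):
        u_i = lam * Kmat @ (w * u_i)     # next term of the series
        u += u_i
    return x, u

# Convergent test case: u(x) = x + (1/2) int_0^1 x*t*u(t) dt,
# whose exact solution is u(x) = 1.2*x.
x, u = adm_fredholm(lambda x: x, lambda x, t: x * t, 0.0, 1.0, lam=0.5)
print(np.max(np.abs(u - 1.2 * x)))       # small (quadrature-limited)
```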
==== Example ====
Given the Fredholm integral equation:
{\displaystyle u(x)=\cos(x)+2x+\int _{0}^{\pi }xt\cdot u(t)dt}
Since {\displaystyle f(x)=\cos(x)+2x}, we can set:
{\displaystyle u_{0}(x)=\cos(x)+2x}
{\displaystyle u_{1}(x)=\int _{0}^{\pi }xt\cdot u_{0}dt=\int _{0}^{\pi }xt\cdot (\cos x+2x)dt=(-2+{\frac {2\pi ^{3}}{3}})x}
{\displaystyle u_{2}(x)=\int _{0}^{\pi }xt\cdot u_{1}dt=\int _{0}^{\pi }xt\cdot (-2+{\frac {2\pi ^{3}}{3}})t\,dt=({\frac {-2\pi ^{3}}{3}}+{\frac {2\pi ^{6}}{9}})x}
...
Hence the solution {\displaystyle u(x)} may be written as:
{\displaystyle u(x)=\cos(x)+2x+(-2+{\frac {2\pi ^{3}}{3}})x+({\frac {-2\pi ^{3}}{3}}+{\frac {2\pi ^{6}}{9}})x+\cdots }
Since this is a telescoping series, every term after {\displaystyle \cos(x)} cancels and may be regarded as "noise". Thus, {\displaystyle u(x)} becomes:
{\displaystyle u(x)=\cos(x)}
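The closed form can be confirmed directly: since the integral of t·cos t over [0, π] equals −2, the integral term contributes −2x and cancels the 2x in f(x). A quick numerical check (SciPy assumed):

```python
import numpy as np
from scipy.integrate import quad

x = 1.3  # arbitrary test point
integral, _ = quad(lambda t: x * t * np.cos(t), 0, np.pi)
rhs = np.cos(x) + 2 * x + integral
print(rhs - np.cos(x))   # ~0, so u(x) = cos(x) satisfies the equation
```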
== See also ==
Order of approximation
A continuity equation or transport equation is an equation that describes the transport of some quantity. It is particularly simple and powerful when applied to a conserved quantity, but it can be generalized to apply to any extensive quantity. Since mass, energy, momentum, electric charge and other natural quantities are conserved under their respective appropriate conditions, a variety of physical phenomena may be described using continuity equations.
Continuity equations are a stronger, local form of conservation laws. For example, a weak version of the law of conservation of energy states that energy can neither be created nor destroyed—i.e., the total amount of energy in the universe is fixed. This statement does not rule out the possibility that a quantity of energy could disappear from one point while simultaneously appearing at another point. A stronger statement is that energy is locally conserved: energy can neither be created nor destroyed, nor can it "teleport" from one place to another—it can only move by a continuous flow. A continuity equation is the mathematical way to express this kind of statement. For example, the continuity equation for electric charge states that the amount of electric charge in any volume of space can only change by the amount of electric current flowing into or out of that volume through its boundaries.
Continuity equations more generally can include "source" and "sink" terms, which allow them to describe quantities that are often but not always conserved, such as the density of a molecular species which can be created or destroyed by chemical reactions. In an everyday example, there is a continuity equation for the number of people alive; it has a "source term" to account for people being born, and a "sink term" to account for people dying.
Any continuity equation can be expressed in an "integral form" (in terms of a flux integral), which applies to any finite region, or in a "differential form" (in terms of the divergence operator) which applies at a point.
Continuity equations underlie more specific transport equations such as the convection–diffusion equation, Boltzmann transport equation, and Navier–Stokes equations.
Flows governed by continuity equations can be visualized using a Sankey diagram.
== General equation ==
=== Definition of flux ===
A continuity equation is useful when a flux can be defined. To define flux, first there must be a quantity q which can flow or move, such as mass, energy, electric charge, momentum, number of molecules, etc. Let ρ be the volume density of this quantity, that is, the amount of q per unit volume.
The way that this quantity q is flowing is described by its flux. The flux of q is a vector field, which we denote as j. Here are some examples and properties of flux:
The dimension of flux is "amount of q flowing per unit time, through a unit area". For example, in the mass continuity equation for flowing water, if 1 gram per second of water is flowing through a pipe with cross-sectional area 1 cm2, then the average mass flux j inside the pipe is (1 g/s) / cm2, and its direction is along the pipe in the direction that the water is flowing. Outside the pipe, where there is no water, the flux is zero.
If there is a velocity field u which describes the relevant flow—in other words, if all of the quantity q at a point x is moving with velocity u(x)—then the flux is by definition equal to the density times the velocity field:
{\displaystyle \mathbf {j} =\rho \mathbf {u} }
For example, if in the mass continuity equation for flowing water, u is the water's velocity at each point, and ρ is the water's density at each point, then j would be the mass flux, also known as the material discharge.
In a well-known example, the flux of electric charge is the electric current density.
If there is an imaginary surface S, then the surface integral of flux over S is equal to the amount of q that is passing through the surface S per unit time:
{\displaystyle ({\text{Rate that }}q{\text{ is flowing through the imaginary surface }}S)=\iint _{S}\mathbf {j} \cdot d\mathbf {S} ,}
in which {\textstyle \iint _{S}d\mathbf {S} } is a surface integral.
(Note that the concept that is here called "flux" is alternatively termed flux density in some literature, in which context "flux" denotes the surface integral of flux density. See the main article on Flux for details.)
=== Integral form ===
The integral form of the continuity equation states that:
The amount of q in a region increases when additional q flows inward through the surface of the region, and decreases when it flows outward;
The amount of q in a region increases when new q is created inside the region, and decreases when q is destroyed;
Apart from these two processes, there is no other way for the amount of q in a region to change.
Mathematically, the integral form of the continuity equation expressing the rate of increase of q within a volume V is:
{\displaystyle {\frac {dq}{dt}}+\oint _{S}\mathbf {j} \cdot d\mathbf {S} =\Sigma ,}
where
S is any imaginary closed surface that encloses a volume V,
{\displaystyle \oint _{S}d\mathbf {S} } denotes a surface integral over that closed surface,
q is the total amount of the quantity in the volume V,
j is the flux of q,
t is time,
Σ is the net rate that q is being generated inside the volume V per unit time. When q is being generated (i.e., when {\displaystyle {\tfrac {\partial q}{\partial t}}>0}), the region is called a source of q, and it makes Σ more positive. When q is being destroyed (i.e., when {\displaystyle {\tfrac {\partial q}{\partial t}}<0}), the region is called a sink of q, and it makes Σ more negative. The term Σ is sometimes written as {\displaystyle dq/dt|_{\text{gen}}} or the total change of q from its generation or destruction inside the control volume.
In a simple example, V could be a building, and q could be the number of living people in the building. The surface S would consist of the walls, doors, roof, and foundation of the building. Then the continuity equation states that the number of living people in the building (1) increases when living people enter the building (an inward flux through the surface), (2) decreases when living people exit the building (an outward flux through the surface), (3) increases when someone in the building gives birth (a source, Σ > 0), and (4) decreases when someone in the building dies (a sink, Σ < 0). In conclusion, in this example there are four distinct ways that the number of people in the building may change.
=== Differential form ===
By the divergence theorem, a general continuity equation can also be written in a "differential form":
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {j} =\sigma ,}
where
∇⋅ is divergence,
ρ is the density of the amount q (i.e. the quantity q per unit volume),
j is the flux of q (i.e. j = ρv, where v is the vector field describing the movement of the quantity q),
t is time,
σ is the generation of q per unit volume per unit time. Terms that generate q (i.e., σ > 0) or remove q (i.e., σ < 0) are referred to as sources and sinks respectively.
This general equation may be used to derive any continuity equation, ranging from as simple as the volume continuity equation to as complicated as the Navier–Stokes equations. This equation also generalizes the advection equation. Other equations in physics, such as Gauss's law of the electric field and Gauss's law for gravity, have a similar mathematical form to the continuity equation, but are not usually referred to by the term "continuity equation", because j in those cases does not represent the flow of a real physical quantity.
In the case that q is a conserved quantity that cannot be created or destroyed (such as energy), σ = 0 and the equations become:
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {j} =0}
== Electromagnetism ==
In electromagnetic theory, the continuity equation is an empirical law expressing (local) charge conservation. Mathematically it is an automatic consequence of Maxwell's equations, although charge conservation is more fundamental than Maxwell's equations. It states that the divergence of the current density J (in amperes per square meter) is equal to the negative rate of change of the charge density ρ (in coulombs per cubic meter),
{\displaystyle \nabla \cdot \mathbf {J} =-{\frac {\partial \rho }{\partial t}}}
Current is the movement of charge. The continuity equation says that if charge is moving out of a differential volume (i.e., divergence of current density is positive) then the amount of charge within that volume is going to decrease, so the rate of change of charge density is negative. Therefore, the continuity equation amounts to a conservation of charge.
If magnetic monopoles exist, there would be a continuity equation for monopole currents as well; see the monopole article for background and the duality between electric and magnetic currents.
== Fluid dynamics ==
In fluid dynamics, the continuity equation states that the rate at which mass enters a system is equal to the rate at which mass leaves the system plus the accumulation of mass within the system.
The differential form of the continuity equation is:
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0}
where
ρ is fluid density,
t is time,
u is the flow velocity vector field.
The time derivative can be understood as the accumulation (or loss) of mass in the system, while the divergence term represents the difference in flow in versus flow out. In this context, this equation is also one of the Euler equations (fluid dynamics). The Navier–Stokes equations form a vector continuity equation describing the conservation of linear momentum.
If the fluid is incompressible (volumetric strain rate is zero), the mass continuity equation simplifies to a volume continuity equation:
{\displaystyle \nabla \cdot \mathbf {u} =0,}
which means that the divergence of the velocity field is zero everywhere. Physically, this is equivalent to saying that the local volume dilation rate is zero, hence a flow of water through a converging pipe will adjust solely by increasing its velocity as water is largely incompressible.
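The differential form also underlies conservative numerical schemes: discretizing ∂ρ/∂t = −∂(ρu)/∂x with a conservative difference keeps the total mass constant on a periodic domain. A minimal sketch (NumPy assumed; parameter values are illustrative):

```python
import numpy as np

# 1D mass continuity d(rho)/dt = -d(rho*u)/dx on a periodic domain.
nx = 200
dx, dt = 1.0 / nx, 1e-4
x = np.arange(nx) * dx
rho = 1.0 + 0.1 * np.sin(2 * np.pi * x)   # initial density
u = 0.5 * np.ones(nx)                      # prescribed velocity field

mass0 = rho.sum() * dx
for _ in range(1000):
    flux = rho * u
    # conservative central difference for -d(flux)/dx
    rho -= dt * (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)

print(abs(rho.sum() * dx - mass0))   # ~0: total mass is conserved
```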
== Computer vision ==
In computer vision, optical flow is the pattern of apparent motion of objects in a visual scene. Under the assumption that the brightness of the moving object does not change between two image frames, one can derive the optical flow equation as:
{\displaystyle {\frac {\partial I}{\partial x}}V_{x}+{\frac {\partial I}{\partial y}}V_{y}+{\frac {\partial I}{\partial t}}=\nabla I\cdot \mathbf {V} +{\frac {\partial I}{\partial t}}=0}
where
t is time,
x, y coordinates in the image,
I is the image intensity at image coordinate (x, y) and time t,
V is the optical flow velocity vector {\displaystyle (V_{x},V_{y})} at image coordinate (x, y) and time t.
== Energy and heat ==
Conservation of energy says that energy cannot be created or destroyed. (See below for the nuances associated with general relativity.) Therefore, there is a continuity equation for energy flow:
{\displaystyle {\frac {\partial u}{\partial t}}+\nabla \cdot \mathbf {q} =0}
where
u, local energy density (energy per unit volume),
q, energy flux (transfer of energy per unit cross-sectional area per unit time) as a vector.
An important practical example is the flow of heat. When heat flows inside a solid, the continuity equation can be combined with Fourier's law (heat flux is proportional to temperature gradient) to arrive at the heat equation. The equation of heat flow may also have source terms: although energy cannot be created or destroyed, heat can be created from other types of energy, for example via friction or Joule heating.
== Probability distributions ==
If there is a quantity that moves continuously according to a stochastic (random) process, like the location of a single dissolved molecule with Brownian motion, then there is a continuity equation for its probability distribution. The flux in this case is the probability per unit area per unit time that the particle passes through a surface. According to the continuity equation, the negative divergence of this flux equals the rate of change of the probability density. The continuity equation reflects the fact that the molecule is always somewhere—the integral of its probability distribution is always equal to 1—and that it moves by a continuous motion (no teleporting).
== Quantum mechanics ==
Quantum mechanics is another domain where there is a continuity equation related to conservation of probability. The terms in the equation require the following definitions, and are slightly less obvious than the other examples above, so they are outlined here:
The wavefunction Ψ for a single particle in position space (rather than momentum space), that is, a function of position r and time t, Ψ = Ψ(r, t).
The probability density function is
{\displaystyle \rho (\mathbf {r} ,t)=\Psi ^{*}(\mathbf {r} ,t)\Psi (\mathbf {r} ,t)=|\Psi (\mathbf {r} ,t)|^{2}.}
The probability of finding the particle within V at t is denoted and defined by
{\displaystyle P=P_{\mathbf {r} \in V}(t)=\int _{V}\Psi ^{*}\Psi dV=\int _{V}|\Psi |^{2}dV.}
The probability current (probability flux) is
{\displaystyle \mathbf {j} (\mathbf {r} ,t)={\frac {\hbar }{2mi}}\left[\Psi ^{*}\left(\nabla \Psi \right)-\Psi \left(\nabla \Psi ^{*}\right)\right].}
With these definitions the continuity equation reads:
{\displaystyle \nabla \cdot \mathbf {j} +{\frac {\partial \rho }{\partial t}}=0\mathrel {\rightleftharpoons } \nabla \cdot \mathbf {j} +{\frac {\partial |\Psi |^{2}}{\partial t}}=0.}
Either form may be quoted. Intuitively, the above quantities indicate this represents the flow of probability. The chance of finding the particle at some position r and time t flows like a fluid; hence the term probability current, a vector field. The particle itself does not flow deterministically in this vector field.
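The relation can be verified symbolically for a concrete wavefunction; a free-particle plane wave keeps the algebra short. A one-dimensional sketch (SymPy assumed):

```python
import sympy as sp

x, t, m, hbar, k = sp.symbols('x t m hbar k', real=True)
omega = hbar * k**2 / (2 * m)              # free-particle dispersion
psi = sp.exp(sp.I * (k * x - omega * t))   # plane-wave solution

rho = sp.conjugate(psi) * psi
j = hbar / (2 * m * sp.I) * (sp.conjugate(psi) * sp.diff(psi, x)
                             - psi * sp.diff(sp.conjugate(psi), x))

# Continuity: d(rho)/dt + d(j)/dx should vanish identically.
print(sp.simplify(sp.diff(rho, t) + sp.diff(j, x)))   # 0
print(sp.simplify(j))                                  # hbar*k/m, a uniform flux
```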
== Semiconductor ==
The total current flow in the semiconductor consists of drift current and diffusion current of both the electrons in the conduction band and holes in the valence band.
General form for electrons in one dimension:
{\displaystyle {\frac {\partial n}{\partial t}}=n\mu _{n}{\frac {\partial E}{\partial x}}+\mu _{n}E{\frac {\partial n}{\partial x}}+D_{n}{\frac {\partial ^{2}n}{\partial x^{2}}}+(G_{n}-R_{n})}
where:
n is the local concentration of electrons
{\displaystyle \mu _{n}} is the electron mobility
E is the electric field across the depletion region
Dn is the diffusion coefficient for electrons
Gn is the rate of generation of electrons
Rn is the rate of recombination of electrons
Similarly, for holes:
{\displaystyle {\frac {\partial p}{\partial t}}=-p\mu _{p}{\frac {\partial E}{\partial x}}-\mu _{p}E{\frac {\partial p}{\partial x}}+D_{p}{\frac {\partial ^{2}p}{\partial x^{2}}}+(G_{p}-R_{p})}
where:
p is the local concentration of holes
{\displaystyle \mu _{p}} is the hole mobility
E is the electric field across the depletion region
Dp is the diffusion coefficient for holes
Gp is the rate of generation of holes
Rp is the rate of recombination of holes
=== Derivation ===
This section presents a derivation of the equation above for electrons. A similar derivation can be found for the equation for holes.
Consider the fact that the number of electrons is conserved across a volume of semiconductor material with cross-sectional area, A, and length, dx, along the x-axis. More precisely, one can say:
{\displaystyle {\text{Rate of change of electron density}}=({\text{Electron flux in}}-{\text{Electron flux out}})+{\text{Net generation inside a volume}}}
Mathematically, this equality can be written:
{\displaystyle {\begin{aligned}{\frac {dn}{dt}}A\,dx&=\left[J(x+dx)-J(x)\right]{\frac {A}{e}}+(G_{n}-R_{n})A\,dx\\&=\left[J(x)+{\frac {dJ}{dx}}dx-J(x)\right]{\frac {A}{e}}+(G_{n}-R_{n})A\,dx\\[1.2ex]{\frac {dn}{dt}}&={\frac {1}{e}}{\frac {dJ}{dx}}+(G_{n}-R_{n})\end{aligned}}}
Here J denotes the current density (whose direction is against the electron flow by convention) due to electron flow within the considered volume of the semiconductor. It is also called the electron current density.
Total electron current density is the sum of drift current and diffusion current densities:
{\displaystyle J_{n}=en\mu _{n}E+eD_{n}{\frac {dn}{dx}}}
Therefore, we have
{\displaystyle {\frac {dn}{dt}}={\frac {1}{e}}{\frac {d}{dx}}\left(en\mu _{n}E+eD_{n}{\frac {dn}{dx}}\right)+(G_{n}-R_{n})}
Applying the product rule results in the final expression:
{\displaystyle {\frac {dn}{dt}}=\mu _{n}E{\frac {dn}{dx}}+\mu _{n}n{\frac {dE}{dx}}+D_{n}{\frac {d^{2}n}{dx^{2}}}+(G_{n}-R_{n})}
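An explicit finite-difference time step is the most direct way to use this equation numerically. A minimal sketch (NumPy assumed; uniform field so dE/dx = 0, zero net generation, periodic boundaries, and illustrative silicon-like parameter values):

```python
import numpy as np

# Explicit update for dn/dt = mu*E*dn/dx + D*d2n/dx2 (dE/dx = 0, G - R = 0).
nx = 200
dx, dt = 1e-6, 1e-13            # grid spacing [m], time step [s]
mu, D, E = 0.135, 3.5e-3, 1e4   # electron mobility, diffusivity, field (SI units)
n = np.full(nx, 1e16)
n[nx // 2] = 1e18               # initial carrier pulse

for _ in range(2000):
    dndx = (np.roll(n, -1) - np.roll(n, 1)) / (2 * dx)
    d2ndx2 = (np.roll(n, -1) - 2 * n + np.roll(n, 1)) / dx**2
    n = n + dt * (mu * E * dndx + D * d2ndx2)

# The pulse drifts under the field and spreads diffusively,
# as in a Haynes-Shockley-type experiment.
```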
=== Solution ===
The key to solving these equations in real devices is whenever possible to select regions in which most of the mechanisms are negligible so that the equations reduce to a much simpler form.
== Relativistic version ==
=== Special relativity ===
The notation and tools of special relativity, especially 4-vectors and 4-gradients, offer a convenient way to write any continuity equation.
The density of a quantity ρ and its current j can be combined into a 4-vector called a 4-current:
{\displaystyle J=\left(c\rho ,j_{x},j_{y},j_{z}\right)}
where c is the speed of light. The 4-divergence of this current is:
{\displaystyle \partial _{\mu }J^{\mu }=c{\frac {\partial \rho }{\partial ct}}+\nabla \cdot \mathbf {j} }
where ∂μ is the 4-gradient and μ is an index labeling the spacetime dimension. Then the continuity equation is:
{\displaystyle \partial _{\mu }J^{\mu }=0}
in the usual case where there are no sources or sinks, that is, for perfectly conserved quantities like energy or charge. This continuity equation is manifestly ("obviously") Lorentz invariant.
Examples of continuity equations often written in this form include electric charge conservation {\displaystyle \partial _{\mu }J^{\mu }=0}, where J is the electric 4-current; and energy–momentum conservation {\displaystyle \partial _{\nu }T^{\mu \nu }=0}, where T is the stress–energy tensor.
=== General relativity ===
In general relativity, where spacetime is curved, the continuity equation (in differential form) for energy, charge, or other conserved quantities involves the covariant divergence instead of the ordinary divergence.
For example, the stress–energy tensor is a second-order tensor field containing energy–momentum densities, energy–momentum fluxes, and shear stresses, of a mass-energy distribution. The differential form of energy–momentum conservation in general relativity states that the covariant divergence of the stress-energy tensor is zero:
{\displaystyle {T^{\mu }}_{\nu ;\mu }=0.}
This is an important constraint on the form the Einstein field equations take in general relativity.
However, the ordinary divergence of the stress–energy tensor does not necessarily vanish:
{\displaystyle \partial _{\mu }T^{\mu \nu }=-\Gamma _{\mu \lambda }^{\mu }T^{\lambda \nu }-\Gamma _{\mu \lambda }^{\nu }T^{\mu \lambda },}
The right-hand side strictly vanishes for a flat geometry only.
As a consequence, the integral form of the continuity equation is difficult to define and not necessarily valid for a region within which spacetime is significantly curved (e.g. around a black hole, or across the whole universe).
== Particle physics ==
Quarks and gluons have color charge, which is always conserved like electric charge, and there is a continuity equation for such color charge currents (explicit expressions for currents are given at gluon field strength tensor).
There are many other quantities in particle physics which are often or always conserved: baryon number (proportional to the number of quarks minus the number of antiquarks), electron number, mu number, tau number, isospin, and others. Each of these has a corresponding continuity equation, possibly including source / sink terms.
== Noether's theorem ==
One reason that conservation equations frequently occur in physics is Noether's theorem. This states that whenever the laws of physics have a continuous symmetry, there is a continuity equation for some conserved physical quantity. The three most famous examples are:
The laws of physics are invariant with respect to time-translation—for example, the laws of physics today are the same as they were yesterday. This symmetry leads to the continuity equation for conservation of energy.
The laws of physics are invariant with respect to space-translation—for example, a rocket in outer space is not subject to different forces or potentials if it is displaced in any given direction (e.g., x, y, z). This symmetry leads to the continuity equation for conservation of the three components of momentum.
The laws of physics are invariant with respect to orientation—for example, floating in outer space, there is no measurement you can do to say "which way is up"; the laws of physics are the same regardless of how you are oriented. This symmetry leads to the continuity equation for conservation of angular momentum.
== See also ==
Conservation law
Conservation form
Dissipative system
The Boltzmann equation or Boltzmann transport equation (BTE) describes the statistical behaviour of a thermodynamic system not in a state of equilibrium; it was devised by Ludwig Boltzmann in 1872.
The classic example of such a system is a fluid with temperature gradients in space causing heat to flow from hotter regions to colder ones, by the random but biased transport of the particles making up that fluid. In the modern literature the term Boltzmann equation is often used in a more general sense, referring to any kinetic equation that describes the change of a macroscopic quantity in a thermodynamic system, such as energy, charge or particle number.
The equation arises not by analyzing the individual positions and momenta of each particle in the fluid but rather by considering a probability distribution for the position and momentum of a typical particle—that is, the probability that the particle occupies a given very small region of space (mathematically the volume element {\displaystyle d^{3}\mathbf {r} }) centered at the position {\displaystyle \mathbf {r} }, and has momentum nearly equal to a given momentum vector {\displaystyle \mathbf {p} } (thus occupying a very small region of momentum space {\displaystyle d^{3}\mathbf {p} }), at an instant of time.
The Boltzmann equation can be used to determine how physical quantities change, such as heat energy and momentum, when a fluid is in transport. One may also derive other properties characteristic to fluids such as viscosity, thermal conductivity, and electrical conductivity (by treating the charge carriers in a material as a gas). See also convection–diffusion equation.
The equation is a nonlinear integro-differential equation, and the unknown function in the equation is a probability density function in six-dimensional space of a particle position and momentum. The problem of existence and uniqueness of solutions is still not fully resolved, but some recent results are quite promising.
== Overview ==
=== The phase space and density function ===
The set of all possible positions r and momenta p is called the phase space of the system; in other words a set of three coordinates for each position coordinate x, y, z, and three more for each momentum component px, py, pz. The entire space is 6-dimensional: a point in this space is (r, p) = (x, y, z, px, py, pz), and each coordinate is parameterized by time t. A relevant differential element is written
{\displaystyle d^{3}\mathbf {r} \,d^{3}\mathbf {p} =dx\,dy\,dz\,dp_{x}\,dp_{y}\,dp_{z}.}
Since the probability of N molecules, which all have r and p within {\displaystyle d^{3}\mathbf {r} \,d^{3}\mathbf {p} }, is in question, at the heart of the equation is a quantity f which gives this probability per unit phase-space volume, or probability per unit length cubed per unit momentum cubed, at an instant of time t. This is a probability density function: f(r, p, t), defined so that,
{\displaystyle dN=f(\mathbf {r} ,\mathbf {p} ,t)\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} }
is the number of molecules which all have positions lying within a volume element {\displaystyle d^{3}\mathbf {r} } about r and momenta lying within a momentum space element {\displaystyle d^{3}\mathbf {p} } about p, at time t. Integrating over a region of position space and momentum space gives the total number of particles which have positions and momenta in that region:
{\displaystyle {\begin{aligned}N&=\int \limits _{\mathrm {momenta} }d^{3}\mathbf {p} \int \limits _{\mathrm {positions} }d^{3}\mathbf {r} \,f(\mathbf {r} ,\mathbf {p} ,t)\\[5pt]&=\iiint \limits _{\mathrm {momenta} }\quad \iiint \limits _{\mathrm {positions} }f(x,y,z,p_{x},p_{y},p_{z},t)\,dx\,dy\,dz\,dp_{x}\,dp_{y}\,dp_{z}\end{aligned}}}
which is a 6-fold integral. While f is associated with a number of particles, the phase space is that of a single particle (not of all particles, as is usual with deterministic many-body systems), since only one r and one p are in question. It is not part of the analysis to use r1, p1 for particle 1, r2, p2 for particle 2, etc. up to rN, pN for particle N.
It is assumed the particles in the system are identical (so each has an identical mass m). For a mixture of more than one chemical species, one distribution is needed for each, see below.
=== Principal statement ===
The general equation can then be written as
{\displaystyle {\frac {df}{dt}}=\left({\frac {\partial f}{\partial t}}\right)_{\text{force}}+\left({\frac {\partial f}{\partial t}}\right)_{\text{diff}}+\left({\frac {\partial f}{\partial t}}\right)_{\text{coll}},}
where the "force" term corresponds to the forces exerted on the particles by an external influence (not by the particles themselves), the "diff" term represents the diffusion of particles, and "coll" is the collision term – accounting for the forces acting between particles in collisions. Expressions for each term on the right side are provided below.
Note that some authors use the particle velocity v instead of momentum p; they are related in the definition of momentum by p = mv.
== The force and diffusion terms ==
Consider particles described by f, each experiencing an external force F not due to other particles (see the collision term for the latter treatment).
Suppose at time t some number of particles all have position r within element {\displaystyle d^{3}\mathbf {r} } and momentum p within {\displaystyle d^{3}\mathbf {p} }. If a force F instantly acts on each particle, then at time t + Δt their position will be {\displaystyle \mathbf {r} +\Delta \mathbf {r} =\mathbf {r} +{\frac {\mathbf {p} }{m}}\,\Delta t}
and momentum p + Δp = p + FΔt. Then, in the absence of collisions, f must satisfy
{\displaystyle f\left(\mathbf {r} +{\frac {\mathbf {p} }{m}}\,\Delta t,\mathbf {p} +\mathbf {F} \,\Delta t,t+\Delta t\right)\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} =f(\mathbf {r} ,\mathbf {p} ,t)\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} }
Note that we have used the fact that the phase space volume element {\displaystyle d^{3}\mathbf {r} \,d^{3}\mathbf {p} } is constant, which can be shown using Hamilton's equations (see the discussion under Liouville's theorem). However, since collisions do occur, the particle density in the phase-space volume {\displaystyle d^{3}\mathbf {r} \,d^{3}\mathbf {p} } changes, so
{\displaystyle dN_{\text{coll}}=\left({\frac {\partial f}{\partial t}}\right)_{\text{coll}}\Delta t\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} =f\left(\mathbf {r} +{\frac {\mathbf {p} }{m}}\,\Delta t,\mathbf {p} +\mathbf {F} \,\Delta t,t+\Delta t\right)d^{3}\mathbf {r} \,d^{3}\mathbf {p} -f(\mathbf {r} ,\mathbf {p} ,t)\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} =\Delta f\,d^{3}\mathbf {r} \,d^{3}\mathbf {p} \qquad (1)}
where Δf is the total change in f. Dividing (1) by {\displaystyle d^{3}\mathbf {r} \,d^{3}\mathbf {p} \,\Delta t} and taking the limits Δt → 0 and Δf → 0, we have
{\displaystyle {\frac {df}{dt}}=\left({\frac {\partial f}{\partial t}}\right)_{\text{coll}}\qquad (2)}
The total differential of f is:
{\displaystyle df={\frac {\partial f}{\partial t}}\,dt+\nabla f\cdot d\mathbf {r} +{\frac {\partial f}{\partial \mathbf {p} }}\cdot d\mathbf {p} \qquad (3)}
where ∇ is the gradient operator, · is the dot product,
{\displaystyle {\frac {\partial f}{\partial \mathbf {p} }}=\mathbf {\hat {e}} _{x}{\frac {\partial f}{\partial p_{x}}}+\mathbf {\hat {e}} _{y}{\frac {\partial f}{\partial p_{y}}}+\mathbf {\hat {e}} _{z}{\frac {\partial f}{\partial p_{z}}}=\nabla _{\mathbf {p} }f}
is a shorthand for the momentum analogue of ∇, and êx, êy, êz are Cartesian unit vectors.
=== Final statement ===
Dividing (3) by dt and substituting into (2) gives:
{\displaystyle {\frac {\partial f}{\partial t}}+{\frac {\mathbf {p} }{m}}\cdot \nabla f+\mathbf {F} \cdot {\frac {\partial f}{\partial \mathbf {p} }}=\left({\frac {\partial f}{\partial t}}\right)_{\mathrm {coll} }}
In this context, F(r, t) is the force field acting on the particles in the fluid, and m is the mass of the particles. The term on the right hand side is added to describe the effect of collisions between particles; if it is zero then the particles do not collide. The collisionless Boltzmann equation, where individual collisions are replaced with long-range aggregated interactions, e.g. Coulomb interactions, is often called the Vlasov equation.
This equation is more useful than the principal one above, yet still incomplete, since f cannot be solved unless the collision term in f is known. This term cannot be found as easily or generally as the others – it is a statistical term representing the particle collisions, and requires knowledge of the statistics the particles obey, like the Maxwell–Boltzmann, Fermi–Dirac or Bose–Einstein distributions.
== The collision term (Stosszahlansatz) and molecular chaos ==
=== Two-body collision term ===
A key insight applied by Boltzmann was to determine the collision term resulting solely from two-body collisions between particles that are assumed to be uncorrelated prior to the collision. This assumption was referred to by Boltzmann as the "Stosszahlansatz" and is also known as the "molecular chaos assumption". Under this assumption the collision term can be written as a momentum-space integral over the product of one-particle distribution functions:
{\displaystyle \left({\frac {\partial f}{\partial t}}\right)_{\text{coll}}=\iint gI(g,\Omega )[f(\mathbf {r} ,\mathbf {p'} _{A},t)f(\mathbf {r} ,\mathbf {p'} _{B},t)-f(\mathbf {r} ,\mathbf {p} _{A},t)f(\mathbf {r} ,\mathbf {p} _{B},t)]\,d\Omega \,d^{3}\mathbf {p} _{B},}
where pA and pB are the momenta of any two particles (labeled as A and B for convenience) before a collision, p′A and p′B are the momenta after the collision, {\displaystyle g=|\mathbf {p} _{B}-\mathbf {p} _{A}|=|\mathbf {p'} _{B}-\mathbf {p'} _{A}|} is the magnitude of the relative momenta (see relative velocity for more on this concept), and I(g, Ω) is the differential cross section of the collision, in which the relative momenta of the colliding particles turn through an angle θ into the element of the solid angle dΩ, due to the collision.
=== Simplifications to the collision term ===
Since much of the challenge in solving the Boltzmann equation originates with the complex collision term, attempts have been made to "model" and simplify the collision term. The best known model equation is due to Bhatnagar, Gross and Krook. The assumption in the BGK approximation is that the effect of molecular collisions is to force a non-equilibrium distribution function at a point in physical space back to a Maxwellian equilibrium distribution function and that the rate at which this occurs is proportional to the molecular collision frequency. The Boltzmann equation is therefore modified to the BGK form:
{\displaystyle {\frac {\partial f}{\partial t}}+{\frac {\mathbf {p} }{m}}\cdot \nabla f+\mathbf {F} \cdot {\frac {\partial f}{\partial \mathbf {p} }}=\nu (f_{0}-f),}
where {\displaystyle \nu } is the molecular collision frequency, and {\displaystyle f_{0}} is the local Maxwellian distribution function given the gas temperature at this point in space. This is also called the "relaxation time approximation".
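In a spatially homogeneous, force-free setting the BGK model reduces to df/dt = ν(f0 − f), so any initial distribution relaxes exponentially toward the Maxwellian with the same density, mean velocity, and temperature. A minimal sketch on a one-dimensional velocity grid (NumPy assumed; parameters illustrative):

```python
import numpy as np

# Homogeneous BGK relaxation: df/dt = nu * (f0 - f) on a 1D velocity grid.
v = np.linspace(-5.0, 5.0, 201)
f = np.where(np.abs(v - 1.0) < 0.5, 1.0, 0.0)   # non-equilibrium initial f
f /= np.trapz(f, v)                              # normalize to unit density

# Maxwellian sharing the density, mean velocity and temperature of f.
mean = np.trapz(v * f, v)
T = np.trapz((v - mean)**2 * f, v)
f0 = np.exp(-(v - mean)**2 / (2 * T)) / np.sqrt(2 * np.pi * T)

nu, dt = 1.0, 0.01
for _ in range(1000):
    f += dt * nu * (f0 - f)     # forward-Euler BGK step

print(np.max(np.abs(f - f0)))   # ~0: f has relaxed to the Maxwellian
```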
== General equation (for a mixture) ==
For a mixture of chemical species labelled by indices i = 1, 2, 3, ..., n the equation for species i is
{\displaystyle {\frac {\partial f_{i}}{\partial t}}+{\frac {\mathbf {p} _{i}}{m_{i}}}\cdot \nabla f_{i}+\mathbf {F} \cdot {\frac {\partial f_{i}}{\partial \mathbf {p} _{i}}}=\left({\frac {\partial f_{i}}{\partial t}}\right)_{\text{coll}},}
where fi = fi(r, pi, t), and the collision term is
{\displaystyle \left({\frac {\partial f_{i}}{\partial t}}\right)_{\mathrm {coll} }=\sum _{j=1}^{n}\iint g_{ij}I_{ij}(g_{ij},\Omega )[f'_{i}f'_{j}-f_{i}f_{j}]\,d\Omega \,d^{3}\mathbf {p'} ,}
where f′ = f′(p′i, t), the magnitude of the relative momenta is
{\displaystyle g_{ij}=|\mathbf {p} _{i}-\mathbf {p} _{j}|=|\mathbf {p} '_{i}-\mathbf {p} '_{j}|,}
and Iij is the differential cross-section, as before, between particles i and j. The integration is over the momentum components in the integrand (which are labelled i and j). The sum of integrals describes the entry and exit of particles of species i in or out of the phase-space element.
== Applications and extensions ==
=== Conservation equations ===
The Boltzmann equation can be used to derive the fluid dynamic conservation laws for mass, charge, momentum, and energy. For a fluid consisting of only one kind of particle, the number density n is given by
{\displaystyle n=\int f\,d^{3}\mathbf {p} .}
The average value of any function A is
{\displaystyle \langle A\rangle ={\frac {1}{n}}\int Af\,d^{3}\mathbf {p} .}
Since the conservation equations involve tensors, the Einstein summation convention will be used where repeated indices in a product indicate summation over those indices. Thus
{\displaystyle \mathbf {x} \mapsto x_{i}} and {\displaystyle \mathbf {p} \mapsto p_{i}=mv_{i}}, where {\displaystyle v_{i}} is the particle velocity vector. Define {\displaystyle A(p_{i})} as some function of momentum {\displaystyle p_{i}} only, whose total value is conserved in a collision. Assume also that the force {\displaystyle F_{i}} is a function of position only, and that f is zero for {\displaystyle p_{i}\to \pm \infty }. Multiplying the Boltzmann equation by A and integrating over momentum yields four terms, which, using integration by parts, can be expressed as
{\displaystyle \int A{\frac {\partial f}{\partial t}}\,d^{3}\mathbf {p} ={\frac {\partial }{\partial t}}(n\langle A\rangle ),}
{\displaystyle \int {\frac {p_{j}A}{m}}{\frac {\partial f}{\partial x_{j}}}\,d^{3}\mathbf {p} ={\frac {1}{m}}{\frac {\partial }{\partial x_{j}}}(n\langle Ap_{j}\rangle ),}
{\displaystyle \int AF_{j}{\frac {\partial f}{\partial p_{j}}}\,d^{3}\mathbf {p} =-nF_{j}\left\langle {\frac {\partial A}{\partial p_{j}}}\right\rangle ,}
{\displaystyle \int A\left({\frac {\partial f}{\partial t}}\right)_{\text{coll}}\,d^{3}\mathbf {p} =\left({\frac {\partial }{\partial t}}\right)_{\text{coll}}(n\langle A\rangle )=0,}
where the last term is zero, since A is conserved in a collision. The values of A correspond to moments of velocity
vi (and momentum pi, as they are linearly dependent).
==== Zeroth moment ====
Letting A = m(vi)^0 = m, the mass of the particle, the integrated Boltzmann equation becomes the conservation of mass equation:
{\displaystyle {\frac {\partial }{\partial t}}\rho +{\frac {\partial }{\partial x_{j}}}(\rho V_{j})=0,}
where ρ = mn is the mass density, and Vi = ⟨vi⟩ is the average fluid velocity.
==== First moment ====
Letting A = m(vi)^1 = pi, the momentum of the particle, the integrated Boltzmann equation becomes the conservation of momentum equation:
{\displaystyle {\frac {\partial }{\partial t}}(\rho V_{i})+{\frac {\partial }{\partial x_{j}}}(\rho V_{i}V_{j}+P_{ij})-nF_{i}=0,}
where {\displaystyle P_{ij}=\rho \langle (v_{i}-V_{i})(v_{j}-V_{j})\rangle } is the pressure tensor (the viscous stress tensor plus the hydrostatic pressure).
==== Second moment ====
Letting A = m(vi)^2/2 = pi pi/(2m), the kinetic energy of the particle, the integrated Boltzmann equation becomes the conservation of energy equation:
{\displaystyle {\frac {\partial }{\partial t}}\left(u+{\tfrac {1}{2}}\rho V_{i}V_{i}\right)+{\frac {\partial }{\partial x_{j}}}\left(uV_{j}+{\tfrac {1}{2}}\rho V_{i}V_{i}V_{j}+J_{qj}+P_{ij}V_{i}\right)-nF_{i}V_{i}=0,}
where {\textstyle u={\tfrac {1}{2}}\rho \langle (v_{i}-V_{i})(v_{i}-V_{i})\rangle } is the kinetic thermal energy density, and {\textstyle J_{qi}={\tfrac {1}{2}}\rho \langle (v_{i}-V_{i})(v_{k}-V_{k})(v_{k}-V_{k})\rangle } is the heat flux vector.
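As a concrete illustration of these moment definitions, the sketch below (assumptions: units with kB = 1, a drifting Maxwellian sampled by Monte Carlo, and purely illustrative parameter values) estimates the average fluid velocity, pressure tensor, and thermal energy density from sampled particle velocities:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_dens, T = 1.0, 1.0, 2.0              # particle mass, density, temperature
V_bulk = np.array([1.0, 0.0, 0.0])        # imposed bulk drift

# Sample particle velocities from a drifting Maxwellian (units with kB = 1).
vel = rng.normal(V_bulk, np.sqrt(T / m), size=(200_000, 3))

V = vel.mean(axis=0)                      # average fluid velocity <v_i>
c = vel - V                               # peculiar velocities v_i - V_i
rho = m * n_dens
P = rho * np.einsum('ni,nj->ij', c, c) / len(c)  # P_ij = rho <c_i c_j>
u = 0.5 * rho * (c ** 2).sum(axis=1).mean()      # u = (1/2) rho <c_i c_i>

print("V ≈", np.round(V, 3))                     # close to V_bulk
print("P diagonal ≈", np.round(np.diag(P), 3))   # each ≈ n T = 2.0
print("u ≈", round(u, 3))                        # ≈ (3/2) n T = 3.0
```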
=== Hamiltonian mechanics ===
In Hamiltonian mechanics, the Boltzmann equation is often written more generally as
{\displaystyle {\hat {\mathbf {L} }}[f]=\mathbf {C} [f],}
where L is the Liouville operator (note that the definition used here differs from that in the linked article) describing the evolution of a phase space volume, and C is the collision operator. The non-relativistic form of L is
{\displaystyle {\hat {\mathbf {L} }}_{\mathrm {NR} }={\frac {\partial }{\partial t}}+{\frac {\mathbf {p} }{m}}\cdot \nabla +\mathbf {F} \cdot {\frac {\partial }{\partial \mathbf {p} }}\,.}
=== Quantum theory and violation of particle number conservation ===
It is possible to write down relativistic quantum Boltzmann equations for relativistic quantum systems in which the number of particles is not conserved in collisions. This has several applications in physical cosmology, including the formation of the light elements in Big Bang nucleosynthesis, the production of dark matter and baryogenesis. It is not a priori clear that the state of a quantum system can be characterized by a classical phase space density f. However, for a wide class of applications a well-defined generalization of f exists which is the solution of an effective Boltzmann equation that can be derived from first principles of quantum field theory.
=== General relativity and astronomy ===
The Boltzmann equation is of use in galactic dynamics. A galaxy, under certain assumptions, may be approximated as a continuous fluid; its mass distribution is then represented by f; in galaxies, physical collisions between the stars are very rare, and the effect of gravitational collisions can be neglected for times far longer than the age of the universe.
Its generalization in general relativity is
{\displaystyle {\hat {\mathbf {L} }}_{\mathrm {GR} }[f]=p^{\alpha }{\frac {\partial f}{\partial x^{\alpha }}}-\Gamma ^{\alpha }{}_{\beta \gamma }p^{\beta }p^{\gamma }{\frac {\partial f}{\partial p^{\alpha }}}=C[f],}
where Γαβγ is the Christoffel symbol of the second kind (this assumes there are no external forces, so that particles move along geodesics in the absence of collisions), with the important subtlety that the density is a function in mixed contravariant-covariant (x^i, p_i) phase space as opposed to fully contravariant (x^i, p^i) phase space.
In physical cosmology the fully covariant approach has been used to study the cosmic microwave background radiation. More generically, the study of processes in the early universe often attempts to take into account the effects of quantum mechanics and general relativity. In the very dense medium formed by the primordial plasma after the Big Bang, particles are continuously created and annihilated. In such an environment quantum coherence and the spatial extension of the wavefunction can affect the dynamics, making it questionable whether the classical phase space distribution f that appears in the Boltzmann equation is suitable to describe the system. In many cases it is, however, possible to derive an effective Boltzmann equation for a generalized distribution function from first principles of quantum field theory. This includes the formation of the light elements in Big Bang nucleosynthesis, the production of dark matter and baryogenesis.
== Solving the equation ==
Exact solutions to the Boltzmann equation have been proven to exist in some cases; this analytical approach provides insight, but is not generally usable in practical problems.
Instead, numerical methods (including finite elements and lattice Boltzmann methods) are generally used to find approximate solutions to the various forms of the Boltzmann equation. Example applications range from hypersonic aerodynamics in rarefied gas flows to plasma flows. An application of the Boltzmann equation in electrodynamics is the calculation of the electrical conductivity; the result is in leading order identical with the semiclassical result.
Close to local equilibrium, solution of the Boltzmann equation can be represented by an asymptotic expansion in powers of Knudsen number (the Chapman–Enskog expansion). The first two terms of this expansion give the Euler equations and the Navier–Stokes equations. The higher terms have singularities. The problem of developing mathematically the limiting processes, which lead from the atomistic view (represented by Boltzmann's equation) to the laws of motion of continua, is an important part of Hilbert's sixth problem.
== Limitations and further uses of the Boltzmann equation ==
The Boltzmann equation is valid only under several assumptions. For instance, the particles are assumed to be pointlike, i.e. without a finite size. There exists a generalization of the Boltzmann equation called the Enskog equation, in which the collision term is modified so that particles have a finite size, for example modelled as spheres of fixed radius.
No further degrees of freedom besides translational motion are assumed for the particles. If there are internal degrees of freedom, the Boltzmann equation has to be generalized and might possess inelastic collisions.
Many real fluids, such as liquids or dense gases, exhibit more complex forms of collision besides the features mentioned above: there are not only binary but also ternary and higher-order collisions. These must be derived by using the BBGKY hierarchy.
Boltzmann-like equations are also used for the movement of cells. Since cells are composite particles that carry internal degrees of freedom, the corresponding generalized Boltzmann equations must have inelastic collision integrals. Such equations can describe invasions of cancer cells in tissue, morphogenesis, and chemotaxis-related effects.
== See also ==
== Notes ==
== References ==
Harris, Stewart (1971). An Introduction to the Theory of the Boltzmann Equation. Dover Books. p. 221. ISBN 978-0-486-43831-3. Very inexpensive introduction to the modern framework (starting from a formal deduction from Liouville and the Bogoliubov–Born–Green–Kirkwood–Yvon (BBGKY) hierarchy, in which the Boltzmann equation is placed). Most statistical mechanics textbooks like Huang still treat the topic using Boltzmann's original arguments. To derive the equation, these books use a heuristic explanation that does not bring out the range of validity and the characteristic assumptions that distinguish Boltzmann's from other transport equations like the Fokker–Planck or Landau equations.
Arkeryd, Leif (1972). "On the Boltzmann equation part I: Existence". Arch. Rational Mech. Anal. 45 (1): 1–16. Bibcode:1972ArRMA..45....1A. doi:10.1007/BF00253392.
Arkeryd, Leif (1972). "On the Boltzmann equation part II: The full initial value problem". Arch. Rational Mech. Anal. 45 (1): 17–34. Bibcode:1972ArRMA..45...17A. doi:10.1007/BF00253393.
== External links ==
The Boltzmann Transport Equation by Franz Vesely
Boltzmann gaseous behaviors solved
In physics, the acoustic wave equation is a second-order partial differential equation that governs the propagation of acoustic waves through a material medium, or equivalently a standing wavefield. The equation describes the evolution of acoustic pressure p or particle velocity u as a function of position x and time t. A simplified (scalar) form of the equation describes acoustic waves in only one spatial dimension, while a more general form describes waves in three dimensions.
For lossy media, more intricate models need to be applied in order to take into account frequency-dependent attenuation and phase speed. Such models include acoustic wave equations that incorporate fractional derivative terms, see also the acoustic attenuation article or the survey paper.
== Definition in one dimension ==
The wave equation describing a standing wave field in one dimension (position x) is
{\displaystyle p_{xx}-{\frac {1}{c^{2}}}p_{tt}=0,}
where p is the acoustic pressure (the local deviation from the ambient pressure) and c the speed of sound, using subscript notation for the partial derivatives.
=== Derivation ===
Start with the ideal gas law
{\displaystyle P=\rho R_{\text{specific}}T,}
where T is the absolute temperature of the gas and Rspecific the specific gas constant. Then, assuming the process is adiabatic, the pressure P(ρ) can be considered a function of the density ρ.
The conservation of mass and conservation of momentum can be written as a closed system of two equations
{\displaystyle {\begin{aligned}\rho _{t}+(\rho u)_{x}&=0,\\(\rho u)_{t}+(\rho u^{2}+P(\rho ))_{x}&=0.\end{aligned}}}
This coupled system of two nonlinear conservation laws can be written in vector form as:
{\displaystyle q_{t}+f(q)_{x}=0,}
with
{\displaystyle q={\begin{bmatrix}\rho \\\rho u\end{bmatrix}}={\begin{bmatrix}q_{(1)}\\q_{(2)}\end{bmatrix}},\quad f(q)={\begin{bmatrix}\rho u\\\rho u^{2}+P(\rho )\end{bmatrix}}={\begin{bmatrix}q_{(2)}\\q_{(2)}^{2}/q_{(1)}+P(q_{(1)})\end{bmatrix}}.}
To linearize this equation, let
{\displaystyle q(x,t)=q_{0}+{\tilde {q}}(x,t),}
where {\displaystyle q_{0}=(\rho _{0},\rho _{0}u_{0})} is the (constant) background state and {\displaystyle {\tilde {q}}} is a sufficiently small perturbation, i.e., any powers or products of {\displaystyle {\tilde {q}}} can be discarded. Hence, the Taylor expansion of f(q) gives:
{\displaystyle f(q_{0}+{\tilde {q}})\approx f(q_{0})+f'(q_{0}){\tilde {q}}}
where
{\displaystyle f'(q)={\begin{bmatrix}\partial f_{(1)}/\partial q_{(1)}&\partial f_{(1)}/\partial q_{(2)}\\\partial f_{(2)}/\partial q_{(1)}&\partial f_{(2)}/\partial q_{(2)}\end{bmatrix}}={\begin{bmatrix}0&1\\-u^{2}+P'(\rho )&2u\end{bmatrix}}.}
This results in the linearized equation
{\displaystyle {\tilde {q}}_{t}+f'(q_{0}){\tilde {q}}_{x}=0\quad \Leftrightarrow \quad {\begin{aligned}{\tilde {\rho }}_{t}+({\widetilde {\rho u}})_{x}&=0\\({\widetilde {\rho u}})_{t}+(-u_{0}^{2}+P'(\rho _{0})){\tilde {\rho }}_{x}+2u_{0}({\widetilde {\rho u}})_{x}&=0\end{aligned}}}
Likewise, small perturbations of the components of q can be rewritten as:
{\displaystyle \rho u=(\rho _{0}+{\tilde {\rho }})(u_{0}+{\tilde {u}})=\rho _{0}u_{0}+{\tilde {\rho }}u_{0}+\rho _{0}{\tilde {u}}+{\tilde {\rho }}{\tilde {u}}}
such that
{\displaystyle {\widetilde {\rho u}}\approx {\tilde {\rho }}u_{0}+\rho _{0}{\tilde {u}},}
and pressure perturbations relate to density perturbations as:
{\displaystyle p=p_{0}+{\tilde {p}}=P(\rho _{0}+{\tilde {\rho }})=P(\rho _{0})+P'(\rho _{0}){\tilde {\rho }}+\dots }
such that:
{\displaystyle p_{0}=P(\rho _{0}),\quad {\tilde {p}}\approx P'(\rho _{0}){\tilde {\rho }},}
where {\displaystyle P'(\rho _{0})} is a constant, resulting in the alternative form of the linear acoustics equations:
{\displaystyle {\begin{aligned}{\tilde {p}}_{t}+u_{0}{\tilde {p}}_{x}+K_{0}{\tilde {u}}_{x}&=0,\\\rho _{0}{\tilde {u}}_{t}+{\tilde {p}}_{x}+\rho _{0}u_{0}{\tilde {u}}_{x}&=0.\end{aligned}}}
where {\displaystyle K_{0}=\rho _{0}P'(\rho _{0})} is the bulk modulus of compressibility. After dropping the tilde for convenience, the linear first-order system can be written as:
{\displaystyle {\begin{bmatrix}p\\u\end{bmatrix}}_{t}+{\begin{bmatrix}u_{0}&K_{0}\\1/\rho _{0}&u_{0}\end{bmatrix}}{\begin{bmatrix}p\\u\end{bmatrix}}_{x}=0.}
While, in general, a non-zero background velocity is possible (e.g. when studying sound propagation in a constant-strength wind), it will be assumed here that u0 = 0. Then the linear system reduces to the second-order wave equation:
{\displaystyle p_{tt}=-K_{0}u_{xt}=-K_{0}u_{tx}=K_{0}\left({\frac {1}{\rho _{0}}}p_{x}\right)_{x}=c_{0}^{2}p_{xx},}
with {\displaystyle c_{0}={\sqrt {K_{0}/\rho _{0}}}} the speed of sound.
Hence, the acoustic equation can be derived from a system of first-order advection equations that follow directly from physics, i.e., the first integrals:
{\displaystyle q_{t}+Aq_{x}=0,}
with
{\displaystyle q={\begin{bmatrix}p\\u\end{bmatrix}},\quad A={\begin{bmatrix}0&K_{0}\\1/\rho _{0}&0\end{bmatrix}}.}
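Since A has eigenvalues ±c0, this first-order system can be solved exactly by advecting its characteristic (Riemann-invariant) combinations p ± ρ0c0u. A minimal Python sketch follows; the background values and the Gaussian pulse are assumptions made purely for illustration:

```python
import numpy as np

rho0, K0 = 1.2, 1.4e5           # assumed background density and bulk modulus
c0 = np.sqrt(K0 / rho0)         # speed of sound
Z = rho0 * c0                   # acoustic impedance

x = np.linspace(-1.0, 1.0, 1001)
p_init = np.exp(-100 * x ** 2)  # initial pressure pulse
u_init = np.zeros_like(x)       # fluid initially at rest

def solve(t):
    # The Riemann invariants w± = p ± Z u are the characteristic variables
    # of the matrix A; they advect unchanged at speeds ±c0.
    wp = np.interp(x - c0 * t, x, p_init + Z * u_init)  # right-going
    wm = np.interp(x + c0 * t, x, p_init - Z * u_init)  # left-going
    return 0.5 * (wp + wm), 0.5 * (wp - wm) / Z         # recover p, u

p, u = solve(1.0e-3)
print("pulse splits into two half-amplitude waves; max p =", p.max())
```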
Conversely, given the second-order equation
{\displaystyle p_{tt}=c_{0}^{2}p_{xx}}
a first-order system can be derived:
{\displaystyle q_{t}+{\hat {A}}q_{x}=0,}
with
{\displaystyle q={\begin{bmatrix}p_{t}\\-p_{x}\end{bmatrix}},\quad {\hat {A}}={\begin{bmatrix}0&c_{0}^{2}\\1&0\end{bmatrix}},}
where the matrices A and {\displaystyle {\hat {A}}} are similar.
=== Solution ===
Provided that the speed c is a constant, not dependent on frequency (the dispersionless case), then the most general solution is
{\displaystyle p=f(ct-x)+g(ct+x)}
where f and g are any two twice-differentiable functions. This may be pictured as the superposition of two waveforms of arbitrary profile, one (f) traveling up the x-axis and the other (g) down the x-axis at the speed c. The particular case of a sinusoidal wave traveling in one direction is obtained by choosing either f or g to be a sinusoid, and the other to be zero, giving
{\displaystyle p=p_{0}\sin(\omega t\mp kx),}
where ω is the angular frequency of the wave and k is its wave number.
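A quick numerical check of this general solution (with arbitrarily assumed profiles f and g and a nominal sound speed) is to verify that p = f(ct − x) + g(ct + x) annihilates the one-dimensional wave operator:

```python
import numpy as np

c = 343.0                              # nominal speed of sound
f = lambda s: np.exp(-s ** 2)          # arbitrary twice-differentiable profile
g = lambda s: np.tanh(s)               # another arbitrary profile
p = lambda x, t: f(c * t - x) + g(c * t + x)

# Check p_xx - p_tt / c^2 ≈ 0 with centred second differences.
x0, t0, h = 0.3, 0.01, 1e-4
p_xx = (p(x0 + h, t0) - 2 * p(x0, t0) + p(x0 - h, t0)) / h ** 2
p_tt = (p(x0, t0 + h / c) - 2 * p(x0, t0) + p(x0, t0 - h / c)) / (h / c) ** 2
print("residual:", p_xx - p_tt / c ** 2)   # ≈ 0 up to discretization error
```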
== In three dimensions ==
=== Equation ===
Feynman provides a derivation of the wave equation for sound in three dimensions as
{\displaystyle \nabla ^{2}p-{1 \over c^{2}}{\partial ^{2}p \over \partial t^{2}}=0,}
where {\displaystyle \nabla ^{2}} is the Laplace operator, p is the acoustic pressure (the local deviation from the ambient pressure), and c is the speed of sound.
A similar looking wave equation but for the vector field particle velocity is given by
{\displaystyle \nabla ^{2}\mathbf {u} -{1 \over c^{2}}{\partial ^{2}\mathbf {u} \over \partial t^{2}}=0.}
In some situations, it is more convenient to solve the wave equation for an abstract scalar field velocity potential which has the form
{\displaystyle \nabla ^{2}\Phi -{1 \over c^{2}}{\partial ^{2}\Phi \over \partial t^{2}}=0}
and then derive the physical quantities particle velocity and acoustic pressure by the equations (or definition, in the case of particle velocity):
{\displaystyle \mathbf {u} =\nabla \Phi ,\quad p=-\rho {\frac {\partial \Phi }{\partial t}}.}
=== Solution ===
The following solutions are obtained by separation of variables in different coordinate systems. They are phasor solutions, that is, they have an implicit time-dependence factor of {\displaystyle e^{i\omega t}}, where {\displaystyle \omega =2\pi f} is the angular frequency. The explicit time dependence is given by
{\displaystyle p(r,t,k)=\operatorname {Real} \left[p(r,k)e^{i\omega t}\right]}
Here {\displaystyle k=\omega /c} is the wave number.
==== Cartesian coordinates ====
{\displaystyle p(r,k)=Ae^{\pm ikr}}.
==== Cylindrical coordinates ====
{\displaystyle p(r,k)=AH_{0}^{(1)}(kr)+BH_{0}^{(2)}(kr),}
where the asymptotic approximations to the Hankel functions, when {\displaystyle kr\rightarrow \infty }, are
{\displaystyle H_{0}^{(1)}(kr)\simeq {\sqrt {\frac {2}{\pi kr}}}e^{i(kr-\pi /4)}}
{\displaystyle H_{0}^{(2)}(kr)\simeq {\sqrt {\frac {2}{\pi kr}}}e^{-i(kr-\pi /4)}.}
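These asymptotic forms are easy to check against SciPy's Hankel functions. The sketch below (the value kr = 50 is an arbitrary choice) compares the exact functions with their large-argument approximations:

```python
import numpy as np
from scipy.special import hankel1, hankel2

kr = 50.0                             # an arbitrary, moderately large argument
asym = np.sqrt(2 / (np.pi * kr)) * np.exp(1j * (kr - np.pi / 4))

print(hankel1(0, kr), asym)           # outgoing cylindrical wave
print(hankel2(0, kr), np.conj(asym))  # H0^(2) is its complex conjugate here
print("relative error:", abs(hankel1(0, kr) - asym) / abs(asym))
```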
==== Spherical coordinates ====
{\displaystyle p(r,k)={\frac {A}{r}}e^{\pm ikr}}.
Depending on the chosen Fourier convention, one of these represents an outward travelling wave and the other a nonphysical inward travelling wave. The inward travelling solution is nonphysical only because of the singularity that occurs at r = 0; inward travelling waves do exist.
== See also ==
Acoustics
Acoustic attenuation
Acoustic theory
Differential equations
Fluid dynamics
Ideal gas law
Madelung equations
One-way wave equation
Pressure
Thermodynamics
Wave equation
== Notes ==
== References ==
LeVeque, Randall J. (2002). Finite Volume Methods for Hyperbolic Problems. Cambridge University Press. doi:10.1017/cbo9780511791253. ISBN 978-0-521-81087-6.
In mathematics, a first-order partial differential equation is a partial differential equation that involves the first derivatives of an unknown function u of n ≥ 2 variables. The equation takes the form
{\displaystyle F(x_{1},\ldots ,x_{n},u,u_{x_{1}},\ldots ,u_{x_{n}})=0,}
using subscript notation to denote the partial derivatives of u.
Such equations arise in the construction of characteristic surfaces for hyperbolic partial differential equations, in the calculus of variations, in some geometrical problems, and in simple models for gas dynamics whose solution involves the method of characteristics, e.g., the advection equation. If a family of solutions of a single first-order partial differential equation can be found, then additional solutions may be obtained by forming envelopes of solutions in that family. In a related procedure, general solutions may be obtained by integrating families of ordinary differential equations.
== General solution and complete integral ==
The general solution of a first-order partial differential equation is a solution that contains an arbitrary function, while a solution containing as many arbitrary constants as the number of independent variables is called a complete integral. The following n-parameter family of solutions
{\displaystyle \phi (x_{1},x_{2},\dots ,x_{n},u,a_{1},a_{2},\dots ,a_{n})}
is a complete integral if {\displaystyle \det |\phi _{x_{i}a_{j}}|\neq 0}. The discussion below on the types of integrals is based on the textbook A Treatise on Differential Equations (Chapter IX, 6th edition, 1928) by Andrew Forsyth.
=== Complete integral ===
The solutions are described in relatively simple terms in two or three dimensions, from which the key concepts extend directly to higher dimensions. A general first-order partial differential equation in three dimensions has the form
{\displaystyle F(x,y,z,u,p,q,r)=0,}
where {\displaystyle p=u_{x},\,q=u_{y},\,r=u_{z}.}
Let {\displaystyle \phi (x,y,z,u,a,b,c)=0} be the complete integral that contains three arbitrary constants (a, b, c). From this we can obtain three relations by differentiation:
{\displaystyle \phi _{x}+p\phi _{u}=0}
{\displaystyle \phi _{y}+q\phi _{u}=0}
{\displaystyle \phi _{z}+r\phi _{u}=0}
Along with the complete integral {\displaystyle \phi =0}, the above three relations can be used to eliminate the three constants and obtain an equation (the original partial differential equation) relating (x, y, z, u, p, q, r). Note that the elimination of constants leading to the partial differential equation need not be unique, i.e., two different equations can result in the same complete integral; for example, elimination of constants from the relation
{\displaystyle u={\sqrt {(x-a)^{2}+(y-b)^{2}}}+z-c}
leads to {\displaystyle p^{2}+q^{2}=1} and {\displaystyle r=1}.
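The elimination in this example can be verified symbolically. The following SymPy sketch differentiates the stated complete integral and confirms that p² + q² = 1 and r = 1 independently of (a, b, c):

```python
import sympy as sp

x, y, z, a, b, c = sp.symbols('x y z a b c')
u = sp.sqrt((x - a) ** 2 + (y - b) ** 2) + z - c  # the complete integral above

p, q, r = sp.diff(u, x), sp.diff(u, y), sp.diff(u, z)
print(sp.simplify(p ** 2 + q ** 2))  # -> 1, independent of a, b, c
print(r)                             # -> 1
```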
=== General integral ===
Once a complete integral is found, a general solution can be constructed from it. The general integral is obtained by making the constants functions of the coordinates, i.e.,
{\displaystyle a=a(x,y,z),\,b=b(x,y,z),\,c=c(x,y,z)}. These functions are chosen such that the forms of (p, q, r) are unaltered, so that the elimination process used for the complete integral can be applied. Differentiation of the complete integral now provides
{\displaystyle \phi _{x}+p\phi _{u}=-(a_{x}\phi _{a}+b_{x}\phi _{b}+c_{x}\phi _{c})}
{\displaystyle \phi _{y}+q\phi _{u}=-(a_{y}\phi _{a}+b_{y}\phi _{b}+c_{y}\phi _{c})}
{\displaystyle \phi _{z}+r\phi _{u}=-(a_{z}\phi _{a}+b_{z}\phi _{b}+c_{z}\phi _{c})}
in which we require the right-hand sides of all three equations to vanish identically, so that elimination of (a, b, c) from {\displaystyle \phi } results in the partial differential equation. This requirement can be written more compactly by writing it as
{\displaystyle J\phi _{a}=0,\quad J\phi _{b}=0,\quad J\phi _{c}=0}
where
{\displaystyle J={\frac {\partial (a,b,c)}{\partial (x,y,z)}}={\begin{vmatrix}a_{x}&a_{y}&a_{z}\\b_{x}&b_{y}&b_{z}\\c_{x}&c_{y}&c_{z}\end{vmatrix}}}
is the Jacobian determinant. The condition J = 0 leads to the general solution. Whenever J = 0, there exists a functional relation between (a, b, c), because whenever a determinant is zero, the columns (or rows) are not linearly independent. Take this functional relation to be
{\displaystyle c=\psi (a,b).}
Once (a, b) is found, the problem is solved. From the above relation, we have {\displaystyle dc=\psi _{a}da+\psi _{b}db}. By summing the original equations
{\displaystyle (a_{x}\phi _{a}+b_{x}\phi _{b}+c_{x}\phi _{c})=0}, {\displaystyle (a_{y}\phi _{a}+b_{y}\phi _{b}+c_{y}\phi _{c})=0} and {\displaystyle (a_{z}\phi _{a}+b_{z}\phi _{b}+c_{z}\phi _{c})=0},
ϕ
a
d
a
+
ϕ
b
d
b
+
ϕ
c
d
c
=
0
{\displaystyle \phi _{a}da+\phi _{b}db+\phi _{c}dc=0}
. Now eliminating
d
c
{\displaystyle dc}
from the two equations derived, we obtain
{\displaystyle (\phi _{a}+\phi _{c}\psi _{a})da+(\phi _{b}+\phi _{c}\psi _{b})db=0}
Since a and b are independent, we require
{\displaystyle (\phi _{a}+\phi _{c}\psi _{a})=0}
{\displaystyle (\phi _{b}+\phi _{c}\psi _{b})=0.}
The above two equations can be used to solve for a and b. Substituting (a, b, c) in {\displaystyle \phi =0}, we obtain the general integral. Thus a general integral describes a relation between (x, y, z, u), two known independent functions (a, b), and an arbitrary function {\displaystyle \psi (a,b)}. Note that we have assumed {\displaystyle c=\psi (a,b)} to make the determinant J zero, but this is not always needed. The relations {\displaystyle c=\psi (a)} or {\displaystyle c=\psi (b)} suffice to make the determinant zero.
=== Singular integral ===
The singular integral is obtained when {\displaystyle J\neq 0}. In this case, elimination of (a, b, c) from {\displaystyle \phi =0} works if
{\displaystyle \phi _{a}=0,\quad \phi _{b}=0,\quad \phi _{c}=0.}
The three equations can be used to solve for the three unknowns (a, b, c). The solution obtained by eliminating (a, b, c) in this way leads to what are called singular integrals.
=== Special integral ===
Usually, most integrals fall into the three categories defined above, but it may happen that a solution does not fit into any of them. Such solutions are called special integrals. A relation {\displaystyle \chi (x,y,z,u)=0} that satisfies the partial differential equation is said to be a special integral if we are unable to determine (a, b, c) from the following equations:
{\displaystyle \phi _{x}\chi _{u}-\chi _{x}\phi _{u}=0}
{\displaystyle \phi _{y}\chi _{u}-\chi _{y}\phi _{u}=0}
{\displaystyle \phi _{z}\chi _{u}-\chi _{z}\phi _{u}=0.}
If we are able to determine (a, b, c) from the above set of equations, then {\displaystyle \chi =0} will turn out to be one of the three integrals described before.
== Two dimensional case ==
The complete integral in two-dimensional space can be written as {\displaystyle \phi (x,y,u,a,b)=0}. The general integral is obtained by eliminating a from the following equations:
{\displaystyle \phi (x,y,z,a,\psi (a))=0,\quad \phi _{a}+\psi _{a}\phi _{b}=0.}
The singular integral, if it exists, can be obtained by eliminating (a, b) from the following equations:
{\displaystyle \phi (x,y,z,a,b)=0,\quad \phi _{a}=0,\quad \phi _{b}=0.}
If a complete integral is not available, solutions may still be obtained by solving a system of ordinary differential equations. To obtain this system, first note that the PDE determines a cone (analogous to the light cone) at each point: if the PDE is linear in the derivatives of u (it is quasi-linear), then the cone degenerates into a line. In the general case, the pairs (p, q) that satisfy the equation determine a family of planes at a given point:
{\displaystyle u-u_{0}=p(x-x_{0})+q(y-y_{0}),}
where
{\displaystyle F(x_{0},y_{0},u_{0},p,q)=0.}
The envelope of these planes is a cone, or a line if the PDE is quasi-linear. The condition for an envelope is
{\displaystyle F_{p}\,dp+F_{q}\,dq=0,}
where F is evaluated at {\displaystyle (x_{0},y_{0},u_{0},p,q)}, and dp and dq are increments of p and q that satisfy F = 0. Hence the generator of the cone is a line with direction
{\displaystyle dx:dy:du=F_{p}:F_{q}:(pF_{p}+qF_{q}).}
This direction corresponds to the light rays for the wave equation.
To integrate differential equations along these directions, we require increments for p and q along the ray. This can be obtained by differentiating the PDE:
{\displaystyle F_{x}+F_{u}p+F_{p}p_{x}+F_{q}p_{y}=0,}
{\displaystyle F_{y}+F_{u}q+F_{p}q_{x}+F_{q}q_{y}=0.}
Therefore the ray direction in (x, y, u, p, q) space is
{\displaystyle dx:dy:du:dp:dq=F_{p}:F_{q}:(pF_{p}+qF_{q}):(-F_{x}-F_{u}p):(-F_{y}-F_{u}q).}
The integration of these equations leads to a ray conoid at each point {\displaystyle (x_{0},y_{0},u_{0})}. General solutions of the PDE can then be obtained from envelopes of such conoids.
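These characteristic (ray) equations can be integrated numerically for a concrete F. The sketch below is only an example: the eikonal-type choice F = p² + q² − n(x)², with an assumed graded index n(x) = 1 + 0.2x, is not taken from the text. It traces one ray and checks that F = 0 is preserved along it:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Characteristic strip equations for F(x, y, u, p, q) = p^2 + q^2 - n(x)^2,
# an eikonal-type equation; n(x) = 1 + 0.2 x is an assumed graded index.
n = lambda xx: 1.0 + 0.2 * xx
nx = lambda xx: 0.2

def rhs(s, w):
    x, y, u, p, q = w
    Fp, Fq = 2 * p, 2 * q
    Fx, Fy, Fu = -2 * n(x) * nx(x), 0.0, 0.0
    return [Fp, Fq, p * Fp + q * Fq, -Fx - Fu * p, -Fy - Fu * q]

# Start with data satisfying F = 0: p^2 + q^2 = n(0)^2.
p0 = 0.6
q0 = np.sqrt(n(0.0) ** 2 - p0 ** 2)
sol = solve_ivp(rhs, [0.0, 2.0], [0.0, 0.0, 0.0, p0, q0], rtol=1e-8)
x, y, u, p, q = sol.y
print("F stays ≈ 0 along the ray:", np.abs(p**2 + q**2 - n(x)**2).max())
```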
== Definitions of linear dependence for differential systems ==
This part can be referred to §1.2.3 of Courant's book. We assume that these h equations are independent, i.e., that none of them can be deduced from the others by differentiation and elimination.
An equivalent description is given. Two definitions of linear dependence are given for first-order linear partial differential equations.
{\displaystyle (*)\left\{{\begin{array}{*{20}{c}}\sum \limits _{ij}^{}{a_{ij}^{(1)}{\dfrac {\partial {y_{j}}}{\partial {x_{i}}}}}+{f_{1}}=0\\\vdots \\\sum \limits _{ij}^{}{a_{ij}^{(n)}{\dfrac {\partial {y_{j}}}{\partial {x_{i}}}}}+{f_{n}}=0\end{array}}\right.}
where {\displaystyle x_{i}} are the independent variables, {\displaystyle y_{j}} are the dependent unknowns, {\displaystyle a_{ij}^{(k)}} are the linear coefficients, and {\displaystyle f_{k}} are the non-homogeneous terms. Let {\textstyle Z_{k}\equiv \sum _{ij}a_{ij}^{(k)}{\frac {\partial y_{j}}{\partial x_{i}}}+f_{k}}.
Definition I: Given a number field P, if there are coefficients {\displaystyle c_{k}\in P}, not all zero, such that {\textstyle \sum _{k}c_{k}Z_{k}=0}, then the Eqs. (*) are linearly dependent.
Definition II (differential linear dependence): Given a number field P, if there are coefficients {\displaystyle c_{k},d_{kl}\in P}, not all zero, such that {\textstyle \sum _{k}c_{k}Z_{k}+\sum _{kl}d_{kl}{\frac {\partial }{\partial x_{l}}}Z_{k}=0}, then the Eqs. (*) are said to be differentially linearly dependent. If {\displaystyle d_{kl}\equiv 0}, this definition degenerates into Definition I.
The div-curl systems, Maxwell's equations, Einstein's equations (with four harmonic coordinates) and Yang–Mills equations (with gauge conditions) are well-determined under Definition II, whereas they are over-determined under Definition I.
== Characteristic surfaces for the wave equation ==
Characteristic surfaces for the wave equation are level surfaces for solutions of the equation
{\displaystyle u_{t}^{2}=c^{2}\left(u_{x}^{2}+u_{y}^{2}+u_{z}^{2}\right).}
There is little loss of generality if we set {\displaystyle u_{t}=1}: in that case u satisfies
{\displaystyle u_{x}^{2}+u_{y}^{2}+u_{z}^{2}={\frac {1}{c^{2}}}.}
In vector notation, let
{\displaystyle {\vec {x}}=(x,y,z)\quad {\hbox{and}}\quad {\vec {p}}=(u_{x},u_{y},u_{z}).}
A family of solutions with planes as level surfaces is given by
{\displaystyle u({\vec {x}})={\vec {p}}\cdot ({\vec {x}}-{\vec {x_{0}}}),}
where
{\displaystyle |{\vec {p}}\,|={\frac {1}{c}},\quad {\text{and}}\quad {\vec {x_{0}}}\quad {\text{is arbitrary}}.}
If x and x0 are held fixed, the envelope of these solutions is obtained by finding a point on the sphere of radius 1/c where the value of u is stationary. This is true if
{\displaystyle {\vec {p}}} is parallel to {\displaystyle {\vec {x}}-{\vec {x_{0}}}}. Hence the envelope has equation
{\displaystyle u({\vec {x}})=\pm {\frac {1}{c}}|{\vec {x}}-{\vec {x_{0}}}\,|.}
These solutions correspond to spheres whose radius grows or shrinks with velocity c. These are light cones in space-time.
The initial value problem for this equation consists in specifying a level surface S where u=0 for t=0. The solution is obtained by taking the envelope of all the spheres with centers on S, whose radii grow with velocity c. This envelope is obtained by requiring that
{\displaystyle {\frac {1}{c}}|{\vec {x}}-{\vec {x_{0}}}\,|\quad {\hbox{is stationary for}}\quad {\vec {x_{0}}}\in S.}
This condition will be satisfied if {\displaystyle {\vec {x}}-{\vec {x_{0}}}} is normal to S. Thus the envelope corresponds to motion with velocity c along each normal to S. This is the Huygens' construction of wave fronts: each point on S emits a spherical wave at time t = 0, and the wave front at a later time t is the envelope of these spherical waves. The normals to S are the light rays.
== References ==
== Further reading ==
Evans, L. C. (1998). Partial Differential Equations. Providence: American Mathematical Society. ISBN 0-8218-0772-2.
Polyanin, A. D.; Zaitsev, V. F.; Moussiaux, A. (2002). Handbook of First Order Partial Differential Equations. London: Taylor & Francis. ISBN 0-415-27267-X.
Polyanin, A. D. (2002). Handbook of Linear Partial Differential Equations for Engineers and Scientists. Boca Raton: Chapman & Hall/CRC Press. ISBN 1-58488-299-9.
Sarra, Scott (2003). "The Method of Characteristics with Applications to Conservation Laws". Journal of Online Mathematics and Its Applications.
The homotopy analysis method (HAM) is a semi-analytical technique to solve nonlinear ordinary/partial differential equations. The homotopy analysis method employs the concept of the homotopy from topology to generate a convergent series solution for nonlinear systems. This is enabled by utilizing a homotopy-Maclaurin series to deal with the nonlinearities in the system.
The HAM was first devised in 1992 by Liao Shijun of Shanghai Jiaotong University in his PhD dissertation and further modified in 1997 to introduce a non-zero auxiliary parameter, referred to as the convergence-control parameter, c0, to construct a homotopy on a differential system in general form. The convergence-control parameter is a non-physical variable that provides a simple way to verify and enforce convergence of a solution series. The capability of the HAM to naturally show convergence of the series solution is unusual in analytical and semi-analytic approaches to nonlinear partial differential equations.
== Characteristics ==
The HAM distinguishes itself from various other analytical methods in four important aspects. First, it is a series expansion method that is not directly dependent on small or large physical parameters. Thus, it is applicable for not only weakly but also strongly nonlinear problems, going beyond some of the inherent limitations of the standard perturbation methods. Second, the HAM is a unified method for the Lyapunov artificial small parameter method, the delta expansion method, the Adomian decomposition method, and the homotopy perturbation method. The greater generality of the method often allows for strong convergence of the solution over larger spatial and parameter domains. Third, the HAM gives excellent flexibility in the expression of the solution and how the solution is explicitly obtained. It provides great freedom to choose the basis functions of the desired solution and the corresponding auxiliary linear operator of the homotopy. Finally, unlike the other analytic approximation techniques, the HAM provides a simple way to ensure the convergence of the solution series.
The homotopy analysis method is also able to combine with other techniques employed in nonlinear differential equations such as spectral methods and Padé approximants. It may further be combined with computational methods, such as the boundary element method to allow the linear method to solve nonlinear systems. Different from the numerical technique of homotopy continuation, the homotopy analysis method is an analytic approximation method as opposed to a discrete computational method. Further, the HAM uses the homotopy parameter only on a theoretical level to demonstrate that a nonlinear system may be split into an infinite set of linear systems which are solved analytically, while the continuation methods require solving a discrete linear system as the homotopy parameter is varied to solve the nonlinear system.
== Applications ==
In the last twenty years, the HAM has been applied to solve a growing number of nonlinear ordinary/partial differential equations in science, finance, and engineering.
For example, multiple steady-state resonant waves in deep and finite water depth were found with the wave resonance criterion of arbitrary number of traveling gravity waves; this agreed with Phillips' criterion for four waves with small amplitude. Further, a unified wave model applied with the HAM, admits not only the traditional smooth progressive periodic/solitary waves, but also the progressive solitary waves with peaked crest in finite water depth. This model shows peaked solitary waves are consistent solutions along with the known smooth ones. Additionally, the HAM has been applied to many other nonlinear problems such as nonlinear heat transfer, the limit cycle of nonlinear dynamic systems, the American put option, the exact Navier–Stokes equation, the option pricing under stochastic volatility, the electrohydrodynamic flows, the Poisson–Boltzmann equation for semiconductor devices, and others.
== Brief mathematical description ==
Consider a general nonlinear differential equation
{\displaystyle {\mathcal {N}}[u(x)]=0,}
where {\displaystyle {\mathcal {N}}} is a nonlinear operator. Let {\displaystyle {\mathcal {L}}} denote an auxiliary linear operator, u0(x) an initial guess of u(x), and c0 a constant called the convergence-control parameter. Using the embedding parameter q ∈ [0,1] from homotopy theory, one may construct a family of equations,
{\displaystyle (1-q){\mathcal {L}}[U(x;q)-u_{0}(x)]=c_{0}\,q\,{\mathcal {N}}[U(x;q)],}
called the zeroth-order deformation equation, whose solution varies continuously with respect to the embedding parameter q ∈ [0,1]. This is the linear equation
{\displaystyle {\mathcal {L}}[U(x;q)-u_{0}(x)]=0,}
with known initial guess U(x; 0) = u0(x) when q = 0, but is equivalent to the original nonlinear equation {\displaystyle {\mathcal {N}}[u(x)]=0} when q = 1, i.e., U(x; 1) = u(x). Therefore, as q increases from 0 to 1, the solution U(x; q) of the zeroth-order deformation equation varies (or deforms) from the chosen initial guess u0(x) to the solution u(x) of the considered nonlinear equation.
Expanding U(x; q) in a Taylor series about q = 0, we have the homotopy-Maclaurin series
{\displaystyle U(x;q)=u_{0}(x)+\sum _{m=1}^{\infty }u_{m}(x)\,q^{m}.}
Assuming that the so-called convergence-control parameter c0 of the zeroth-order deformation equation is properly chosen so that the above series converges at q = 1, we have the homotopy-series solution
{\displaystyle u(x)=u_{0}(x)+\sum _{m=1}^{\infty }u_{m}(x).}
From the zeroth-order deformation equation, one can directly derive the governing equation of um(x)
{\displaystyle {\mathcal {L}}[u_{m}(x)-\chi _{m}u_{m-1}(x)]=c_{0}\,R_{m}[u_{0},u_{1},\ldots ,u_{m-1}],}
called the mth-order deformation equation, where {\displaystyle \chi _{1}=0} and {\displaystyle \chi _{k}=1} for k > 1, and where the right-hand side Rm depends only upon the known results u0, u1, ..., um − 1 and can be obtained easily using computer algebra software. In this way, the original nonlinear equation is transferred into an infinite number of linear ones, but without the assumption of any small/large physical parameters.
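The deformation-equation machinery can be demonstrated on a toy problem. The sketch below rests entirely on assumptions chosen for illustration: the test equation N[u] = u′ + u² − 1 with u(0) = 0 (exact solution tanh x), the simple auxiliary operator L = d/dx, the initial guess u0 = 0, and the particular choice c0 = −1. Rm is extracted as the coefficient of q^(m−1) in N applied to the partial sum:

```python
import sympy as sp

x, q = sp.symbols('x q')
c0 = sp.Integer(-1)   # convergence-control parameter (an assumed choice)

# Toy problem: N[u] = u' + u^2 - 1 = 0 with u(0) = 0; exact solution tanh(x).
Nop = lambda U: sp.diff(U, x) + U ** 2 - 1
u = [sp.Integer(0)]   # initial guess u0(x) = 0, satisfying u0(0) = 0

for m in range(1, 8):
    U = sum(u[k] * q ** k for k in range(m))
    # R_m is the coefficient of q^(m-1) in N[U]; it involves only u0..u_{m-1}.
    Rm = sp.expand(Nop(U)).coeff(q, m - 1)
    chi = 0 if m == 1 else 1
    # mth-order deformation equation with L = d/dx and u_m(0) = 0.
    F = sp.integrate(Rm, x)
    u.append(sp.expand(chi * u[m - 1] + c0 * (F - F.subs(x, 0))))

approx = sum(u)
print(sp.series(approx - sp.tanh(x), x, 0, 8))  # -> O(x**8): high-order match
```

With these choices the scheme reproduces the Maclaurin series of tanh x term by term; varying c0 away from −1 changes how the partial sums approach the solution, which is exactly the control the HAM exploits.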
Since the HAM is based on a homotopy, one has great freedom to choose the initial guess u0(x), the auxiliary linear operator {\displaystyle {\mathcal {L}}}, and the convergence-control parameter c0 in the zeroth-order deformation equation. Thus, the HAM provides the mathematician freedom to choose the equation-type of the high-order deformation equation and the base functions of its solution. The optimal value of the convergence-control parameter c0 is determined by the minimum of the squared residual error of the governing equations and/or boundary conditions after the general form has been solved for the chosen initial guess and linear operator. Thus, the convergence-control parameter c0 is a simple way to guarantee the convergence of the homotopy series solution and differentiates the HAM from other analytic approximation methods. The method overall gives a useful generalization of the concept of homotopy.
== The HAM and computer algebra ==
The HAM is an analytic approximation method designed for the computer era with the goal of "computing with functions instead of numbers." In conjunction with a computer algebra system such as Mathematica or Maple, one can gain analytic approximations of a highly nonlinear problem to arbitrarily high order by means of the HAM in only a few seconds. Inspired by the recent successful applications of the HAM in different fields, a Mathematica package based on the HAM, called BVPh, has been made available online for solving nonlinear boundary-value problems [4]. BVPh is a solver package for highly nonlinear ODEs with singularities, multiple solutions, and multipoint boundary conditions in either a finite or an infinite interval, and includes support for certain types of nonlinear PDEs. Another HAM-based Mathematica code, APOh, has been produced to solve for an explicit analytic approximation of the optimal exercise boundary of American put option, which is also available online [5].
== Frequency response analysis for nonlinear oscillators ==
The HAM has recently been reported to be useful for obtaining analytical solutions for nonlinear frequency response equations. Such solutions are able to capture various nonlinear behaviors such as hardening-type, softening-type or mixed behaviors of the oscillator. These analytical equations are also useful in prediction of chaos in nonlinear systems.
== References ==
== External links ==
http://numericaltank.sjtu.edu.cn/BVPh.htm
http://numericaltank.sjtu.edu.cn/APO.htm
Burgers' equation or Bateman–Burgers equation is a fundamental partial differential equation and convection–diffusion equation occurring in various areas of applied mathematics, such as fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow. The equation was first introduced by Harry Bateman in 1915 and later studied by Johannes Martinus Burgers in 1948. For a given field
u(x, t) and diffusion coefficient (or kinematic viscosity, as in the original fluid mechanical context) ν, the general form of Burgers' equation (also known as the viscous Burgers' equation) in one space dimension is the dissipative system:
{\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}=\nu {\frac {\partial ^{2}u}{\partial x^{2}}}.}
The term {\displaystyle u\partial u/\partial x} can also be rewritten as {\displaystyle \partial (u^{2}/2)/\partial x}. When the diffusion term is absent (i.e. ν = 0), Burgers' equation becomes the inviscid Burgers' equation:
{\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}=0,}
which is a prototype for conservation equations that can develop discontinuities (shock waves).
The reason for the formation of sharp gradients for small values of ν becomes intuitively clear when one examines the left-hand side of the equation. The term {\displaystyle \partial /\partial t+u\partial /\partial x} is evidently a wave operator describing a wave propagating in the positive x-direction with a speed u. Since the wave speed is u, regions exhibiting large values of u will be propagated rightwards more quickly than regions exhibiting smaller values of u; in other words, if u is initially decreasing in the x-direction, then larger u's that lie on the backside will catch up with smaller u's on the front side. The role of the diffusive term on the right-hand side is essentially to stop the gradient becoming infinite.
== Inviscid Burgers' equation ==
The inviscid Burgers' equation is a conservation equation, more generally a first order quasilinear hyperbolic equation. The solution of the equation, together with the initial condition
{\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}=0,\quad u(x,0)=f(x)}
can be constructed by the method of characteristics. Let t be the parameter characterising any given characteristic in the x-t plane; then the characteristic equations are given by
{\displaystyle {\frac {dx}{dt}}=u,\quad {\frac {du}{dt}}=0.}
Integration of the second equation tells us that u is constant along the characteristic, and integration of the first equation shows that the characteristics are straight lines, i.e.,
{\displaystyle u=c,\quad x=ut+\xi }
where ξ is the point (or parameter) on the x-axis (t = 0) of the x-t plane from which the characteristic curve is drawn. Since u at the x-axis is known from the initial condition, and since u is unchanged as we move along the characteristic emanating from each point x = ξ, we write u = c = f(ξ) on each characteristic. Therefore, the family of trajectories of characteristics parametrized by ξ is
{\displaystyle x=f(\xi )t+\xi .}
Thus, the solution is given by
{\displaystyle u(x,t)=f(\xi )=f(x-ut),\quad \xi =x-f(\xi )t.}
This is an implicit relation that determines the solution of the inviscid Burgers' equation provided characteristics don't intersect. If the characteristics do intersect, then a classical solution to the PDE does not exist and leads to the formation of a shock wave. Whether characteristics can intersect or not depends on the initial condition. In fact, the breaking time before a shock wave can be formed is given by
{\displaystyle t_{b}={\frac {-1}{\inf _{x}\left(f^{\prime }(x)\right)}}.}
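The breaking-time formula is straightforward to evaluate for a concrete profile. The sketch below (the initial condition f(x) = exp(−x²) is an assumed example) estimates t_b numerically and shows the characteristics x = f(ξ)t + ξ beginning to cross once t exceeds it:

```python
import numpy as np

# Breaking time for u_t + u u_x = 0 with the assumed profile f(x) = exp(-x^2).
x = np.linspace(-5.0, 5.0, 200001)
fprime = -2 * x * np.exp(-x ** 2)
tb = -1.0 / fprime.min()
print("numerical t_b:", tb)      # exact value is sqrt(e/2) ≈ 1.1658

# Characteristics x = f(xi) t + xi: dx/dxi reaches 0 at t = t_b (crossing).
xi = np.linspace(-3.0, 3.0, 2001)
for t in (0.5 * tb, tb, 1.5 * tb):
    xc = np.exp(-xi ** 2) * t + xi
    print(f"t/t_b = {t / tb:.1f}: min dx/dxi = {np.gradient(xc, xi).min():.3f}")
```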
=== Complete integral of the inviscid Burgers' equation ===
The implicit solution described above containing an arbitrary function f is called the general integral. However, the inviscid Burgers' equation, being a first-order partial differential equation, also has a complete integral which contains two arbitrary constants (for the two independent variables). Subrahmanyan Chandrasekhar provided the complete integral in 1943, which is given by
{\displaystyle u(x,t)={\frac {ax+b}{at+1}},}
where a and b are arbitrary constants. The complete integral satisfies a linear initial condition, i.e., f(x) = ax + b. One can also construct the general integral using the above complete integral.
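The complete integral can be checked directly. The short SymPy sketch below confirms that u = (ax + b)/(at + 1) satisfies the inviscid equation for all a, b and reduces to the linear initial data at t = 0:

```python
import sympy as sp

x, t, a, b = sp.symbols('x t a b')
u = (a * x + b) / (a * t + 1)   # Chandrasekhar's complete integral

residual = sp.diff(u, t) + u * sp.diff(u, x)
print(sp.simplify(residual))    # -> 0 for all a, b
print(u.subs(t, 0))             # -> a*x + b, the linear initial data
```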
== Viscous Burgers' equation ==
The viscous Burgers' equation can be converted to a linear equation by the Cole–Hopf transformation,
{\displaystyle u(x,t)=-2\nu {\frac {\partial }{\partial x}}\ln \varphi (x,t),}
which turns it into the equation
{\displaystyle 2\nu {\frac {\partial }{\partial x}}\left[{\frac {1}{\varphi }}\left({\frac {\partial \varphi }{\partial t}}-\nu {\frac {\partial ^{2}\varphi }{\partial x^{2}}}\right)\right]=0,}
which can be integrated with respect to x to obtain
{\displaystyle {\frac {\partial \varphi }{\partial t}}-\nu {\frac {\partial ^{2}\varphi }{\partial x^{2}}}=\varphi {\frac {df(t)}{dt}},}
where df/dt is an arbitrary function of time. Introducing the transformation {\displaystyle \varphi \to \varphi e^{f}} (which does not affect the function u(x, t)), the required equation reduces to the heat equation
{\displaystyle {\frac {\partial \varphi }{\partial t}}=\nu {\frac {\partial ^{2}\varphi }{\partial x^{2}}}.}
The diffusion equation can be solved explicitly. That is, if {\displaystyle \varphi (x,0)=\varphi _{0}(x)}, then
{\displaystyle \varphi (x,t)={\frac {1}{\sqrt {4\pi \nu t}}}\int _{-\infty }^{\infty }\varphi _{0}(x')\exp \left[-{\frac {(x-x')^{2}}{4\nu t}}\right]dx'.}
The initial function {\displaystyle \varphi _{0}(x)} is related to the initial function {\displaystyle u(x,0)=f(x)} by
{\displaystyle \ln \varphi _{0}(x)=-{\frac {1}{2\nu }}\int _{0}^{x}f(x')dx',}
where the lower limit is chosen arbitrarily. Inverting the Cole–Hopf transformation, we have
{\displaystyle u(x,t)=-2\nu {\frac {\partial }{\partial x}}\ln \left\{{\frac {1}{\sqrt {4\pi \nu t}}}\int _{-\infty }^{\infty }\exp \left[-{\frac {(x-x')^{2}}{4\nu t}}-{\frac {1}{2\nu }}\int _{0}^{x'}f(x'')dx''\right]dx'\right\}}
which simplifies, by getting rid of the time-dependent prefactor in the argument of the logarithm, to
{\displaystyle u(x,t)=-2\nu {\frac {\partial }{\partial x}}\ln \left\{\int _{-\infty }^{\infty }\exp \left[-{\frac {(x-x')^{2}}{4\nu t}}-{\frac {1}{2\nu }}\int _{0}^{x'}f(x'')dx''\right]dx'\right\}.}
This solution is derived from the solution of the heat equation for {\displaystyle \varphi } that decays to zero as {\displaystyle x\to \pm \infty }; other solutions for u can be obtained starting from solutions of {\displaystyle \varphi } that satisfy different boundary conditions.
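The final formula can be evaluated by simple quadrature. The sketch below makes several assumptions for illustration: the initial data f(x) = −sin(πx), illustrative values of ν and t, a truncated quadrature interval, and a central difference for the outer x-derivative:

```python
import numpy as np

nu, t = 0.05, 0.5                     # assumed viscosity and output time
# Initial data f(x) = -sin(pi x); its integral from 0 to x is F(x).
F = lambda s: (np.cos(np.pi * s) - 1) / np.pi

xp = np.linspace(-8.0, 8.0, 4001)     # truncated quadrature grid
dxp = xp[1] - xp[0]

def lnphi(xx):
    kern = np.exp(-(xx - xp) ** 2 / (4 * nu * t) - F(xp) / (2 * nu))
    return np.log(kern.sum() * dxp)

def u(xx, h=1e-5):
    # u = -2 nu d/dx ln(phi), the derivative taken by central difference.
    return -2 * nu * (lnphi(xx + h) - lnphi(xx - h)) / (2 * h)

for xx in (-0.5, 0.0, 0.5):
    print(xx, u(xx))                  # odd initial data gives u(0) = 0
```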
== Some explicit solutions of the viscous Burgers' equation ==
Explicit expressions for the viscous Burgers' equation are available. Some of the physically relevant solutions are given below:
=== Steadily propagating traveling wave ===
If u(x, 0) = f(x) is such that {\displaystyle f(-\infty )=f^{+}} and {\displaystyle f(+\infty )=f^{-}} with {\displaystyle f'(x)<0}, then we have a traveling-wave solution (with a constant speed {\displaystyle c=(f^{+}+f^{-})/2}) given by
{\displaystyle u(x,t)=c-{\frac {f^{+}-f^{-}}{2}}\tanh \left[{\frac {f^{+}-f^{-}}{4\nu }}(x-ct)\right].}
This solution, originally derived by Harry Bateman in 1915, is used to describe the variation of pressure across a weak shock wave. When {\displaystyle f^{+}=2} and {\displaystyle f^{-}=0}, this simplifies to
{\displaystyle u(x,t)={\frac {2}{1+e^{\frac {x-t}{\nu }}}}}
with c = 1.
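This traveling-wave solution can be verified symbolically. The sketch below (using the particular values f+ = 2, f− = 0 from the text) substitutes it into the viscous Burgers' equation and confirms a zero residual:

```python
import sympy as sp

x, t, nu = sp.symbols('x t nu', positive=True)
fp, fm = sp.Integer(2), sp.Integer(0)   # f+ = 2, f- = 0 as in the text
c = (fp + fm) / 2
u = c - (fp - fm) / 2 * sp.tanh((fp - fm) / (4 * nu) * (x - c * t))

residual = sp.diff(u, t) + u * sp.diff(u, x) - nu * sp.diff(u, x, 2)
print(sp.simplify(residual))            # -> 0
```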
=== Delta function as an initial condition ===
If {\displaystyle u(x,0)=2\nu Re\,\delta (x)}, where Re (say, the Reynolds number) is a constant, then we have
{\displaystyle u(x,t)={\sqrt {\frac {\nu }{\pi t}}}\left[{\frac {(e^{Re}-1)e^{-x^{2}/4\nu t}}{1+(e^{Re}-1)\mathrm {erfc} (x/{\sqrt {4\nu t}})/2}}\right].}
In the limit {\displaystyle Re\to 0}, the limiting behaviour is a diffusional spreading of a source and is therefore given by
{\displaystyle u(x,t)={\frac {2\nu Re}{\sqrt {4\pi \nu t}}}\exp \left(-{\frac {x^{2}}{4\nu t}}\right).}
On the other hand, in the limit {\displaystyle Re\to \infty }, the solution approaches that of the aforementioned Chandrasekhar's shock-wave solution of the inviscid Burgers' equation and is given by
{\displaystyle u(x,t)={\begin{cases}{\frac {x}{t}},\quad 0<x<{\sqrt {2\nu Re\,t}},\\0,\quad {\text{otherwise}}.\end{cases}}}
The shock wave location and its speed are given by {\displaystyle x={\sqrt {2\nu Re\,t}}} and {\displaystyle {\sqrt {\nu Re/(2t)}}}, respectively.
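The delta-function initial condition carries "mass" ∫u dx = 2νRe, and the explicit solution above conserves it for all time. A quick numerical check (the parameter values are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

nu, Re, t = 0.1, 2.0, 1.0  # illustrative parameters

def u(x):
    # the explicit delta-function-initial-condition solution above
    A = np.exp(Re) - 1.0
    return np.sqrt(nu/(np.pi*t)) * A*np.exp(-x**2/(4*nu*t)) \
           / (1 + A*erfc(x/np.sqrt(4*nu*t))/2)

mass, _ = quad(u, -np.inf, np.inf)
print(mass, 2*nu*Re)  # both approximately 0.4: the integral of u is conserved
```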
=== N-wave solution ===
The N-wave solution comprises a compression wave followed by a rarefaction wave. A solution of this type is given by
{\displaystyle u(x,t)={\frac {x}{t}}\left[1+{\frac {1}{e^{Re_{0}-1}}}{\sqrt {\frac {t}{t_{0}}}}\exp \left(-{\frac {Re(t)x^{2}}{4\nu Re_{0}t}}\right)\right]^{-1}}
where {\displaystyle Re_{0}} may be regarded as an initial Reynolds number at time {\displaystyle t=t_{0}}, and {\displaystyle Re(t)=(1/2\nu )\int _{0}^{\infty }udx=\ln(1+{\sqrt {\tau /t}})}, with {\displaystyle \tau =t_{0}{\sqrt {e^{Re_{0}}-1}}}, may be regarded as the time-varying Reynolds number.
== Other forms ==
=== Multi-dimensional Burgers' equation ===
In two or more dimensions, the Burgers' equation becomes
{\displaystyle {\frac {\partial u}{\partial t}}+u\cdot \nabla u=\nu \nabla ^{2}u.}
One can also extend the equation for the vector field {\displaystyle \mathbf {u} }, as in
{\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+\mathbf {u} \cdot \nabla \mathbf {u} =\nu \nabla ^{2}\mathbf {u} .}
=== Generalized Burgers' equation ===
The generalized Burgers' equation extends the quasilinear convective term to a more general form, i.e.,
{\displaystyle {\frac {\partial u}{\partial t}}+c(u){\frac {\partial u}{\partial x}}=\nu {\frac {\partial ^{2}u}{\partial x^{2}}},}
where {\displaystyle c(u)} is an arbitrary function of u. The inviscid {\displaystyle \nu =0} equation is still a quasilinear hyperbolic equation for {\displaystyle c(u)>0} and its solution can be constructed using the method of characteristics as before.
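A sketch of that characteristic construction: u is constant along the characteristic x = x₀ + c(f(x₀))t, so u(x, t) = f(x₀), where x₀ is found by root finding. The initial condition and wave speed below are assumed for illustration, and the construction is only valid before characteristics cross (shock formation):

```python
import numpy as np
from scipy.optimize import brentq

def f(x):   # assumed initial condition
    return np.exp(-x**2)

def c(u):   # assumed wave speed, c(u) > 0
    return 1.0 + u

def u(x, t):
    # solve x0 + c(f(x0))*t = x for the characteristic foot point x0
    g = lambda x0: x0 + c(f(x0))*t - x
    # bracket: c(f) lies in (1, 2] for this f, so the root is inside
    x0 = brentq(g, x - 2.0*t - 1.0, x + 1.0)
    return f(x0)

print(u(1.0, 0.3))  # a time before shock formation for this initial profile
```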
=== Stochastic Burgers' equation ===
Adding space-time noise {\displaystyle \eta (x,t)={\dot {W}}(x,t)}, where {\displaystyle W} is an {\displaystyle L^{2}(\mathbb {R} )} Wiener process, forms a stochastic Burgers' equation
{\displaystyle {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}=\nu {\frac {\partial ^{2}u}{\partial x^{2}}}-\lambda {\frac {\partial \eta }{\partial x}}.}
This stochastic PDE is the one-dimensional version of the Kardar–Parisi–Zhang equation in a field {\displaystyle h(x,t)} upon substituting {\displaystyle u(x,t)=-\lambda \partial h/\partial x}.
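A rough finite-difference sketch of this equation. The discretization of space-time white noise here is heuristic, and the explicit scheme, periodic grid, and parameters are merely illustrative assumptions, not a reference implementation:

```python
import numpy as np

# Explicit finite-difference / Euler-Maruyama sketch of
# u_t + u u_x = nu u_xx - lam eta_x, with eta = dW/dt approximated on the
# grid by Gaussian increments of variance dt/dx per cell.
rng = np.random.default_rng(0)
nu, lam = 0.5, 0.1            # illustrative parameters
Lx, N = 2*np.pi, 256
dx = Lx / N
dt = 0.2 * dx**2 / nu         # explicit-scheme stability restriction
x = np.arange(N) * dx
u = np.sin(x)                 # assumed initial condition (periodic)

for _ in range(2000):
    ux  = (np.roll(u, -1) - np.roll(u, 1)) / (2*dx)
    uxx = (np.roll(u, -1) - 2*u + np.roll(u, 1)) / dx**2
    dW  = rng.standard_normal(N) * np.sqrt(dt/dx)     # noise increments per cell
    etax = (np.roll(dW, -1) - np.roll(dW, 1)) / (2*dx)  # discrete noise gradient
    u = u + dt*(-u*ux + nu*uxx) - lam*etax
print(u[:4])
```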
== See also ==
Chaplygin's equation
Conservation equation
Euler–Tricomi equation
Fokker–Planck equation
KdV-Burgers equation
== References ==
== External links ==
Burgers' Equation at EqWorld: The World of Mathematical Equations.
Burgers' Equation at NEQwiki, the nonlinear equations encyclopedia. | Wikipedia/Burgers'_equation |
In physics, Lagrangian mechanics is a formulation of classical mechanics founded on d'Alembert's principle of virtual work. It was introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in his presentation to the Turin Academy of Science in 1760, culminating in his 1788 grand opus, Mécanique analytique.
Lagrangian mechanics describes a mechanical system as a pair (M, L) consisting of a configuration space M and a smooth function {\textstyle L} within that space called a Lagrangian. For many systems, L = T − V, where T and V are the kinetic and potential energy of the system, respectively.
The stationary action principle requires that the action functional of the system derived from L must remain at a stationary point (specifically, a maximum, minimum, or saddle point) throughout the time evolution of the system. This constraint allows the calculation of the equations of motion of the system using Lagrange's equations.
== Introduction ==
Newton's laws and the concept of forces are the usual starting point for teaching about mechanical systems. This method works well for many problems, but for others the approach is nightmarishly complicated. For example, in calculating the motion of a torus rolling on a horizontal surface with a pearl sliding inside, the time-varying constraints, such as the angular velocity of the torus and the motion of the pearl in relation to the torus, make it difficult to determine the motion of the torus with Newton's equations. Lagrangian mechanics adopts energy rather than force as its basic ingredient, leading to more abstract equations capable of tackling more complex problems.
In particular, Lagrange's approach was to set up independent generalized coordinates for the position and speed of every object, which allows a general form of the Lagrangian (total kinetic energy minus potential energy of the system) to be written down; summing this along each possible path of motion of the particles yields a formula for the 'action', which he minimized to give a generalized set of equations. This action quantity is minimized along the path that the particle actually takes. This choice eliminates the need for the constraint force to enter into the resultant generalized system of equations. There are fewer equations since one is not directly calculating the influence of the constraint on the particle at a given moment.
For a wide variety of physical systems, if the size and shape of a massive object are negligible, it is a useful simplification to treat it as a point particle. For a system of N point particles with masses m1, m2, ..., mN, each particle has a position vector, denoted r1, r2, ..., rN. Cartesian coordinates are often sufficient, so r1 = (x1, y1, z1), r2 = (x2, y2, z2) and so on. In three-dimensional space, each position vector requires three coordinates to uniquely define the location of a point, so there are 3N coordinates to uniquely define the configuration of the system. These are all specific points in space to locate the particles; a general point in space is written r = (x, y, z). The velocity of each particle is how fast the particle moves along its path of motion, and is the time derivative of its position, thus
{\displaystyle \mathbf {v} _{1}={\frac {d\mathbf {r} _{1}}{dt}},\mathbf {v} _{2}={\frac {d\mathbf {r} _{2}}{dt}},\ldots ,\mathbf {v} _{N}={\frac {d\mathbf {r} _{N}}{dt}}.}
In Newtonian mechanics, the equations of motion are given by Newton's laws. The second law "net force equals mass times acceleration",
{\displaystyle \sum \mathbf {F} =m{\frac {d^{2}\mathbf {r} }{dt^{2}}},}
applies to each particle. For an N-particle system in 3 dimensions, there are 3N second-order ordinary differential equations in the positions of the particles to solve for.
=== Lagrangian ===
Instead of forces, Lagrangian mechanics uses the energies in the system. The central quantity of Lagrangian mechanics is the Lagrangian, a function which summarizes the dynamics of the entire system. Overall, the Lagrangian has units of energy, but no single expression for all physical systems. Any function which generates the correct equations of motion, in agreement with physical laws, can be taken as a Lagrangian. It is nevertheless possible to construct general expressions for large classes of applications. The non-relativistic Lagrangian for a system of particles in the absence of an electromagnetic field is given by
{\displaystyle L=T-V,}
where
{\displaystyle T={\frac {1}{2}}\sum _{k=1}^{N}m_{k}v_{k}^{2}}
is the total kinetic energy of the system, equaling the sum Σ of the kinetic energies of the {\displaystyle N} particles. Each particle labeled {\displaystyle k} has mass {\displaystyle m_{k},} and vk2 = vk · vk is the magnitude squared of its velocity, equivalent to the dot product of the velocity with itself.
Kinetic energy T is the energy of the system's motion and is a function only of the velocities vk, not the positions rk, nor time t, so T = T(v1, v2, ...).
V, the potential energy of the system, reflects the energy of interaction between the particles, i.e. how much energy any one particle has due to all the others, together with any external influences. For conservative forces (e.g. Newtonian gravity), it is a function of the position vectors of the particles only, so V = V(r1, r2, ...). For those non-conservative forces which can be derived from an appropriate potential (e.g. electromagnetic potential), the velocities will appear also, V = V(r1, r2, ..., v1, v2, ...). If there is some external field or external driving force changing with time, the potential changes with time, so most generally V = V(r1, r2, ..., v1, v2, ..., t).
As already noted, this form of L is applicable to many important classes of system, but not everywhere. For relativistic Lagrangian mechanics it must be replaced as a whole by a function consistent with special relativity (scalar under Lorentz transformations) or general relativity (4-scalar). Where a magnetic field is present, the expression for the potential energy needs restating. And for dissipative forces (e.g., friction), another function must be introduced alongside the Lagrangian, often referred to as a "Rayleigh dissipation function", to account for the loss of energy.
One or more of the particles may each be subject to one or more holonomic constraints; such a constraint is described by an equation of the form f(r, t) = 0. If the number of constraints in the system is C, then each constraint has an equation f1(r, t) = 0, f2(r, t) = 0, ..., fC(r, t) = 0, each of which could apply to any of the particles. If particle k is subject to constraint i, then fi(rk, t) = 0. At any instant of time, the coordinates of a constrained particle are linked together and not independent. The constraint equations determine the allowed paths the particles can move along, but not where they are or how fast they go at every instant of time. Nonholonomic constraints depend on the particle velocities, accelerations, or higher derivatives of position. Lagrangian mechanics can only be applied to systems whose constraints, if any, are all holonomic. Three examples of nonholonomic constraints are: when the constraint equations are non-integrable, when the constraints have inequalities, or when the constraints involve complicated non-conservative forces like friction. Nonholonomic constraints require special treatment, and one may have to revert to Newtonian mechanics or use other methods.
If T or V or both depend explicitly on time due to time-varying constraints or external influences, the Lagrangian L(r1, r2, ... v1, v2, ... t) is explicitly time-dependent. If neither the potential nor the kinetic energy depend on time, then the Lagrangian L(r1, r2, ... v1, v2, ...) is explicitly independent of time. In either case, the Lagrangian always has implicit time dependence through the generalized coordinates.
With these definitions, Lagrange's equations of the first kind are
{\displaystyle {\frac {\partial L}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {\mathbf {r} }}_{k}}}+\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}}=0,}
where k = 1, 2, ..., N labels the particles, there is a Lagrange multiplier λi for each constraint equation fi, and
{\displaystyle {\frac {\partial }{\partial \mathbf {r} _{k}}}\equiv \left({\frac {\partial }{\partial x_{k}}},{\frac {\partial }{\partial y_{k}}},{\frac {\partial }{\partial z_{k}}}\right),\quad {\frac {\partial }{\partial {\dot {\mathbf {r} }}_{k}}}\equiv \left({\frac {\partial }{\partial {\dot {x}}_{k}}},{\frac {\partial }{\partial {\dot {y}}_{k}}},{\frac {\partial }{\partial {\dot {z}}_{k}}}\right)}
are each shorthands for a vector of partial derivatives ∂/∂ with respect to the indicated variables (not a derivative with respect to the entire vector). Each overdot is a shorthand for a time derivative. This procedure does increase the number of equations to solve compared to Newton's laws, from 3N to 3N + C, because there are 3N coupled second-order differential equations in the position coordinates and multipliers, plus C constraint equations. However, when solved alongside the position coordinates of the particles, the multipliers can yield information about the constraint forces. The coordinates do not need to be eliminated by solving the constraint equations.
In the Lagrangian, the position coordinates and velocity components are all independent variables, and derivatives of the Lagrangian are taken with respect to these separately according to the usual differentiation rules (e.g. the partial derivative of L with respect to the z velocity component of particle 2, defined by vz,2 = dz2/dt, is just ∂L/∂vz,2; no awkward chain rules or total derivatives need to be used to relate the velocity component to the corresponding coordinate z2).
In each constraint equation, one coordinate is redundant because it is determined from the other coordinates. The number of independent coordinates is therefore n = 3N − C. We can transform each position vector to a common set of n generalized coordinates, conveniently written as an n-tuple q = (q1, q2, ... qn), by expressing each position vector, and hence the position coordinates, as functions of the generalized coordinates and time:
{\displaystyle \mathbf {r} _{k}=\mathbf {r} _{k}(\mathbf {q} ,t)={\big (}x_{k}(\mathbf {q} ,t),y_{k}(\mathbf {q} ,t),z_{k}(\mathbf {q} ,t){\big )}.}
The vector q is a point in the configuration space of the system. The time derivatives of the generalized coordinates are called the generalized velocities, and for each particle the transformation of its velocity vector, the total derivative of its position with respect to time, is
{\displaystyle {\dot {q}}_{j}={\frac {\mathrm {d} q_{j}}{\mathrm {d} t}},\quad \mathbf {v} _{k}=\sum _{j=1}^{n}{\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}{\dot {q}}_{j}+{\frac {\partial \mathbf {r} _{k}}{\partial t}}.}
Given this vk, the kinetic energy in generalized coordinates depends on the generalized velocities, generalized coordinates, and time if the position vectors depend explicitly on time due to time-varying constraints, so
{\displaystyle T=T(\mathbf {q} ,{\dot {\mathbf {q} }},t).}
With these definitions, the Euler–Lagrange equations, or Lagrange's equations of the second kind,
{\displaystyle {\frac {\partial L}{\partial q_{j}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}=0,}
are mathematical results from the calculus of variations, which can also be used in mechanics. Substituting in the Lagrangian L(q, dq/dt, t) gives the equations of motion of the system. The number of equations has decreased compared to Newtonian mechanics, from 3N to n = 3N − C coupled second-order differential equations in the generalized coordinates. These equations do not include constraint forces at all, only non-constraint forces need to be accounted for.
Although the equations of motion include partial derivatives, the results of the partial derivatives are still ordinary differential equations in the position coordinates of the particles. The total time derivative denoted d/dt often involves implicit differentiation. Both equations are linear in the Lagrangian, but generally are nonlinear coupled equations in the coordinates.
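As a concrete illustration, the Euler–Lagrange equation can be generated symbolically. A minimal SymPy sketch for a one-dimensional harmonic oscillator (an assumed example system):

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)

# Lagrangian of a 1D harmonic oscillator: L = T - V
L = sp.Rational(1, 2)*m*sp.diff(q, t)**2 - sp.Rational(1, 2)*k*q**2

# Euler-Lagrange equation: d/dt(dL/dq') - dL/dq = 0
eom = sp.diff(sp.diff(L, sp.diff(q, t)), t) - sp.diff(L, q)
print(sp.simplify(eom))  # m*q'' + k*q, i.e. m*q'' = -k*q
```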
== From Newtonian to Lagrangian mechanics ==
=== Newton's laws ===
For simplicity, Newton's laws can be illustrated for one particle without much loss of generality (for a system of N particles, all of these equations apply to each particle in the system). The equation of motion for a particle of constant mass m is Newton's second law of 1687, in modern vector notation
{\displaystyle \mathbf {F} =m\mathbf {a} ,}
where a is its acceleration and F the resultant force acting on it. Where the mass is varying, the equation needs to be generalised to take the time derivative of the momentum. In three spatial dimensions, this is a system of three coupled second-order ordinary differential equations to solve, since there are three components in this vector equation. The solution is the position vector r of the particle at time t, subject to the initial conditions of r and v when t = 0.
Newton's laws are easy to use in Cartesian coordinates, but Cartesian coordinates are not always convenient, and for other coordinate systems the equations of motion can become complicated. In a set of curvilinear coordinates ξ = (ξ1, ξ2, ξ3), the law in tensor index notation is the "Lagrangian form"
{\displaystyle F^{a}=m\left({\frac {\mathrm {d} ^{2}\xi ^{a}}{\mathrm {d} t^{2}}}+\Gamma ^{a}{}_{bc}{\frac {\mathrm {d} \xi ^{b}}{\mathrm {d} t}}{\frac {\mathrm {d} \xi ^{c}}{\mathrm {d} t}}\right)=g^{ak}\left({\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {\xi }}^{k}}}-{\frac {\partial T}{\partial \xi ^{k}}}\right),\quad {\dot {\xi }}^{a}\equiv {\frac {\mathrm {d} \xi ^{a}}{\mathrm {d} t}},}
where Fa is the a-th contravariant component of the resultant force acting on the particle, Γabc are the Christoffel symbols of the second kind,
{\displaystyle T={\frac {1}{2}}mg_{bc}{\frac {\mathrm {d} \xi ^{b}}{\mathrm {d} t}}{\frac {\mathrm {d} \xi ^{c}}{\mathrm {d} t}}}
is the kinetic energy of the particle, and gbc the covariant components of the metric tensor of the curvilinear coordinate system. All the indices a, b, c, each take the values 1, 2, 3. Curvilinear coordinates are not the same as generalized coordinates.
It may seem like an overcomplication to cast Newton's law in this form, but there are advantages. The acceleration components in terms of the Christoffel symbols can be avoided by evaluating derivatives of the kinetic energy instead. If there is no resultant force acting on the particle, F = 0, it does not accelerate, but moves with constant velocity in a straight line. Mathematically, the solutions of the differential equation are geodesics, the curves of extremal length between two points in space (these may end up being minimal, that is the shortest paths, but not necessarily). In flat 3D real space the geodesics are simply straight lines. So for a free particle, Newton's second law coincides with the geodesic equation and states that free particles follow geodesics, the extremal trajectories it can move along. If the particle is subject to forces F ≠ 0, the particle accelerates due to forces acting on it and deviates away from the geodesics it would follow if free. With appropriate extensions of the quantities given here in flat 3D space to 4D curved spacetime, the above form of Newton's law also carries over to Einstein's general relativity, in which case free particles follow geodesics in curved spacetime that are no longer "straight lines" in the ordinary sense.
However, we still need to know the total resultant force F acting on the particle, which in turn requires the resultant non-constraint force N plus the resultant constraint force C,
{\displaystyle \mathbf {F} =\mathbf {C} +\mathbf {N} .}
The constraint forces can be complicated, since they generally depend on time. Also, if there are constraints, the curvilinear coordinates are not independent but related by one or more constraint equations.
The constraint forces can either be eliminated from the equations of motion, so that only the non-constraint forces remain, or included by incorporating the constraint equations into the equations of motion.
=== D'Alembert's principle ===
A fundamental result in analytical mechanics is D'Alembert's principle, introduced in 1708 by Jacques Bernoulli to understand static equilibrium, and developed by D'Alembert in 1743 to solve dynamical problems. The principle asserts for N particles the virtual work, i.e. the work along a virtual displacement, δrk, is zero:
{\displaystyle \sum _{k=1}^{N}(\mathbf {N} _{k}+\mathbf {C} _{k}-m_{k}\mathbf {a} _{k})\cdot \delta \mathbf {r} _{k}=0.}
The virtual displacements, δrk, are by definition infinitesimal changes in the configuration of the system consistent with the constraint forces acting on the system at an instant of time, i.e. in such a way that the constraint forces maintain the constrained motion. They are not the same as the actual displacements in the system, which are caused by the resultant constraint and non-constraint forces acting on the particle to accelerate and move it. Virtual work is the work done along a virtual displacement for any force (constraint or non-constraint).
Since the constraint forces act perpendicular to the motion of each particle in the system to maintain the constraints, the total virtual work by the constraint forces acting on the system is zero:
{\displaystyle \sum _{k=1}^{N}\mathbf {C} _{k}\cdot \delta \mathbf {r} _{k}=0,}
so that
{\displaystyle \sum _{k=1}^{N}(\mathbf {N} _{k}-m_{k}\mathbf {a} _{k})\cdot \delta \mathbf {r} _{k}=0.}
Thus D'Alembert's principle allows us to concentrate on only the applied non-constraint forces, and exclude the constraint forces in the equations of motion. The form shown is also independent of the choice of coordinates. However, it cannot be readily used to set up the equations of motion in an arbitrary coordinate system since the displacements δrk might be connected by a constraint equation, which prevents us from setting the N individual summands to 0. We will therefore seek a system of mutually independent coordinates for which the total sum will be 0 if and only if the individual summands are 0. Setting each of the summands to 0 will eventually give us our separated equations of motion.
=== Equations of motion from D'Alembert's principle ===
If there are constraints on particle k, then since the coordinates of the position rk = (xk, yk, zk) are linked together by a constraint equation, so are those of the virtual displacements δrk = (δxk, δyk, δzk). Since the generalized coordinates are independent, we can avoid the complications with the δrk by converting to virtual displacements in the generalized coordinates. These are related in the same form as a total differential,
{\displaystyle \delta \mathbf {r} _{k}=\sum _{j=1}^{n}{\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}\delta q_{j}.}
There is no partial time derivative with respect to time multiplied by a time increment, since this is a virtual displacement, one along the constraints in an instant of time.
The first term in D'Alembert's principle above is the virtual work done by the non-constraint forces Nk along the virtual displacements δrk, and can without loss of generality be converted into the generalized analogues by the definition of generalized forces
{\displaystyle Q_{j}=\sum _{k=1}^{N}\mathbf {N} _{k}\cdot {\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}},}
so that
{\displaystyle \sum _{k=1}^{N}\mathbf {N} _{k}\cdot \delta \mathbf {r} _{k}=\sum _{k=1}^{N}\mathbf {N} _{k}\cdot \sum _{j=1}^{n}{\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}\delta q_{j}=\sum _{j=1}^{n}Q_{j}\delta q_{j}.}
This is half of the conversion to generalized coordinates. It remains to convert the acceleration term into generalized coordinates, which is not immediately obvious. Recalling the Lagrange form of Newton's second law, the partial derivatives of the kinetic energy with respect to the generalized coordinates and velocities can be found to give the desired result:
{\displaystyle \sum _{k=1}^{N}m_{k}\mathbf {a} _{k}\cdot {\frac {\partial \mathbf {r} _{k}}{\partial q_{j}}}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}.}
Now D'Alembert's principle is in the generalized coordinates as required,
{\displaystyle \sum _{j=1}^{n}\left[Q_{j}-\left({\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}\right)\right]\delta q_{j}=0,}
and since these virtual displacements δqj are independent and nonzero, the coefficients can be equated to zero, resulting in Lagrange's equations or the generalized equations of motion,
{\displaystyle Q_{j}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {q}}_{j}}}-{\frac {\partial T}{\partial q_{j}}}}
These equations are equivalent to Newton's laws for the non-constraint forces. The generalized forces in this equation are derived from the non-constraint forces only – the constraint forces have been excluded from D'Alembert's principle and do not need to be found. The generalized forces may be non-conservative, provided they satisfy D'Alembert's principle.
=== Euler–Lagrange equations and Hamilton's principle ===
For a non-conservative force which depends on velocity, it may be possible to find a potential energy function V that depends on positions and velocities. If the generalized forces Qi can be derived from a potential V such that
{\displaystyle Q_{j}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial V}{\partial {\dot {q}}_{j}}}-{\frac {\partial V}{\partial q_{j}}},}
equating to Lagrange's equations and defining the Lagrangian as L = T − V obtains Lagrange's equations of the second kind or the Euler–Lagrange equations of motion
{\displaystyle {\frac {\partial L}{\partial q_{j}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}=0.}
However, the Euler–Lagrange equations can only account for non-conservative forces if a potential can be found as shown. This may not always be possible for non-conservative forces, and Lagrange's equations do not involve any potential, only generalized forces; therefore they are more general than the Euler–Lagrange equations.
The Euler–Lagrange equations also follow from the calculus of variations. The variation of the Lagrangian is
{\displaystyle \delta L=\sum _{j=1}^{n}\left({\frac {\partial L}{\partial q_{j}}}\delta q_{j}+{\frac {\partial L}{\partial {\dot {q}}_{j}}}\delta {\dot {q}}_{j}\right),\quad \delta {\dot {q}}_{j}\equiv \delta {\frac {\mathrm {d} q_{j}}{\mathrm {d} t}}\equiv {\frac {\mathrm {d} (\delta q_{j})}{\mathrm {d} t}},}
which has a form similar to the total differential of L, but the virtual displacements and their time derivatives replace differentials, and there is no time increment in accordance with the definition of the virtual displacements. An integration by parts with respect to time can transfer the time derivative of δqj to the ∂L/∂(dqj/dt), in the process exchanging d(δqj)/dt for δqj, allowing the independent virtual displacements to be factorized from the derivatives of the Lagrangian,
{\displaystyle {\begin{aligned}\int _{t_{1}}^{t_{2}}\delta L\,\mathrm {d} t&=\int _{t_{1}}^{t_{2}}\sum _{j=1}^{n}\left({\frac {\partial L}{\partial q_{j}}}\delta q_{j}+{\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {\partial L}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}\delta q_{j}\right)\,\mathrm {d} t\\&=\sum _{j=1}^{n}\left[{\frac {\partial L}{\partial {\dot {q}}_{j}}}\delta q_{j}\right]_{t_{1}}^{t_{2}}+\int _{t_{1}}^{t_{2}}\sum _{j=1}^{n}\left({\frac {\partial L}{\partial q_{j}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}\right)\delta q_{j}\,\mathrm {d} t.\end{aligned}}}
Now, if the condition δqj(t1) = δqj(t2) = 0 holds for all j, the terms not integrated are zero. If in addition the entire time integral of δL is zero, then because the δqj are independent, the fundamental lemma of the calculus of variations implies that each of the coefficients of δqj must be zero. Then we obtain the equations of motion. This can be summarized by Hamilton's principle:
{\displaystyle \int _{t_{1}}^{t_{2}}\delta L\,\mathrm {d} t=0.}
The time integral of the Lagrangian is another quantity called the action, defined as
{\displaystyle S=\int _{t_{1}}^{t_{2}}L\,\mathrm {d} t,}
which is a functional; it takes in the Lagrangian function for all times between t1 and t2 and returns a scalar value. Its dimensions are the same as [angular momentum], [energy]·[time], or [length]·[momentum]. With this definition Hamilton's principle is
{\displaystyle \delta S=0.}
Instead of thinking about particles accelerating in response to applied forces, one might think of them picking out the path with a stationary action, with the end points of the path in configuration space held fixed at the initial and final times. Hamilton's principle is one of several action principles.
Historically, the idea of finding the shortest path a particle can follow subject to a force motivated the first applications of the calculus of variations to mechanical problems, such as the Brachistochrone problem solved by Jean Bernoulli in 1696, as well as Leibniz, Daniel Bernoulli, L'Hôpital around the same time, and Newton the following year. Newton himself was thinking along the lines of the variational calculus, but did not publish. These ideas in turn led to the variational principles of mechanics, of Fermat, Maupertuis, Euler, Hamilton, and others.
Hamilton's principle can be applied to nonholonomic constraints if the constraint equations can be put into a certain form, a linear combination of first-order differentials in the coordinates. The resulting constraint equation can be rearranged into a first-order differential equation. This will not be given here.
=== Lagrange multipliers and constraints ===
The Lagrangian L can be varied in the Cartesian rk coordinates, for N particles,
{\displaystyle \int _{t_{1}}^{t_{2}}\sum _{k=1}^{N}\left({\frac {\partial L}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {\mathbf {r} }}_{k}}}\right)\cdot \delta \mathbf {r} _{k}\,\mathrm {d} t=0.}
Hamilton's principle is still valid even if the coordinates in which L is expressed, here rk, are not independent, but the constraints are still assumed to be holonomic. As always the end points are fixed δrk(t1) = δrk(t2) = 0 for all k. What cannot be done is to simply equate the coefficients of δrk to zero because the δrk are not independent. Instead, the method of Lagrange multipliers can be used to include the constraints. Multiplying each constraint equation fi(rk, t) = 0 by a Lagrange multiplier λi for i = 1, 2, ..., C, and adding the results to the original Lagrangian, gives the new Lagrangian
{\displaystyle L'=L(\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,{\dot {\mathbf {r} }}_{1},{\dot {\mathbf {r} }}_{2},\ldots ,t)+\sum _{i=1}^{C}\lambda _{i}(t)f_{i}(\mathbf {r} _{k},t).}
The Lagrange multipliers are arbitrary functions of time t, but not functions of the coordinates rk, so the multipliers are on equal footing with the position coordinates. Varying this new Lagrangian and integrating with respect to time gives
{\displaystyle \int _{t_{1}}^{t_{2}}\delta L'\mathrm {d} t=\int _{t_{1}}^{t_{2}}\sum _{k=1}^{N}\left({\frac {\partial L}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {\mathbf {r} }}_{k}}}+\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}}\right)\cdot \delta \mathbf {r} _{k}\,\mathrm {d} t=0.}
The introduced multipliers can be found so that the coefficients of δrk are zero, even though the rk are not independent. The equations of motion follow. From the preceding analysis, obtaining the solution to this integral is equivalent to the statement
{\displaystyle {\frac {\partial L'}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L'}{\partial {\dot {\mathbf {r} }}_{k}}}=0\quad \Rightarrow \quad {\frac {\partial L}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {\mathbf {r} }}_{k}}}+\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}}=0,}
which are Lagrange's equations of the first kind. Also, the Euler–Lagrange equations for the multipliers λi recover the constraint equations
{\displaystyle {\frac {\partial L'}{\partial \lambda _{i}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L'}{\partial {\dot {\lambda }}_{i}}}=0\quad \Rightarrow \quad f_{i}(\mathbf {r} _{k},t)=0.}
For the case of a conservative force given by the gradient of some potential energy V, a function of the rk coordinates only, substituting the Lagrangian L = T − V gives
{\displaystyle \underbrace {{\frac {\partial T}{\partial \mathbf {r} _{k}}}-{\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial T}{\partial {\dot {\mathbf {r} }}_{k}}}} _{-\mathbf {F} _{k}}+\underbrace {-{\frac {\partial V}{\partial \mathbf {r} _{k}}}} _{\mathbf {N} _{k}}+\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}}=0,}
and identifying the derivatives of kinetic energy as the (negative of the) resultant force, and the derivatives of the potential equaling the non-constraint force, it follows the constraint forces are
{\displaystyle \mathbf {C} _{k}=\sum _{i=1}^{C}\lambda _{i}{\frac {\partial f_{i}}{\partial \mathbf {r} _{k}}},}
thus giving the constraint forces explicitly in terms of the constraint equations and the Lagrange multipliers.
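A sketch of this machinery in SymPy for the planar pendulum in Cartesian coordinates, with the single holonomic constraint f = x² + y² − ℓ² = 0 (an assumed example); the multiplier term reproduces the rod tension as the constraint force:

```python
import sympy as sp

t = sp.symbols('t')
m, g, l = sp.symbols('m g l', positive=True)
x = sp.Function('x')(t)
y = sp.Function('y')(t)
lam = sp.Function('lambda')(t)

# Planar pendulum in Cartesian coordinates with holonomic constraint f = 0
f = x**2 + y**2 - l**2
L = sp.Rational(1, 2)*m*(sp.diff(x, t)**2 + sp.diff(y, t)**2) - m*g*y
Lp = L + lam*f   # augmented Lagrangian L' = L + lambda*f

# Lagrange's equations of the first kind for each coordinate
eqs = [sp.diff(Lp, c) - sp.diff(sp.diff(Lp, sp.diff(c, t)), t) for c in (x, y)]
# eqs[0]: 2*lambda*x - m*x'' = 0,  eqs[1]: 2*lambda*y - m*g - m*y'' = 0;
# the constraint force C = lambda * grad f = 2*lambda*(x, y) is the rod tension.
for e in eqs:
    print(sp.simplify(e))
```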
== Properties of the Lagrangian ==
=== Non-uniqueness ===
The Lagrangian of a given system is not unique. A Lagrangian L can be multiplied by a nonzero constant a and shifted by an arbitrary constant b, and the new Lagrangian L′ = aL + b will describe the same motion as L. If one restricts as above to trajectories q over a given time interval [tst, tfin] and fixed end points Pst = q(tst) and Pfin = q(tfin), then two Lagrangians describing the same system can differ by the "total time derivative" of a function f(q, t):
{\displaystyle L'(\mathbf {q} ,{\dot {\mathbf {q} }},t)=L(\mathbf {q} ,{\dot {\mathbf {q} }},t)+{\frac {\mathrm {d} f(\mathbf {q} ,t)}{\mathrm {d} t}},}
where {\textstyle {\frac {\mathrm {d} f(\mathbf {q} ,t)}{\mathrm {d} t}}} means {\textstyle {\frac {\partial f(\mathbf {q} ,t)}{\partial t}}+\sum _{i}{\frac {\partial f(\mathbf {q} ,t)}{\partial q_{i}}}{\dot {q}}_{i}.}
Both Lagrangians L and L′ produce the same equations of motion since the corresponding actions S and S′ are related via
{\displaystyle {\begin{aligned}S'[\mathbf {q} ]&=\int _{t_{\text{st}}}^{t_{\text{fin}}}L'(\mathbf {q} (t),{\dot {\mathbf {q} }}(t),t)\,dt\\&=\int _{t_{\text{st}}}^{t_{\text{fin}}}L(\mathbf {q} (t),{\dot {\mathbf {q} }}(t),t)\,dt+\int _{t_{\text{st}}}^{t_{\text{fin}}}{\frac {\mathrm {d} f(\mathbf {q} (t),t)}{\mathrm {d} t}}\,dt\\&=S[\mathbf {q} ]+f(P_{\text{fin}},t_{\text{fin}})-f(P_{\text{st}},t_{\text{st}}),\end{aligned}}}
with the last two terms f(Pfin, tfin) and f(Pst, tst) independent of q.
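This non-uniqueness is easy to verify symbolically. A minimal SymPy sketch, with an assumed gauge function f(q, t) = q³t added to a harmonic-oscillator Lagrangian:

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)
qd = sp.diff(q, t)

def euler_lagrange(L):
    # d/dt(dL/dq') - dL/dq
    return sp.simplify(sp.diff(sp.diff(L, qd), t) - sp.diff(L, q))

L1 = sp.Rational(1, 2)*m*qd**2 - sp.Rational(1, 2)*k*q**2
f = q**3 * t              # assumed arbitrary gauge function f(q, t)
L2 = L1 + sp.diff(f, t)   # add the total time derivative of f

print(sp.simplify(euler_lagrange(L1) - euler_lagrange(L2)))  # 0: same equations of motion
```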
=== Invariance under point transformations ===
Given a set of generalized coordinates q, if we change these variables to a new set of generalized coordinates Q according to a point transformation Q = Q(q, t) which is invertible as q = q(Q, t), the new Lagrangian L′ is a function of the new coordinates and similarly for the constraints
{\displaystyle {\begin{aligned}L'(\mathbf {Q} ,{\dot {\mathbf {Q} }},t)&=L(\mathbf {q} (\mathbf {Q} ,t),{\dot {\mathbf {q} }}(\mathbf {Q} ,{\dot {\mathbf {Q} }},t),t),\\\phi _{j}'(\mathbf {Q} ,t)&=\phi _{j}(\mathbf {q} (\mathbf {Q} ,t),t)\end{aligned}}}
and by the chain rule for partial differentiation, Lagrange's equations are invariant under this transformation;
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L'}{\partial {\dot {Q}}_{i}}}={\frac {\partial L'}{\partial Q_{i}}}+\sum _{j}\lambda _{j}{\frac {\partial \phi '_{j}}{\partial Q_{i}}}.}
=== Cyclic coordinates and conserved momenta ===
An important property of the Lagrangian is that conserved quantities can easily be read off from it. The generalized momentum "canonically conjugate to" the coordinate qi is defined by
{\displaystyle p_{i}={\frac {\partial L}{\partial {\dot {q}}_{i}}}.}
If the Lagrangian L does not depend on some coordinate qi, it follows immediately from the Euler–Lagrange equations that
{\displaystyle {\dot {p}}_{i}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial L}{\partial {\dot {q}}_{i}}}={\frac {\partial L}{\partial q_{i}}}=0}
and integrating shows the corresponding generalized momentum equals a constant, a conserved quantity. This is a special case of Noether's theorem. Such coordinates are called "cyclic" or "ignorable".
For example, a system may have a Lagrangian
{\displaystyle L(r,\theta ,{\dot {s}},{\dot {z}},{\dot {r}},{\dot {\theta }},{\dot {\phi }},t),}
where r and z are lengths along straight lines, s is an arc length along some curve, and θ and φ are angles. Notice z, s, and φ are all absent in the Lagrangian even though their velocities are not. Then the momenta
{\displaystyle p_{z}={\frac {\partial L}{\partial {\dot {z}}}},\quad p_{s}={\frac {\partial L}{\partial {\dot {s}}}},\quad p_{\phi }={\frac {\partial L}{\partial {\dot {\phi }}}},}
are all conserved quantities. The units and nature of each generalized momentum will depend on the corresponding coordinate; in this case pz is a translational momentum in the z direction, ps is likewise a translational momentum along the curve along which s is measured, and pφ is an angular momentum in the plane in which the angle φ is measured. However complicated the motion of the system is, all the coordinates and velocities will vary in such a way that these momenta are conserved.
=== Energy ===
Given a Lagrangian {\displaystyle L,} the Hamiltonian of the corresponding mechanical system is, by definition,
{\displaystyle H={\biggl (}\sum _{i=1}^{n}{\dot {q}}_{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}{\biggr )}-L.}
This quantity will be equivalent to energy if the generalized coordinates are natural coordinates, i.e., they have no explicit time dependence when expressing position vector:
{\displaystyle \mathbf {r} =\mathbf {r} (q_{1},\cdots ,q_{n})}. From:
{\displaystyle T={\frac {m}{2}}v^{2}={\frac {m}{2}}\sum _{i,j}\left({\frac {\partial {\vec {r}}}{\partial q_{i}}}{\dot {q}}_{i}\right)\cdot \left({\frac {\partial {\vec {r}}}{\partial q_{j}}}{\dot {q}}_{j}\right)={\frac {m}{2}}\sum _{i,j}a_{ij}{\dot {q}}_{i}{\dot {q}}_{j}}
{\displaystyle \sum _{k=1}^{n}{\dot {q}}_{k}{\frac {\partial L}{\partial {\dot {q}}_{k}}}=\sum _{k=1}^{n}{\dot {q}}_{k}{\frac {\partial T}{\partial {\dot {q}}_{k}}}={\frac {m}{2}}\left(2\sum _{i,j}a_{ij}{\dot {q}}_{i}{\dot {q}}_{j}\right)=2T}
{\displaystyle H=\left(\sum _{i=1}^{n}{\dot {q}}_{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}\right)-L=2T-(T-V)=T+V=E}
where {\displaystyle a_{ij}={\frac {\partial \mathbf {r} }{\partial q_{i}}}\cdot {\frac {\partial \mathbf {r} }{\partial q_{j}}}} is a symmetric matrix that is defined for the derivation.
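A symbolic check that H = T + V in natural coordinates; the sketch below uses planar polar coordinates with a generic radial potential (an assumed example system):

```python
import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
r = sp.Function('r')(t)
th = sp.Function('theta')(t)
V = sp.Function('V')(r)   # generic radial potential

rd, thd = sp.diff(r, t), sp.diff(th, t)
T = sp.Rational(1, 2)*m*(rd**2 + r**2*thd**2)  # natural coordinates: no explicit t
L = T - V

# H = sum over generalized velocities of q'_i * dL/dq'_i, minus L
H = rd*sp.diff(L, rd) + thd*sp.diff(L, thd) - L
print(sp.simplify(H - (T + V)))  # 0: the Hamiltonian equals the total energy
```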
==== Invariance under coordinate transformations ====
At every time instant t, the energy is invariant under configuration space coordinate changes q → Q, i.e. (using natural coordinates)
{\displaystyle E(\mathbf {q} ,{\dot {\mathbf {q} }},t)=E(\mathbf {Q} ,{\dot {\mathbf {Q} }},t).}
Besides this result, it can be shown that, under such a change of coordinates, the derivatives {\displaystyle \partial L/\partial {\dot {q}}_{i}} change as coefficients of a linear form.
==== Conservation ====
In Lagrangian mechanics, the system is closed if and only if its Lagrangian {\displaystyle L} does not explicitly depend on time. The energy conservation law states that the energy {\displaystyle E} of a closed system is an integral of motion.
More precisely, let q = q(t) be an extremal. (In other words, q satisfies the Euler–Lagrange equations). Taking the total time-derivative of L along this extremal and using the EL equations leads to
{\displaystyle {\begin{aligned}{\frac {dL}{dt}}&={\dot {\mathbf {q} }}{\frac {\partial L}{\partial \mathbf {q} }}+{\ddot {\mathbf {q} }}{\frac {\partial L}{\partial \mathbf {\dot {q}} }}+{\frac {\partial L}{\partial t}}\\-{\frac {\partial L}{\partial t}}&={\frac {d}{dt}}\left({\frac {\partial L}{\partial \mathbf {\dot {q}} }}\right){\dot {\mathbf {q} }}+{\ddot {\mathbf {q} }}{\frac {\partial L}{\partial \mathbf {\dot {q}} }}-{\dot {L}}\\-{\frac {\partial L}{\partial t}}&={\frac {d}{dt}}\left({\frac {\partial L}{\partial \mathbf {\dot {q}} }}\mathbf {\dot {q}} -L\right)={\frac {dH}{dt}}\end{aligned}}}
If the Lagrangian L does not explicitly depend on time, then ∂L/∂t = 0, so H does not vary with the time evolution of the particle; it is indeed an integral of motion, meaning that
{\displaystyle H(\mathbf {q} (t),{\dot {\mathbf {q} }}(t),t)={\text{constant of time}}.}
Hence, if the chosen coordinates were natural coordinates, the energy is conserved.
==== Kinetic and potential energies ====
Under all these circumstances, the constant {\displaystyle E=T+V}
is the total energy of the system. The kinetic and potential energies still change as the system evolves, but the motion of the system will be such that their sum, the total energy, is constant. This is a valuable simplification, since the energy E is a constant of integration that counts as an arbitrary constant for the problem, and it may be possible to integrate the velocities from this energy relation to solve for the coordinates.
=== Mechanical similarity ===
If the potential energy is a homogeneous function of the coordinates and independent of time, and all position vectors are scaled by the same nonzero constant α, rk′ = αrk, so that
{\displaystyle V(\alpha \mathbf {r} _{1},\alpha \mathbf {r} _{2},\ldots ,\alpha \mathbf {r} _{N})=\alpha ^{N}V(\mathbf {r} _{1},\mathbf {r} _{2},\ldots ,\mathbf {r} _{N})}
and time is scaled by a factor β, t′ = βt, then the velocities vk are scaled by a factor of α/β and the kinetic energy T by (α/β)2. The entire Lagrangian has been scaled by the same factor if
{\displaystyle {\frac {\alpha ^{2}}{\beta ^{2}}}=\alpha ^{N}\quad \Rightarrow \quad \beta =\alpha ^{1-{\frac {N}{2}}}.}
Since the lengths and times have been scaled, the trajectories of the particles in the system follow geometrically similar paths differing in size. The length l traversed in time t in the original trajectory corresponds to a new length l′ traversed in time t′ in the new trajectory, given by the ratios
{\displaystyle {\frac {t'}{t}}=\left({\frac {l'}{l}}\right)^{1-{\frac {N}{2}}}.}
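For example, the Newtonian gravitational potential is homogeneous of degree N = −1 in the coordinates, so this relation gives {\displaystyle \beta =\alpha ^{3/2}}: geometrically similar orbits have periods that scale as the 3/2 power of their linear size, which is Kepler's third law.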
=== Interacting particles ===
For a given system, if two subsystems A and B are non-interacting, the Lagrangian L of the overall system is the sum of the Lagrangians LA and LB for the subsystems:
{\displaystyle L=L_{A}+L_{B}.}
If they do interact this is not possible. In some situations, it may be possible to separate the Lagrangian of the system L into the sum of non-interacting Lagrangians, plus another Lagrangian LAB containing information about the interaction,
{\displaystyle L=L_{A}+L_{B}+L_{AB}.}
This may be physically motivated by taking the non-interacting Lagrangians to be kinetic energies only, while the interaction Lagrangian is the system's total potential energy. Also, in the limiting case of negligible interaction, LAB tends to zero reducing to the non-interacting case above.
The extension to more than two non-interacting subsystems is straightforward – the overall Lagrangian is the sum of the separate Lagrangians for each subsystem. If there are interactions, then interaction Lagrangians may be added.
=== Consequences of singular Lagrangians ===
From the Euler-Lagrange equations, it follows that:
{\displaystyle {\begin{aligned}&{\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{i}}}-{\frac {\partial L}{\partial q_{i}}}=0\\&{\frac {\partial ^{2}L}{\partial q_{j}\partial {\dot {q}}_{i}}}{\frac {dq_{j}}{dt}}+{\frac {\partial ^{2}L}{\partial {\dot {q}}_{j}\partial {\dot {q}}_{i}}}{\frac {d{\dot {q}}_{j}}{dt}}+{\frac {\partial ^{2}L}{\partial t\,\partial {\dot {q}}_{i}}}-{\frac {\partial L}{\partial q_{i}}}=0\\&\sum _{j}W_{ij}(q,{\dot {q}},t){\ddot {q}}_{j}={\frac {\partial L}{\partial q_{i}}}-{\frac {\partial ^{2}L}{\partial t\,\partial {\dot {q}}_{i}}}-\sum _{j}{\frac {\partial ^{2}L}{\partial {\dot {q}}_{i}\partial q_{j}}}{\dot {q}}_{j},\end{aligned}}}
where the matrix is defined as
{\displaystyle W_{ij}={\frac {\partial ^{2}L}{\partial {\dot {q}}_{i}\partial {\dot {q}}_{j}}}}. If the matrix {\displaystyle W} is non-singular, the above equations can be solved to represent {\displaystyle {\ddot {q}}} as a function of {\displaystyle ({\dot {q}},q,t)}. If the matrix is non-invertible, it is not possible to represent all {\displaystyle {\ddot {q}}}'s as functions of {\displaystyle ({\dot {q}},q,t)}; moreover, the Hamiltonian equations of motion will not take the standard form.
== Examples ==
The following examples apply Lagrange's equations of the second kind to mechanical problems.
=== Conservative force ===
A particle of mass m moves under the influence of a conservative force derived from the gradient ∇ of a scalar potential,
{\displaystyle \mathbf {F} =-{\boldsymbol {\nabla }}V(\mathbf {r} ).}
If there are more particles, in accordance with the above results, the total kinetic energy is a sum over all the particle kinetic energies, and the potential is a function of all the coordinates.
==== Cartesian coordinates ====
The Lagrangian of the particle can be written
{\displaystyle L(x,y,z,{\dot {x}},{\dot {y}},{\dot {z}})={\frac {1}{2}}m({\dot {x}}^{2}+{\dot {y}}^{2}+{\dot {z}}^{2})-V(x,y,z).}
The equations of motion for the particle are found by applying the Euler–Lagrange equation, for the x coordinate
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {\partial L}{\partial {\dot {x}}}}\right)={\frac {\partial L}{\partial x}},}
with derivatives
{\displaystyle {\frac {\partial L}{\partial x}}=-{\frac {\partial V}{\partial x}},\quad {\frac {\partial L}{\partial {\dot {x}}}}=m{\dot {x}},\quad {\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {\partial L}{\partial {\dot {x}}}}\right)=m{\ddot {x}},}
hence
{\displaystyle m{\ddot {x}}=-{\frac {\partial V}{\partial x}},}
and similarly for the y and z coordinates. Collecting the equations in vector form we find
{\displaystyle m{\ddot {\mathbf {r} }}=-{\boldsymbol {\nabla }}V}
which is Newton's second law of motion for a particle subject to a conservative force.
==== Polar coordinates in 2D and 3D ====
Using the spherical coordinates (r, θ, φ) as commonly used in physics (ISO 80000-2:2019 convention), where r is the radial distance to origin, θ is polar angle (also known as colatitude, zenith angle, normal angle, or inclination angle), and φ is the azimuthal angle, the Lagrangian for a central potential is
{\displaystyle L={\frac {m}{2}}({\dot {r}}^{2}+r^{2}{\dot {\theta }}^{2}+r^{2}\sin ^{2}\theta \,{\dot {\varphi }}^{2})-V(r).}
So, in spherical coordinates, the Euler–Lagrange equations are
{\displaystyle m{\ddot {r}}-mr({\dot {\theta }}^{2}+\sin ^{2}\theta \,{\dot {\varphi }}^{2})+{\frac {\partial V}{\partial r}}=0,}
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}(mr^{2}{\dot {\theta }})-mr^{2}\sin \theta \cos \theta \,{\dot {\varphi }}^{2}=0,}
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}(mr^{2}\sin ^{2}\theta \,{\dot {\varphi }})=0.}
The φ coordinate is cyclic since it does not appear in the Lagrangian, so the conserved momentum in the system is the angular momentum
{\displaystyle p_{\varphi }={\frac {\partial L}{\partial {\dot {\varphi }}}}=mr^{2}\sin ^{2}\theta {\dot {\varphi }},}
in which r, θ and dφ/dt can all vary with time, but only in such a way that pφ is constant.
The Lagrangian in two-dimensional polar coordinates is recovered by fixing θ to the constant value π/2.
=== Pendulum on a movable support ===
Consider a pendulum of mass m and length ℓ, which is attached to a support with mass M, which can move along a line in the {\displaystyle x}-direction. Let {\displaystyle x} be the coordinate along the line of the support, and let us denote the position of the pendulum by the angle {\displaystyle \theta } from the vertical. The coordinates and velocity components of the pendulum bob are
{\displaystyle {\begin{array}{rll}&x_{\mathrm {pend} }=x+\ell \sin \theta &\quad \Rightarrow \quad {\dot {x}}_{\mathrm {pend} }={\dot {x}}+\ell {\dot {\theta }}\cos \theta \\&y_{\mathrm {pend} }=-\ell \cos \theta &\quad \Rightarrow \quad {\dot {y}}_{\mathrm {pend} }=\ell {\dot {\theta }}\sin \theta .\end{array}}}
The generalized coordinates can be taken to be {\displaystyle x} and {\displaystyle \theta }. The kinetic energy of the system is then
{\displaystyle T={\frac {1}{2}}M{\dot {x}}^{2}+{\frac {1}{2}}m\left({\dot {x}}_{\mathrm {pend} }^{2}+{\dot {y}}_{\mathrm {pend} }^{2}\right)}
and the potential energy is
{\displaystyle V=mgy_{\mathrm {pend} }}
giving the Lagrangian
{\displaystyle {\begin{array}{rcl}L&=&T-V\\&=&{\frac {1}{2}}M{\dot {x}}^{2}+{\frac {1}{2}}m\left[\left({\dot {x}}+\ell {\dot {\theta }}\cos \theta \right)^{2}+\left(\ell {\dot {\theta }}\sin \theta \right)^{2}\right]+mg\ell \cos \theta \\&=&{\frac {1}{2}}\left(M+m\right){\dot {x}}^{2}+m{\dot {x}}\ell {\dot {\theta }}\cos \theta +{\frac {1}{2}}m\ell ^{2}{\dot {\theta }}^{2}+mg\ell \cos \theta .\end{array}}}
Since x is absent from the Lagrangian, it is a cyclic coordinate. The conserved momentum is
{\displaystyle p_{x}={\frac {\partial L}{\partial {\dot {x}}}}=(M+m){\dot {x}}+m\ell {\dot {\theta }}\cos \theta ,}
and the Lagrange equation for the support coordinate {\displaystyle x} is
{\displaystyle (M+m){\ddot {x}}+m\ell {\ddot {\theta }}\cos \theta -m\ell {\dot {\theta }}^{2}\sin \theta =0.}
The Lagrange equation for the angle θ is
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left[m({\dot {x}}\ell \cos \theta +\ell ^{2}{\dot {\theta }})\right]+m\ell ({\dot {x}}{\dot {\theta }}+g)\sin \theta =0;}
and simplifying
{\displaystyle {\ddot {\theta }}+{\frac {\ddot {x}}{\ell }}\cos \theta +{\frac {g}{\ell }}\sin \theta =0.}
These equations may look quite complicated, but finding them with Newton's laws would have required carefully identifying all forces, which would have been much more laborious and prone to errors. By considering limit cases, the correctness of this system can be verified: for example, {\displaystyle {\ddot {x}}\to 0} should give the equations of motion for a simple pendulum that is at rest in some inertial frame, while {\displaystyle {\ddot {\theta }}\to 0} should give the equations for a pendulum in a constantly accelerating system, etc. Furthermore, it is trivial to obtain the results numerically, given suitable starting conditions and a chosen time step, by stepping through the results iteratively; a sketch of such an integration is given below.
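Solving the two Lagrange equations above for the accelerations gives ẍ = m sin θ (g cos θ + ℓθ̇²)/(M + m sin²θ) and θ̈ = −(ẍ cos θ + g sin θ)/ℓ, which can be fed to a standard ODE solver. A minimal sketch (masses, length, and initial conditions are assumed values):

```python
import numpy as np
from scipy.integrate import solve_ivp

M, m, l, g = 2.0, 1.0, 1.0, 9.81  # illustrative masses, length, gravity

def rhs(t, s):
    x, th, xd, thd = s
    # accelerations obtained by solving the two Lagrange equations above
    xdd = m*np.sin(th)*(g*np.cos(th) + l*thd**2) / (M + m*np.sin(th)**2)
    thdd = -(xdd*np.cos(th) + g*np.sin(th)) / l
    return [xd, thd, xdd, thdd]

sol = solve_ivp(rhs, [0, 10], [0.0, 0.5, 0.0, 0.0], rtol=1e-9)
print(sol.y[0, -1], sol.y[1, -1])  # cart position and pendulum angle at t = 10
```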
=== Two-body central force problem ===
Two bodies of masses m1 and m2 with position vectors r1 and r2 are in orbit about each other due to an attractive central potential V. We may write down the Lagrangian in terms of the position coordinates as they are, but it is an established procedure to convert the two-body problem into a one-body problem as follows. Introduce the Jacobi coordinates; the separation of the bodies r = r2 − r1 and the location of the center of mass R = (m1r1 + m2r2)/(m1 + m2). The Lagrangian is then
{\displaystyle L=\underbrace {{\frac {1}{2}}M{\dot {\mathbf {R} }}^{2}} _{L_{\text{cm}}}+\underbrace {{\frac {1}{2}}\mu {\dot {\mathbf {r} }}^{2}-V(|\mathbf {r} |)} _{L_{\text{rel}}}}
where M = m1 + m2 is the total mass, μ = m1m2/(m1 + m2) is the reduced mass, and V the potential of the radial force, which depends only on the magnitude of the separation |r| = |r2 − r1|. The Lagrangian splits into a center-of-mass term Lcm and a relative motion term Lrel.
The Euler–Lagrange equation for R is simply
{\displaystyle M{\ddot {\mathbf {R} }}=0,}
which states the center of mass moves in a straight line at constant velocity.
Since the relative motion only depends on the magnitude of the separation, it is ideal to use polar coordinates (r, θ) and take r = |r|,
{\displaystyle L_{\text{rel}}={\frac {1}{2}}\mu \left({\dot {r}}^{2}+r^{2}{\dot {\theta }}^{2}\right)-V(r),}
so θ is a cyclic coordinate with the corresponding conserved (angular) momentum
{\displaystyle p_{\theta }={\frac {\partial L_{\text{rel}}}{\partial {\dot {\theta }}}}=\mu r^{2}{\dot {\theta }}=\ell .}
The radial coordinate r and angular velocity dθ/dt can vary with time, but only in such a way that ℓ is constant. The Lagrange equation for r is
{\displaystyle \mu r{\dot {\theta }}^{2}-{\frac {dV}{dr}}=\mu {\ddot {r}}.}
This equation is identical to the radial equation obtained using Newton's laws in a co-rotating reference frame, that is, a frame rotating with the reduced mass so it appears stationary. Eliminating the angular velocity dθ/dt from this radial equation,
{\displaystyle \mu {\ddot {r}}=-{\frac {\mathrm {d} V}{\mathrm {d} r}}+{\frac {\ell ^{2}}{\mu r^{3}}},}
which is the equation of motion for a one-dimensional problem in which a particle of mass μ is subjected to the inward central force −dV/dr and a second outward force, called in this context the (Lagrangian) centrifugal force (see centrifugal force#Other uses of the term):
{\displaystyle F_{\mathrm {cf} }=\mu r{\dot {\theta }}^{2}={\frac {\ell ^{2}}{\mu r^{3}}}.}
Of course, if one remains entirely within the one-dimensional formulation, ℓ enters only as some imposed parameter of the external outward force, and its interpretation as angular momentum depends upon the more general two-dimensional problem from which the one-dimensional problem originated.
If one arrives at this equation using Newtonian mechanics in a co-rotating frame, the interpretation is evident as the centrifugal force in that frame due to the rotation of the frame itself. If one arrives at this equation directly by using the generalized coordinates (r, θ) and simply following the Lagrangian formulation without thinking about frames at all, the interpretation is that the centrifugal force is an outgrowth of using polar coordinates. As Hildebrand says:
"Since such quantities are not true physical forces, they are often called inertia forces. Their presence or absence depends, not upon the particular problem at hand, but upon the coordinate system chosen." In particular, if Cartesian coordinates are chosen, the centrifugal force disappears, and the formulation involves only the central force itself, which provides the centripetal force for a curved motion.
This viewpoint, that fictitious forces originate in the choice of coordinates, is often expressed by users of the Lagrangian method. It arises naturally in the Lagrangian approach, because the frame of reference is (possibly unconsciously) selected by the choice of coordinates; the literature compares, for example, Lagrangians in an inertial and in a noninertial frame of reference, and also distinguishes "total" from "updated" Lagrangian formulations. Unfortunately, this usage of "inertial force" conflicts with the Newtonian idea of an inertial force. In the Newtonian view, an inertial force originates in the acceleration of the frame of observation (the fact that it is not an inertial frame of reference), not in the choice of coordinate system. To keep matters clear, it is safest to refer to the Lagrangian inertial forces as generalized inertial forces, to distinguish them from the Newtonian vector inertial forces. That is, one should avoid following Hildebrand when he says (p. 155) "we deal always with generalized forces, velocities, accelerations, and momenta. For brevity, the adjective 'generalized' will be omitted frequently."
It is known that the Lagrangian of a system is not unique. Within the Lagrangian formalism the Newtonian fictitious forces can be identified by the existence of alternative Lagrangians in which the fictitious forces disappear, sometimes found by exploiting the symmetry of the system.
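Returning to the reduced radial equation above, a short numerical sketch (illustrative only; the attractive potential V(r) = −1/r and all parameter values are assumptions) integrates it directly and exhibits bounded radial oscillation between two turning points:
import numpy as np
from scipy.integrate import solve_ivp

mu, ell = 1.0, 1.2            # reduced mass and conserved angular momentum
dVdr = lambda r: 1.0 / r**2   # from the assumed potential V(r) = -1/r

def radial(t, s):
    # mu*r'' = -dV/dr + ell^2/(mu*r^3): central force plus centrifugal term
    r, rd = s
    return [rd, (-dVdr(r) + ell**2 / (mu * r**3)) / mu]

sol = solve_ivp(radial, (0.0, 50.0), [1.0, 0.0], max_step=0.01)
r = sol.y[0]
print(r.min(), r.max())  # the orbit oscillates between two turning points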
== Extensions to include non-conservative forces ==
=== Dissipative forces ===
Dissipation (i.e. non-conservative systems) can also be treated with an effective Lagrangian formulated by a certain doubling of the degrees of freedom.
In a more general formulation, the forces could be both conservative and viscous. If an appropriate transformation can be found from the Fi, Rayleigh suggests using a dissipation function, D, of the following form:
{\displaystyle D={\frac {1}{2}}\sum _{j=1}^{m}\sum _{k=1}^{m}C_{jk}{\dot {q}}_{j}{\dot {q}}_{k},}
where Cjk are constants that are related to the damping coefficients in the physical system, though not necessarily equal to them. If D is defined this way, then
{\displaystyle Q_{j}=-{\frac {\partial V}{\partial q_{j}}}-{\frac {\partial D}{\partial {\dot {q}}_{j}}}}
and
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {\partial L}{\partial {\dot {q}}_{j}}}\right)-{\frac {\partial L}{\partial q_{j}}}+{\frac {\partial D}{\partial {\dot {q}}_{j}}}=0.}
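As a minimal symbolic check (an illustrative one-dimensional oscillator with damping constant c; the example is an assumption, not from the original text), the modified Euler–Lagrange equation above reproduces the familiar damped-oscillator equation:
import sympy as sp

t = sp.symbols('t')
m, k, c = sp.symbols('m k c', positive=True)
q = sp.Function('q')(t)
qd = q.diff(t)

L = sp.Rational(1, 2) * m * qd**2 - sp.Rational(1, 2) * k * q**2
D = sp.Rational(1, 2) * c * qd**2   # Rayleigh dissipation function

# d/dt(dL/dq') - dL/dq + dD/dq' = 0
eom = sp.diff(L.diff(qd), t) - L.diff(q) + D.diff(qd)
print(sp.simplify(eom))  # -> c*q' + k*q + m*q'', the damped oscillator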
=== Electromagnetism ===
A test particle is a particle whose mass and charge are assumed to be so small that its effect on the external system is negligible. It is often a hypothetical simplified point particle with no properties other than mass and charge. Real particles like electrons and up quarks are more complex and have additional terms in their Lagrangians. Not only can the fields form non-conservative potentials, these potentials can also be velocity-dependent.
The Lagrangian for a charged particle with electrical charge q, interacting with an electromagnetic field, is the prototypical example of a velocity-dependent potential. The electric scalar potential ϕ = ϕ(r, t) and magnetic vector potential A = A(r, t) are defined from the electric field E = E(r, t) and magnetic field B = B(r, t) as follows:
{\displaystyle \mathbf {E} =-{\boldsymbol {\nabla }}\phi -{\frac {\partial \mathbf {A} }{\partial t}},\quad \mathbf {B} ={\boldsymbol {\nabla }}\times \mathbf {A} .}
The Lagrangian of a massive charged test particle in an electromagnetic field
{\displaystyle L={\tfrac {1}{2}}m{\dot {\mathbf {r} }}^{2}+q\,{\dot {\mathbf {r} }}\cdot \mathbf {A} -q\phi ,}
is called minimal coupling. This is a good example of a case where the common rule of thumb, that the Lagrangian is the kinetic energy minus the potential energy, fails. Combined with the Euler–Lagrange equation, it produces the Lorentz force law
{\displaystyle m{\ddot {\mathbf {r} }}=q\mathbf {E} +q{\dot {\mathbf {r} }}\times \mathbf {B} }
Under gauge transformation:
{\displaystyle \mathbf {A} \rightarrow \mathbf {A} +{\boldsymbol {\nabla }}f,\quad \phi \rightarrow \phi -{\dot {f}},}
where f(r, t) is any scalar function of space and time, the Lagrangian above transforms as
{\displaystyle L\rightarrow L+q\left({\dot {\mathbf {r} }}\cdot {\boldsymbol {\nabla }}+{\frac {\partial }{\partial t}}\right)f=L+q{\frac {df}{dt}},}
which still produces the same Lorentz force law.
Note that the canonical momentum (conjugate to position r) is the kinetic momentum plus a contribution from the A field (known as the potential momentum):
{\displaystyle \mathbf {p} ={\frac {\partial L}{\partial {\dot {\mathbf {r} }}}}=m{\dot {\mathbf {r} }}+q\mathbf {A} .}
This relation is also used in the minimal coupling prescription in quantum mechanics and quantum field theory. From this expression, we can see that the canonical momentum p is not gauge invariant, and therefore not a measurable physical quantity. However, if r is cyclic (i.e. the Lagrangian is independent of the position r), which happens if the ϕ and A fields are uniform, then this canonical momentum p is the conserved momentum, while the measurable physical kinetic momentum mv is not.
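The Lorentz force law above is easy to explore numerically. The following sketch is illustrative (charge, mass, and field values are all assumed); for a purely magnetic field it confirms that the particle's speed is constant, since a magnetic force does no work:
import numpy as np
from scipy.integrate import solve_ivp

q_c, m = 1.0, 1.0                      # assumed charge and mass
E = np.array([0.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 2.0])          # uniform field along z

def lorentz(t, s):
    # m r'' = q E + q r' x B, written as a first-order system
    r, v = s[:3], s[3:]
    return np.concatenate([v, (q_c / m) * (E + np.cross(v, B))])

sol = solve_ivp(lorentz, (0.0, 10.0), [0, 0, 0, 1.0, 0.0, 0.0],
                max_step=0.01)
speed = np.linalg.norm(sol.y[3:], axis=0)
print(speed.min(), speed.max())  # both ~1: circular cyclotron motion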
== Other contexts and formulations ==
The ideas in Lagrangian mechanics have numerous applications in other areas of physics, and can adopt generalized results from the calculus of variations.
=== Alternative formulations of classical mechanics ===
A closely related formulation of classical mechanics is Hamiltonian mechanics. The Hamiltonian is defined by
{\displaystyle H=\sum _{i=1}^{n}{\dot {q}}_{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}-L}
and can be obtained by performing a Legendre transformation on the Lagrangian, which introduces new variables canonically conjugate to the original variables. For example, given a set of generalized coordinates, the variables canonically conjugate are the generalized momenta. This doubles the number of variables, but makes differential equations first order. The Hamiltonian is a particularly ubiquitous quantity in quantum mechanics (see Hamiltonian (quantum mechanics)).
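As a concrete sketch of the Legendre transformation just described (illustrative; a one-dimensional harmonic oscillator is assumed, with the velocity written as a plain symbol v):
import sympy as sp

q, v, p = sp.symbols('q v p')
m, k = sp.symbols('m k', positive=True)

# Illustrative Lagrangian, with v standing for dq/dt
L = sp.Rational(1, 2) * m * v**2 - sp.Rational(1, 2) * k * q**2

p_expr = L.diff(v)                         # conjugate momentum p = dL/dv
v_of_p = sp.solve(sp.Eq(p, p_expr), v)[0]  # invert: v = p/m
H = (p * v - L).subs(v, v_of_p)            # Legendre transformation
print(sp.simplify(H))                      # -> p**2/(2*m) + k*q**2/2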
Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics; it is not often used in practice, but it is an efficient formulation for cyclic coordinates.
=== Momentum space formulation ===
The Euler–Lagrange equations can also be formulated in terms of the generalized momenta rather than generalized coordinates. Performing a Legendre transformation on the generalized-coordinate Lagrangian L(q, dq/dt, t) obtains the generalized-momenta Lagrangian L′(p, dp/dt, t) in terms of the original Lagrangian, as well as the EL equations in terms of the generalized momenta. Both Lagrangians contain the same information, and either can be used to solve for the motion of the system. In practice generalized coordinates are more convenient to use and interpret than generalized momenta.
=== Higher derivatives of generalized coordinates ===
There is no mathematical reason to restrict the derivatives of generalized coordinates to first order only. It is possible to derive modified EL equations for a Lagrangian containing higher-order derivatives; see Euler–Lagrange equation for details. However, from the physical point of view there is an obstacle to including time derivatives higher than first order, as implied by Ostrogradsky's construction of a canonical formalism for nondegenerate higher-derivative Lagrangians; see Ostrogradsky instability.
=== Optics ===
Lagrangian mechanics can be applied to geometrical optics, by applying variational principles to rays of light in a medium, and solving the EL equations gives the equations of the paths the light rays follow.
=== Relativistic formulation ===
Lagrangian mechanics can be formulated in special relativity and general relativity. Some features of Lagrangian mechanics are retained in the relativistic theories, but difficulties quickly appear in other respects. In particular, the EL equations take the same form, and the connection between cyclic coordinates and conserved momenta still applies; however, the Lagrangian must be modified and is not simply the kinetic minus the potential energy of a particle. Also, it is not straightforward to handle multiparticle systems in a manifestly covariant way; it may be possible if a particular frame of reference is singled out.
=== Quantum mechanics ===
In quantum mechanics, action and quantum-mechanical phase are related via the Planck constant, and the principle of stationary action can be understood in terms of constructive interference of wave functions.
In 1948, Feynman discovered the path integral formulation extending the principle of least action to quantum mechanics for electrons and photons. In this formulation, particles travel every possible path between the initial and final states; the probability of a specific final state is obtained by summing over all possible trajectories leading to it. In the classical regime, the path integral formulation cleanly reproduces Hamilton's principle, and Fermat's principle in optics.
=== Classical field theory ===
In Lagrangian mechanics, the generalized coordinates form a discrete set of variables that define the configuration of a system. In classical field theory, the physical system is not a set of discrete particles, but rather a continuous field ϕ(r, t) defined over a region of 3D space. Associated with the field is a Lagrangian density
{\displaystyle {\mathcal {L}}(\phi ,\nabla \phi ,{\dot {\phi }},\mathbf {r} ,t)}
defined in terms of the field and its space and time derivatives at a location r and time t. Analogous to the particle case, for non-relativistic applications the Lagrangian density is also the kinetic energy density of the field, minus its potential energy density (this is not true in general, and the Lagrangian density has to be "reverse engineered"). The Lagrangian is then the volume integral of the Lagrangian density over 3D space
{\displaystyle L(t)=\int {\mathcal {L}}\,\mathrm {d} ^{3}\mathbf {r} }
where d3r is a 3D differential volume element. The Lagrangian is a function of time, since the Lagrangian density has implicit space dependence via the fields and may have explicit spatial dependence, but these are removed in the integral, leaving only time as the variable for the Lagrangian.
=== Noether's theorem ===
The action principle, and the Lagrangian formalism, are tied closely to Noether's theorem, which connects physical conserved quantities to continuous symmetries of a physical system.
If the Lagrangian is invariant under a symmetry, then the resulting equations of motion are also invariant under that symmetry. This characteristic is very helpful in showing that theories are consistent with either special relativity or general relativity.
== See also ==
== Footnotes ==
== Notes ==
== References ==
== Further reading ==
Gupta, Kiran Chandra, Classical mechanics of particles and rigid bodies (Wiley, 1988).
Cassel, Kevin (2013). Variational methods with applications in science and engineering. Cambridge: Cambridge University Press. ISBN 978-1-107-02258-4.
Goldstein, Herbert, et al. Classical Mechanics. 3rd ed., Pearson, 2002.
== External links ==
David Tong. "Cambridge Lecture Notes on Classical Dynamics". DAMTP. Retrieved 2017-06-08.
Principle of least action interactive – an interactive explanation
Joseph Louis de Lagrange - Œuvres complètes (Gallica-Math)
Constrained motion and generalized coordinates, page 4
A separable partial differential equation can be broken into a set of equations of lower dimensionality (fewer independent variables) by a method of separation of variables. It generally relies upon the problem having some special form or symmetry. In this way, the partial differential equation (PDE) can be solved by solving a set of simpler PDEs, or even ordinary differential equations (ODEs) if the problem can be broken down into one-dimensional equations.
The most common form of separation of variables is simple separation of variables. A solution is obtained by assuming a solution of the form given by a product of functions of each individual coordinate. There is a special form of separation of variables called
{\displaystyle R}
-separation of variables which is accomplished by writing the solution as a particular fixed function of the coordinates multiplied by a product of functions of each individual coordinate. Laplace's equation on
{\displaystyle {\mathbb {R} }^{n}}
is an example of a partial differential equation that admits solutions through
{\displaystyle R}
-separation of variables; in the three-dimensional case this uses 6-sphere coordinates.
(This should not be confused with the case of a separable ODE, which refers to a somewhat different class of problems that can be broken into a pair of integrals; see separation of variables.)
== Example ==
For example, consider the time-independent Schrödinger equation
{\displaystyle [-\nabla ^{2}+V(\mathbf {x} )]\psi (\mathbf {x} )=E\psi (\mathbf {x} )}
for the function
{\displaystyle \psi (\mathbf {x} )}
(in dimensionless units, for simplicity). (Equivalently, consider the inhomogeneous Helmholtz equation.) If the function
{\displaystyle V(\mathbf {x} )}
in three dimensions is of the form
{\displaystyle V(x_{1},x_{2},x_{3})=V_{1}(x_{1})+V_{2}(x_{2})+V_{3}(x_{3}),}
then it turns out that the problem can be separated into three one-dimensional ODEs for functions
{\displaystyle \psi _{1}(x_{1})}, {\displaystyle \psi _{2}(x_{2})}, and {\displaystyle \psi _{3}(x_{3})}, and the final solution can be written as
{\displaystyle \psi (\mathbf {x} )=\psi _{1}(x_{1})\cdot \psi _{2}(x_{2})\cdot \psi _{3}(x_{3})}. (More generally, the separable cases of the Schrödinger equation were enumerated by Eisenhart in 1948.)
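A small numerical sketch of this separation (illustrative; the harmonic potential, grid size, and box length are assumptions): the lowest three-dimensional eigenvalue for V(x1, x2, x3) = V1(x1) + V1(x2) + V1(x3) is the sum of one-dimensional eigenvalues, each obtained from a finite-difference discretization of the 1D problem:
import numpy as np

n, box = 1000, 20.0
x = np.linspace(-box / 2, box / 2, n)
h = x[1] - x[0]
V1 = 0.5 * x**2                     # assumed 1D potential (harmonic)

# Discretize -psi'' + V1 psi = E psi with a second-difference stencil
H1 = (np.diag(2.0 / h**2 + V1)
      + np.diag(-np.ones(n - 1) / h**2, 1)
      + np.diag(-np.ones(n - 1) / h**2, -1))
E1 = np.linalg.eigvalsh(H1)[0]      # lowest 1D level

# The 3D ground state is psi1(x1)*psi1(x2)*psi1(x3) with energy 3*E1
print(E1, 3 * E1)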
== References ==
In mathematics, a weak solution (also called a generalized solution) to an ordinary or partial differential equation is a function for which the derivatives may not all exist but which is nonetheless deemed to satisfy the equation in some precisely defined sense. There are many different definitions of weak solution, appropriate for different classes of equations. One of the most important is based on the notion of distributions.
Avoiding the language of distributions, one starts with a differential equation and rewrites it in such a way that no derivatives of the solution of the equation show up (the new form is called the weak formulation, and the solutions to it are called weak solutions). Somewhat surprisingly, a differential equation may have solutions that are not differentiable, and the weak formulation allows one to find such solutions.
Weak solutions are important because many differential equations encountered in modelling real-world phenomena do not admit of sufficiently smooth solutions, and the only way of solving such equations is using the weak formulation. Even in situations where an equation does have differentiable solutions, it is often convenient to first prove the existence of weak solutions and only later show that those solutions are in fact smooth enough.
Examples of equations that have weak solutions but fail to have strong solutions include the Tanaka equation and Tsirelson's stochastic differential equation.
== A concrete example ==
As an illustration of the concept, consider the first-order wave equation:
{\displaystyle {\frac {\partial u(t,x)}{\partial t}}+{\frac {\partial u(t,x)}{\partial x}}=0\qquad (1)}
where u = u(t, x) is a function of two real variables. To indirectly probe the properties of a possible solution u, one integrates it against an arbitrary smooth function {\displaystyle \varphi } of compact support, known as a test function, taking
{\displaystyle \int _{-\infty }^{\infty }\int _{-\infty }^{\infty }u(t,x)\,\varphi (t,x)\,dx\,dt}
For example, if {\displaystyle \varphi } is a smooth probability distribution concentrated near a point {\displaystyle (t,x)=(t_{\circ },x_{\circ })}, the integral is approximately {\displaystyle u(t_{\circ },x_{\circ })}. Notice that while the integrals go from {\displaystyle -\infty } to {\displaystyle \infty }, they are essentially over a finite box where {\displaystyle \varphi } is non-zero.
Thus, assume a solution u is continuously differentiable on the Euclidean space R2, multiply equation (1) by a test function {\displaystyle \varphi } (smooth of compact support), and integrate:
{\displaystyle \int _{-\infty }^{\infty }\int _{-\infty }^{\infty }{\frac {\partial u(t,x)}{\partial t}}\varphi (t,x)\,\mathrm {d} t\,\mathrm {d} x+\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }{\frac {\partial u(t,x)}{\partial x}}\varphi (t,x)\,\mathrm {d} t\,\mathrm {d} x=0.}
Using Fubini's theorem, which allows one to interchange the order of integration, as well as integration by parts (in t for the first term and in x for the second term), this equation becomes:
{\displaystyle -\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }u(t,x)\,{\frac {\partial \varphi (t,x)}{\partial t}}\,\mathrm {d} t\,\mathrm {d} x-\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }u(t,x)\,{\frac {\partial \varphi (t,x)}{\partial x}}\,\mathrm {d} t\,\mathrm {d} x=0.\qquad (2)}
(Boundary terms vanish since {\displaystyle \varphi } is zero outside a finite box.) We have shown that equation (1) implies equation (2) as long as u is continuously differentiable.
The key to the concept of weak solution is that there exist functions u that satisfy equation (2) for any {\displaystyle \varphi }, but such u may not be differentiable and so cannot satisfy equation (1). An example is u(t, x) = |t − x|, as one may check by splitting the integrals over regions x ≥ t and x ≤ t, where u is smooth, and reversing the above computation using integration by parts. A weak solution of equation (1) means any solution u of equation (2) over all test functions {\displaystyle \varphi }.
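This can also be checked by quadrature. The sketch below is illustrative (a Gaussian stands in for a genuinely compactly supported test function, an assumption that is adequate here because the integrand is negligible outside the box): it evaluates the weak-form integral for u(t, x) = |t − x| and finds it numerically zero:
import numpy as np
from scipy.integrate import dblquad

u = lambda t, x: abs(t - x)
phi = lambda t, x: np.exp(-t**2 - x**2)          # stand-in test function
dphi_dt = lambda t, x: -2 * t * phi(t, x)
dphi_dx = lambda t, x: -2 * x * phi(t, x)

# Weak form of u_t + u_x = 0: the integral of u*(dphi/dt + dphi/dx) vanishes
val, err = dblquad(lambda x, t: u(t, x) * (dphi_dt(t, x) + dphi_dx(t, x)),
                   -8, 8, lambda t: -8, lambda t: 8)
print(val)  # ~0 up to quadrature error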
== General case ==
The general idea that follows from this example is that, when solving a differential equation in u, one can rewrite it using a test function {\displaystyle \varphi }, such that whatever derivatives in u show up in the equation, they are "transferred" via integration by parts to {\displaystyle \varphi }, resulting in an equation without derivatives of u. This new equation generalizes the original equation to include solutions that are not necessarily differentiable.
The approach illustrated above works in great generality. Indeed, consider a linear differential operator in an open set W in Rn:
{\displaystyle P(x,\partial )u(x)=\sum a_{\alpha _{1},\alpha _{2},\dots ,\alpha _{n}}(x)\,\partial ^{\alpha _{1}}\partial ^{\alpha _{2}}\cdots \partial ^{\alpha _{n}}u(x),}
where the multi-index (α1, α2, ..., αn) varies over some finite set in Nn and the coefficients {\displaystyle a_{\alpha _{1},\alpha _{2},\dots ,\alpha _{n}}} are smooth enough functions of x in Rn.
The differential equation P(x, ∂)u(x) = 0 can, after being multiplied by a smooth test function {\displaystyle \varphi } with compact support in W and integrated by parts, be written as
{\displaystyle \int _{W}u(x)Q(x,\partial )\varphi (x)\,\mathrm {d} x=0}
where the differential operator Q(x, ∂) is given by the formula
{\displaystyle Q(x,\partial )\varphi (x)=\sum (-1)^{|\alpha |}\partial ^{\alpha _{1}}\partial ^{\alpha _{2}}\cdots \partial ^{\alpha _{n}}\left[a_{\alpha _{1},\alpha _{2},\dots ,\alpha _{n}}(x)\varphi (x)\right].}
The number {\displaystyle (-1)^{|\alpha |}=(-1)^{\alpha _{1}+\alpha _{2}+\cdots +\alpha _{n}}} shows up because one needs α1 + α2 + ⋯ + αn integrations by parts to transfer all the partial derivatives from u to {\displaystyle \varphi } in each term of the differential equation, and each integration by parts entails a multiplication by −1.
The differential operator Q(x, ∂) is the formal adjoint of P(x, ∂) (cf. adjoint of an operator).
In summary, if the original (strong) problem was to find an |α|-times differentiable function u defined on the open set W such that
{\displaystyle P(x,\partial )u(x)=0{\text{ for all }}x\in W}
(a so-called strong solution), then an integrable function u would be said to be a weak solution if
{\displaystyle \int _{W}u(x)\,Q(x,\partial )\varphi (x)\,\mathrm {d} x=0}
for every smooth function {\displaystyle \varphi } with compact support in W.
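For a one-dimensional illustration (the operator and its coefficients are assumptions chosen for the sketch), the defining property of the formal adjoint, namely that φ·P(u) − u·Q(φ) is a total derivative and hence integrates to zero against compactly supported φ, can be verified symbolically:
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)
phi = sp.Function('phi')(x)
a0, a1, a2 = (sp.Function(f'a{k}')(x) for k in range(3))

# P u = a2 u'' + a1 u' + a0 u and its formal adjoint
# Q phi = (a2 phi)'' - (a1 phi)' + a0 phi (one factor -1 per derivative)
P_u = a2 * u.diff(x, 2) + a1 * u.diff(x) + a0 * u
Q_phi = (a2 * phi).diff(x, 2) - (a1 * phi).diff(x) + a0 * phi

# phi*P(u) - u*Q(phi) equals the derivative of the expression below
boundary = phi * a2 * u.diff(x) - u * (a2 * phi).diff(x) + a1 * u * phi
print(sp.simplify(phi * P_u - u * Q_phi - boundary.diff(x)))  # -> 0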
== Other kinds of weak solution ==
The notion of weak solution based on distributions is sometimes inadequate. In the case of hyperbolic systems, the notion of weak solution based on distributions does not guarantee uniqueness, and it is necessary to supplement it with entropy conditions or some other selection criterion. In fully nonlinear PDE such as the Hamilton–Jacobi equation, there is a very different definition of weak solution called viscosity solution.
== References ==
Evans, L. C. (1998). Partial Differential Equations. Providence: American Mathematical Society. ISBN 0-8218-0772-2.
In mathematics and physics, a nonlinear partial differential equation is a partial differential equation with nonlinear terms. They describe many different physical systems, ranging from gravitation to fluid dynamics, and have been used in mathematics to solve problems such as the Poincaré conjecture and the Calabi conjecture. They are difficult to study: almost no general techniques exist that work for all such equations, and usually each individual equation has to be studied as a separate problem.
The distinction between a linear and a nonlinear partial differential equation is usually made in terms of the properties of the operator that defines the PDE itself.
== Methods for studying nonlinear partial differential equations ==
=== Existence and uniqueness of solutions ===
A fundamental question for any PDE is the existence and uniqueness of a solution for given boundary conditions. For nonlinear equations these questions are in general very hard: for example, the hardest part of Yau's solution of the Calabi conjecture was the proof of existence for a Monge–Ampère equation. The open problem of existence (and smoothness) of solutions to the Navier–Stokes equations is one of the seven Millennium Prize problems in mathematics.
=== Singularities ===
The basic questions about singularities (their formation, propagation, and removal, and regularity of solutions) are the same as for linear PDE, but as usual much harder to study. In the linear case one can just use spaces of distributions, but nonlinear PDEs are not usually defined on arbitrary distributions, so one replaces spaces of distributions by refinements such as Sobolev spaces.
An example of singularity formation is given by the Ricci flow: Richard S. Hamilton showed that while short time solutions exist, singularities will usually form after a finite time. Grigori Perelman's solution of the Poincaré conjecture depended on a deep study of these singularities, where he showed how to continue the solution past the singularities.
=== Linear approximation ===
The solutions in a neighborhood of a known solution can sometimes be studied by linearizing the PDE around the solution. This corresponds to studying the tangent space of a point of the moduli space of all solutions.
=== Moduli space of solutions ===
Ideally one would like to describe the (moduli) space of all solutions explicitly, and for some very special PDEs this is possible. (In general this is a hopeless problem: it is unlikely that there is any useful description of all solutions of the Navier–Stokes equation for example, as this would involve describing all possible fluid motions.) If the equation has a very large symmetry group, then one is usually only interested in the moduli space of solutions modulo the symmetry group, and this is sometimes a finite-dimensional compact manifold, possibly with singularities; for example, this happens in the case of the Seiberg–Witten equations. A slightly more complicated case is the self dual Yang–Mills equations, when the moduli space is finite-dimensional but not necessarily compact, though it can often be compactified explicitly. Another case when one can sometimes hope to describe all solutions is the case of completely integrable models, when solutions are sometimes a sort of superposition of solitons; this happens e.g. for the Korteweg–de Vries equation.
=== Exact solutions ===
It is often possible to write down some special solutions explicitly in terms of elementary functions (though it is rarely possible to describe all solutions like this). One way of finding such explicit solutions is to reduce the equations to equations of lower dimension, preferably ordinary differential equations, which can often be solved exactly. This can sometimes be done using separation of variables, or by looking for highly symmetric solutions.
Some equations have several different exact solutions.
=== Numerical solutions ===
Numerical solution on a computer is almost the only method that can be used for getting information about arbitrary systems of PDEs. There has been a lot of work done, but a lot of work still remains on solving certain systems numerically, especially for the Navier–Stokes and other equations related to weather prediction.
=== Lax pair ===
If a system of PDEs can be put into Lax pair form
{\displaystyle {\frac {dL}{dt}}=LA-AL}
then it usually has an infinite number of first integrals, which help to study it.
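A small numerical sketch of this isospectral property (illustrative; the matrices are random, with A antisymmetric and L symmetric by construction): integrating dL/dt = LA − AL leaves the eigenvalues of L unchanged, so each eigenvalue is a first integral:
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)); A = A - A.T      # antisymmetric
L0 = rng.standard_normal((n, n)); L0 = L0 + L0.T  # symmetric

def flow(t, y):
    L = y.reshape(n, n)
    return (L @ A - A @ L).ravel()

sol = solve_ivp(flow, (0.0, 5.0), L0.ravel(), rtol=1e-10, atol=1e-12)
eig0 = np.sort(np.linalg.eigvalsh(L0))
eigT = np.sort(np.linalg.eigvalsh(sol.y[:, -1].reshape(n, n)))
print(np.max(np.abs(eigT - eig0)))  # tiny: the spectrum is preserved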
=== Euler–Lagrange equations ===
Systems of PDEs often arise as the Euler–Lagrange equations for a variational problem. Systems of this form can sometimes be solved by finding an extremum of the original variational problem.
=== Hamilton equations ===
=== Integrable systems ===
PDEs that arise from integrable systems are often the easiest to study, and can sometimes be completely solved. A well-known example is the Korteweg–de Vries equation.
=== Symmetry ===
Some systems of PDEs have large symmetry groups. For example, the Yang–Mills equations are invariant under an infinite-dimensional gauge group, and many systems of equations (such as the Einstein field equations) are invariant under diffeomorphisms of the underlying manifold. Any such symmetry groups can usually be used to help study the equations; in particular if one solution is known one can trivially generate more by acting with the symmetry group.
Sometimes equations are parabolic or hyperbolic "modulo the action of some group": for example, the Ricci flow equation is not quite parabolic, but is "parabolic modulo the action of the diffeomorphism group", which implies that it has most of the good properties of parabolic equations.
== List of equations ==
See the extensive List of nonlinear partial differential equations.
== See also ==
Euler–Lagrange equation
Nonlinear system
Integrable system
Inverse scattering transform
Dispersive partial differential equation
== References ==
Calogero, Francesco; Degasperis, Antonio (1982), Spectral transform and solitons. Vol. I. Tools to solve and investigate nonlinear evolution equations, Studies in Mathematics and its Applications, vol. 13, Amsterdam-New York: North-Holland Publishing Co., ISBN 0-444-86368-0, MR 0680040
Pokhozhaev, S.I. (2001) [1994], "Non-linear partial differential equation", Encyclopedia of Mathematics, EMS Press
Polyanin, Andrei D.; Zaitsev, Valentin F. (2004), Handbook of nonlinear partial differential equations, Boca Raton, FL: Chapman & Hall/CRC, pp. xx+814, ISBN 1-58488-355-3, MR 2042347
Roubíček, T. (2013), Nonlinear Partial Differential Equations with Applications, International Series of Numerical Mathematics, vol. 153 (2nd ed.), Basel, Boston, Berlin: Birkhäuser, doi:10.1007/978-3-0348-0513-1, ISBN 978-3-0348-0512-4, MR 3014456
Scott, Alwyn, ed. (2004), Encyclopedia of Nonlinear Science, Routledge, ISBN 978-1-57958-385-9. For errata, see this
Zwillinger, Daniel (1998), Handbook of differential equations (3rd ed.), Boston, MA: Academic Press, Inc., ISBN 978-0-12-784396-4, MR 0977062
== External links ==
EqWorld, The World of Mathematical Equations
dispersive PDE wiki
NEQwiki, the nonlinear equations encyclopedia Archived 2018-12-12 at the Wayback Machine
In the general theory of relativity, the Einstein field equations (EFE; also known as Einstein's equations) relate the geometry of spacetime to the distribution of matter within it.
The equations were published by Albert Einstein in 1915 in the form of a tensor equation which related the local spacetime curvature (expressed by the Einstein tensor) with the local energy, momentum and stress within that spacetime (expressed by the stress–energy tensor).
Analogously to the way that electromagnetic fields are related to the distribution of charges and currents via Maxwell's equations, the EFE relate the spacetime geometry to the distribution of mass–energy, momentum and stress, that is, they determine the metric tensor of spacetime for a given arrangement of stress–energy–momentum in the spacetime. The relationship between the metric tensor and the Einstein tensor allows the EFE to be written as a set of nonlinear partial differential equations when used in this way. The solutions of the EFE are the components of the metric tensor. The inertial trajectories of particles and radiation (geodesics) in the resulting geometry are then calculated using the geodesic equation.
As well as implying local energy–momentum conservation, the EFE reduce to Newton's law of gravitation in the limit of a weak gravitational field and velocities that are much less than the speed of light.
Exact solutions for the EFE can only be found under simplifying assumptions such as symmetry. Special classes of exact solutions are most often studied since they model many gravitational phenomena, such as rotating black holes and the expanding universe. Further simplification is achieved in approximating the spacetime as having only small deviations from flat spacetime, leading to the linearized EFE. These equations are used to study phenomena such as gravitational waves.
== Mathematical form ==
The Einstein field equations (EFE) may be written in the form:
{\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu },}
where Gμν is the Einstein tensor, gμν is the metric tensor, Tμν is the stress–energy tensor, Λ is the cosmological constant and κ is the Einstein gravitational constant.
The Einstein tensor is defined as
{\displaystyle G_{\mu \nu }=R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu },}
where Rμν is the Ricci curvature tensor, and R is the scalar curvature. This is a symmetric second-degree tensor that depends on only the metric tensor and its first and second derivatives.
The Einstein gravitational constant is defined as
{\displaystyle \kappa ={\frac {8\pi G}{c^{4}}}\approx 2.07665\times 10^{-43}\,{\textrm {N}}^{-1},}
where G is the Newtonian constant of gravitation and c is the speed of light in vacuum.
The EFE can thus also be written as
{\displaystyle R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu }.}
In standard units, each term on the left has quantity dimension of L−2.
The expression on the left represents the curvature of spacetime as determined by the metric; the expression on the right represents the stress–energy–momentum content of spacetime. The EFE can then be interpreted as a set of equations dictating how stress–energy–momentum determines the curvature of spacetime.
These equations, together with the geodesic equation, which dictates how freely falling matter moves through spacetime, form the core of the mathematical formulation of general relativity.
The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
Although the Einstein field equations were initially formulated in the context of a four-dimensional theory, some theorists have explored their consequences in n dimensions. The equations in contexts outside of general relativity are still referred to as the Einstein field equations. The vacuum field equations (obtained when Tμν is everywhere zero) define Einstein manifolds.
The equations are more complex than they appear. Given a specified distribution of matter and energy in the form of a stress–energy tensor, the EFE are understood to be equations for the metric tensor gμν, since both the Ricci tensor and scalar curvature depend on the metric in a complicated nonlinear manner. When fully written out, the EFE are a system of ten coupled, nonlinear, hyperbolic-elliptic partial differential equations.
=== Sign convention ===
The above form of the EFE is the standard established by Misner, Thorne, and Wheeler (MTW). The authors analyzed conventions that exist and classified these according to three signs ([S1] [S2] [S3]):
{\displaystyle {\begin{aligned}g_{\mu \nu }&=[S1]\times \operatorname {diag} (-1,+1,+1,+1)\\[6pt]{R^{\mu }}_{\alpha \beta \gamma }&=[S2]\times \left(\Gamma _{\alpha \gamma ,\beta }^{\mu }-\Gamma _{\alpha \beta ,\gamma }^{\mu }+\Gamma _{\sigma \beta }^{\mu }\Gamma _{\gamma \alpha }^{\sigma }-\Gamma _{\sigma \gamma }^{\mu }\Gamma _{\beta \alpha }^{\sigma }\right)\\[6pt]G_{\mu \nu }&=[S3]\times \kappa T_{\mu \nu }\end{aligned}}}
The third sign above is related to the choice of convention for the Ricci tensor:
{\displaystyle R_{\mu \nu }=[S2]\times [S3]\times {R^{\alpha }}_{\mu \alpha \nu }}
With these definitions Misner, Thorne, and Wheeler classify themselves as (+ + +), whereas Weinberg (1972) is (+ − −), Peebles (1980) and Efstathiou et al. (1990) are (− + +), and Rindler (1977), Atwater (1974), Collins, Martin & Squires (1989), and Peacock (1999) are (− + −).
Authors including Einstein have used a different sign in their definition for the Ricci tensor which results in the sign of the constant on the right side being negative:
{\displaystyle R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu }-\Lambda g_{\mu \nu }=-\kappa T_{\mu \nu }.}
The sign of the cosmological term would change in both these versions if the (+ − − −) metric sign convention is used rather than the MTW (− + + +) metric sign convention adopted here.
=== Equivalent formulations ===
Taking the trace with respect to the metric of both sides of the EFE, one gets
{\displaystyle R-{\frac {D}{2}}R+D\Lambda =\kappa T,}
where D is the spacetime dimension. Solving for R and substituting this in the original EFE, one gets the following equivalent "trace-reversed" form:
{\displaystyle R_{\mu \nu }-{\frac {2}{D-2}}\Lambda g_{\mu \nu }=\kappa \left(T_{\mu \nu }-{\frac {1}{D-2}}Tg_{\mu \nu }\right).}
In D = 4 dimensions this reduces to
{\displaystyle R_{\mu \nu }-\Lambda g_{\mu \nu }=\kappa \left(T_{\mu \nu }-{\frac {1}{2}}T\,g_{\mu \nu }\right).}
Reversing the trace again would restore the original EFE. The trace-reversed form may be more convenient in some cases (for example, when one is interested in weak-field limit and can replace gμν in the expression on the right with the Minkowski metric without significant loss of accuracy).
== Cosmological constant ==
In the Einstein field equations
{\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu }\,,}
the term containing the cosmological constant Λ was absent from the version in which Einstein originally published them. He then included the term with the cosmological constant to allow for a universe that is not expanding or contracting. This effort was unsuccessful because:
any desired steady state solution described by this equation is unstable, and
observations by Edwin Hubble showed that our universe is expanding.
Einstein then abandoned Λ, remarking to George Gamow "that the introduction of the cosmological term was the biggest blunder of his life".
The inclusion of this term does not create inconsistencies. For many years the cosmological constant was almost universally assumed to be zero. More recent astronomical observations have shown an accelerating expansion of the universe, and to explain this a positive value of Λ is needed. The effect of the cosmological constant is negligible at the scale of a galaxy or smaller.
Einstein thought of the cosmological constant as an independent parameter, but its term in the field equation can also be moved algebraically to the other side and incorporated as part of the stress–energy tensor:
{\displaystyle T_{\mu \nu }^{\mathrm {(vac)} }=-{\frac {\Lambda }{\kappa }}g_{\mu \nu }\,.}
This tensor describes a vacuum state with an energy density ρvac and isotropic pressure pvac that are fixed constants and given by
{\displaystyle \rho _{\mathrm {vac} }=-p_{\mathrm {vac} }={\frac {\Lambda }{\kappa }},}
where it is assumed that Λ has SI unit m−2 and κ is defined as above.
The existence of a cosmological constant is thus equivalent to the existence of a vacuum energy and a pressure of opposite sign. This has led to the terms "cosmological constant" and "vacuum energy" being used interchangeably in general relativity.
== Features ==
=== Conservation of energy and momentum ===
General relativity is consistent with the local conservation of energy and momentum expressed as
{\displaystyle \nabla _{\beta }T^{\alpha \beta }={T^{\alpha \beta }}_{;\beta }=0.}
which expresses the local conservation of stress–energy. This conservation law is a physical requirement. With his field equations Einstein ensured that general relativity is consistent with this conservation condition.
=== Nonlinearity ===
The nonlinearity of the EFE distinguishes general relativity from many other fundamental physical theories. For example, Maxwell's equations of electromagnetism are linear in the electric and magnetic fields, and charge and current distributions (i.e. the sum of two solutions is also a solution); another example is the Schrödinger equation of quantum mechanics, which is linear in the wavefunction.
=== Correspondence principle ===
The EFE reduce to Newton's law of gravity by using both the weak-field approximation and the low-velocity approximation. The constant G appearing in the EFE is determined by making these two approximations.
== Vacuum field equations ==
If the energy–momentum tensor Tμν is zero in the region under consideration, then the field equations are also referred to as the vacuum field equations. By setting Tμν = 0 in the trace-reversed field equations, the vacuum field equations, also known as 'Einstein vacuum equations' (EVE), can be written as
{\displaystyle R_{\mu \nu }=0\,.}
In the case of nonzero cosmological constant, the equations are
{\displaystyle R_{\mu \nu }={\frac {\Lambda }{{\frac {D}{2}}-1}}g_{\mu \nu }\,.}
The solutions to the vacuum field equations are called vacuum solutions. Flat Minkowski space is the simplest example of a vacuum solution. Nontrivial examples include the Schwarzschild solution and the Kerr solution.
Manifolds with a vanishing Ricci tensor, Rμν = 0, are referred to as Ricci-flat manifolds and manifolds with a Ricci tensor proportional to the metric as Einstein manifolds.
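As a symbolic sketch (illustrative; Schwarzschild coordinates, the (− + + +) signature, and units G = c = 1 are assumed), one can verify Ricci-flatness of the Schwarzschild metric by computing the Christoffel symbols and the Ricci tensor directly:
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
X = [t, r, th, ph]
f = 1 - 2 * M / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)  # metric g_mu_nu
ginv = g.inv()

def christoffel(a, b, c):
    # Gamma^a_bc = (1/2) g^{ad} (d_b g_dc + d_c g_db - d_d g_bc)
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, c], X[b]) + sp.diff(g[d, b], X[c])
                      - sp.diff(g[b, c], X[d])) for d in range(4))

Gam = [[[sp.simplify(christoffel(a, b, c)) for c in range(4)]
        for b in range(4)] for a in range(4)]

def ricci(b, c):
    # R_bc = d_a Gamma^a_bc - d_c Gamma^a_ba
    #        + Gamma^a_ad Gamma^d_bc - Gamma^a_cd Gamma^d_ba
    return sp.simplify(sum(
        sp.diff(Gam[a][b][c], X[a]) - sp.diff(Gam[a][b][a], X[c])
        + sum(Gam[a][a][d] * Gam[d][b][c] - Gam[a][c][d] * Gam[d][b][a]
              for d in range(4)) for a in range(4)))

print(all(ricci(b, c) == 0 for b in range(4) for c in range(4)))  # True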
== Einstein–Maxwell equations ==
If the energy–momentum tensor Tμν is that of an electromagnetic field in free space, i.e. if the electromagnetic stress–energy tensor
{\displaystyle T^{\alpha \beta }=\,-{\frac {1}{\mu _{0}}}\left({F^{\alpha }}^{\psi }{F_{\psi }}^{\beta }+{\tfrac {1}{4}}g^{\alpha \beta }F_{\psi \tau }F^{\psi \tau }\right)}
is used, then the Einstein field equations are called the Einstein–Maxwell equations (with cosmological constant Λ, taken to be zero in conventional relativity theory):
{\displaystyle G^{\alpha \beta }+\Lambda g^{\alpha \beta }={\frac {\kappa }{\mu _{0}}}\left({F^{\alpha }}^{\psi }{F_{\psi }}^{\beta }+{\tfrac {1}{4}}g^{\alpha \beta }F_{\psi \tau }F^{\psi \tau }\right).}
Additionally, the covariant Maxwell equations are also applicable in free space:
{\displaystyle {\begin{aligned}{F^{\alpha \beta }}_{;\beta }&=0\\F_{[\alpha \beta ;\gamma ]}&={\tfrac {1}{3}}\left(F_{\alpha \beta ;\gamma }+F_{\beta \gamma ;\alpha }+F_{\gamma \alpha ;\beta }\right)={\tfrac {1}{3}}\left(F_{\alpha \beta ,\gamma }+F_{\beta \gamma ,\alpha }+F_{\gamma \alpha ,\beta }\right)=0,\end{aligned}}}
where the semicolon represents a covariant derivative, and the brackets denote anti-symmetrization. The first equation asserts that the 4-divergence of the 2-form F is zero, and the second that its exterior derivative is zero. From the latter, it follows by the Poincaré lemma that in a coordinate chart it is possible to introduce an electromagnetic field potential Aα such that
{\displaystyle F_{\alpha \beta }=A_{\alpha ;\beta }-A_{\beta ;\alpha }=A_{\alpha ,\beta }-A_{\beta ,\alpha }}
in which the comma denotes a partial derivative. This is often taken as equivalent to the covariant Maxwell equation from which it is derived. However, there are global solutions of the equation that may lack a globally defined potential.
== Solutions ==
The solutions of the Einstein field equations are metrics of spacetime. These metrics describe the structure of the spacetime including the inertial motion of objects in the spacetime. As the field equations are non-linear, they cannot always be completely solved (i.e. without making approximations). For example, there is no known complete solution for a spacetime with two massive bodies in it (which is a theoretical model of a binary star system, for example). However, approximations are usually made in these cases. These are commonly referred to as post-Newtonian approximations. Even so, there are several cases where the field equations have been solved completely, and those are called exact solutions.
The study of exact solutions of Einstein's field equations is one of the activities of cosmology. It leads to the prediction of black holes and to different models of evolution of the universe.
One can also discover new solutions of the Einstein field equations via the method of orthonormal frames as pioneered by Ellis and MacCallum. In this approach, the Einstein field equations are reduced to a set of coupled, nonlinear, ordinary differential equations. As discussed by Hsu and Wainwright, self-similar solutions to the Einstein field equations are fixed points of the resulting dynamical system. New solutions have been discovered using these methods by LeBlanc and Kohli and Haslam.
== Linearized EFE ==
The nonlinearity of the EFE makes finding exact solutions difficult. One way of solving the field equations is to make an approximation, namely, that far from the source(s) of gravitating matter, the gravitational field is very weak and the spacetime approximates that of Minkowski space. The metric is then written as the sum of the Minkowski metric and a term representing the deviation of the true metric from the Minkowski metric, ignoring higher-power terms. This linearization procedure can be used to investigate the phenomena of gravitational radiation.
== Polynomial form ==
Although the EFE as written contain the inverse of the metric tensor, they can be arranged in a form that contains the metric tensor in polynomial form and without its inverse. First, the determinant of the metric in 4 dimensions can be written
{\displaystyle \det(g)={\tfrac {1}{24}}\varepsilon ^{\alpha \beta \gamma \delta }\varepsilon ^{\kappa \lambda \mu \nu }g_{\alpha \kappa }g_{\beta \lambda }g_{\gamma \mu }g_{\delta \nu }}
using the Levi-Civita symbol; and the inverse of the metric in 4 dimensions can be written as:
{\displaystyle g^{\alpha \kappa }={\frac {{\tfrac {1}{6}}\varepsilon ^{\alpha \beta \gamma \delta }\varepsilon ^{\kappa \lambda \mu \nu }g_{\beta \lambda }g_{\gamma \mu }g_{\delta \nu }}{\det(g)}}\,.}
Substituting this expression of the inverse of the metric into the equations then multiplying both sides by a suitable power of det(g) to eliminate it from the denominator results in polynomial equations in the metric tensor and its first and second derivatives. The Einstein–Hilbert action from which the equations are derived can also be written in polynomial form by suitable redefinitions of the fields.
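The determinant identity above can be spot-checked numerically (an illustrative sketch; the random symmetric matrix stands in for a metric):
import numpy as np
from itertools import permutations

# Build the rank-4 Levi-Civita symbol
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    inv = sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))
    eps[p] = (-1) ** inv

rng = np.random.default_rng(1)
g = rng.standard_normal((4, 4)); g = (g + g.T) / 2  # symmetric "metric"

lhs = np.linalg.det(g)
rhs = np.einsum('abcd,klmn,ak,bl,cm,dn', eps, eps, g, g, g, g) / 24
print(abs(lhs - rhs))  # ~1e-15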
== See also ==
== Notes ==
== References ==
See General relativity resources.
Misner, Charles W.; Thorne, Kip S.; Wheeler, John Archibald (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 978-0-7167-0344-0.
Weinberg, Steven (1972). Gravitation and Cosmology. John Wiley & Sons. ISBN 0-471-92567-5.
Peacock, John A. (1999). Cosmological Physics. Cambridge University Press. ISBN 978-0521410724.
== External links ==
"Einstein equations", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Caltech Tutorial on Relativity — A simple introduction to Einstein's Field Equations.
The Meaning of Einstein's Equation — An explanation of Einstein's field equation, its derivation, and some of its consequences
Video Lecture on Einstein's Field Equations by MIT Physics Professor Edmund Bertschinger.
Arch and scaffold: How Einstein found his field equations Physics Today November 2015, History of the Development of the Field Equations
=== External images ===
The Einstein field equation on the wall of the Museum Boerhaave in downtown Leiden
Suzanne Imber, "The impact of general relativity on the Atacama Desert", Einstein field equation on the side of a train in Bolivia.
The Lorenz system is a system of ordinary differential equations first studied by mathematician and meteorologist Edward Lorenz. It is notable for having chaotic solutions for certain parameter values and initial conditions. In particular, the Lorenz attractor is a set of chaotic solutions of the Lorenz system. The term "butterfly effect" in popular media may stem from the real-world implications of the Lorenz attractor, namely that tiny changes in initial conditions evolve to completely different trajectories. This underscores that chaotic systems can be completely deterministic and yet still be inherently impractical or even impossible to predict over longer periods of time. For example, even the small flap of a butterfly's wings could set the earth's atmosphere on a vastly different trajectory, in which, for example, a hurricane occurs where it otherwise would not have (see Saddle points). The shape of the Lorenz attractor itself, when plotted in phase space, may also be seen to resemble a butterfly.
== Overview ==
In 1963, Edward Lorenz, with the help of Ellen Fetter, who was responsible for the numerical simulations and figures, and Margaret Hamilton, who helped in the initial numerical computations leading up to the findings of the Lorenz model, developed a simplified mathematical model for atmospheric convection. The model is a system of three ordinary differential equations now known as the Lorenz equations:
{\displaystyle {\begin{aligned}{\frac {\mathrm {d} x}{\mathrm {d} t}}&=\sigma (y-x),\\[6pt]{\frac {\mathrm {d} y}{\mathrm {d} t}}&=x(\rho -z)-y,\\[6pt]{\frac {\mathrm {d} z}{\mathrm {d} t}}&=xy-\beta z.\end{aligned}}}
The equations relate the properties of a two-dimensional fluid layer uniformly warmed from below and cooled from above. In particular, the equations describe the rate of change of three quantities with respect to time: x is proportional to the rate of convection, y to the horizontal temperature variation, and z to the vertical temperature variation. The constants σ, ρ, and β are system parameters proportional to the Prandtl number, Rayleigh number, and certain physical dimensions of the layer itself.
The Lorenz equations can arise in simplified models for lasers, dynamos, thermosyphons, brushless DC motors, electric circuits, chemical reactions and forward osmosis. The same Lorenz equations were also derived in 1963 by Sauermann and Haken for a single-mode laser. In 1975, Haken realized that the equations they derived in 1963 were mathematically equivalent to the original Lorenz equations. Haken's paper thus started a new field called laser chaos or optical chaos, and the Lorenz equations are often called the Lorenz–Haken equations in the optics literature. Later, the complex version of the Lorenz equations was also shown to have laser equivalents.
The Lorenz equations are also the governing equations in Fourier space for the Malkus waterwheel. The Malkus waterwheel exhibits chaotic motion where instead of spinning in one direction at a constant speed, its rotation will speed up, slow down, stop, change directions, and oscillate back and forth between combinations of such behaviors in an unpredictable manner.
From a technical standpoint, the Lorenz system is nonlinear, aperiodic, three-dimensional and deterministic. The Lorenz equations have been the subject of hundreds of research articles, and at least one book-length study.
== Analysis ==
One normally assumes that the parameters σ, ρ, and β are positive. Lorenz used the values σ = 10, ρ = 28, and β = 8/3. The system exhibits chaotic behavior for these (and nearby) values.
If ρ < 1 then there is only one equilibrium point, which is at the origin. This point corresponds to no convection. All orbits converge to the origin, which is a global attractor, when ρ < 1.
A pitchfork bifurcation occurs at ρ = 1, and for ρ > 1 two additional critical points appear at
{\displaystyle \left({\sqrt {\beta (\rho -1)}},{\sqrt {\beta (\rho -1)}},\rho -1\right)\quad {\text{and}}\quad \left(-{\sqrt {\beta (\rho -1)}},-{\sqrt {\beta (\rho -1)}},\rho -1\right).}
These correspond to steady convection. This pair of equilibrium points is stable only if
{\displaystyle \rho <\sigma {\frac {\sigma +\beta +3}{\sigma -\beta -1}},}
which can hold only for positive ρ if σ > β + 1. At the critical value, both equilibrium points lose stability through a subcritical Hopf bifurcation.
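A quick numerical sketch of this loss of stability (illustrative; only the classical σ and β are assumed): evaluating the Jacobian of the Lorenz system at the equilibrium C+ on either side of the critical value shows the largest real part of the eigenvalues changing sign:
import numpy as np

sigma, beta = 10.0, 8.0 / 3.0
rho_c = sigma * (sigma + beta + 3) / (sigma - beta - 1)  # = 470/19 ~ 24.74

def jacobian_at_C_plus(rho):
    x = y = np.sqrt(beta * (rho - 1))
    z = rho - 1
    return np.array([[-sigma, sigma, 0.0],
                     [rho - z, -1.0, -x],
                     [y, x, -beta]])

for rho in (rho_c - 1.0, rho_c + 1.0):
    eig = np.linalg.eigvals(jacobian_at_C_plus(rho))
    print(rho, eig.real.max())  # the sign flips across rho_c (Hopf)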
When ρ = 28, σ = 10, and β = 8/3, the Lorenz system has chaotic solutions (but not all solutions are chaotic). Almost all initial points will tend to an invariant set – the Lorenz attractor – a strange attractor, a fractal, and a self-excited attractor with respect to all three equilibria. Its Hausdorff dimension is estimated from above by the Lyapunov dimension (Kaplan-Yorke dimension) as 2.06±0.01, and the correlation dimension is estimated to be 2.05±0.01. The exact Lyapunov dimension formula of the global attractor can be found analytically under classical restrictions on the parameters:
{\displaystyle 3-{\frac {2(\sigma +\beta +1)}{\sigma +1+{\sqrt {\left(\sigma -1\right)^{2}+4\sigma \rho }}}}}
The Lorenz attractor is difficult to analyze, but the action of the differential equation on the attractor is described by a fairly simple geometric model. Proving that this is indeed the case is the fourteenth problem on the list of Smale's problems. This problem was the first one to be resolved, by Warwick Tucker in 2002.
For other values of ρ, the system displays knotted periodic orbits. For example, with ρ = 99.96 it becomes a T(3,2) torus knot.
== Connection to tent map ==
In Figure 4 of his paper, Lorenz plotted the relative maximum value in the z direction achieved by the system against the previous relative maximum in the z direction. This procedure later became known as a Lorenz map (not to be confused with a Poincaré plot, which plots the intersections of a trajectory with a prescribed surface). The resulting plot has a shape very similar to the tent map. Lorenz also found that when the maximum z value is above a certain cut-off, the system will switch to the next lobe. Combining this with the chaos known to be exhibited by the tent map, he showed that the system switches between the two lobes chaotically.
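This construction is straightforward to reproduce (an illustrative sketch; the integration time and the transient cutoff are assumptions): integrate the system, collect the successive relative maxima of z, and pair each with its successor:
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import argrelmax

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, s):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 200, 200000)
sol = solve_ivp(lorenz, (0, 200), [1.0, 1.0, 1.0], t_eval=t_eval)
z = sol.y[2]
zmax = z[argrelmax(z)[0]][50:]       # successive maxima, transient dropped
pairs = np.column_stack([zmax[:-1], zmax[1:]])  # (z_n, z_{n+1}): tent-like
print(pairs[:5])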
== A generalized Lorenz system ==
Over the past several years, a series of papers on high-dimensional Lorenz models has yielded a generalized Lorenz model, which can be reduced to the classical Lorenz model for three state variables or to the following five-dimensional Lorenz model for five state variables:
{\displaystyle {\begin{aligned}{\frac {\mathrm {d} x}{\mathrm {d} t}}&=\sigma (y-x),\\[6pt]{\frac {\mathrm {d} y}{\mathrm {d} t}}&=x(\rho -z)-y,\\[6pt]{\frac {\mathrm {d} z}{\mathrm {d} t}}&=xy-xy_{1}-\beta z,\\[6pt]{\frac {\mathrm {d} y_{1}}{\mathrm {d} t}}&=xz-2xz_{1}-d_{0}y_{1},\\[6pt]{\frac {\mathrm {d} z_{1}}{\mathrm {d} t}}&=2xy_{1}-4\beta z_{1}.\end{aligned}}}
A choice of the parameter d0 = 19/3 has been applied to be consistent with the choice of the other parameters.
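The five equations above translate directly into code; a sketch of the right-hand side suitable for any standard ODE integrator (the SciPy integrator and the initial condition are assumptions for illustration):
    from scipy.integrate import solve_ivp

    def lorenz5d(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0, d0=19.0 / 3.0):
        x, y, z, y1, z1 = s
        return [
            sigma * (y - x),
            x * (rho - z) - y,
            x * y - x * y1 - beta * z,
            x * z - 2.0 * x * z1 - d0 * y1,
            2.0 * x * y1 - 4.0 * beta * z1,
        ]

    sol = solve_ivp(lorenz5d, (0.0, 50.0), [1.0, 1.0, 1.0, 0.0, 0.0], rtol=1e-8)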
== Simulations ==
=== Julia simulation ===
=== Maple simulation ===
=== Maxima simulation ===
=== MATLAB simulation ===
=== Mathematica simulation ===
=== Python simulation ===
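The code listings for these subsections are not preserved in this copy. As a stand-in, a minimal SciPy-based sketch of the standard simulation (integrator, initial condition, and plotting choices are assumptions, not the original listing):
    import numpy as np
    from scipy.integrate import solve_ivp
    import matplotlib.pyplot as plt

    def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    # Integrate from a point near the attractor and sample densely.
    sol = solve_ivp(lorenz, (0.0, 40.0), [0.0, 1.0, 1.05], dense_output=True, rtol=1e-8)
    t = np.linspace(0.0, 40.0, 10000)
    x, y, z = sol.sol(t)

    ax = plt.figure().add_subplot(projection="3d")
    ax.plot(x, y, z, lw=0.5)
    ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
    plt.show()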
=== R simulation ===
=== SageMath simulation ===
We try to solve this system of equations for ρ = 28, σ = 10, β = 8/3, with initial conditions y1(0) = 0, y2(0) = 0.5, y3(0) = 0.
== Applications ==
=== Model for atmospheric convection ===
As shown in Lorenz's original paper, the Lorenz system is a reduced version of a larger system studied earlier by Barry Saltzman. The Lorenz equations are derived from the Oberbeck–Boussinesq approximation to the equations describing fluid circulation in a shallow layer of fluid, heated uniformly from below and cooled uniformly from above. This fluid circulation is known as Rayleigh–Bénard convection. The fluid is assumed to circulate in two dimensions (vertical and horizontal) with periodic rectangular boundary conditions.
The partial differential equations modeling the system's stream function and temperature are subjected to a spectral Galerkin approximation: the hydrodynamic fields are expanded in Fourier series, which are then severely truncated to a single term for the stream function and two terms for the temperature. This reduces the model equations to a set of three coupled, nonlinear ordinary differential equations. A detailed derivation may be found, for example, in nonlinear dynamics texts from Hilborn (2000), Appendix C; Bergé, Pomeau & Vidal (1984), Appendix D; or Shen (2016), Supplementary Materials.
=== Model for the nature of chaos and order in the atmosphere ===
The scientific community accepts that the chaotic features found in low-dimensional Lorenz models could represent features of the Earth's atmosphere, yielding the statement that “weather is chaotic.” By comparison, based on the concept of attractor coexistence within the generalized Lorenz model and the original Lorenz model, Shen and his co-authors proposed a revised view that “weather possesses both chaos and order with distinct predictability”. The revised view, which builds on the conventional view, suggests that “the chaotic and regular features found in theoretical Lorenz models could better represent features of the Earth's atmosphere”.
=== Resolution of Smale's 14th problem ===
Smale's 14th problem asks, 'Do the properties of the Lorenz attractor exhibit that of a strange attractor?'. The problem was answered affirmatively by Warwick Tucker in 2002. To prove this result, Tucker used rigorous numerical methods such as interval arithmetic and normal forms. First, Tucker defined a cross section Σ ⊂ {x_3 = r − 1} that is cut transversely by the flow trajectories. From this, one can define the first-return map P, which assigns to each x ∈ Σ the point P(x) where the trajectory of x first intersects Σ.
Then the proof is split into three main points that are proved and together imply the existence of a strange attractor. The three points are:
There exists a region N ⊂ Σ invariant under the first-return map, meaning P(N) ⊂ N.
The return map admits a forward invariant cone field.
Vectors inside this invariant cone field are uniformly expanded by the derivative DP of the return map.
To prove the first point, we notice that the cross section Σ is cut by two arcs formed by P(Σ). Tucker covers the location of these two arcs by small rectangles R_i; the union of these rectangles gives N. Now, the goal is to prove that for all points in N, the flow will bring the points back to Σ inside N. To do that, we take a plane Σ′ below Σ at a small distance h; then, by taking the center c_i of R_i and using the Euler integration method, one can estimate where the flow will bring c_i in Σ′, which gives a new point c_i′. Then, one can estimate where the points in Σ will be mapped in Σ′ using a Taylor expansion; this gives a new rectangle R_i′ centered on c_i′. Thus we know that all points in R_i will be mapped into R_i′. The goal is to apply this method recursively until the flow comes back to Σ, yielding a rectangle Rf_i in Σ such that we know that P(R_i) ⊂ Rf_i. The problem is that the estimate may become imprecise after several iterations, so Tucker splits R_i′ into smaller rectangles R_{i,j} and applies the process recursively. Another problem is that as this algorithm is applied, the flow becomes more 'horizontal', leading to a dramatic increase in imprecision. To prevent this, the algorithm changes the orientation of the cross sections, which become either horizontal or vertical.
== See also ==
Eden's conjecture on the Lyapunov dimension
Lorenz 96 model
List of chaotic maps
Takens' theorem
== Notes ==
== References ==
Bergé, Pierre; Pomeau, Yves; Vidal, Christian (1984). Order within Chaos: Towards a Deterministic Approach to Turbulence. New York: John Wiley & Sons. ISBN 978-0-471-84967-4.
Cuomo, Kevin M.; Oppenheim, Alan V. (1993). "Circuit implementation of synchronized chaos with applications to communications". Physical Review Letters. 71 (1): 65–68. Bibcode:1993PhRvL..71...65C. doi:10.1103/PhysRevLett.71.65. ISSN 0031-9007. PMID 10054374.
Gorman, M.; Widmann, P.J.; Robbins, K.A. (1986). "Nonlinear dynamics of a convection loop: A quantitative comparison of experiment with theory". Physica D. 19 (2): 255–267. Bibcode:1986PhyD...19..255G. doi:10.1016/0167-2789(86)90022-9.
Grassberger, P.; Procaccia, I. (1983). "Measuring the strangeness of strange attractors". Physica D. 9 (1–2): 189–208. Bibcode:1983PhyD....9..189G. doi:10.1016/0167-2789(83)90298-1.
Haken, H. (1975). "Analogy between higher instabilities in fluids and lasers". Physics Letters A. 53 (1): 77–78. Bibcode:1975PhLA...53...77H. doi:10.1016/0375-9601(75)90353-9.
Sauermann, H.; Haken, H. (1963). "Nonlinear Interaction of Laser Modes". Z. Phys. 173 (3): 261–275. Bibcode:1963ZPhy..173..261H. doi:10.1007/BF01377828.
Ning, C.Z.; Haken, H. (1990). "Detuned lasers and the complex Lorenz equations: Subcritical and supercritical Hopf bifurcations". Phys. Rev. A. 41 (7): 3826–3837. Bibcode:1990PhRvA..41.3826N. doi:10.1103/PhysRevA.41.3826. PMID 9903557.
Hemati, N. (1994). "Strange attractors in brushless DC motors". IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications. 41 (1): 40–45. doi:10.1109/81.260218. ISSN 1057-7122.
Hilborn, Robert C. (2000). Chaos and Nonlinear Dynamics: An Introduction for Scientists and Engineers (second ed.). Oxford University Press. ISBN 978-0-19-850723-9.
Hirsch, Morris W.; Smale, Stephen; Devaney, Robert (2003). Differential Equations, Dynamical Systems, & An Introduction to Chaos (Second ed.). Boston, MA: Academic Press. ISBN 978-0-12-349703-1.
Knobloch, Edgar (1981). "Chaos in the segmented disc dynamo". Physics Letters A. 82 (9): 439–440. Bibcode:1981PhLA...82..439K. doi:10.1016/0375-9601(81)90274-7.
Kolář, Miroslav; Gumbs, Godfrey (1992). "Theory for the experimental observation of chaos in a rotating waterwheel". Physical Review A. 45 (2): 626–637. Bibcode:1992PhRvA..45..626K. doi:10.1103/PhysRevA.45.626. PMID 9907027.
Leonov, G.A.; Kuznetsov, N.V.; Korzhemanova, N.A.; Kusakin, D.V. (2016). "Lyapunov dimension formula for the global attractor of the Lorenz system". Communications in Nonlinear Science and Numerical Simulation. 41: 84–103. arXiv:1508.07498. Bibcode:2016CNSNS..41...84L. doi:10.1016/j.cnsns.2016.04.032. S2CID 119614076.
Lorenz, Edward Norton (1963). "Deterministic nonperiodic flow". Journal of the Atmospheric Sciences. 20 (2): 130–141. Bibcode:1963JAtS...20..130L. doi:10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2.
Mishra, Aashwin; Sanghi, Sanjeev (2006). "A study of the asymmetric Malkus waterwheel: The biased Lorenz equations". Chaos: An Interdisciplinary Journal of Nonlinear Science. 16 (1): 013114. Bibcode:2006Chaos..16a3114M. doi:10.1063/1.2154792. PMID 16599745.
Pchelintsev, A.N. (2014). "Numerical and Physical Modeling of the Dynamics of the Lorenz System". Numerical Analysis and Applications. 7 (2): 159–167. doi:10.1134/S1995423914020098. S2CID 123023929.
Poland, Douglas (1993). "Cooperative catalysis and chemical chaos: a chemical model for the Lorenz equations". Physica D. 65 (1): 86–99. Bibcode:1993PhyD...65...86P. doi:10.1016/0167-2789(93)90006-M.
Saltzman, Barry (1962). "Finite Amplitude Free Convection as an Initial Value Problem—I". Journal of the Atmospheric Sciences. 19 (4): 329–341. Bibcode:1962JAtS...19..329S. doi:10.1175/1520-0469(1962)019<0329:FAFCAA>2.0.CO;2.
Shen, B.-W. (2015-12-21). "Nonlinear feedback in a six-dimensional Lorenz model: impact of an additional heating term". Nonlinear Processes in Geophysics. 22 (6): 749–764. doi:10.5194/npg-22-749-2015. ISSN 1607-7946.
Sparrow, Colin (1982). The Lorenz Equations: Bifurcations, Chaos, and Strange Attractors. Springer.
Tucker, Warwick (2002). "A Rigorous ODE Solver and Smale's 14th Problem" (PDF). Foundations of Computational Mathematics. 2 (1): 53–117. CiteSeerX 10.1.1.545.3996. doi:10.1007/s002080010018. S2CID 353254.
Tzenov, Stephan (2014). "Strange Attractors Characterizing the Osmotic Instability". arXiv:1406.0979v1 [physics.flu-dyn].
Viana, Marcelo (2000). "What's new on Lorenz strange attractors?". The Mathematical Intelligencer. 22 (3): 6–19. doi:10.1007/BF03025276. S2CID 121427433.
Lorenz, Edward N. (1960). "The statistical prediction of solutions of dynamic equations" (PDF). Symposium on Numerical Weather Prediction in Tokyo. Archived from the original (PDF) on 2019-05-23. Retrieved 2020-09-16.
== Further reading ==
G.A. Leonov & N.V. Kuznetsov (2015). "On differences and similarities in the analysis of Lorenz, Chen, and Lu systems". Applied Mathematics and Computation. 256: 334–343. arXiv:1409.8649. doi:10.1016/j.amc.2014.12.132.
Pchelintsev, A.N. (2022). "On a high-precision method for studying attractors of dynamical systems and systems of explosive type". Mathematics. 10 (8): 1207. arXiv:2206.08195. doi:10.3390/math10081207.
== External links ==
"Lorenz attractor", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Lorenz attractor". MathWorld.
Lorenz attractor by Rob Morris, Wolfram Demonstrations Project.
Lorenz equation Archived 2009-06-07 at the Wayback Machine on planetmath.org
Synchronized Chaos and Private Communications, with Kevin Cuomo. The implementation of Lorenz attractor in an electronic circuit.
Lorenz attractor interactive animation (you need the Adobe Shockwave plugin)
3D Attractors: Mac program to visualize and explore the Lorenz attractor in 3 dimensions
Lorenz Attractor implemented in analog electronic
Lorenz Attractor interactive animation (implemented in Ada with GTK+. Sources & executable)
Interactive web based Lorenz Attractor made with Iodide
In the field of numerical analysis, meshfree methods are those that do not require connection between nodes of the simulation domain, i.e. a mesh, but are rather based on interaction of each node with all its neighbors. As a consequence, original extensive properties such as mass or kinetic energy are no longer assigned to mesh elements but rather to the single nodes. Meshfree methods enable the simulation of some otherwise difficult types of problems, at the cost of extra computing time and programming effort. The absence of a mesh allows Lagrangian simulations, in which the nodes can move according to the velocity field.
== Motivation ==
Numerical methods such as the finite difference method, finite-volume method, and finite element method were originally defined on meshes of data points. In such a mesh, each point has a fixed number of predefined neighbors, and this connectivity between neighbors can be used to define mathematical operators like the derivative. These operators are then used to construct the equations to simulate—such as the Euler equations or the Navier–Stokes equations.
But in simulations where the material being simulated can move around (as in computational fluid dynamics) or where large deformations of the material can occur (as in simulations of plastic materials), the connectivity of the mesh can be difficult to maintain without introducing error into the simulation. If the mesh becomes tangled or degenerate during simulation, the operators defined on it may no longer give correct values. The mesh may be recreated during simulation (a process called remeshing), but this can also introduce error, since all the existing data points must be mapped onto a new and different set of data points. Meshfree methods are intended to remedy these problems. Meshfree methods are also useful for:
Simulations where creating a useful mesh from the geometry of a complex 3D object may be especially difficult or require human assistance
Simulations where nodes may be created or destroyed, such as in cracking simulations
Simulations where the problem geometry may move out of alignment with a fixed mesh, such as in bending simulations
Simulations containing nonlinear material behavior, discontinuities or singularities
== Example ==
In a traditional finite difference simulation, the domain of a one-dimensional simulation would be some function u(x, t), represented as a mesh of data values u_i^n at points x_i, where i = 0, 1, 2, ... and n = 0, 1, 2, ..., with uniform spacing x_{i+1} − x_i = h for all i and t_{n+1} − t_n = k for all n.
We can define the derivatives that occur in the equation being simulated using some finite difference formulae on this domain, for example
{\displaystyle {\partial u \over \partial x}={u_{i+1}^{n}-u_{i-1}^{n} \over 2h}}
and
{\displaystyle {\partial u \over \partial t}={u_{i}^{n+1}-u_{i}^{n} \over k}}
Then we can use these definitions of u(x, t) and its spatial and temporal derivatives to write the equation being simulated in finite difference form, then simulate the equation with one of many finite difference methods.
In this simple example, the steps (here the spatial step h and timestep k) are constant along all the mesh, and the left and right mesh neighbors of the data value at x_i are the values at x_{i−1} and x_{i+1}, respectively. Generally, in finite differences one can very simply allow the steps to vary along the mesh, but all the original nodes should be preserved, and they can move independently only by deforming the original elements. If even only two of all the nodes change their order, or even only one node is added to or removed from the simulation, that creates a defect in the original mesh and the simple finite difference approximation can no longer hold.
Smoothed-particle hydrodynamics (SPH), one of the oldest meshfree methods, solves this problem by treating data points as physical particles with mass and density that can move around over time and carry some value u_i with them. SPH then defines the value of u(x, t) between the particles by
{\displaystyle u(x,t_{n})=\sum _{i}m_{i}{\frac {u_{i}^{n}}{\rho _{i}}}W(|x-x_{i}|)}
where m_i is the mass of particle i, ρ_i is the density of particle i, and W is a kernel function that operates on nearby data points and is chosen for smoothness and other useful qualities. By linearity, we can write the spatial derivative as
{\displaystyle {\partial u \over \partial x}=\sum _{i}m_{i}{\frac {u_{i}^{n}}{\rho _{i}}}{\partial W(|x-x_{i}|) \over \partial x}}
Then we can use these definitions of u(x, t) and its spatial derivatives to write the equation being simulated as an ordinary differential equation, and simulate the equation with one of many numerical methods. In physical terms, this means calculating the forces between the particles, then integrating these forces over time to determine their motion.
The advantage of SPH in this situation is that the formulae for u(x, t) and its derivatives do not depend on any adjacency information about the particles; they can use the particles in any order, so it doesn't matter if the particles move around or even exchange places.
One disadvantage of SPH is that it requires extra programming to determine the nearest neighbors of a particle. Since the kernel function W only returns nonzero results for nearby particles within twice the "smoothing length" (because we typically choose kernel functions with compact support), it would be a waste of effort to calculate the summations above over every particle in a large simulation. So typically SPH simulators require some extra code to speed up this nearest neighbor calculation.
== History ==
One of the earliest meshfree methods is smoothed particle hydrodynamics, presented in 1977. Libersky et al. were the first to apply SPH in solid mechanics. The main drawbacks of SPH are inaccurate results near boundaries and tension instability that was first investigated by Swegle.
In the 1990s a new class of meshfree methods emerged based on the Galerkin method. The first of these, the diffuse element method (DEM), pioneered by Nayroles et al., used the MLS approximation in the Galerkin solution of partial differential equations, with approximate derivatives of the MLS function. Thereafter Belytschko pioneered the element-free Galerkin (EFG) method, which employed MLS with Lagrange multipliers to enforce boundary conditions, higher-order numerical quadrature in the weak form, and full derivatives of the MLS approximation, which gave better accuracy. Around the same time, the reproducing kernel particle method (RKPM) emerged, with the approximation motivated in part by correcting the kernel estimate in SPH: to give accuracy near boundaries and in non-uniform discretizations, and higher-order accuracy in general. Notably, in a parallel development, material point methods were developed around the same time and offer similar capabilities. Material point methods are widely used in the movie industry to simulate large-deformation solid mechanics, such as snow in the movie Frozen. RKPM and other meshfree methods were extensively developed by Chen, Liu, and Li in the late 1990s for a variety of applications and various classes of problems. During the 1990s and thereafter several other varieties were developed, including those listed below.
== List of methods and acronyms ==
The following numerical methods are generally considered to fall within the general class of "meshfree" methods. Acronyms are provided in parentheses.
Smoothed particle hydrodynamics (SPH) (1977)
Diffuse element method (DEM) (1992)
Dissipative particle dynamics (DPD) (1992)
Element-free Galerkin method (EFG / EFGM) (1994)
Reproducing kernel particle method (RKPM) (1995)
Finite point method (FPM) (1996)
Finite pointset method (FPM) (1998)
hp-clouds
Natural element method (NEM)
Material point method (MPM)
Meshless local Petrov Galerkin (MLPG) (1998)
Generalized-strain mesh-free (GSMF) formulation (2016)
Moving particle semi-implicit (MPS)
Generalized finite difference method (GFDM)
Particle-in-cell (PIC)
Moving particle finite element method (MPFEM)
Finite cloud method (FCM)
Boundary node method (BNM)
Meshfree moving Kriging interpolation method (MK)
Boundary cloud method (BCM)
Method of fundamental solutions (MFS)
Method of particular solution (MPS)
Method of finite spheres (MFS)
Discrete vortex method (DVM)
Generalized/Gradient Reproducing Kernel Particle Method (2011)
Finite mass method (FMM) (2000)
Smoothed point interpolation method (S-PIM) (2005).
Meshfree local radial point interpolation method (RPIM).
Local radial basis function collocation Method (LRBFCM)
Viscous vortex domains method (VVD)
Cracking Particles Method (CPM) (2004)
Discrete least squares meshless method (DLSM) (2006)
Immersed Particle Method (IPM) (2006)
Optimal Transportation Meshfree method (OTM) (2010)
Repeated replacement method (RRM) (2012)
Radial basis integral equation method
Least-square collocation meshless method (2001)
Exponential Basis Functions method (EBFs) (2010)
Related methods:
Moving least squares (MLS) – provide general approximation method for arbitrary set of nodes
Partition of unity methods (PoUM) – provide general approximation formulation used in some meshfree methods
Continuous blending method (enrichment and coupling of finite elements and meshless methods) – see Huerta & Fernández-Méndez (2000)
eXtended FEM, Generalized FEM (XFEM, GFEM) – variants of FEM (finite element method) combining some meshless aspects
Smoothed finite element method (S-FEM) (2007)
Gradient smoothing method (GSM) (2008)
Advancing front node generation (AFN)
Local maximum-entropy (LME) – see Arroyo & Ortiz (2006)
Space-Time Meshfree Collocation Method (STMCM) – see Netuzhylov (2008), Netuzhylov & Zilian (2009)
Meshfree Interface-Finite Element Method (MIFEM) (2015) - a hybrid finite element-meshfree method for numerical simulation of phase transformation and multiphase flow problems
== Recent development ==
The primary areas of advancement in meshfree methods are to address issues with essential boundary enforcement, numerical quadrature, and contact and large deformations. The common weak form requires strong enforcement of the essential boundary conditions, yet meshfree methods in general lack the Kronecker delta property. This makes essential boundary condition enforcement non-trivial, at least more difficult than in the finite element method, where they can be imposed directly. Techniques have been developed to overcome this difficulty and impose conditions strongly. Several methods have been developed to impose the essential boundary conditions weakly, including Lagrange multipliers, Nitsche's method, and the penalty method.
As for quadrature, nodal integration is generally preferred, as it offers simplicity and efficiency and keeps the meshfree method free of any mesh (as opposed to Gauss quadrature, which necessitates a mesh to generate quadrature points and weights). Nodal integration, however, suffers from numerical instability due to underestimation of strain energy associated with short-wavelength modes, and also yields inaccurate and non-convergent results due to under-integration of the weak form. One major advance in numerical integration has been the development of stabilized conforming nodal integration (SCNI), a nodal integration method that does not suffer from either of these problems. The method is based on strain smoothing, which satisfies the first-order patch test. However, it was later realized that low-energy modes were still present in SCNI, and additional stabilization methods have been developed. The method has been applied to a variety of problems including thin and thick plates, poromechanics, and convection-dominated problems, among others. More recently, a framework has been developed to pass arbitrary-order patch tests, based on a Petrov–Galerkin method.
One recent advance in meshfree methods aims at the development of computational tools for automation in modeling and simulations. This is enabled by the so-called weakened weak (W2) formulation based on the G space theory. The W2 formulation offers possibilities to formulate various (uniformly) "soft" models that work well with triangular meshes. Because a triangular mesh can be generated automatically, re-meshing becomes much easier, which enables automation in modeling and simulation. In addition, W2 models can be made soft enough (in a uniform fashion) to produce upper-bound solutions (for force-driven problems). Together with stiff models (such as the fully compatible FEM models), one can conveniently bound the solution from both sides. This allows easy error estimation for generally complicated problems, as long as a triangular mesh can be generated. Typical W2 models are the smoothed point interpolation methods (or S-PIM). The S-PIM can be node-based (known as NS-PIM or LC-PIM), edge-based (ES-PIM), or cell-based (CS-PIM). The NS-PIM was developed using the so-called SCNI technique. It was then discovered that the NS-PIM is capable of producing upper-bound solutions and is free of volumetric locking. The ES-PIM is found superior in accuracy, and the CS-PIM behaves in between the NS-PIM and ES-PIM. Moreover, W2 formulations allow the use of polynomial and radial basis functions in the creation of shape functions (they accommodate discontinuous displacement functions, as long as the functions are in a G1 space), which opens further room for future development. The W2 formulation has also led to combinations of meshfree techniques with well-developed FEM techniques, so that one can now use a triangular mesh with excellent accuracy and desired softness. A typical such formulation is the so-called smoothed finite element method (or S-FEM). The S-FEM is the linear version of the S-PIM, but with most of the properties of the S-PIM and much simpler.
It is a general perception that meshfree methods are much more expensive than their FEM counterparts. Recent studies have found, however, that some meshfree methods such as the S-PIM and S-FEM can be much faster than their FEM counterparts.
The S-PIM and S-FEM work well for solid mechanics problems. For CFD problems, the formulation can be simpler, via the strong formulation. A gradient smoothing method (GSM) has also been developed recently for CFD problems, implementing the gradient smoothing idea in strong form. The GSM is similar to the finite volume method (FVM), but uses gradient smoothing operations exclusively, in nested fashion, and is a general numerical method for PDEs.
Nodal integration has been proposed as a technique to use finite elements to emulate a meshfree behaviour. However, the obstacle that must be overcome in using nodally integrated elements is that the quantities at nodal points are not continuous, and the nodes are shared among multiple elements.
== See also ==
Continuum mechanics
Smoothed finite element method
G space
Weakened weak form
Boundary element method
Immersed boundary method
Stencil code
Particle method
== References ==
== Further reading ==
== External links ==
The USACM blog on Meshfree Methods
In mathematics, a partial differential algebraic equation (PDAE) set is an incomplete system of partial differential equations that is closed with a set of algebraic equations.
== Definition ==
A general PDAE is defined as:
{\displaystyle 0=\mathbf {F} \left(\mathbf {x} ,\mathbf {y} ,{\frac {\partial y_{i}}{\partial x_{j}}},{\frac {\partial ^{2}y_{i}}{\partial x_{j}\partial x_{k}}},\ldots ,\mathbf {z} \right),}
where:
F is a set of arbitrary functions;
x is a set of independent variables;
y is a set of dependent variables for which partial derivatives are defined; and
z is a set of dependent variables for which no partial derivatives are defined.
The relationship between a PDAE and a partial differential equation (PDE) is analogous to the relationship between an ordinary differential equation (ODE) and a differential algebraic equation (DAE).
PDAEs of this general form are challenging to solve. Simplified forms are studied in more detail in the literature. Even as recently as 2000, the term "PDAE" was regarded as unfamiliar by those in related fields.
== Solution methods ==
Semi-discretization is a common method for solving PDAEs whose independent variables are those of time and space, and has been used for decades. This method involves removing the spatial variables using a discretization method, such as the finite volume method, and incorporating the resulting linear equations as part of the algebraic relations. This reduces the system to a DAE, for which conventional solution methods can be employed.
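A sketch of the idea for the heat equation u_t = u_xx with Dirichlet boundary values kept as algebraic relations: discretizing in space yields M u' = A u + b(t) with a singular mass matrix M, i.e., a DAE with differential equations at interior nodes and algebraic equations at the boundary nodes, which an implicit scheme can advance as one coupled system (all problem data and the finite difference stencil are illustrative assumptions):
    import numpy as np

    n = 50
    h = 1.0 / n
    dt = 1.0e-3
    x = np.linspace(0.0, 1.0, n + 1)
    u = np.sin(np.pi * x)                        # initial data, compatible with the BCs

    # Semi-discretized system M u' = A u + b(t).  The boundary rows of M are
    # zero, so the system is a DAE, not an ODE.
    M = np.eye(n + 1)
    M[0, 0] = M[-1, -1] = 0.0
    A = np.zeros((n + 1, n + 1))
    for i in range(1, n):                        # second difference at interior nodes
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    A /= h**2
    A[0, 0] = A[-1, -1] = -1.0                   # algebraic rows: 0 = -u_0 + g0(t), etc.

    def b(t):
        rhs = np.zeros(n + 1)
        rhs[0] = rhs[-1] = 0.0                   # homogeneous Dirichlet data g0 = g1 = 0
        return rhs

    # Implicit Euler advances differential and algebraic equations together:
    # (M - dt*A) u_{k+1} = M u_k + dt*b(t_{k+1}).
    for k in range(1000):
        u = np.linalg.solve(M - dt * A, M @ u + dt * b((k + 1) * dt))

    print(u.max())   # approximately exp(-pi^2), the exact decay factor at t = 1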
== References ==
Numerical methods for partial differential equations is the branch of numerical analysis that studies the numerical solution of partial differential equations (PDEs).
In principle, specialized methods for hyperbolic, parabolic or elliptic partial differential equations exist.
== Overview of methods ==
=== Finite difference method ===
In this method, functions are represented by their values at certain grid points and derivatives are approximated through differences in these values.
=== Method of lines ===
The method of lines (MOL, NMOL, NUMOL) is a technique for solving partial differential equations (PDEs) in which all dimensions except one are discretized. MOL allows standard, general-purpose methods and software, developed for the numerical integration of ordinary differential equations (ODEs) and differential algebraic equations (DAEs), to be used. A large number of integration routines have been developed over the years in many different programming languages, and some have been published as open source resources.
The method of lines most often refers to the construction or analysis of numerical methods for partial differential equations that proceeds by first discretizing the spatial derivatives only and leaving the time variable continuous. This leads to a system of ordinary differential equations to which a numerical method for initial value ordinary equations can be applied. The method of lines in this context dates back to at least the early 1960s.
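A minimal method-of-lines sketch for the one-dimensional heat equation with homogeneous Dirichlet boundaries, handing the semi-discretized system to a stiff ODE integrator (grid size, tolerances, and the SciPy integrator are illustrative choices):
    import numpy as np
    from scipy.integrate import solve_ivp

    n = 100
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    u0 = np.sin(np.pi * x[1:-1])             # interior unknowns only

    def rhs(t, u):
        # Second-order central difference for u_xx; boundary values are zero.
        upad = np.concatenate(([0.0], u, [0.0]))
        return (upad[2:] - 2.0 * upad[1:-1] + upad[:-2]) / h**2

    # The semi-discretized system is stiff, so an implicit method is natural.
    sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF", rtol=1e-8)

    exact = np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x[1:-1])
    print(np.max(np.abs(sol.y[:, -1] - exact)))  # agrees to discretization accuracy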
=== Finite element method ===
The finite element method (FEM) is a numerical technique for finding approximate solutions to boundary value problems for differential equations. It uses variational methods (the calculus of variations) to minimize an error function and produce a stable solution. Analogous to the idea that connecting many tiny straight lines can approximate a larger circle, FEM encompasses all the methods for connecting many simple element equations over many small subdomains, named finite elements, to approximate a more complex equation over a larger domain.
=== Gradient discretization method ===
The gradient discretization method (GDM) is a numerical technique that encompasses a few standard or recent methods. It is based on the separate approximation of a function and of its gradient. Core properties allow the convergence of the method for a series of linear and nonlinear problems, and therefore all the methods that enter the GDM framework (conforming and nonconforming finite element, mixed finite element, mimetic finite difference...) inherit these convergence properties.
=== Finite volume method ===
The finite-volume method is a numerical technique for representing and evaluating partial differential equations in the form of algebraic equations [LeVeque, 2002; Toro, 1999].
Similar to the finite difference method or finite element method, values are calculated at discrete places on a meshed geometry. "Finite volume" refers to the small volume surrounding each node point on a mesh. In the finite volume method, volume integrals in a partial differential equation that contain a divergence term are converted to surface integrals, using the divergence theorem. These terms are then evaluated as fluxes at the surfaces of each finite volume. Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods are conservative. Another advantage of the finite volume method is that it is easily formulated to allow for unstructured meshes. The method is used in many computational fluid dynamics packages.
=== Spectral method ===
Spectral methods are techniques used in applied mathematics and scientific computing to numerically solve certain differential equations, often involving the use of the fast Fourier transform. The idea is to write the solution of the differential equation as a sum of certain "basis functions" (for example, as a Fourier series, which is a sum of sinusoids) and then to choose the coefficients in the sum that best satisfy the differential equation.
Spectral methods and finite element methods are closely related and built on the same ideas; the main difference between them is that spectral methods use basis functions that are nonzero over the whole domain, while finite element methods use basis functions that are nonzero only on small subdomains. In other words, spectral methods take on a global approach while finite element methods use a local approach. Partially for this reason, spectral methods have excellent error properties, with the so-called "exponential convergence" being the fastest possible, when the solution is smooth. However, there are no known three-dimensional single domain spectral shock capturing results. In the finite element community, a method where the degree of the elements is very high or increases as the grid parameter h decreases to zero is sometimes called a spectral element method.
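The global character of a Fourier spectral method is easy to demonstrate in one dimension: differentiation becomes multiplication by ik in coefficient space, and for a smooth periodic function the error decays faster than any power of the grid spacing (the test function below is an arbitrary smooth choice):
    import numpy as np

    n = 32
    x = 2.0 * np.pi * np.arange(n) / n            # periodic grid on [0, 2*pi)
    u = np.exp(np.sin(x))                         # a smooth periodic function

    k = np.fft.fftfreq(n, d=1.0 / n)              # integer wavenumbers
    du = np.fft.ifft(1j * k * np.fft.fft(u)).real # spectral derivative

    exact = np.cos(x) * np.exp(np.sin(x))
    print(np.max(np.abs(du - exact)))             # ~1e-12 already at n = 32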
=== Meshfree methods ===
Meshfree methods do not require a mesh connecting the data points of the simulation domain. Meshfree methods enable the simulation of some otherwise difficult types of problems, at the cost of extra computing time and programming effort.
=== Domain decomposition methods ===
Domain decomposition methods solve a boundary value problem by splitting it into smaller boundary value problems on subdomains and iterating to coordinate the solution between adjacent subdomains. A coarse problem with one or few unknowns per subdomain is used to further coordinate the solution between the subdomains globally. The problems on the subdomains are independent, which makes domain decomposition methods suitable for parallel computing. Domain decomposition methods are typically used as preconditioners for Krylov space iterative methods, such as the conjugate gradient method or GMRES.
In overlapping domain decomposition methods, the subdomains overlap by more than the interface. Overlapping domain decomposition methods include the Schwarz alternating method and the additive Schwarz method. Many domain decomposition methods can be written and analyzed as a special case of the abstract additive Schwarz method.
In non-overlapping methods, the subdomains intersect only on their interface. In primal methods, such as Balancing domain decomposition and BDDC, the continuity of the solution across subdomain interface is enforced by representing the value of the solution on all neighboring subdomains by the same unknown. In dual methods, such as FETI, the continuity of the solution across the subdomain interface is enforced by Lagrange multipliers. The FETI-DP method is hybrid between a dual and a primal method.
Non-overlapping domain decomposition methods are also called iterative substructuring methods.
Mortar methods are discretization methods for partial differential equations, which use separate discretization on nonoverlapping subdomains. The meshes on the subdomains do not match on the interface, and the equality of the solution is enforced by Lagrange multipliers, judiciously chosen to preserve the accuracy of the solution. In the engineering practice in the finite element method, continuity of solutions between non-matching subdomains is implemented by multiple-point constraints.
Finite element simulations of moderate size models require solving linear systems with millions of unknowns. Several hours per time step is an average sequential run time; therefore, parallel computing is a necessity. Domain decomposition methods embody large potential for parallelization of the finite element methods, and serve as a basis for distributed, parallel computations.
=== Multigrid methods ===
Multigrid (MG) methods in numerical analysis are a group of algorithms for solving differential equations using a hierarchy of discretizations. They are an example of a class of techniques called multiresolution methods, very useful in (but not limited to) problems exhibiting multiple scales of behavior. For example, many basic relaxation methods exhibit different rates of convergence for short- and long-wavelength components, suggesting these different scales be treated differently, as in a Fourier analysis approach to multigrid. MG methods can be used as solvers as well as preconditioners.
The main idea of multigrid is to accelerate the convergence of a basic iterative method by global correction from time to time, accomplished by solving a coarse problem. This principle is similar to interpolation between coarser and finer grids. The typical application for multigrid is in the numerical solution of elliptic partial differential equations in two or more dimensions.
Multigrid methods can be applied in combination with any of the common discretization techniques. For example, the finite element method may be recast as a multigrid method. In these cases, multigrid methods are among the fastest solution techniques known today. In contrast to other methods, multigrid methods are general in that they can treat arbitrary regions and boundary conditions. They do not depend on the separability of the equations or other special properties of the equation. They have also been widely used for more-complicated non-symmetric and nonlinear systems of equations, like the Lamé system of elasticity or the Navier–Stokes equations.
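A sketch of the idea in its simplest form, a two-grid correction for the 1D Poisson problem −u'' = f with weighted-Jacobi smoothing, full-weighting restriction, and linear interpolation (all details are illustrative; a real multigrid solver recurses on the coarse problem instead of solving it directly):
    import numpy as np

    def poisson(n):
        # Matrix of -u'' on n interior points of (0, 1), with h = 1/(n+1).
        h = 1.0 / (n + 1)
        return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

    def jacobi(A, u, f, sweeps=3, w=2.0 / 3.0):
        # Weighted Jacobi relaxation: damps the short-wavelength error quickly.
        d = np.diag(A)
        for _ in range(sweeps):
            u = u + w * (f - A @ u) / d
        return u

    def two_grid(A, u, f, n):
        u = jacobi(A, u, f)                            # pre-smooth
        r = f - A @ u                                  # fine-grid residual
        nc = (n - 1) // 2                              # coarse grid: every other node
        rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])  # full weighting
        ec = np.linalg.solve(poisson(nc), rc)          # coarse solve (recurse in real MG)
        e = np.zeros(n)                                # linear interpolation back up
        e[1::2] = ec
        e[::2] = 0.5 * (np.concatenate(([0.0], ec)) + np.concatenate((ec, [0.0])))
        return jacobi(A, u + e, f)                     # post-smooth

    n = 127                                            # chosen so the coarse grid nests
    A = poisson(n)
    x = np.linspace(0.0, 1.0, n + 2)[1:-1]
    f = np.pi**2 * np.sin(np.pi * x)
    u = np.zeros(n)
    for cycle in range(10):
        u = two_grid(A, u, f, n)
        print(cycle, np.max(np.abs(f - A @ u)))        # shrinks by a constant factor per cycle
Replacing the direct coarse solve with a recursive call on the coarse problem turns this two-grid scheme into a V-cycle.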
== Comparison ==
The finite difference method is often regarded as the simplest method to learn and use. The finite element and finite volume methods are widely used in engineering and in computational fluid dynamics, and are well suited to problems in complicated geometries.
Spectral methods are generally the most accurate, provided that the solutions are sufficiently smooth.
== See also ==
List of numerical analysis topics#Numerical methods for partial differential equations
Numerical methods for ordinary differential equations
== Further reading ==
LeVeque, Randall J. (1992). Numerical Methods for Conservation Laws. Basel: Birkhäuser Basel. doi:10.1007/978-3-0348-8629-1. ISBN 9783764327231. Retrieved 2021-11-15.
Anderson, Dale A.; Pletcher, Richard H.; Tannehill, John C. (2013). Computational fluid mechanics and heat transfer. Series in computational and physical processes in mechanics and thermal sciences (3rd. ed.). Boca Raton: CRC Press, Taylor & Francis Group. ISBN 9781591690375.
== References ==
== External links ==
Numerical Methods for Partial Differential Equations course at MIT OpenCourseWare.
IMS, the Open Source IMTEK Mathematica Supplement (IMS)
Numerical PDE Techniques for Scientists and Engineers, open access Lectures and Codes for Numerical PDEs
In the numerical solution of partial differential equations, a topic in mathematics, the spectral element method (SEM) is a formulation of the finite element method (FEM) that uses high-degree piecewise polynomials as basis functions. The spectral element method was introduced in a 1984 paper by A. T. Patera. Although Patera is credited with development of the method, his work was a rediscovery of an existing method (see Development History).
== Discussion ==
The spectral method expands the solution in trigonometric series, a chief advantage being that the resulting method is of a very high order.
This approach relies on the fact that trigonometric polynomials are an orthonormal basis for L²(Ω).
The spectral element method chooses instead high-degree piecewise polynomial basis functions, also achieving a very high order of accuracy.
Such polynomials are usually orthogonal Chebyshev polynomials or very high order Lagrange polynomials over non-uniformly spaced nodes.
In SEM the computational error decreases exponentially as the order of the approximating polynomial increases; therefore, fast convergence to the exact solution is realized with fewer degrees of freedom of the structure in comparison with FEM.
In structural health monitoring, FEM can be used for detecting large flaws in a structure, but as the size of the flaw is reduced there is a need to use a high-frequency wave. In order to simulate the propagation of a high-frequency wave, the FEM mesh required is very fine resulting in increased computational time. On the other hand, SEM provides good accuracy with fewer degrees of freedom.
Non-uniformity of nodes helps to make the mass matrix diagonal, which saves time and memory and is also useful for adopting a central difference method (CDM).
The disadvantages of SEM include difficulty in modeling complex geometry, compared to the flexibility of FEM.
Although the method can be applied with a modal piecewise orthogonal polynomial basis, it is most often implemented with a nodal tensor product Lagrange basis. The method gains its efficiency by placing the nodal points at the Legendre-Gauss-Lobatto (LGL) points and performing the Galerkin method integrations with a reduced Gauss-Lobatto quadrature using the same nodes. With this combination, simplifications result such that mass lumping occurs at all nodes and a collocation procedure results at interior points.
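Concretely, the LGL points on [−1, 1] are the two endpoints together with the roots of P′_N, the derivative of the degree-N Legendre polynomial, and the quadrature weights are w_i = 2/(N(N+1)P_N(x_i)²). A short sketch computing them with NumPy's Legendre utilities (the node and weight formulas are the standard ones, stated here as an assumption rather than taken from this article):
    import numpy as np
    from numpy.polynomial import legendre as L

    def lgl_nodes_weights(N):
        # Legendre-Gauss-Lobatto nodes and weights for degree-N elements.
        cN = np.zeros(N + 1)
        cN[-1] = 1.0                               # P_N in the Legendre basis
        interior = np.sort(L.legroots(L.legder(cN)))  # roots of P_N'
        x = np.concatenate(([-1.0], interior, [1.0]))
        w = 2.0 / (N * (N + 1) * L.legval(x, cN) ** 2)
        return x, w

    x, w = lgl_nodes_weights(4)
    print(x)          # 5 nodes: -1, -sqrt(3/7), 0, sqrt(3/7), 1
    print(w.sum())    # weights sum to 2, the length of [-1, 1]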
The most popular applications of the method are in computational fluid dynamics and modeling seismic wave propagation.
== A-priori error estimate ==
The classic analysis of Galerkin methods and Céa's lemma holds here, and it can be shown that, if u is the solution of the weak equation, u_N is the approximate solution, and u ∈ H^{s+1}(Ω), then
{\displaystyle \|u-u_{N}\|_{H^{1}(\Omega )}\leqq C_{s}N^{-s}\|u\|_{H^{s+1}(\Omega )}}
where N is related to the discretization of the domain (i.e., element length), C_s is independent of N, and s is no larger than the degree of the piecewise polynomial basis. Similar results can be obtained to bound the error in stronger topologies. If k ≤ s + 1, then
{\displaystyle \|u-u_{N}\|_{H^{k}(\Omega )}\leq C_{s,k}N^{k-1-s}\|u\|_{H^{s+1}(\Omega )}}
As we increase N, we can also increase the degree of the basis functions. In this case, if u is an analytic function:
{\displaystyle \|u-u_{N}\|_{H^{1}(\Omega )}\leqq C\exp(-\gamma N)}
where γ depends only on u.
The Hybrid-Collocation-Galerkin method possesses some superconvergence properties. The LGL form of SEM is equivalent, so it achieves the same superconvergence properties.
== Development History ==
Development of the most popular LGL form of the method is normally attributed to Maday and Patera. However, it was developed more than a decade earlier. First, there is the Hybrid-Collocation-Galerkin method (HCGM), which applies collocation at the interior Lobatto points and uses a Galerkin-like integral procedure at element interfaces. The Lobatto-Galerkin method described by Young is identical to SEM, while the HCGM is equivalent to these methods. This earlier work is ignored in the spectral literature.
== Related methods ==
G-NI or SEM-NI are the most used spectral methods. The Galerkin formulation of spectral methods or spectral element methods, for G-NI or SEM-NI respectively, is modified and Gauss-Lobatto integration is used instead of integrals in the definition of the bilinear form a(⋅, ⋅) and in the functional F. Their convergence is a consequence of Strang's lemma.
SEM is a Galerkin based FEM (finite element method) with Lagrange basis (shape) functions and reduced numerical integration by Lobatto quadrature using the same nodes.
The pseudospectral method, orthogonal collocation, differential quadrature method, and G-NI are different names for the same method. These methods employ global rather than piecewise polynomial basis functions. The extension to a piecewise FEM or SEM basis is almost trivial.
The spectral element method uses a tensor product space spanned by nodal basis functions associated with Gauss–Lobatto points. In contrast, the p-version finite element method spans a space of high order polynomials by nodeless basis functions, chosen approximately orthogonal for numerical stability. Since not all interior basis functions need to be present, the p-version finite element method can create a space that contains all polynomials up to a given degree with fewer degrees of freedom. However, some speedup techniques possible in spectral methods due to their tensor-product character are no longer available. The name p-version means that accuracy is increased by increasing the order of the approximating polynomials (thus, p) rather than decreasing the mesh size, h.
The hp finite element method (hp-FEM) combines the advantages of the h and p refinements to obtain exponential convergence rates.
== References ==
In mathematics, a (real) Monge–Ampère equation is a nonlinear second-order partial differential equation of special kind. A second-order equation for the unknown function u of two variables x,y is of Monge–Ampère type if it is linear in the determinant of the Hessian matrix of u and in the second-order partial derivatives of u. The independent variables (x,y) vary over a given domain D of R2. The term also applies to analogous equations with n independent variables. The most complete results so far have been obtained when the equation is elliptic.
Monge–Ampère equations frequently arise in differential geometry, for example, in the Weyl and Minkowski problems in differential geometry of surfaces. They were first studied by Gaspard Monge in 1784 and later by André-Marie Ampère in 1820. Important results in the theory of Monge–Ampère equations have been obtained by Sergei Bernstein, Aleksei Pogorelov, Charles Fefferman, and Louis Nirenberg. More recently, Alessio Figalli and Luis Caffarelli were recognized for their work on the regularity of the Monge–Ampère equation, with the former winning the Fields Medal in 2018 and the latter the Abel Prize in 2023.
== Description ==
Given two independent variables x and y, and one dependent variable u, the general Monge–Ampère equation is of the form
{\displaystyle L[u]=A\,{\text{det}}(\nabla ^{2}u)+B\Delta u+2Cu_{xy}+(D-B)u_{yy}+E=A(u_{xx}u_{yy}-u_{xy}^{2})+Bu_{xx}+2Cu_{xy}+Du_{yy}+E=0,}
where A, B, C, D, and E are functions depending on the first-order variables x, y, u, ux, and uy only.
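As a concrete check of this notation, the special case A = 1, B = C = D = 0, E = −f(x, y) is the equation det D²u = f. The sketch below evaluates the left-hand side with central differences for u = (x² + y²)/2, for which u_xx = u_yy = 1 and u_xy = 0, so det D²u = 1 (grid choices are illustrative):
    import numpy as np

    h = 0.01
    x = np.arange(-1.0, 1.0 + h, h)
    X, Y = np.meshgrid(x, x, indexing="ij")
    u = 0.5 * (X**2 + Y**2)                   # convex test function

    # Central second differences on the interior of the grid.
    uxx = (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / h**2
    uyy = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / h**2
    uxy = (u[2:, 2:] - u[2:, :-2] - u[:-2, 2:] + u[:-2, :-2]) / (4 * h**2)

    monge_ampere = uxx * uyy - uxy**2         # det of the discrete Hessian
    print(np.abs(monge_ampere - 1.0).max())   # zero up to rounding for a quadratic u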
== Rellich's theorem ==
Let Ω be a bounded domain in R2, and suppose that on Ω, A, B, C, D, and E are continuous functions of x and y only. Consider the Dirichlet problem of finding u so that
{\displaystyle L[u]=0,\quad {\text{on}}\ \Omega }
{\displaystyle u|_{\partial \Omega }=g.}
If
{\displaystyle BD-C^{2}-AE>0,}
then the Dirichlet problem has at most two solutions.
== Ellipticity results ==
Suppose now that x is a variable with values in a domain in Rn, and that f(x,u,Du) is a positive function. Then the Monge–Ampère equation
{\displaystyle L[u]=\det D^{2}u-f(\mathbf {x} ,u,Du)=0\qquad \qquad (1)}
is a nonlinear elliptic partial differential equation (in the sense that its linearization is elliptic), provided one confines attention to convex solutions.
Accordingly, the operator L satisfies versions of the maximum principle, and in particular solutions to the Dirichlet problem are unique, provided they exist.
== Applications ==
Monge–Ampère equations arise naturally in several problems in Riemannian geometry, conformal geometry, and CR geometry. One of the simplest of these applications is to the problem of prescribed Gauss curvature. Suppose that a real-valued function K is specified on a domain Ω in Rn; the problem of prescribed Gauss curvature seeks to identify a hypersurface of Rn+1 as a graph z = u(x) over x ∈ Ω so that at each point of the surface the Gauss curvature is given by K(x). The resulting partial differential equation is
{\displaystyle \det D^{2}u-K(\mathbf {x} )(1+|Du|^{2})^{(n+2)/2}=0.}
The Monge–Ampère equations are related to the Monge–Kantorovich optimal mass transportation problem, when the "cost functional" therein is given by the Euclidean distance.
== See also ==
List of nonlinear partial differential equations
Complex Monge–Ampère equation
== References ==
== Additional references ==
== External links ==
"Monge–Ampère equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Monge-Ampère Differential Equation". MathWorld.
In mathematics, a dispersive partial differential equation or dispersive PDE is a partial differential equation that is dispersive. In this context, dispersion means that waves of different wavelength propagate at different phase velocities.
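For example, the linearized Korteweg–de Vries (Airy) equation u_t + u_xxx = 0 has plane-wave solutions e^{i(kx−ωt)} with ω(k) = −k³, so the phase velocity ω/k = −k² depends on the wavelength and a localized packet spreads as it propagates. A short sketch advancing an initial profile exactly, mode by mode, in Fourier space (grid and initial data are arbitrary choices):
    import numpy as np

    n, Lx = 512, 100.0
    x = Lx * np.arange(n) / n
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=Lx / n)

    u0 = np.exp(-((x - 50.0) ** 2))              # localized initial packet
    omega = -k**3                                # Airy dispersion relation

    def evolve(t):
        # Each Fourier mode advances with its own phase speed omega/k.
        return np.fft.ifft(np.fft.fft(u0) * np.exp(-1j * omega * t)).real

    print(np.ptp(evolve(0.0)), np.ptp(evolve(5.0)))  # the packet's amplitude disperses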
== Examples ==
=== Linear equations ===
Euler–Bernoulli beam equation with time-dependent loading
Airy equation
Schrödinger equation
Klein–Gordon equation
=== Nonlinear equations ===
nonlinear Schrödinger equation
Korteweg–de Vries equation (or KdV equation)
Boussinesq equation (water waves)
sine–Gordon equation
== See also ==
Dispersion (optics)
Dispersion (water waves)
Dispersionless equation
== References ==
Erdoğan, M. Burak; Tzirakis, Nikolaos (2016). Dispersive Partial Differential Equations. Cambridge: Cambridge University Press. ISBN 978-1-107-14904-5.
== External links ==
The Dispersive PDE Wiki.
In mathematics, the Helmholtz equation is the eigenvalue problem for the Laplace operator. It corresponds to the elliptic partial differential equation:
{\displaystyle \nabla ^{2}f=-k^{2}f,}
where ∇2 is the Laplace operator, k2 is the eigenvalue, and f is the (eigen)function. When the equation is applied to waves, k is known as the wave number. The Helmholtz equation has a variety of applications in physics and other sciences, including the wave equation, the diffusion equation, and the Schrödinger equation for a free particle.
In optics, the Helmholtz equation is the wave equation for the electric field.
The equation is named after Hermann von Helmholtz, who studied it in 1860.
== Motivation and uses ==
The Helmholtz equation often arises in the study of physical problems involving partial differential equations (PDEs) in both space and time. The Helmholtz equation, which represents a time-independent form of the wave equation, results from applying the technique of separation of variables to reduce the complexity of the analysis.
For example, consider the wave equation
{\displaystyle \left(\nabla ^{2}-{\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}\right)u(\mathbf {r} ,t)=0.}
Separation of variables begins by assuming that the wave function u(r, t) is in fact separable:
{\displaystyle u(\mathbf {r} ,t)=A(\mathbf {r} )T(t).}
Substituting this form into the wave equation and then simplifying, we obtain the following equation:
{\displaystyle {\frac {\nabla ^{2}A}{A}}={\frac {1}{c^{2}T}}{\frac {\mathrm {d} ^{2}T}{\mathrm {d} t^{2}}}.}
Notice that the expression on the left side depends only on r, whereas the right expression depends only on t. As a result, this equation is valid in the general case if and only if both sides of the equation are equal to the same constant value. This argument is key in the technique of solving linear partial differential equations by separation of variables. From this observation, we obtain two equations, one for A(r), the other for T(t):
{\displaystyle {\frac {\nabla ^{2}A}{A}}=-k^{2}}
{\displaystyle {\frac {1}{c^{2}T}}{\frac {\mathrm {d} ^{2}T}{\mathrm {d} t^{2}}}=-k^{2},}
where we have chosen, without loss of generality, the expression −k2 for the value of the constant. (It is equally valid to use any constant k as the separation constant; −k2 is chosen only for convenience in the resulting solutions.)
Rearranging the first equation, we obtain the (homogeneous) Helmholtz equation:
{\displaystyle \nabla ^{2}A+k^{2}A=(\nabla ^{2}+k^{2})A=0.}
Likewise, after making the substitution ω = kc, where k is the wave number, and ω is the angular frequency (assuming a monochromatic field), the second equation becomes
{\displaystyle {\frac {\mathrm {d} ^{2}T}{\mathrm {d} t^{2}}}+\omega ^{2}T=\left({\frac {\mathrm {d} ^{2}}{\mathrm {d} t^{2}}}+\omega ^{2}\right)T=0.}
We now have Helmholtz's equation for the spatial variable r and a second-order ordinary differential equation in time. The solution in time will be a linear combination of sine and cosine functions, whose exact form is determined by initial conditions, while the form of the solution in space will depend on the boundary conditions. Alternatively, integral transforms, such as the Laplace or Fourier transform, are often used to transform a hyperbolic PDE into a form of the Helmholtz equation.
Because of its relationship to the wave equation, the Helmholtz equation arises in problems in such areas of physics as the study of electromagnetic radiation, seismology, and acoustics.
== Solving the Helmholtz equation using separation of variables ==
The solution to the spatial Helmholtz equation:
{\displaystyle \nabla ^{2}A=-k^{2}A}
can be obtained for simple geometries using separation of variables.
=== Vibrating membrane ===
The two-dimensional analogue of the vibrating string is the vibrating membrane, with the edges clamped to be motionless. The Helmholtz equation was solved for many basic shapes in the 19th century: the rectangular membrane by Siméon Denis Poisson in 1829, the equilateral triangle by Gabriel Lamé in 1852, and the circular membrane by Alfred Clebsch in 1862. The elliptical drumhead was studied by Émile Mathieu, leading to Mathieu's differential equation.
If the edges of a shape are straight line segments, then a solution is integrable or knowable in closed-form only if it is expressible as a finite linear combination of plane waves that satisfy the boundary conditions (zero at the boundary, i.e., membrane clamped).
If the domain is a circle of radius a, then it is appropriate to introduce polar coordinates r and θ. The Helmholtz equation takes the form
{\displaystyle A_{rr}+{\frac {1}{r}}A_{r}+{\frac {1}{r^{2}}}A_{\theta \theta }+k^{2}A=0~.}
We may impose the boundary condition that A vanishes if r = a; thus
{\displaystyle A(a,\theta )=0.}
The method of separation of variables leads to trial solutions of the form
{\displaystyle A(r,\theta )=R(r)\Theta (\theta ),}
where Θ must be periodic with period 2π. This leads to
{\displaystyle \Theta ''+n^{2}\Theta =0,}
{\displaystyle r^{2}R''+rR'+r^{2}k^{2}R-n^{2}R=0.}
It follows from the periodicity condition that
{\displaystyle \Theta =\alpha \cos n\theta +\beta \sin n\theta ,}
and that n must be an integer. The radial component R has the form
{\displaystyle R=\gamma \,J_{n}(\rho ),}
where the Bessel function Jn(ρ) satisfies Bessel's equation
{\displaystyle z^{2}J_{n}''+zJ_{n}'+(z^{2}-n^{2})J_{n}=0,}
and z = kr. The radial function Jn has infinitely many roots for each value of n, denoted by ρm,n. The boundary condition that A vanishes where r = a will be satisfied if the corresponding wavenumbers are given by
{\displaystyle k_{m,n}={\frac {1}{a}}\rho _{m,n}.}
The general solution A then takes the form of a generalized Fourier series of terms involving products of Jn(km,nr) and the sine (or cosine) of nθ. These solutions are the modes of vibration of a circular drumhead.
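A brief numerical sketch (illustrative; it assumes scipy, and the radius a = 1 is an arbitrary choice) computes the first few drumhead wavenumbers from the Bessel roots:

```python
# Illustrative sketch (assumes scipy): wavenumbers of a clamped circular drum.
import numpy as np
from scipy.special import jn_zeros, jv

a = 1.0                          # drum radius (arbitrary choice)
for n in range(3):               # angular index n = 0, 1, 2
    rho = jn_zeros(n, 3)         # first three roots rho_{m,n} of J_n
    print(f"n={n}: k_mn =", np.round(rho / a, 4))   # k_{m,n} = rho_{m,n}/a

# A single mode J_n(k_mn r) cos(n theta) vanishes at the clamped rim r = a:
k11 = jn_zeros(1, 1)[0] / a
print("J_1(k_11 a) =", jv(1, k11 * a))   # zero to round-off
```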
=== Three-dimensional solutions ===
In spherical coordinates, the solution is:
{\displaystyle A(r,\theta ,\varphi )=\sum _{\ell =0}^{\infty }\sum _{m=-\ell }^{+\ell }{\bigl (}a_{\ell m}\,j_{\ell }(kr)+b_{\ell m}\,y_{\ell }(kr){\bigr )}\,Y_{\ell }^{m}(\theta ,\varphi ).}
This solution arises from the spatial solution of the wave equation and diffusion equation. Here jℓ(kr) and yℓ(kr) are the spherical Bessel functions, and Yℓm(θ, φ) are the spherical harmonics (Abramowitz and Stegun, 1964). Note that these forms are general solutions, and require boundary conditions to be specified to be used in any specific case. For infinite exterior domains, a radiation condition may also be required (Sommerfeld, 1949).
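As a sanity check on the radial part, the sketch below (illustrative; it assumes scipy, and the degree ℓ = 2 and wavenumber k = 1.5 are arbitrary choices) verifies numerically that jℓ(kr) satisfies the separated radial equation r²R″ + 2rR′ + (k²r² − ℓ(ℓ+1))R = 0:

```python
# Illustrative sketch (assumes scipy/numpy): j_l(kr) solves the radial equation.
import numpy as np
from scipy.special import spherical_jn

l, k = 2, 1.5                        # arbitrary degree and wavenumber
r = np.linspace(0.5, 10.0, 2001)     # avoid r = 0, where the ODE degenerates
h = r[1] - r[0]

R = spherical_jn(l, k * r)
dR = k * spherical_jn(l, k * r, derivative=True)   # chain rule in r
d2R = np.gradient(dR, h)                           # numerical second derivative

residual = r**2 * d2R + 2 * r * dR + (k**2 * r**2 - l * (l + 1)) * R
print("max |residual| =", np.abs(residual[5:-5]).max())  # small; shrinks as h -> 0
```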
Writing r0 = (x, y, z), with r0 = |r0|, the function A(r0) has the asymptotics
{\displaystyle A(r_{0})={\frac {e^{ikr_{0}}}{r_{0}}}f\left({\frac {\mathbf {r} _{0}}{r_{0}}},k,u_{0}\right)+o\left({\frac {1}{r_{0}}}\right){\text{ as }}r_{0}\to \infty ,}
where the function f is called the scattering amplitude and u0(r0) is the value of A at each boundary point r0.
==== Three-dimensional solutions given the function on a 2-dimensional plane ====
Given a 2-dimensional plane where A is known, the solution to the Helmholtz equation is given by:
{\displaystyle A(x,y,z)=-{\frac {1}{2\pi }}\iint _{-\infty }^{+\infty }A'(x',y')\,{\frac {e^{ikr}}{r}}\,{\frac {z}{r}}\left(ik-{\frac {1}{r}}\right)\mathrm {d} x'\,\mathrm {d} y',}
where
{\displaystyle A'(x',y')}
is the known solution on the 2-dimensional plane z = 0, and
{\displaystyle r={\sqrt {(x-x')^{2}+(y-y')^{2}+z^{2}}}.}
As z approaches zero, all contributions from the integral vanish except for r = 0 . Thus
{\displaystyle A(x,y,0)=A'(x,y)}
up to a numerical factor, which can be verified to be 1 by transforming the integral to polar coordinates
{\displaystyle (\rho ,\theta ).}
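That computation can be traced symbolically: with ρ dρ = r dr, the kernel integral collapses onto a perfect derivative of e^{ikr}/r, and the factor tends to 1. A sympy sketch (illustrative):

```python
# Illustrative sketch (assumes sympy): the z -> 0 factor equals 1.
import sympy as sp

r, z, k = sp.symbols('r z k', positive=True)

antideriv = sp.exp(sp.I * k * r) / r
integrand = (sp.exp(sp.I * k * r) / r) * (sp.I * k - 1 / r)

# The kernel (after rho d(rho) = r dr) is the derivative of e^{ikr}/r:
assert sp.simplify(sp.diff(antideriv, r) - integrand) == 0

# Hence -z * [e^{ikr}/r] evaluated from r = z to infinity equals e^{ikz}:
factor = -z * (0 - antideriv.subs(r, z))
print(sp.limit(factor, z, 0))        # prints 1
```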
This solution is important in diffraction theory, e.g. in deriving Fresnel diffraction.
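For a concrete feel, here is a brute-force numerical sketch of the propagation integral above (illustrative; the Gaussian source field, wavelength, and grid sizes are arbitrary choices, and direct quadrature stands in for the FFT-based methods used in practice):

```python
# Illustrative sketch (assumes numpy): direct quadrature of the integral.
import numpy as np

k = 2 * np.pi / 0.5              # wavenumber for wavelength 0.5 (arbitrary units)
N, L = 256, 8.0                  # grid points and source-plane half-width
xp = np.linspace(-L, L, N)       # source-plane coordinates x', y'
dx = xp[1] - xp[0]
Xp, Yp = np.meshgrid(xp, xp)
A_src = np.exp(-(Xp**2 + Yp**2))          # known field A'(x', y') on z = 0

def propagate(x, y, z):
    """A(x, y, z) by direct O(N^2) quadrature of the formula above."""
    r = np.sqrt((x - Xp)**2 + (y - Yp)**2 + z**2)
    kern = (np.exp(1j * k * r) / r) * (z / r) * (1j * k - 1.0 / r)
    return -np.sum(A_src * kern) * dx**2 / (2 * np.pi)

print("on-axis field at z = 5:", propagate(0.0, 0.0, 5.0))
```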
== Paraxial approximation ==
In the paraxial approximation of the Helmholtz equation, the complex amplitude A is expressed as
{\displaystyle A(\mathbf {r} )=u(\mathbf {r} )e^{ikz},}
where u represents the complex-valued amplitude which modulates the sinusoidal plane wave represented by the exponential factor. Then under a suitable assumption, u approximately solves
{\displaystyle \nabla _{\perp }^{2}u+2ik{\frac {\partial u}{\partial z}}=0,}
where
{\textstyle \nabla _{\perp }^{2}\ {\overset {\text{def}}{=}}\ {\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}}
is the transverse part of the Laplacian.
This equation has important applications in the science of optics, where it provides solutions that describe the propagation of electromagnetic waves (light) in the form of either paraboloidal waves or Gaussian beams. Most lasers emit beams that take this form.
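That Gaussian beams solve the paraxial equation can be checked symbolically. The sketch below (illustrative; it assumes sympy and uses the standard complex-beam-parameter form q(z) = z + i·z_R, with z_R the Rayleigh range) confirms it:

```python
# Illustrative sketch (assumes sympy): a Gaussian beam solves the paraxial equation.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
k, zR = sp.symbols('k z_R', positive=True)

q = z + sp.I * zR                                    # complex beam parameter
u = sp.exp(sp.I * k * (x**2 + y**2) / (2 * q)) / q   # Gaussian-beam envelope

paraxial = sp.diff(u, x, 2) + sp.diff(u, y, 2) + 2 * sp.I * k * sp.diff(u, z)
assert sp.simplify(paraxial / u) == 0
print("Gaussian beam verified")
```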
The assumption under which the paraxial approximation is valid is that the z derivative of the amplitude function u is a slowly varying function of z:
{\displaystyle \left|{\frac {\partial ^{2}u}{\partial z^{2}}}\right|\ll \left|k{\frac {\partial u}{\partial z}}\right|.}
This condition is equivalent to saying that the angle θ between the wave vector k and the optical axis z is small: θ ≪ 1.
The paraxial form of the Helmholtz equation is found by substituting the above-stated expression for the complex amplitude into the general form of the Helmholtz equation as follows:
{\displaystyle \nabla ^{2}\left(u(x,y,z)e^{ikz}\right)+k^{2}u(x,y,z)e^{ikz}=0.}
Expansion and cancellation yields the following:
{\displaystyle \left({\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}\right)u(x,y,z)e^{ikz}+\left({\frac {\partial ^{2}}{\partial z^{2}}}u(x,y,z)\right)e^{ikz}+2\left({\frac {\partial }{\partial z}}u(x,y,z)\right)ike^{ikz}=0.}
Because of the paraxial inequality stated above, the ∂²u/∂z² term is neglected in comparison with the k·∂u/∂z term. This yields the paraxial Helmholtz equation. Substituting u(r) = A(r) e−ikz then gives the paraxial equation for the original complex amplitude A:
{\displaystyle \nabla _{\perp }^{2}A+2ik{\frac {\partial A}{\partial z}}+2k^{2}A=0.}
The Fresnel diffraction integral is an exact solution to the paraxial Helmholtz equation.
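Consistently with this, the envelope of the Fresnel kernel, u ∝ exp(ik(x² + y²)/(2z))/z, satisfies the paraxial equation exactly, as a short sympy check (illustrative) confirms:

```python
# Illustrative sketch (assumes sympy): the Fresnel-kernel envelope is paraxial.
import sympy as sp

x, y, z, k = sp.symbols('x y z k', positive=True)

u = sp.exp(sp.I * k * (x**2 + y**2) / (2 * z)) / z
paraxial = sp.diff(u, x, 2) + sp.diff(u, y, 2) + 2 * sp.I * k * sp.diff(u, z)
assert sp.simplify(paraxial / u) == 0
print("Fresnel kernel envelope verified")
```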
== Inhomogeneous Helmholtz equation ==
The inhomogeneous Helmholtz equation is the equation
{\displaystyle \nabla ^{2}A(\mathbf {x} )+k^{2}A(\mathbf {x} )=-f(\mathbf {x} ),\quad \forall \mathbf {x} \in \mathbb {R} ^{n},}
where f : ℝⁿ → ℂ is a function with compact support, and n = 1, 2, 3. This equation is very similar to the screened Poisson equation, and would be identical if the plus sign in front of the k² term were switched to a minus sign.
=== Solution ===
In order to solve this equation uniquely, one needs to specify a boundary condition at infinity, which is typically the Sommerfeld radiation condition
{\displaystyle \lim _{r\to \infty }r^{\frac {n-1}{2}}\left({\frac {\partial }{\partial r}}-ik\right)A(\mathbf {x} )=0,}
in n spatial dimensions, for all angles (i.e., any value of θ, φ). Here r is the radial coordinate,
{\displaystyle r={\sqrt {\sum _{i=1}^{n}x_{i}^{2}}},}
where the xi are the coordinates of the vector x.
With this condition, the solution to the inhomogeneous Helmholtz equation is
{\displaystyle A(\mathbf {x} )=\int _{\mathbb {R} ^{n}}G(\mathbf {x} ,\mathbf {x'} )f(\mathbf {x'} )\,\mathrm {d} \mathbf {x'} }
(notice this integral is actually over a finite region, since f has compact support). Here, G is the Green's function of this equation, that is, the solution to the inhomogeneous Helmholtz equation with f equaling the Dirac delta function, so G satisfies
{\displaystyle \nabla ^{2}G(\mathbf {x} ,\mathbf {x'} )+k^{2}G(\mathbf {x} ,\mathbf {x'} )=-\delta (\mathbf {x} ,\mathbf {x'} )\quad {\text{in }}\mathbb {R} ^{n}.}
The expression for the Green's function depends on the dimension n of the space. One has
{\displaystyle G(x,x')={\frac {ie^{ik|x-x'|}}{2k}}}
for n = 1,
{\displaystyle G(\mathbf {x} ,\mathbf {x'} )={\frac {i}{4}}H_{0}^{(1)}(k|\mathbf {x} -\mathbf {x'} |)}
for n = 2, where H0(1) is the Hankel function of the first kind, and
{\displaystyle G(\mathbf {x} ,\mathbf {x'} )={\frac {e^{ik|\mathbf {x} -\mathbf {x'} |}}{4\pi |\mathbf {x} -\mathbf {x'} |}}}
for n = 3. Note that we have chosen the boundary condition that the Green's function is an outgoing wave for |x| → ∞.
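A quick numerical sketch (illustrative; it assumes numpy, and the wavenumber, grid, and Gaussian source are arbitrary choices) verifies the n = 1 case by integrating a source against the Green's function and applying the Helmholtz operator:

```python
# Illustrative sketch (assumes numpy): check the n = 1 Green's function.
import numpy as np

k = 3.0                                    # arbitrary wavenumber
x = np.linspace(-8.0, 8.0, 1201)
h = x[1] - x[0]
f = np.exp(-x**2)                          # source, negligible outside the grid

# A(x) = integral of G(x, x') f(x') dx' with G = i e^{ik|x-x'|} / (2k)
G = 1j * np.exp(1j * k * np.abs(x[:, None] - x[None, :])) / (2 * k)
A = (G @ f) * h

# Apply A'' + k^2 A by second differences; it should return -f
residual = (A[:-2] - 2 * A[1:-1] + A[2:]) / h**2 + k**2 * A[1:-1] + f[1:-1]
print("max |A'' + k^2 A + f| =", np.abs(residual).max())  # small; shrinks as h -> 0
```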
Finally, for general n,
{\displaystyle G(\mathbf {x} ,\mathbf {x'} )=c_{d}\,k^{p}\,{\frac {H_{p}^{(1)}(k|\mathbf {x} -\mathbf {x'} |)}{|\mathbf {x} -\mathbf {x'} |^{p}}},}
where
{\displaystyle p={\frac {n-2}{2}}\qquad {\text{and}}\qquad c_{d}={\frac {i}{4(2\pi )^{p}}}.}
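As a consistency check, for n = 3 one has p = 1/2, where the Hankel function is elementary, so the general formula should reproduce the n = 3 expression above; a numerical sketch (illustrative, assuming scipy):

```python
# Illustrative sketch (assumes scipy): general formula reduces to the 3-D case.
import numpy as np
from scipy.special import hankel1

k, R = 2.0, np.linspace(0.5, 5.0, 5)       # arbitrary k and distances |x - x'|
p = 0.5                                    # p = (n - 2)/2 with n = 3
c_d = 1j / (4 * (2 * np.pi)**p)

G_general = c_d * k**p * hankel1(p, k * R) / R**p
G_3d = np.exp(1j * k * R) / (4 * np.pi * R)
print("max difference:", np.abs(G_general - G_3d).max())   # ~ machine precision
```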
== See also ==
Laplace's equation (a particular case of the Helmholtz equation)
Weyl expansion
== References ==
Blanche, Pierre-Alexandre (2014). Field Guide to Holography. Bellingham, Washington USA: SPIE-International Society for Optical Engineering. ISBN 978-0-8194-9957-8.
Engquist, Björn; Zhao, Hongkai (2018). "Approximate Separability of the Green's Function of the Helmholtz Equation in the High Frequency Limit". Communications on Pure and Applied Mathematics. 71 (11): 2220–2274. doi:10.1002/cpa.21755. ISSN 0010-3640.
Goodman, Joseph W. (1996). Introduction to Fourier Optics. New York: McGraw-Hill Science, Engineering & Mathematics. ISBN 978-0-07-024254-8.
Grella, R (1982). "Fresnel propagation and diffraction and paraxial wave equation". Journal of Optics. 13 (6): 367–374. Bibcode:1982JOpt...13..367G. doi:10.1088/0150-536X/13/6/006. ISSN 0150-536X.
Mehrabkhani, Soheil; Schneider, Thomas (2017). "Is the Rayleigh-Sommerfeld diffraction always an exact reference for high speed diffraction algorithms?". Optics Express. 25 (24): 30229–30240. arXiv:1709.09727. Bibcode:2017OExpr..2530229M. doi:10.1364/OE.25.030229. ISSN 1094-4087. PMID 29221054.
Noble, Ben (1958). Methods Based on the Wiener-Hopf Technique for the Solution of Partial Differential Equations. New York, N.Y: Taylor & Francis US.
== Further reading ==
Abramowitz, Milton; Stegun, Irene, eds. (1965). Handbook of Mathematical functions with Formulas, Graphs and Mathematical Tables. New York: Dover Publications. ISBN 978-0-486-61272-0.
Riley, K. F.; Hobson, M. P.; Bence, S. J. (2002). "Chapter 19". Mathematical methods for physics and engineering. New York: Cambridge University Press. ISBN 978-0-521-89067-0.
Riley, K. F. (2002). "Chapter 16". Mathematical Methods for Scientists and Engineers. Sausalito, California: University Science Books. ISBN 978-1-891389-24-5.
Saleh, Bahaa E. A.; Teich, Malvin Carl (1991). "Chapter 3". Fundamentals of Photonics. Wiley Series in Pure and Applied Optics. New York: John Wiley & Sons. pp. 80–107. ISBN 978-0-471-83965-1.
Sommerfeld, Arnold (1949). "Chapter 16". Partial Differential Equations in Physics. New York: Academic Press.
Howe, M. S. (1998). Acoustics of fluid-structure interactions. New York: Cambridge University Press. ISBN 978-0-521-63320-8.
== External links ==
Helmholtz Equation at EqWorld: The World of Mathematical Equations.
"Helmholtz equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Vibrating Circular Membrane by Sam Blake, The Wolfram Demonstrations Project.
Green's functions for the wave, Helmholtz and Poisson equations in a two-dimensional boundless domain