In mathematics, triality is a relationship among three vector spaces, analogous to the duality relation between dual vector spaces. Most commonly, it describes the special features of the Dynkin diagram D4 and the associated Lie group Spin(8), the double cover of the 8-dimensional rotation group SO(8), which arise because the group has an outer automorphism of order three. There is a geometrical version of triality, analogous to duality in projective geometry.
Of all simple Lie groups, Spin(8) has the most symmetrical Dynkin diagram, D4. The diagram has four nodes with one node located at the center, and the other three attached symmetrically. The symmetry group of the diagram is the symmetric group S3 which acts by permuting the three legs. This gives rise to an S3 group of outer automorphisms of Spin(8). This automorphism group permutes the three 8-dimensional irreducible representations of Spin(8); these being the vector representation and two chiral spin representations. These automorphisms do not project to automorphisms of SO(8). The vector representation—the natural action of SO(8) (hence Spin(8)) on F8—consists over the real numbers of Euclidean 8-vectors and is generally known as the "defining module", while the chiral spin representations are also known as "half-spin representations", and all three of these are fundamental representations.
No other connected Dynkin diagram has an automorphism group of order greater than 2; for other Dn (corresponding to other even Spin groups, Spin(2n)), there is still the automorphism corresponding to switching the two half-spin representations, but these are not isomorphic to the vector representation.
Roughly speaking, symmetries of the Dynkin diagram lead to automorphisms of the Tits building associated with the group. For special linear groups, one obtains projective duality. For Spin(8), one finds a curious phenomenon involving 1-, 2-, and 4-dimensional subspaces of 8-dimensional space, historically known as "geometric triality".
The exceptional 3-fold symmetry of the D4 diagram also gives rise to the Steinberg group 3D4.
== General formulation ==
A duality between two vector spaces over a field F is a non-degenerate bilinear form
V1 × V2 → F,
i.e., for each non-zero vector v in one of the two vector spaces, the pairing with v is a non-zero linear functional on the other.
Similarly, a triality between three vector spaces over a field F is a non-degenerate trilinear form
V1 × V2 × V3 → F,
i.e., each non-zero vector in one of the three vector spaces induces a duality between the other two.
By choosing vectors ei in each Vi on which the trilinear form evaluates to 1, we find that the three vector spaces are all isomorphic to each other, and to their duals. Denoting this common vector space by V, the triality may be re-expressed as a bilinear multiplication
V × V → V
where each ei corresponds to the identity element in V. The non-degeneracy condition now implies that V is a composition algebra. It follows that V has dimension 1, 2, 4 or 8. If further F = R and the form used to identify V with its dual is positive definite, then V is a Euclidean Hurwitz algebra, and is therefore isomorphic to R, C, H or O.
Conversely, composition algebras immediately give rise to trialities by taking each Vi equal to the algebra, and contracting the multiplication with the inner product on the algebra to make a trilinear form.
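As an illustrative sketch (not part of the article's formal development), one can take the quaternions H as the composition algebra and check numerically that contracting the multiplication with the Euclidean inner product yields a trilinear form for which the norm composes; all function names here are ad hoc.

```python
import random

def qmul(a, b):
    # Hamilton product of quaternions stored as 4-tuples (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def inner(a, b):
    # Euclidean inner product on R^4.
    return sum(u*v for u, v in zip(a, b))

def norm(a):
    return inner(a, a)

def trilinear(a, b, c):
    # The triality form: contract the multiplication with the inner product.
    return inner(qmul(a, b), c)

random.seed(0)
a = tuple(random.gauss(0, 1) for _ in range(4))
b = tuple(random.gauss(0, 1) for _ in range(4))

# The composition law N(ab) = N(a)N(b) that makes H a composition algebra.
assert abs(norm(qmul(a, b)) - norm(a)*norm(b)) < 1e-9

# e = (1,0,0,0) plays the role of the identity: t(e, b, b) = <b, b> = N(b).
e = (1.0, 0.0, 0.0, 0.0)
assert abs(trilinear(e, b, b) - norm(b)) < 1e-9
```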
An alternative construction of trialities uses spinors in dimensions 1, 2, 4 and 8. The eight-dimensional case corresponds to the triality property of Spin(8).
== See also ==
Triple product, may be related to the 4-dimensional triality (on quaternions)
== References ==
John Frank Adams (1981), Spin(8), Triality, F4 and all that, in "Superspace and supergravity", edited by Stephen Hawking and Martin Roček, Cambridge University Press, pages 435–445.
John Frank Adams (1996), Lectures on Exceptional Lie Groups (Chicago Lectures in Mathematics), edited by Zafer Mahmud and Mamora Mimura, University of Chicago Press, ISBN 0-226-00527-5.
== Further reading ==
Knus, Max-Albert; Merkurjev, Alexander; Rost, Markus; Tignol, Jean-Pierre (1998). The book of involutions. Colloquium Publications. Vol. 44. With a preface by J. Tits. Providence, RI: American Mathematical Society. ISBN 0-8218-0904-0. Zbl 0955.16001.
Wilson, Robert (2009). The Finite Simple Groups. Graduate Texts in Mathematics. Vol. 251. Springer-Verlag. ISBN 978-1-84800-987-5. Zbl 1203.20012.
== External links ==
Spinors and Trialities by John Baez
Triality with Zometool by David Richter
In algebra, an Okubo algebra or pseudo-octonion algebra is an 8-dimensional non-associative algebra similar to the one studied by Susumu Okubo. Okubo algebras are composition algebras, flexible algebras (A(BA) = (AB)A), Lie-admissible algebras, and power associative, but they are not associative, not alternative, and do not have an identity element.
Okubo's example was the algebra of 3-by-3 trace-zero complex matrices, with the product of X and Y given by aXY + bYX – Tr(XY)I/3 where I is the identity matrix and a and b satisfy a + b = 3ab = 1. The Hermitian elements form an 8-dimensional real non-associative division algebra. A similar construction works for any cubic alternative separable algebra over a field containing a primitive cube root of unity. An Okubo algebra is an algebra constructed in this way from the trace-zero elements of a degree-3 central simple algebra over a field.
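The matrix product above is easy to probe numerically. The sketch below (normalization taken from the description above; the helper names are made up) checks that the product of two trace-zero matrices is again trace-zero, that the algebra is flexible, and that it is not alternative.

```python
import numpy as np

# Okubo product X*Y = a XY + b YX - Tr(XY) I/3 with a + b = 3ab = 1,
# which forces a, b = (3 +/- i*sqrt(3))/6.
a = (3 + 1j*np.sqrt(3)) / 6
b = (3 - 1j*np.sqrt(3)) / 6
assert np.isclose(a + b, 1) and np.isclose(3*a*b, 1)

def okubo(X, Y):
    return a*(X @ Y) + b*(Y @ X) - np.trace(X @ Y)*np.eye(3)/3

rng = np.random.default_rng(1)
def traceless():
    # A random 3x3 complex matrix projected onto trace zero.
    M = rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3))
    return M - np.trace(M)*np.eye(3)/3

X, Y = traceless(), traceless()

# The trace-zero matrices are closed under the product.
assert np.isclose(np.trace(okubo(X, Y)), 0)

# Flexible: (X*Y)*X = X*(Y*X) ...
assert np.allclose(okubo(okubo(X, Y), X), okubo(X, okubo(Y, X)))

# ... but not alternative: (X*X)*Y != X*(X*Y) in general.
assert not np.allclose(okubo(okubo(X, X), Y), okubo(X, okubo(X, Y)))
```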
== Construction of Para-Hurwitz algebra ==
Unital composition algebras are called Hurwitz algebras. If the ground field K is the field of real numbers and N is positive-definite, then A is called a Euclidean Hurwitz algebra.
=== Scalar product ===
If K has characteristic not equal to 2, then a bilinear form (a, b) = 1/2[N(a + b) − N(a) − N(b)] is associated with the quadratic form N.
=== Involution in Hurwitz algebras ===
Assuming A has a multiplicative unit, define the involution and the right and left multiplication operators by
ā = −a + 2(a, 1)1,  L(a)b = ab,  R(a)b = ba.
Evidently a ↦ ā is an involution and preserves the quadratic form. The overline notation stresses the fact that complex and quaternion conjugation are special cases of it. These operators have the following properties:
The involution is an antiautomorphism, i.e. (ab)‾ = b̄ ā
a ā = N(a) 1 = ā a
L(ā) = L(a)*, R(ā) = R(a)*, where * denotes the adjoint operator with respect to the form ( , )
Re(a b) = Re(b a), where Re x = (x + x̄)/2 = (x, 1)
Re((a b) c) = Re(a (b c))
L(a²) = L(a)², R(a²) = R(a)², so that A is an alternative algebra
These properties are proved starting from the polarized version of the identity (ab, ab) = (a, a)(b, b):
2(a, b)(c, d) = (ac, bd) + (ad, bc).
Setting b = 1 or d = 1 yields L(ā) = L(a)* and R(c̄) = R(c)*. Hence Re(ab) = (ab, 1) = (a, b̄) = (ba, 1) = Re(ba). Similarly ((ab)‾, c) = (ab, c̄) = (b, ā c̄) = (1, b̄ (ā c̄)) = (1, (b̄ ā) c̄) = (b̄ ā, c). Hence Re((ab)c) = ((ab)c, 1) = (ab, c̄) = (a, c̄ b̄) = (a(bc), 1) = Re(a(bc)). By the polarized identity N(a)(c, d) = (ac, ad) = (ā(ac), d), so L(ā)L(a) = N(a). Applied to 1 this gives ā a = N(a)1. Replacing a by ā gives the other identity. Substituting the formula for ā in L(ā)L(a) = L(ā a) gives L(a)² = L(a²).
=== Para-Hurwitz algebra ===
Another operation ∗ may be defined in a Hurwitz algebra as
x ∗ y = x̄ ȳ.
The algebra (A, ∗) is a composition algebra, not generally unital, known as a para-Hurwitz algebra. In dimensions 4 and 8 these are the para-quaternion and para-octonion algebras.
A para-Hurwitz algebra satisfies
(x ∗ y) ∗ x = x ∗ (y ∗ x) = ⟨x|x⟩ y.
Conversely, an algebra with a non-degenerate symmetric bilinear form satisfying this equation is either a para-Hurwitz algebra or an eight-dimensional pseudo-octonion algebra. Similarly, a flexible algebra satisfying
⟨xy|xy⟩ = ⟨x|x⟩ ⟨y|y⟩
is either a Hurwitz algebra, a para-Hurwitz algebra or an eight-dimensional pseudo-octonion algebra.
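As a minimal sketch of the para-Hurwitz construction, take the 2-dimensional Hurwitz algebra C: the para-Hurwitz product is x ∗ y = x̄ ȳ, and one can check the composition law, the displayed identity, and the loss of the unit (nothing here is specific to C beyond its being commutative).

```python
def para(x, y):
    # Para-Hurwitz product on C: multiply the conjugates.
    return x.conjugate() * y.conjugate()

def N(z):
    # The norm form <z|z> on C.
    return (z * z.conjugate()).real

x, y = 2 + 1j, -1 + 3j

assert abs(N(para(x, y)) - N(x) * N(y)) < 1e-9     # composition law survives
assert abs(para(para(x, y), x) - N(x) * y) < 1e-9  # (x*y)*x = <x|x> y
assert para(x, 1) != x                             # 1 is no longer an identity
```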
== References ==
"Okubo_algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Okubo, Susumu (1978), "Pseudo-quaternion and pseudo-octonion algebras", Hadronic Journal, 1 (4): 1250–1278, MR 0510100
Susumu Okubo & J. Marshall Osborn (1981) "Algebras with nondegenerate associative symmetric bilinear forms permitting composition", Communications in Algebra 9(12): 1233–61, MR0618901, and 9(20): 2015–73, MR0640611.
In mathematics, an octonion algebra or Cayley algebra over a field F is a composition algebra over F that has dimension 8 over F. In other words, it is an 8-dimensional unital non-associative algebra A over F with a non-degenerate quadratic form N (called the norm form) such that
N(xy) = N(x)N(y)
for all x and y in A.
The most well-known example of an octonion algebra is the classical octonions, which are an octonion algebra over R, the field of real numbers. The split-octonions also form an octonion algebra over R. Up to R-algebra isomorphism, these are the only octonion algebras over the reals. The algebra of bioctonions is the octonion algebra over the complex numbers C.
The octonion algebra for N is a division algebra if and only if the form N is anisotropic. A split octonion algebra is one for which the quadratic form N is isotropic (i.e., there exists a non-zero vector x with N(x) = 0). Up to F-algebra isomorphism, there is a unique split octonion algebra over any field F. When F is algebraically closed or a finite field, these are the only octonion algebras over F.
Octonion algebras are always non-associative. They are, however, alternative algebras, alternativity being a weaker form of associativity. Moreover, the Moufang identities hold in any octonion algebra. It follows that the invertible elements in any octonion algebra form a Moufang loop, as do the elements of unit norm.
The construction of general octonion algebras over an arbitrary field k was described by Leonard Dickson in his book Algebren und ihre Zahlentheorie (1927) (page 264) and repeated by Max Zorn. The product depends on the selection of a γ from k. Given q and Q from a quaternion algebra over k, the octonion is written q + Qe. Another octonion may be written r + Re. Then with * denoting the conjugation in the quaternion algebra, their product is
(q + Qe)(r + Re) = (qr + γR*Q) + (Rq + Qr*)e.
Zorn's German language description of this Cayley–Dickson construction contributed to the persistent use of this eponym describing the construction of composition algebras.
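The doubling product quoted above is short enough to implement directly. The sketch below (representation and helper names are ad hoc) builds octonions as pairs of quaternions with γ = −1, which recovers the classical octonions, and exhibits both the multiplicativity of the norm N(q + Qe) = N(q) − γN(Q) and the failure of associativity.

```python
gamma = -1  # gamma = -1 gives the classical (division) octonions

def qmul(a, b):
    # Hamilton product of quaternions stored as 4-tuples (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def conj(a):
    w, x, y, z = a
    return (w, -x, -y, -z)

def qadd(a, b):
    return tuple(u + v for u, v in zip(a, b))

def omul(x, y):
    # (q + Qe)(r + Re) = (qr + gamma R*Q) + (Rq + Qr*)e
    q, Q = x
    r, R = y
    first = qadd(qmul(q, r), tuple(gamma*u for u in qmul(conj(R), Q)))
    second = qadd(qmul(R, q), qmul(Q, conj(r)))
    return (first, second)

def onorm(x):
    # N(q + Qe) = N(q) - gamma N(Q)
    q, Q = x
    return sum(u*u for u in q) - gamma*sum(u*u for u in Q)

zero, one = (0, 0, 0, 0), (1, 0, 0, 0)
i, j = (0, 1, 0, 0), (0, 0, 1, 0)

x, y, z = (zero, one), (i, zero), (zero, j)  # x is the adjoined unit e

assert abs(onorm(omul(x, y)) - onorm(x)*onorm(y)) < 1e-12  # norm composes
assert omul(omul(x, y), z) != omul(x, omul(y, z))          # not associative
```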
Cohl Furey has proposed that octonion algebras can be utilized in an attempt to reconcile components of the Standard Model.
== Classification ==
It is a theorem of Adolf Hurwitz that the F-isomorphism classes of the norm form are in one-to-one correspondence with the isomorphism classes of octonion F-algebras. Moreover, the possible norm forms are exactly the Pfister 3-forms over F.
Since any two octonion F-algebras become isomorphic over the algebraic closure of F, one can apply the ideas of non-abelian Galois cohomology. In particular, by using the fact that the automorphism group of the split octonions is the split algebraic group G2, one sees the correspondence of isomorphism classes of octonion F-algebras with isomorphism classes of G2-torsors over F. These isomorphism classes form the non-abelian Galois cohomology set
H¹(F, G₂).
== References ==
Garibaldi, Skip; Merkurjev, Alexander; Serre, Jean-Pierre (2003). Cohomological invariants in Galois cohomology. University Lecture Series. Vol. 28. Providence, RI: American Mathematical Society. ISBN 0-8218-3287-5. Zbl 1159.12311.
Lam, Tsit-Yuen (2005). Introduction to Quadratic Forms over Fields. Graduate Studies in Mathematics. Vol. 67. American Mathematical Society. ISBN 0-8218-1095-2. MR 2104929. Zbl 1068.11023.
Okubo, Susumu (1995). Introduction to octonion and other non-associative algebras in physics. Montroll Memorial Lecture Series in Mathematical Physics. Vol. 2. Cambridge: Cambridge University Press. p. 22. ISBN 0-521-47215-6. Zbl 0841.17001.
Schafer, Richard D. (1995) [1966]. An introduction to non-associative algebras. Dover Publications. ISBN 0-486-68813-5. Zbl 0145.25601.
Zhevlakov, K.A.; Slin'ko, A.M.; Shestakov, I.P.; Shirshov, A.I. (1982) [1978]. Rings that are nearly associative. Academic Press. ISBN 0-12-779850-1. MR 0518614. Zbl 0487.17001.
Serre, J. P. (2002). Galois Cohomology. Springer Monographs in Mathematics. Translated from the French by Patrick Ion. Berlin: Springer-Verlag. ISBN 3-540-42192-0. Zbl 1004.12003.
Springer, T. A.; Veldkamp, F. D. (2000). Octonions, Jordan Algebras and Exceptional Groups. Springer-Verlag. ISBN 3-540-66337-1.
== External links ==
"Cayley–Dickson algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Elements of Algebra is an elementary mathematics textbook written by the mathematician Leonhard Euler around 1765 in German. It was first published in Russian as "Universal Arithmetic" (Универсальная арифметика), two volumes appearing in 1768–69, and in 1770 it was printed from the original German text. Elements of Algebra is one of the earliest books to set out algebra in the modern form we would recognize today (another early book being Elements of Algebra by Nicholas Saunderson, published in 1740), and is one of Euler's few writings, along with Letters to a German Princess, that are accessible to the general public. Written in numbered paragraphs, as was common practice until the 19th century, Elements begins with the definition of mathematics, builds on the fundamental operations of arithmetic and number systems, and gradually moves toward more abstract topics.
In 1771, Joseph-Louis Lagrange published an addendum titled Additions to Euler's Elements of Algebra, which featured a number of important mathematical results.
The original German title of the book was Vollständige Anleitung zur Algebra, which literally translates to Complete Instruction to Algebra. Two English translations are now extant: one by John Hewlett (1822), and another, translated into English from a French translation of the book, by Charles Tayler (1824). On the 300th anniversary of Euler's birth in 2007, the mathematician Christopher Sangwin, working with Tarquin Publications, published a digitized copy based on Hewlett's translation of the first four sections (Part I) of the book.
In 2015, Scott Hecht published both print and Kindle versions of Elements of Algebra (ISBN 978-1508901181) with Euler's Part I (Containing the Analysis of Determinate Quantities), Part II (Containing the Analysis of Indeterminate Quantities), Lagrange's Additions, and footnotes by Johann Bernoulli and others.
== See also ==
Introductio in analysin infinitorum (1748)
Institutiones calculi differentialis (1755)
== References ==
== External links ==
Elements of Algebra, 1822, Full text
Elements of Algebra, Part I, HTML
The origin of the problems in Euler's Algebra
In the mathematical field of set theory, an ultrafilter on a set X is a maximal filter on the set X. In other words, it is a collection of subsets of X that satisfies the definition of a filter on X and that is maximal with respect to inclusion, in the sense that there does not exist a strictly larger collection of subsets of X that is also a filter. (In the above, by definition a filter on a set does not contain the empty set.) Equivalently, an ultrafilter on the set X can also be characterized as a filter on X with the property that for every subset A of X, either A or its complement X ∖ A belongs to the ultrafilter.
Ultrafilters on sets are an important special instance of ultrafilters on partially ordered sets, where the partially ordered set consists of the power set ℘(X) and the partial order is subset inclusion ⊆. This article deals specifically with ultrafilters on a set and does not cover the more general notion.
There are two types of ultrafilter on a set. A principal ultrafilter on X is the collection of all subsets of X that contain a fixed element x ∈ X. The ultrafilters that are not principal are the free ultrafilters. The existence of free ultrafilters on any infinite set is implied by the ultrafilter lemma, which can be proven in ZFC. On the other hand, there exist models of ZF where every ultrafilter on a set is principal.
Ultrafilters have many applications in set theory, model theory, and topology. Usually, only free ultrafilters lead to non-trivial constructions. For example, an ultraproduct modulo a principal ultrafilter is always isomorphic to one of the factors, while an ultraproduct modulo a free ultrafilter usually has a more complex structure.
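The principal/free dichotomy can be seen directly on a small finite set, where every ultrafilter is principal. The brute-force sketch below (all names ad hoc) enumerates every family of subsets of a 3-element set and keeps the ultrafilters:

```python
from itertools import combinations

X = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(len(X) + 1)
           for c in combinations(sorted(X), r)]

def is_filter(U):
    # Proper, upward closed, and closed under intersections.
    if not U or frozenset() in U:
        return False
    upward = all(S in U for A in U for S in subsets if A <= S)
    pi_system = all(A & B in U for A in U for B in U)
    return upward and pi_system

def is_ultrafilter(U):
    # A filter containing A or X \ A for every subset A.
    return is_filter(U) and all(S in U or (X - S) in U for S in subsets)

ultrafilters = [frozenset(combo)
                for r in range(len(subsets) + 1)
                for combo in combinations(subsets, r)
                if is_ultrafilter(frozenset(combo))]

assert len(ultrafilters) == 3      # exactly one ultrafilter per point of X
for U in ultrafilters:
    kernel = frozenset.intersection(*U)
    assert len(kernel) == 1 and kernel in U   # each one is principal
```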
== Definitions ==
Given an arbitrary set X, an ultrafilter on X is a non-empty family U of subsets of X such that:
Proper or non-degenerate: The empty set is not an element of U.
Upward closed in X: If A ∈ U and if B ⊆ X is any superset of A (that is, if A ⊆ B ⊆ X) then B ∈ U.
π-system: If A and B are elements of U then so is their intersection A ∩ B.
If A ⊆ X then either A or its complement X ∖ A is an element of U.
Properties (1), (2), and (3) are the defining properties of a filter on X.
Some authors do not include non-degeneracy (which is property (1) above) in their definition of "filter". However, the definition of "ultrafilter" (and also of "prefilter" and "filter subbase") always includes non-degeneracy as a defining condition. This article requires that all filters be proper although a filter might be described as "proper" for emphasis.
A filter subbase is a non-empty family of sets that has the finite intersection property (i.e. all finite intersections are non-empty). Equivalently, a filter subbase is a non-empty family of sets that is contained in some (proper) filter. The smallest (relative to ⊆) filter containing a given filter subbase is said to be generated by the filter subbase.
The upward closure in X of a family of sets P is the set
P↑X := {S : A ⊆ S ⊆ X for some A ∈ P}.
A prefilter or filter base is a non-empty and proper (i.e. ∅ ∉ P) family of sets P that is downward directed, which means that if B, C ∈ P then there exists some A ∈ P such that A ⊆ B ∩ C. Equivalently, a prefilter is any family of sets P whose upward closure P↑X is a filter, in which case this filter is called the filter generated by P, and P is said to be a filter base for P↑X.
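These definitions are concrete enough to compute with. In the small sketch below (names ad hoc), a two-element prefilter on X = {0, 1, 2, 3} generates a filter via its upward closure:

```python
from itertools import combinations

X = frozenset(range(4))
subsets = [frozenset(c) for r in range(len(X) + 1)
           for c in combinations(sorted(X), r)]

def upward_closure(P):
    # P up-arrow X: all supersets (within X) of members of P.
    return {S for S in subsets if any(A <= S for A in P)}

# A prefilter: non-empty, proper, downward directed.
P = {frozenset({0, 1}), frozenset({0, 1, 2})}

F = upward_closure(P)   # the filter generated by P

assert frozenset() not in F                                 # proper
assert all(S in F for A in F for S in subsets if A <= S)    # upward closed
assert all(A & B in F for A in F for B in F)                # pi-system
assert frozenset({0, 1, 3}) in F and frozenset({1, 2}) not in F
```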
The dual in X of a family of sets P is the set X ∖ P := {X ∖ B : B ∈ P}. For example, the dual of the power set ℘(X) is itself: X ∖ ℘(X) = ℘(X). A family of sets is a proper filter on X if and only if its dual is a proper ideal on X ("proper" means not equal to the power set).
== Generalization to ultra prefilters ==
A family U ≠ ∅ of subsets of X is called ultra if ∅ ∉ U and any of the following equivalent conditions are satisfied:
For every set S ⊆ X there exists some set B ∈ U such that B ⊆ S or B ⊆ X ∖ S (or equivalently, such that B ∩ S equals B or ∅).
For every set S ⊆ ⋃B∈U B there exists some set B ∈ U such that B ∩ S equals B or ∅. Here, ⋃B∈U B is defined to be the union of all sets in U. This characterization of "U is ultra" does not depend on the set X, so mentioning the set X is optional when using the term "ultra".
For every set S (not necessarily even a subset of X) there exists some set B ∈ U such that B ∩ S equals B or ∅.
If U satisfies this condition then so does every superset V ⊇ U. In particular, a set V is ultra if and only if ∅ ∉ V and V contains as a subset some ultra family of sets. A filter subbase that is ultra is necessarily a prefilter.
The ultra property can now be used to define both ultrafilters and ultra prefilters:
An ultra prefilter is a prefilter that is ultra. Equivalently, it is a filter subbase that is ultra.
An ultrafilter on X is a (proper) filter on X that is ultra. Equivalently, it is any filter on X that is generated by an ultra prefilter.
Ultra prefilters as maximal prefilters
To characterize ultra prefilters in terms of "maximality," the following relation is needed.
Given two families of sets M and N, the family M is said to be coarser than N, and N is finer than and subordinate to M, written M ≤ N or N ⊢ M, if for every C ∈ M there is some F ∈ N such that F ⊆ C. The families M and N are called equivalent if M ≤ N and N ≤ M. The families M and N are comparable if one of these sets is finer than the other.
The subordination relationship, i.e. ≥, is a preorder, so the above definition of "equivalent" does form an equivalence relation. If M ⊆ N then M ≤ N, but the converse does not hold in general. However, if N is upward closed, such as a filter, then M ≤ N if and only if M ⊆ N. Every prefilter is equivalent to the filter that it generates. This shows that it is possible for filters to be equivalent to sets that are not filters.
If two families of sets M and N are equivalent then either both M and N are ultra (resp. prefilters, filter subbases) or otherwise neither one of them is ultra (resp. a prefilter, a filter subbase). In particular, if a filter subbase is not also a prefilter, then it is not equivalent to the filter or prefilter that it generates. If M and N are both filters on X then M and N are equivalent if and only if M = N. If a proper filter (resp. ultrafilter) is equivalent to a family of sets M then M is necessarily a prefilter (resp. ultra prefilter).
Using the following characterization, it is possible to define prefilters (resp. ultra prefilters) using only the concept of filters (resp. ultrafilters) and subordination:
An arbitrary family of sets is a prefilter if and only if it is equivalent to a (proper) filter.
An arbitrary family of sets is an ultra prefilter if and only if it is equivalent to an ultrafilter.
A maximal prefilter on X is a prefilter U ⊆ ℘(X) that satisfies any of the following equivalent conditions:
U is ultra.
U is maximal on Prefilters(X) with respect to ≤, meaning that if P ∈ Prefilters(X) satisfies U ≤ P then P ≤ U.
There is no prefilter properly subordinate to U.
If a (proper) filter F on X satisfies U ≤ F then F ≤ U.
The filter on X generated by U is ultra.
== Characterizations ==
There are no ultrafilters on the empty set, so it is henceforth assumed that X is nonempty.
A filter subbase U on X is an ultrafilter on X if and only if any of the following equivalent conditions hold:
For any S ⊆ X, either S ∈ U or X ∖ S ∈ U.
U is a maximal filter subbase on X, meaning that if F is any filter subbase on X then U ⊆ F implies U = F.
A (proper) filter U on X is an ultrafilter on X if and only if any of the following equivalent conditions hold:
U is ultra;
U is generated by an ultra prefilter;
For any subset S ⊆ X, S ∈ U or X ∖ S ∈ U. So an ultrafilter U decides for every S ⊆ X whether S is "large" (i.e. S ∈ U) or "small" (i.e. X ∖ S ∈ U).
For each subset A ⊆ X, either A is in U or (X ∖ A) is.
U ∪ (X ∖ U) = ℘(X). This condition can be restated as: ℘(X) is partitioned by U and its dual X ∖ U. The sets P and X ∖ P are disjoint for all prefilters P on X.
℘(X) ∖ U = {S ∈ ℘(X) : S ∉ U} is an ideal on X.
For any finite family S1, …, Sn of subsets of X (where n ≥ 1), if S1 ∪ ⋯ ∪ Sn ∈ U then Si ∈ U for some index i. In words, a "large" set cannot be a finite union of sets none of which is large.
For any R, S ⊆ X, if R ∪ S = X then R ∈ U or S ∈ U.
For any R, S ⊆ X, if R ∪ S ∈ U then R ∈ U or S ∈ U (a filter with this property is called a prime filter).
For any R, S ⊆ X, if R ∪ S ∈ U and R ∩ S = ∅ then either R ∈ U or S ∈ U.
U is a maximal filter; that is, if F is a filter on X such that U ⊆ F then U = F. Equivalently, U is a maximal filter if there is no filter F on X that contains U as a proper subset (that is, no filter is strictly finer than U).
=== Grills and filter-grills ===
If ℬ ⊆ ℘(X) then its grill on X is the family
ℬ#X := {S ⊆ X : S ∩ B ≠ ∅ for all B ∈ ℬ},
where ℬ# may be written if X is clear from context. If ℱ is a filter then ℱ# is the set of positive sets with respect to ℱ and is usually written as ℱ+.
For example, ∅# = ℘(X), and if ∅ ∈ ℬ then ℬ# = ∅. If 𝒜 ⊆ ℬ then ℬ# ⊆ 𝒜#, and moreover, if ℬ is a filter subbase then ℬ ⊆ ℬ#.
The grill ℬ#X is upward closed in X if and only if ∅ ∉ ℬ, which will henceforth be assumed. Moreover, ℬ## = ℬ↑X, so that ℬ is upward closed in X if and only if ℬ## = ℬ.
The grill of a filter on X is called a filter-grill on X. For any ∅ ≠ ℬ ⊆ ℘(X), ℬ is a filter-grill on X if and only if (1) ℬ is upward closed in X and (2) for all sets R and S, if R ∪ S ∈ ℬ then R ∈ ℬ or S ∈ ℬ.
The grill operation ℱ ↦ ℱ#X induces a bijection •#X : Filters(X) → FilterGrills(X), whose inverse is also given by ℱ ↦ ℱ#X. If ℱ ∈ Filters(X) then ℱ is a filter-grill on X if and only if ℱ = ℱ#X, or equivalently, if and only if ℱ is an ultrafilter on X.
That is, a filter on
X
{\displaystyle X}
is a filter-grill if and only if it is ultra. For any non-empty
F
⊆
℘
(
X
)
,
{\displaystyle {\mathcal {F}}\subseteq \wp (X),}
F
{\displaystyle {\mathcal {F}}}
is both a filter on
X
{\displaystyle X}
and a filter-grill on
X
{\displaystyle X}
if and only if (1)
∅
∉
F
{\displaystyle \varnothing \not \in {\mathcal {F}}}
and (2) for all
R
,
S
⊆
X
,
{\displaystyle R,S\subseteq X,}
the following equivalences hold:
R
∪
S
∈
F
{\displaystyle R\cup S\in {\mathcal {F}}}
if and only if
R
,
S
∈
F
{\displaystyle R,S\in {\mathcal {F}}}
if and only if
R
∩
S
∈
F
.
{\displaystyle R\cap S\in {\mathcal {F}}.}
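These characterizations can be checked by brute force on a small finite set. The sketch below (Python; the helper names `grill` and `upward_closure` are ours, chosen for illustration) computes the grill of a family and verifies that a filter equals its own grill exactly when it is ultra, and that taking the grill twice yields the upward closure:

```python
from itertools import combinations

X = frozenset({0, 1, 2})

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def grill(B, X):
    # B^{#X}: all subsets of X that meet every member of B
    return {S for S in powerset(X) if all(S & b for b in B)}

def upward_closure(B, X):
    # B^{up X}: all subsets of X containing some member of B
    return {S for S in powerset(X) if any(b <= S for b in B)}

# The principal ultrafilter at 0 equals its own grill (it is a filter-grill).
U = {S for S in powerset(X) if 0 in S}
assert grill(U, X) == U

# A proper non-ultra filter (the supersets of {0, 1}) is NOT its own grill,
# but taking the grill twice recovers its upward closure.
F = {S for S in powerset(X) if frozenset({0, 1}) <= S}
assert grill(F, X) != F
assert grill(grill(F, X), X) == upward_closure(F, X)
```

The same brute-force pattern extends to any finite $X$, though the number of families grows doubly exponentially.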
=== Free or principal ===
If $P$ is any non-empty family of sets then the kernel of $P$ is the intersection of all sets in $P$:
$\operatorname{ker} P := \bigcap_{B \in P} B.$
A non-empty family of sets $P$ is called:
free if $\operatorname{ker} P = \varnothing$ and fixed otherwise (that is, if $\operatorname{ker} P \neq \varnothing$).
principal if $\operatorname{ker} P \in P$.
principal at a point if $\operatorname{ker} P \in P$ and $\operatorname{ker} P$ is a singleton set; in this case, if $\operatorname{ker} P = \{x\}$ then $P$ is said to be principal at $x$.
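A minimal Python sketch of these definitions (the helper name `kernel` is ours, not a standard library function):

```python
from functools import reduce

def kernel(P):
    # ker P: the intersection of all sets in the non-empty family P
    return reduce(lambda a, b: a & b, P)

# A family that is principal at the point 1: its kernel {1} is itself a member.
principal = [{1, 2, 3}, {1, 2}, {1}]
assert kernel(principal) == {1}
assert kernel(principal) in principal

# A free family: every pairwise intersection is non-empty,
# yet the kernel of the whole family is empty.
free_family = [{0, 1}, {1, 2}, {0, 2}]
assert kernel(free_family) == set()
```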
If a family of sets $P$ is fixed then $P$ is ultra if and only if some element of $P$ is a singleton set, in which case $P$ will necessarily be a prefilter. Every principal prefilter is fixed, so a principal prefilter $P$ is ultra if and only if $\operatorname{ker} P$ is a singleton set. A singleton set is ultra if and only if its sole element is also a singleton set.
The next theorem shows that every ultrafilter falls into one of two categories: either it is free or else it is a principal filter generated by a single point.
Every filter on $X$ that is principal at a single point is an ultrafilter, and if in addition $X$ is finite, then there are no ultrafilters on $X$ other than these. In particular, if a set $X$ has finite cardinality $n < \infty$, then there are exactly $n$ ultrafilters on $X$, namely the ultrafilters generated by each singleton subset of $X$. Consequently, free ultrafilters can only exist on an infinite set.
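The finite case can be verified exhaustively. The following brute-force Python sketch (illustrative, not efficient) enumerates every family of subsets of a 3-element set and confirms that exactly 3 of them are ultrafilters, each principal at a point:

```python
from itertools import combinations

X = frozenset({0, 1, 2})

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

P = powerset(X)

def is_ultrafilter(F):
    F = set(F)
    if not F or frozenset() in F:
        return False
    # upward closed
    if any(b <= S and S not in F for b in F for S in P):
        return False
    # closed under finite intersections
    if any(a & b not in F for a in F for b in F):
        return False
    # maximal: contains either S or its complement, for every S
    return all(S in F or frozenset(X - S) in F for S in P)

families = [frozenset(c) for r in range(len(P) + 1) for c in combinations(P, r)]
ultras = [F for F in families if is_ultrafilter(F)]

assert len(ultras) == 3                          # exactly |X| ultrafilters
for F in ultras:
    assert any(frozenset({x}) in F for x in X)   # each is principal at a point
```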
== Examples, properties, and sufficient conditions ==
If $X$ is an infinite set then there are as many ultrafilters over $X$ as there are families of subsets of $X$; explicitly, if $X$ has infinite cardinality $\kappa$ then the set of ultrafilters over $X$ has the same cardinality as $\wp(\wp(X))$, that cardinality being $2^{2^{\kappa}}$.
If $U$ and $S$ are families of sets such that $U$ is ultra, $\varnothing \notin S$, and $U \leq S$, then $S$ is necessarily ultra.
A filter subbase $U$ that is not a prefilter cannot be ultra; but it is nevertheless still possible for the prefilter and filter generated by $U$ to be ultra.
Suppose $U \subseteq \wp(X)$ is ultra and $Y$ is a set. The trace $U\vert_{Y} := \{B \cap Y : B \in U\}$ is ultra if and only if it does not contain the empty set. Furthermore, at least one of the sets $U\vert_{Y} \setminus \{\varnothing\}$ and $U\vert_{X \setminus Y} \setminus \{\varnothing\}$ will be ultra (this result extends to any finite partition of $X$).
If $F_{1}, \ldots, F_{n}$ are filters on $X$, $U$ is an ultrafilter on $X$, and $F_{1} \cap \cdots \cap F_{n} \leq U$, then there is some $F_{i}$ that satisfies $F_{i} \leq U$. This result is not necessarily true for an infinite family of filters.
The image under a map $f : X \to Y$ of an ultra set $U \subseteq \wp(X)$ is again ultra, and if $U$ is an ultra prefilter then so is $f(U)$. The property of being ultra is preserved under bijections. However, the preimage of an ultrafilter is not necessarily ultra, not even if the map is surjective. For example, if $X$ has more than one point and if the range of $f : X \to Y$ consists of a single point $\{y\}$, then $\{y\}$ is an ultra prefilter on $Y$ but its preimage is not ultra. Alternatively, if $U$ is a principal filter generated by a point in $Y \setminus f(X)$ then the preimage of $U$ contains the empty set and so is not ultra.
The elementary filter induced by an infinite sequence, all of whose points are distinct, is not an ultrafilter. If $n = 2$, let $U_{n}$ denote the set of all subsets of $X$ having cardinality $n$; if $X$ contains at least $2n - 1$ ($= 3$) distinct points, then $U_{n}$ is ultra but it is not contained in any prefilter. This example generalizes to any integer $n > 1$ and also to $n = 1$ if $X$ contains more than one element. Ultra sets that are not also prefilters are rarely used.
For every $S \subseteq X \times X$ and every $a \in X$, let $S{\big\vert}_{\{a\} \times X} := \{y \in X : (a, y) \in S\}$. If $\mathcal{U}$ is an ultrafilter on $X$ then the set of all $S \subseteq X \times X$ such that $\{a \in X : S{\big\vert}_{\{a\} \times X} \in \mathcal{U}\} \in \mathcal{U}$ is an ultrafilter on $X \times X$.
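For a principal ultrafilter at a point $x_0$, this construction can be checked directly to produce the principal ultrafilter at $(x_0, x_0)$. A brute-force Python sketch on a 2-point set (the names `in_U` and `section` are illustrative):

```python
from itertools import combinations

X = [0, 1]
x0 = 0
in_U = lambda S: x0 in S   # membership test for the principal ultrafilter at x0

def section(S, a):
    # S|_{ {a} x X } = {y : (a, y) in S}
    return {y for (x, y) in S if x == a}

pairs = [(x, y) for x in X for y in X]
all_S = [set(c) for r in range(len(pairs) + 1) for c in combinations(pairs, r)]

# S belongs to the product ultrafilter iff {a : S|_a in U} belongs to U
product = [S for S in all_S if in_U({a for a in X if in_U(section(S, a))})]

# ...which here is exactly the principal ultrafilter at (x0, x0)
assert all(((x0, x0) in S) == (S in product) for S in all_S)
```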
=== Monad structure ===
The functor associating to any set $X$ the set $U(X)$ of all ultrafilters on $X$ forms a monad, called the ultrafilter monad. The unit map $X \to U(X)$ sends any element $x \in X$ to the principal ultrafilter given by $x$.
This ultrafilter monad is the codensity monad of the inclusion of the category of finite sets into the category of all sets, which gives a conceptual explanation of this monad.
Similarly, the ultraproduct monad is the codensity monad of the inclusion of the category of finite families of sets into the category of all families of sets. So in this sense, ultraproducts are categorically inevitable.
== The ultrafilter lemma ==
The ultrafilter lemma was first proved by Alfred Tarski in 1930.
The ultrafilter lemma is equivalent to each of the following statements:
For every prefilter on a set $X$, there exists a maximal prefilter on $X$ subordinate to it.
Every proper filter subbase on a set $X$ is contained in some ultrafilter on $X$.
A consequence of the ultrafilter lemma is that every filter is equal to the intersection of all ultrafilters containing it.
The following results can be proven using the ultrafilter lemma.
A free ultrafilter exists on a set $X$ if and only if $X$ is infinite. Every proper filter is equal to the intersection of all ultrafilters containing it. Since there are filters that are not ultra, this shows that the intersection of a family of ultrafilters need not be ultra. A family of sets $\mathbb{F} \neq \varnothing$ can be extended to a free ultrafilter if and only if the intersection of any finite family of elements of $\mathbb{F}$ is infinite.
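On a finite set the intersection property can be confirmed by exhaustive search. The Python sketch below (illustrative brute force) checks that every proper filter on a 3-element set equals the intersection of the ultrafilters above it:

```python
from itertools import combinations

X = frozenset({0, 1, 2})

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

P = powerset(X)

def is_proper_filter(F):
    return (bool(F) and frozenset() not in F
            and all(a & b in F for a in F for b in F)          # finite meets
            and all(S in F for b in F for S in P if b <= S))   # upward closed

families = [frozenset(c) for r in range(1, len(P) + 1) for c in combinations(P, r)]
filters = [F for F in families if is_proper_filter(F)]
ultras = [F for F in filters
          if all(S in F or frozenset(X - S) in F for S in P)]

for F in filters:
    above = [U for U in ultras if F <= U]
    # every proper filter is the intersection of all ultrafilters containing it
    assert frozenset.intersection(*above) == F
```

On an infinite set the same statement requires the ultrafilter lemma; the finite check needs only enumeration.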
=== Relationships to other statements under ZF ===
Throughout this section, ZF refers to Zermelo–Fraenkel set theory and ZFC refers to ZF with the Axiom of Choice (AC). The ultrafilter lemma is independent of ZF. That is, there exist models in which the axioms of ZF hold but the ultrafilter lemma does not. There also exist models of ZF in which every ultrafilter is necessarily principal.
Every filter that contains a singleton set is necessarily an ultrafilter, and given $x \in X$, the definition of the discrete ultrafilter $\{S \subseteq X : x \in S\}$ does not require more than ZF.
If $X$ is finite then every ultrafilter is a discrete filter at a point; consequently, free ultrafilters can only exist on infinite sets.
In particular, if $X$ is finite then the ultrafilter lemma can be proven from the axioms of ZF.
The existence of free ultrafilters on infinite sets can be proven if the axiom of choice is assumed.
More generally, the ultrafilter lemma can be proven by using the axiom of choice, which in brief states that any Cartesian product of non-empty sets is non-empty. Under ZF, the axiom of choice is, in particular, equivalent to (a) Zorn's lemma, (b) Tychonoff's theorem, (c) the weak form of the vector basis theorem (which states that every vector space has a basis), (d) the strong form of the vector basis theorem, and other statements.
However, the ultrafilter lemma is strictly weaker than the axiom of choice.
While free ultrafilters can be proven to exist, it is not possible to construct an explicit example of a free ultrafilter (using only ZF and the ultrafilter lemma); that is, free ultrafilters are intangible.
Alfred Tarski proved that under ZFC, the cardinality of the set of all free ultrafilters on an infinite set $X$ is equal to the cardinality of $\wp(\wp(X))$, where $\wp(X)$ denotes the power set of $X$. Other authors attribute this discovery to Bedřich Pospíšil (following a combinatorial argument from Fichtenholz and Kantorovitch, improved by Hausdorff).
Under ZF, the axiom of choice can be used to prove both the ultrafilter lemma and the Krein–Milman theorem; conversely, under ZF, the ultrafilter lemma together with the Krein–Milman theorem can prove the axiom of choice.
==== Statements that cannot be deduced ====
The ultrafilter lemma is a relatively weak axiom. For example, none of the statements in the following list can be deduced from ZF together with only the ultrafilter lemma:
A countable union of countable sets is a countable set.
The axiom of countable choice (ACC).
The axiom of dependent choice (ADC).
==== Equivalent statements ====
Under ZF, the ultrafilter lemma is equivalent to each of the following statements:
The Boolean prime ideal theorem (BPIT).
Stone's representation theorem for Boolean algebras.
Any product of Boolean spaces is a Boolean space.
Boolean Prime Ideal Existence Theorem: Every nondegenerate Boolean algebra has a prime ideal.
Tychonoff's theorem for Hausdorff spaces: Any product of compact Hausdorff spaces is compact.
If $\{0, 1\}$ is endowed with the discrete topology, then for any set $I$ the product space $\{0, 1\}^{I}$ is compact.
Each of the following versions of the Banach–Alaoglu theorem is equivalent to the ultrafilter lemma:
Any equicontinuous set of scalar-valued maps on a topological vector space (TVS) is relatively compact in the weak-* topology (that is, it is contained in some weak-* compact set).
The polar of any neighborhood of the origin in a TVS $X$ is a weak-* compact subset of its continuous dual space.
The closed unit ball in the continuous dual space of any normed space is weak-* compact.
If the normed space is separable then the ultrafilter lemma is sufficient but not necessary to prove this statement.
A topological space $X$ is compact if every ultrafilter on $X$ converges to some limit.
A topological space $X$ is compact if and only if every ultrafilter on $X$ converges to some limit. (The addition of the words "and only if" is the only difference between this statement and the one immediately above it.)
The Alexander subbase theorem.
The Ultranet lemma: Every net has a universal subnet.
By definition, a net in $X$ is called an ultranet or a universal net if for every subset $S \subseteq X$, the net is eventually in $S$ or in $X \setminus S$.
A topological space $X$ is compact if and only if every ultranet on $X$ converges to some limit.
If the words "and only if" are removed then the resulting statement remains equivalent to the ultrafilter lemma.
A convergence space $X$ is compact if every ultrafilter on $X$ converges.
A uniform space is compact if it is complete and totally bounded.
The Stone–Čech compactification theorem.
Each of the following versions of the compactness theorem is equivalent to the ultrafilter lemma:
If $\Sigma$ is a set of first-order sentences such that every finite subset of $\Sigma$ has a model, then $\Sigma$ has a model.
If $\Sigma$ is a set of zero-order sentences such that every finite subset of $\Sigma$ has a model, then $\Sigma$ has a model.
The completeness theorem: if $\Sigma$ is a set of zero-order sentences that is syntactically consistent, then it has a model (that is, it is semantically consistent).
==== Weaker statements ====
Any statement that can be deduced from the ultrafilter lemma (together with ZF) is said to be weaker than the ultrafilter lemma.
A weaker statement is said to be strictly weaker if under ZF, it is not equivalent to the ultrafilter lemma.
Under ZF, the ultrafilter lemma implies each of the following statements:
The axiom of choice for finite sets (ACF): given $I \neq \varnothing$ and a family $(X_{i})_{i \in I}$ of non-empty finite sets, their product $\prod_{i \in I} X_{i}$ is not empty.
A countable union of finite sets is a countable set.
However, ZF with the ultrafilter lemma is too weak to prove that a countable union of countable sets is a countable set.
The Hahn–Banach theorem.
In ZF, the Hahn–Banach theorem is strictly weaker than the ultrafilter lemma.
The Banach–Tarski paradox.
In fact, under ZF, the Banach–Tarski paradox can be deduced from the Hahn–Banach theorem, which is strictly weaker than the Ultrafilter Lemma.
Every set can be linearly ordered.
Every field has a unique algebraic closure.
Non-trivial ultraproducts exist.
The weak ultrafilter theorem: a free ultrafilter exists on $\mathbb{N}$.
Under ZF, the weak ultrafilter theorem does not imply the ultrafilter lemma; that is, it is strictly weaker than the ultrafilter lemma.
There exists a free ultrafilter on every infinite set; this statement is actually strictly weaker than the ultrafilter lemma.
ZF alone does not even imply that there exists a non-principal ultrafilter on some set.
== Completeness ==
The completeness of an ultrafilter $U$ on a powerset is the smallest cardinal $\kappa$ such that there are $\kappa$ elements of $U$ whose intersection is not in $U$. The definition of an ultrafilter implies that the completeness of any powerset ultrafilter is at least $\aleph_{0}$. An ultrafilter whose completeness is greater than $\aleph_{0}$ (that is, such that the intersection of any countable collection of elements of $U$ is still in $U$) is called countably complete or σ-complete.
The completeness of a countably complete nonprincipal ultrafilter on a powerset is always a measurable cardinal.
== Ordering on ultrafilters ==
The Rudin–Keisler ordering (named after Mary Ellen Rudin and Howard Jerome Keisler) is a preorder on the class of powerset ultrafilters defined as follows: if $U$ is an ultrafilter on $\wp(X)$ and $V$ is an ultrafilter on $\wp(Y)$, then $V \leq_{RK} U$ if there exists a function $f : X \to Y$ such that $C \in V$ if and only if $f^{-1}[C] \in U$ for every subset $C \subseteq Y$.
Ultrafilters $U$ and $V$ are called Rudin–Keisler equivalent, denoted $U \equiv_{RK} V$, if there exist sets $A \in U$ and $B \in V$ and a bijection $f : A \to B$ that satisfies the condition above. (If $X$ and $Y$ have the same cardinality, the definition can be simplified by fixing $A = X$, $B = Y$.)
It is known that $\equiv_{RK}$ is the kernel of $\leq_{RK}$, that is, $U \equiv_{RK} V$ if and only if $U \leq_{RK} V$ and $V \leq_{RK} U$.
== Ultrafilters on ℘(ω) ==
There are several special properties that an ultrafilter on $\wp(\omega)$, where $\omega$ extends the natural numbers, may possess, which prove useful in various areas of set theory and topology.
A non-principal ultrafilter $U$ is called a P-point (or weakly selective) if for every partition $\{C_{n} : n < \omega\}$ of $\omega$ such that $C_{n} \notin U$ for all $n < \omega$, there exists some $A \in U$ such that $A \cap C_{n}$ is a finite set for each $n$.
A non-principal ultrafilter $U$ is called Ramsey (or selective) if for every partition $\{C_{n} : n < \omega\}$ of $\omega$ such that $C_{n} \notin U$ for all $n < \omega$, there exists some $A \in U$ such that $A \cap C_{n}$ is a singleton set for each $n$.
It is a trivial observation that all Ramsey ultrafilters are P-points. Walter Rudin proved that the continuum hypothesis implies the existence of Ramsey ultrafilters.
In fact, many hypotheses imply the existence of Ramsey ultrafilters, including Martin's axiom. Saharon Shelah later showed that it is consistent that there are no P-point ultrafilters. Therefore, the existence of these types of ultrafilters is independent of ZFC.
P-points are called as such because they are topological P-points in the usual topology of the space $\beta\omega \setminus \omega$ of non-principal ultrafilters. The name Ramsey comes from Ramsey's theorem. To see why, one can prove that an ultrafilter is Ramsey if and only if for every 2-coloring of $[\omega]^{2}$ there exists an element of the ultrafilter that has a homogeneous color.
An ultrafilter on $\wp(\omega)$ is Ramsey if and only if it is minimal in the Rudin–Keisler ordering of non-principal powerset ultrafilters.
== See also ==
Extender (set theory) – in set theory, a system of ultrafilters representing an elementary embedding witnessing large cardinal properties
Filter (mathematics) – In mathematics, a special subset of a partially ordered set
Filter (set theory) – Family of sets representing "large" sets
Filters in topology – Use of filters to describe and characterize all basic topological notions and results.
Łoś's theorem – Mathematical construction
Ultrafilter – Maximal proper filter
Universal net – Generalization of a sequence of points
== Notes ==
== References ==
== Bibliography ==
Arkhangel'skii, Alexander Vladimirovich; Ponomarev, V.I. (1984). Fundamentals of General Topology: Problems and Exercises. Mathematics and Its Applications. Vol. 13. Dordrecht Boston: D. Reidel. ISBN 978-90-277-1355-1. OCLC 9944489.
Bourbaki, Nicolas (1989) [1966]. General Topology: Chapters 1–4 [Topologie Générale]. Éléments de mathématique. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-64241-1. OCLC 18588129.
Dixmier, Jacques (1984). General Topology. Undergraduate Texts in Mathematics. Translated by Berberian, S. K. New York: Springer-Verlag. ISBN 978-0-387-90972-1. OCLC 10277303.
Dolecki, Szymon; Mynard, Frédéric (2016). Convergence Foundations Of Topology. New Jersey: World Scientific Publishing Company. ISBN 978-981-4571-52-4. OCLC 945169917.
Dugundji, James (1966). Topology. Boston: Allyn and Bacon. ISBN 978-0-697-06889-7. OCLC 395340485.
Császár, Ákos (1978). General topology. Translated by Császár, Klára. Bristol England: Adam Hilger Ltd. ISBN 0-85274-275-4. OCLC 4146011.
Jech, Thomas (2006). Set Theory: The Third Millennium Edition, Revised and Expanded. Berlin New York: Springer Science & Business Media. ISBN 978-3-540-44085-7. OCLC 50422939.
Joshi, K. D. (1983). Introduction to General Topology. New York: John Wiley and Sons Ltd. ISBN 978-0-85226-444-7. OCLC 9218750.
Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365.
Schubert, Horst (1968). Topology. London: Macdonald & Co. ISBN 978-0-356-02077-8. OCLC 463753.
== Further reading ==
Comfort, W. W. (1977). "Ultrafilters: some old and some new results" (PDF). Bulletin of the American Mathematical Society. 83 (4): 417–455. doi:10.1090/S0002-9904-1977-14316-4. ISSN 0002-9904. MR 0454893.
Comfort, W. W.; Negrepontis, S. (1974), The theory of ultrafilters, Berlin, New York: Springer-Verlag, MR 0396267
Ultrafilter at the nLab
The algebraic closure of a subset $A$ of a vector space $X$ is the set of all points that are linearly accessible from $A$. It is denoted by $\operatorname{acl} A$ or $\operatorname{acl}_{X} A$.
A point $x \in X$ is said to be linearly accessible from a subset $A \subseteq X$ if there exists some $a \in A$ such that the line segment $[a, x) := a + [0, 1)(x - a)$ is contained in $A$.
Necessarily, $A \subseteq \operatorname{acl} A \subseteq \operatorname{acl} \operatorname{acl} A \subseteq \overline{A}$ (the last inclusion holds when $X$ is equipped with any vector topology, Hausdorff or not).
The set $A$ is algebraically closed if $A = \operatorname{acl} A$.
The set $\operatorname{acl} A \setminus \operatorname{aint} A$, where $\operatorname{aint} A$ denotes the algebraic interior of $A$, is the algebraic boundary of $A$ in $X$.
== Examples ==
The set $\mathbb{Q}$ of rational numbers is algebraically closed, but its complement $\mathbb{Q}^{c}$ is not algebraically open.
If $A = \{(x, y) \in \mathbb{R}^{2} : 0 < y < x^{2}\} \subseteq \mathbb{R}^{2}$ then $0 \in (\operatorname{acl} \operatorname{acl} A) \setminus \operatorname{acl} A$. In particular, the algebraic closure need not be algebraically closed.
Here, $\overline{A} = \operatorname{acl} \operatorname{acl} A = \{(x, y) \in \mathbb{R}^{2} : 0 \leq y \leq x^{2}\} = (\operatorname{acl} A) \cup \{0\}$.
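The claim that the origin is not linearly accessible from this parabola-bounded set can be probed numerically. The Python sketch below is a sampled check, not a proof; the function names and sample points are our choices:

```python
def in_A(p):
    # membership in A = {(x, y) : 0 < y < x^2}
    x, y = p
    return 0 < y < x * x

def segment_in_A(a, x, steps=1000):
    # sample the half-open segment [a, x) = a + [0, 1)(x - a)
    return all(
        in_A((a[0] + t * (x[0] - a[0]), a[1] + t * (x[1] - a[1])))
        for t in (i / steps for i in range(steps))
    )

origin = (0.0, 0.0)
samples = [(1.0, 0.5), (2.0, 1.0), (0.5, 0.1), (-1.0, 0.5)]
assert all(in_A(a) for a in samples)

# For each sampled a in A, the segment [a, 0) leaves A before reaching the
# origin, consistent with 0 not being linearly accessible from A.
assert not any(segment_in_A(a, origin) for a in samples)
```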
However, $\operatorname{acl} A = \overline{A}$ for every finite-dimensional convex set $A$.
Moreover, a convex set is algebraically closed if and only if its complement is algebraically open.
== See also ==
Algebraic interior
== References ==
== Bibliography ==
Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
In physics, mass–energy equivalence is the relationship between mass and energy in a system's rest frame. The two differ only by a multiplicative constant and the units of measurement. The principle is described by the physicist Albert Einstein's formula $E = mc^{2}$. In a reference frame where the system is moving, its relativistic energy and relativistic mass (instead of rest mass) obey the same formula.
The formula defines the energy (E) of a particle in its rest frame as the product of mass (m) with the speed of light squared (c2). Because the speed of light is a large number in everyday units (approximately 300000 km/s or 186000 mi/s), the formula implies that a small amount of mass corresponds to an enormous amount of energy.
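As a quick numerical illustration (Python; the constant below is the exact SI value of the speed of light):

```python
c = 299_792_458  # speed of light in m/s (exact, by definition of the metre)

def rest_energy(mass_kg):
    """Rest energy E = m * c**2, in joules."""
    return mass_kg * c ** 2

# One kilogram of mass corresponds to roughly 9 x 10^16 joules.
E = rest_energy(1.0)
assert 8.9e16 < E < 9.1e16
```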
Rest mass, also called invariant mass, is a fundamental physical property of matter, independent of velocity. Massless particles such as photons have zero invariant mass, but massless free particles have both momentum and energy.
The equivalence principle implies that when mass is lost in chemical reactions or nuclear reactions, a corresponding amount of energy will be released. The energy can be released to the environment (outside of the system being considered) as radiant energy, such as light, or as thermal energy. The principle is fundamental to many fields of physics, including nuclear and particle physics.
Mass–energy equivalence arose from special relativity as a paradox described by the French polymath Henri Poincaré (1854–1912). Einstein was the first to propose the equivalence of mass and energy as a general principle and a consequence of the symmetries of space and time. The principle first appeared in "Does the inertia of a body depend upon its energy-content?", one of his annus mirabilis papers, published on 21 November 1905. The formula and its relationship to momentum, as described by the energy–momentum relation, were later developed by other physicists.
== Description ==
Mass–energy equivalence states that all objects having mass, or massive objects, have a corresponding intrinsic energy, even when they are stationary. In the rest frame of an object, where by definition it is motionless and so has no momentum, the mass and energy are equal or they differ only by a constant factor, the speed of light squared (c2). In Newtonian mechanics, a motionless body has no kinetic energy, and it may or may not have other amounts of internal stored energy, like chemical energy or thermal energy, in addition to any potential energy it may have from its position in a field of force. These energies tend to be much smaller than the mass of the object multiplied by c2, which is on the order of 1017 joules for a mass of one kilogram. Due to this principle, the mass of the atoms that come out of a nuclear reaction is less than the mass of the atoms that go in, and the difference in mass shows up as heat and light with the same equivalent energy as the difference. In analyzing these extreme events, Einstein's formula can be used with E as the energy released (removed), and m as the change in mass.
In relativity, all the energy that moves with an object (i.e., the energy as measured in the object's rest frame) contributes to the total mass of the body, which measures how much it resists acceleration. If an isolated box of ideal mirrors could contain light, the individually massless photons would contribute to the total mass of the box by the amount equal to their energy divided by c2. For an observer in the rest frame, removing energy is the same as removing mass and the formula m = E/c2 indicates how much mass is lost when energy is removed. In the same way, when any energy is added to an isolated system, the increase in the mass is equal to the added energy divided by c2.
== Mass in special relativity ==
An object moves at different speeds in different frames of reference, depending on the motion of the observer. This implies the kinetic energy, in both Newtonian mechanics and relativity, is 'frame dependent', so that the amount of relativistic energy that an object is measured to have depends on the observer. The relativistic mass of an object is given by the relativistic energy divided by c2. Because the relativistic mass is exactly proportional to the relativistic energy, relativistic mass and relativistic energy are nearly synonymous; the only difference between them is the units. The rest mass or invariant mass of an object is defined as the mass an object has in its rest frame, when it is not moving with respect to the observer. The rest mass is the same for all inertial frames, as it is independent of the motion of the observer, it is the smallest possible value of the relativistic mass of the object. Because of the attraction between components of a system, which results in potential energy, the rest mass is almost never additive; in general, the mass of an object is not the sum of the masses of its parts. The rest mass of an object is the total energy of all the parts, including kinetic energy, as observed from the center of momentum frame, and potential energy. The masses add up only if the constituents are at rest (as observed from the center of momentum frame) and do not attract or repel, so that they do not have any extra kinetic or potential energy. Massless particles are particles with no rest mass, and therefore have no intrinsic energy; their energy is due only to their momentum.
=== Relativistic mass ===
Relativistic mass depends on the motion of the object, so that different observers in relative motion see different values for it. The relativistic mass of a moving object is larger than the relativistic mass of an object at rest, because a moving object has kinetic energy. If the object moves slowly, the relativistic mass is nearly equal to the rest mass and both are nearly equal to the classical inertial mass (as it appears in Newton's laws of motion). If the object moves quickly, the relativistic mass is greater than the rest mass by an amount equal to the mass associated with the kinetic energy of the object. Massless particles also have relativistic mass derived from their kinetic energy, equal to their relativistic energy divided by c2, or mrel = E/c2. The speed of light is one in a system where length and time are measured in natural units and the relativistic mass and energy would be equal in value and dimension. As it is just another name for the energy, the use of the term relativistic mass is redundant and physicists generally reserve mass to refer to rest mass, or invariant mass, as opposed to relativistic mass. A consequence of this terminology is that the mass is not conserved in special relativity, whereas the conservation of momentum and conservation of energy are both fundamental laws.
=== Conservation of mass and energy ===
Conservation of energy is a universal principle in physics and holds for any interaction, along with the conservation of momentum. The classical conservation of mass, in contrast, is violated in certain relativistic settings. This concept has been experimentally verified in a number of ways, including the conversion of mass into kinetic energy in nuclear reactions and other interactions between elementary particles. While modern physics has discarded the expression 'conservation of mass', in older terminology a relativistic mass can also be defined to be equivalent to the energy of a moving system, allowing for a conservation of relativistic mass. Mass conservation breaks down when the energy associated with the mass of a particle is converted into other forms of energy, such as kinetic energy, thermal energy, or radiant energy.
=== Massless particles ===
Massless particles have zero rest mass. The Planck–Einstein relation for the energy of photons is given by the equation E = hf, where h is the Planck constant and f is the photon frequency. This frequency, and thus the relativistic energy, is frame-dependent. If an observer runs away from a photon in the direction the photon travels from its source, and the photon catches up with the observer, the observer sees it as having less energy than it had at the source. The faster the observer is traveling with respect to the source when the photon catches up, the less energy the photon is seen to have. As an observer approaches the speed of light with respect to the source, the redshift of the photon increases, according to the relativistic Doppler effect. The energy of the photon is reduced, and as the wavelength becomes arbitrarily large, the photon's energy approaches zero, because of the massless nature of photons, which does not permit any intrinsic energy.
=== Composite systems ===
For closed systems made up of many parts, like an atomic nucleus, planet, or star, the relativistic energy is given by the sum of the relativistic energies of each of the parts, because energies are additive in these systems. If a system is bound by attractive forces, and the energy gained in excess of the work done is removed from the system, then mass is lost with this removed energy. The mass of an atomic nucleus is less than the total mass of the protons and neutrons that make it up. This mass decrease is also equivalent to the energy required to break up the nucleus into individual protons and neutrons. This effect can be understood by looking at the potential energy of the individual components. The individual particles have a force attracting them together, and forcing them apart increases the potential energy of the particles in the same way that lifting an object up on Earth does. This energy is equal to the work required to split the particles apart. The mass of the Solar System is slightly less than the sum of its individual masses.
For an isolated system of particles moving in different directions, the invariant mass of the system is the analog of the rest mass, and is the same for all observers, even those in relative motion. It is defined as the total energy (divided by c2) in the center of momentum frame. The center of momentum frame is defined so that the system has zero total momentum; the term center of mass frame is also sometimes used, where the center of mass frame is a special case of the center of momentum frame where the center of mass is put at the origin. A simple example of an object with moving parts but zero total momentum is a container of gas. In this case, the mass of the container is given by its total energy (including the kinetic energy of the gas molecules), since the system's total energy and invariant mass are the same in any reference frame where the momentum is zero, and such a reference frame is also the only frame in which the object can be weighed. In a similar way, the theory of special relativity posits that the thermal energy in all objects, including solids, contributes to their total masses, even though this energy is present as the kinetic and potential energies of the atoms in the object, and it (in a similar way to the gas) is not seen in the rest masses of the atoms that make up the object. Similarly, even photons, if trapped in an isolated container, would contribute their energy to the mass of the container. Such extra mass, in theory, could be weighed in the same way as any other type of rest mass, even though individually photons have no rest mass. The property that trapped energy in any form adds weighable mass to systems that have no net momentum is one of the consequences of relativity. It has no counterpart in classical Newtonian physics, where energy never exhibits weighable mass.
=== Relation to gravity ===
Physics has two concepts of mass, the gravitational mass and the inertial mass. The gravitational mass is the quantity that determines the strength of the gravitational field generated by an object, as well as the gravitational force acting on the object when it is immersed in a gravitational field produced by other bodies. The inertial mass, on the other hand, quantifies how much an object accelerates if a given force is applied to it. The mass–energy equivalence in special relativity refers to the inertial mass. However, already in the context of Newtonian gravity, the weak equivalence principle is postulated: the gravitational and the inertial mass of every object are the same. Thus, the mass–energy equivalence, combined with the weak equivalence principle, results in the prediction that all forms of energy contribute to the gravitational field generated by an object. This observation is one of the pillars of the general theory of relativity.
The prediction that all forms of energy interact gravitationally has been subject to experimental tests. One of the first observations testing this prediction, called the Eddington experiment, was made during the solar eclipse of May 29, 1919. During the eclipse, the English astronomer and physicist Arthur Eddington observed that the light from stars passing close to the Sun was bent. The effect is due to the gravitational attraction of light by the Sun. The observation confirmed that the energy carried by light indeed is equivalent to a gravitational mass. Another seminal experiment, the Pound–Rebka experiment, was performed in 1960. In this test a beam of light was emitted from the top of a tower and detected at the bottom. The frequency of the light detected was higher than that of the light emitted. This result confirms that the energy of photons increases when they fall in the gravitational field of the Earth. The energy, and therefore the gravitational mass, of photons is proportional to their frequency, as stated by Planck's relation.
== Efficiency ==
In some reactions, matter particles can be destroyed and their associated energy released to the environment as other forms of energy, such as light and heat. One example of such a conversion takes place in elementary particle interactions, where the rest energy is transformed into kinetic energy. Such conversions between types of energy happen in nuclear weapons, in which the protons and neutrons in atomic nuclei lose a small fraction of their original mass, though the mass lost is not due to the destruction of any smaller constituents. Nuclear fission allows a tiny fraction of the energy associated with the mass to be converted into usable energy such as radiation; in the decay of uranium, for instance, about 0.1% of the mass of the original atom is lost. In theory, it should be possible to destroy matter and convert all of the rest energy associated with matter into heat and light, but none of the theoretically known methods are practical. One way to harness all the energy associated with mass is to annihilate matter with antimatter. Antimatter is rare in the universe, however, and the known mechanisms of production require more usable energy than would be released in annihilation. CERN estimated in 2011 that over a billion times more energy is required to make and store antimatter than could be released in its annihilation.
As most of the mass which comprises ordinary objects resides in protons and neutrons, converting all the energy of ordinary matter into more useful forms requires that the protons and neutrons be converted to lighter particles, or particles with no mass at all. In the Standard Model of particle physics, the number of protons plus neutrons is nearly exactly conserved. Despite this, Gerard 't Hooft showed that there is a process that converts protons and neutrons to antielectrons and neutrinos. This is the weak SU(2) instanton proposed by the physicists Alexander Belavin, Alexander Markovich Polyakov, Albert Schwarz, and Yu. S. Tyupkin. This process can in principle destroy matter and convert all the energy of matter into neutrinos and usable energy, but it is normally extraordinarily slow. It was later shown that the process occurs rapidly at extremely high temperatures that would only have been reached shortly after the Big Bang.
Many extensions of the standard model contain magnetic monopoles, and in some models of grand unification, these monopoles catalyze proton decay, a process known as the Callan–Rubakov effect. This process would be an efficient mass–energy conversion at ordinary temperatures, but it requires making monopoles and anti-monopoles, whose production is expected to be inefficient. Another method of completely annihilating matter uses the gravitational field of black holes. The British theoretical physicist Stephen Hawking theorized it is possible to throw matter into a black hole and use the emitted heat to generate power. According to the theory of Hawking radiation, however, larger black holes radiate less than smaller ones, so that usable power can only be produced by small black holes.
== Extension for systems in motion ==
Unlike a system's energy in an inertial frame, the relativistic energy Erel of a system depends on both the rest mass m0 and the total momentum of the system. The extension of Einstein's equation to these systems is given by:
{\displaystyle E_{\rm {rel}}^{2}-|\mathbf {p} |^{2}c^{2}=m_{0}^{2}c^{4}}
or
{\displaystyle E_{\rm {rel}}^{2}-(pc)^{2}=(m_{0}c^{2})^{2}}
or
{\displaystyle E_{\rm {rel}}={\sqrt {(m_{0}c^{2})^{2}+(pc)^{2}}}}
where the (pc)2 term represents the square of the Euclidean norm (total vector length) of the various momentum vectors in the system, which reduces to the square of the simple momentum magnitude if only a single particle is considered. This equation is called the energy–momentum relation and reduces to
{\displaystyle E_{\rm {rel}}=mc^{2}}
when the momentum term is zero. For photons, where m0 = 0, the equation reduces to Erel = pc.
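The energy–momentum relation above, including its two limiting cases, can be checked with a short numeric sketch (not from the source; the mass and momentum values are arbitrary illustration choices):

```python
# Minimal numeric sketch of the energy–momentum relation
# E_rel^2 - (pc)^2 = (m0 c^2)^2, rearranged as E_rel = sqrt((m0 c^2)^2 + (pc)^2).
import math

C = 299_792_458.0  # speed of light, m/s

def relativistic_energy(m0, p):
    """Total energy (J) from rest mass m0 (kg) and momentum magnitude p (kg·m/s)."""
    return math.sqrt((m0 * C ** 2) ** 2 + (p * C) ** 2)

# Zero momentum recovers the rest energy E = m0 c^2:
m0 = 1.0e-3  # kg
assert math.isclose(relativistic_energy(m0, 0.0), m0 * C ** 2)

# Zero rest mass (a photon) recovers E = pc:
p = 1.0e-27  # kg·m/s
assert math.isclose(relativistic_energy(0.0, p), p * C)
```

The two assertions correspond exactly to the two reductions stated in the text: a body at rest, and a massless photon.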
== Low-speed approximation ==
Using the Lorentz factor, γ, the energy–momentum relation can be rewritten as E = γm0c2 and expanded as a power series:
{\displaystyle E=m_{0}c^{2}\left[1+{\frac {1}{2}}\left({\frac {v}{c}}\right)^{2}+{\frac {3}{8}}\left({\frac {v}{c}}\right)^{4}+{\frac {5}{16}}\left({\frac {v}{c}}\right)^{6}+\ldots \right].}
For speeds much smaller than the speed of light, higher-order terms in this expression get smaller and smaller because v/c is small. For low speeds, all but the first two terms can be ignored:
{\displaystyle E\approx m_{0}c^{2}+{\frac {1}{2}}m_{0}v^{2}.}
In classical mechanics, both the m0c2 term and the high-speed corrections are ignored. The initial value of the energy is arbitrary, as only the change in energy can be measured and so the m0c2 term is ignored in classical physics. While the higher-order terms become important at higher speeds, the Newtonian equation is a highly accurate low-speed approximation; adding in the third term yields:
{\displaystyle E\approx m_{0}c^{2}+{\frac {1}{2}}m_{0}v^{2}\left(1+{\frac {3v^{2}}{4c^{2}}}\right).}
The difference between the two approximations is given by 3v2/4c2, a number very small for everyday objects. In 2018 NASA announced the Parker Solar Probe was the fastest human-made object ever, with a speed of 153,454 miles per hour (68,600 m/s). The difference between the approximations for the Parker Solar Probe in 2018 is 3v2/4c2 ≈ 3.9 × 10−8, which accounts for an energy correction of four parts per hundred million. The gravitational constant, in contrast, has a standard relative uncertainty of about 2.2 × 10−5.
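The correction factor quoted above can be reproduced in one line; this sketch uses the Parker Solar Probe speed given in the text:

```python
# Low-speed correction factor 3v^2/(4c^2) from the expansion
# E ≈ m0 c^2 + (1/2) m0 v^2 (1 + 3v^2/4c^2), evaluated for the
# Parker Solar Probe speed quoted above.
C = 299_792_458.0  # speed of light, m/s
v = 68_600.0       # m/s, Parker Solar Probe (2018)

correction = 3 * v ** 2 / (4 * C ** 2)
print(correction)  # ≈ 3.9e-8, i.e. four parts per hundred million
```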
== Applications ==
=== Application to nuclear physics ===
The nuclear binding energy is the minimum energy that is required to disassemble the nucleus of an atom into its component parts. The mass of an atom is less than the sum of the masses of its constituents due to the attraction of the strong nuclear force. The difference between the two masses is called the mass defect and is related to the binding energy through Einstein's formula. The principle is used in modeling nuclear fission reactions, and it implies that a great amount of energy can be released by the nuclear fission chain reactions used in both nuclear weapons and nuclear power.
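The mass-defect arithmetic can be illustrated with a short calculation. The helium-4 figures below are standard nuclear-data values supplied for illustration; they do not appear in this article.

```python
# Binding energy of helium-4 via E = Δm c^2, worked in atomic mass units.
# Values are standard nuclear data (assumed, not from this article).
M_PROTON = 1.007276   # u
M_NEUTRON = 1.008665  # u
M_HE4 = 4.001506      # u, mass of the helium-4 nucleus
U_TO_MEV = 931.494    # energy equivalent of 1 u, in MeV

# The nucleus weighs less than its parts; the difference is the mass defect.
mass_defect = 2 * M_PROTON + 2 * M_NEUTRON - M_HE4   # ≈ 0.0304 u
binding_energy = mass_defect * U_TO_MEV              # ≈ 28.3 MeV
print(f"mass defect: {mass_defect:.4f} u, binding energy: {binding_energy:.1f} MeV")
```

The binding energy obtained this way is exactly the energy that would have to be supplied to disassemble the nucleus into its component nucleons.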
A water molecule weighs a little less than two free hydrogen atoms and an oxygen atom. The minuscule mass difference is the energy needed to split the molecule into three individual atoms (divided by c2), which was given off as heat when the molecule formed (this heat had mass). Similarly, a stick of dynamite in theory weighs a little bit more than the fragments after the explosion; in this case the mass difference is the energy and heat that is released when the dynamite explodes. Such a change in mass may only happen when the system is open, and the energy and mass are allowed to escape. Thus, if a stick of dynamite is detonated in a hermetically sealed chamber, the mass of the chamber and fragments, the heat, sound, and light would still be equal to the original mass of the chamber and dynamite. If sitting on a scale, the weight and mass would not change. This would in theory also happen even with a nuclear bomb, if it could be kept in an ideal box of infinite strength, which did not rupture or pass radiation. Thus, a 21.5 kiloton (9×1013 joule) nuclear bomb produces about one gram of heat and electromagnetic radiation, but the mass of this energy would not be detectable in an exploded bomb in an ideal box sitting on a scale; instead, the contents of the box would be heated to millions of degrees without changing total mass and weight. If a transparent window passing only electromagnetic radiation were opened in such an ideal box after the explosion, and a beam of X-rays and other lower-energy light allowed to escape the box, it would eventually be found to weigh one gram less than it had before the explosion. This weight loss and mass loss would happen as the box was cooled by this process, to room temperature. However, any surrounding mass that absorbed the X-rays (and other "heat") would gain this gram of mass from the resulting heating, thus, in this case, the mass "loss" would represent merely its relocation.
=== Practical examples ===
Einstein used the centimetre–gram–second system of units (cgs), but the formula is independent of the system of units. In natural units, the numerical value of the speed of light is set to equal 1, and the formula expresses an equality of numerical values: E = m. In the SI system (expressing the ratio E/m in joules per kilogram using the value of c in metres per second):
E/m = c2 = (299792458 m/s)2 = 89875517873681764 J/kg (≈ 9.0 × 1016 joules per kilogram).
So the energy equivalent of one kilogram of mass is
89.9 petajoules
25.0 billion kilowatt-hours (or 25,000 GW·h)
21.5 trillion kilocalories (or 21.5 Pcal)
85.2 trillion BTUs (or 0.0852 quads)
or the energy released by combustion of any of the following:
21 500 kilotons of TNT-equivalent energy (or 21.5 Mt)
2630000000 litres or 695000000 US gallons of automotive gasoline
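The conversions in the list above follow directly from E = mc2; the unit factors below (3.6 MJ per kilowatt-hour, 4184 J per kilocalorie) are standard values assumed for illustration:

```python
# Energy equivalent of one kilogram, E = m c^2, and the unit
# conversions listed above (conversion factors are standard values).
C = 299_792_458.0  # speed of light, m/s

E = 1.0 * C ** 2            # joules for 1 kg of mass
petajoules = E / 1e15       # ≈ 89.9 PJ
kilowatt_hours = E / 3.6e6  # ≈ 2.50e10 kWh, i.e. 25.0 billion
kilocalories = E / 4184.0   # ≈ 2.15e13 kcal, i.e. 21.5 trillion
print(E, petajoules, kilowatt_hours, kilocalories)
```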
Any time energy is released, the process can be evaluated from an E = mc2 perspective. For instance, the "gadget"-style bomb used in the Trinity test and the bombing of Nagasaki had an explosive yield equivalent to 21 kt of TNT. About 1 kg of the approximately 6.15 kg of plutonium in each of these bombs fissioned into lighter elements totaling almost exactly one gram less, after cooling. The electromagnetic radiation and kinetic energy (thermal and blast energy) released in this explosion carried the missing gram of mass.
Whenever energy is added to a system, the system gains mass, as shown when the equation is rearranged as m = E/c2:
A spring's mass increases whenever it is put into compression or tension. Its mass increase arises from the increased potential energy stored within it, which is bound in the stretched chemical (electron) bonds linking the atoms within the spring.
Raising the temperature of an object (increasing its thermal energy) increases its mass. For example, consider the world's primary mass standard for the kilogram, made of platinum and iridium. If its temperature is allowed to change by 1 °C, its mass changes by 1.5 picograms (1 pg = 1×10−12 g).
A spinning ball has greater mass than when it is not spinning. The increase is exactly the mass equivalent of the rotational energy, which is itself the sum of the kinetic energies of all the moving parts of the ball. For example, the Earth itself is more massive due to its rotation than it would be without rotation. The rotational energy of the Earth is greater than 1024 joules, which is over 107 kg of mass equivalent.
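The kilogram-prototype figure above can be cross-checked with a rough calculation. The specific heat of platinum used below is an assumed textbook value, not a number from this article:

```python
# Rough check of the kilogram-prototype example: warming 1 kg of
# platinum–iridium by 1 °C adds heat Q = m c_p ΔT, hence mass Δm = Q / c^2.
# The specific heat is an assumed approximate value for platinum.
C = 299_792_458.0   # speed of light, m/s
C_P = 133.0         # J/(kg·K), approximate specific heat of platinum
m, dT = 1.0, 1.0    # kg, °C

Q = m * C_P * dT                # joules of thermal energy added
dm_grams = Q / C ** 2 * 1000.0  # ≈ 1.5e-12 g, i.e. about 1.5 picograms
print(dm_grams)
```

The result matches the ~1.5 pg figure quoted in the text to within the uncertainty of the assumed specific heat.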
== History ==
While Einstein was the first to have correctly deduced the mass–energy equivalence formula, he was not the first to have related energy with mass, though nearly all previous authors thought that the energy that contributes to mass comes only from electromagnetic fields. Once discovered, Einstein's formula was initially written in many different notations, and its interpretation and justification was further developed in several steps.
=== Developments prior to Einstein ===
Eighteenth century theories on the correlation of mass and energy included that devised by the English scientist Isaac Newton in 1717, who speculated that light particles and matter particles were interconvertible in "Query 30" of the Opticks, where he asks: "Are not the gross bodies and light convertible into one another, and may not bodies receive much of their activity from the particles of light which enter their composition?" Swedish scientist and theologian Emanuel Swedenborg, in his Principia of 1734 theorized that all matter is ultimately composed of dimensionless points of "pure and total motion". He described this motion as being without force, direction or speed, but having the potential for force, direction and speed everywhere within it.
During the nineteenth century there were several speculative attempts to show that mass and energy were proportional in various ether theories. In 1873 the Russian physicist and mathematician Nikolay Umov pointed out a relation between mass and energy for ether in the form of Е = kmc2, where 0.5 ≤ k ≤ 1. English engineer Samuel Tolver Preston in 1875 and the Italian industrialist and geologist Olinto De Pretto in 1903, following physicist Georges-Louis Le Sage, imagined that the universe was filled with an ether of tiny particles that always move at speed c. Each of these particles has a kinetic energy of mc2 up to a small numerical factor, giving a mass–energy relation.
In 1905, independently of Einstein, French polymath Gustave Le Bon speculated that atoms could release large amounts of latent energy, reasoning from an all-encompassing qualitative philosophy of physics.
==== Electromagnetic mass ====
There were many attempts in the 19th and the beginning of the 20th century—like those of British physicists J. J. Thomson in 1881 and Oliver Heaviside in 1889, and George Frederick Charles Searle in 1897, German physicists Wilhelm Wien in 1900 and Max Abraham in 1902, and the Dutch physicist Hendrik Antoon Lorentz in 1904—to understand how the mass of a charged object depends on the electrostatic field. This concept was called electromagnetic mass, and was considered as being dependent on velocity and direction as well. Lorentz in 1904 gave the following expressions for longitudinal and transverse electromagnetic mass:
{\displaystyle m_{L}={\frac {m_{0}}{\left({\sqrt {1-{\frac {v^{2}}{c^{2}}}}}\right)^{3}}},\quad m_{T}={\frac {m_{0}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}},
where
{\displaystyle m_{0}={\frac {4}{3}}{\frac {E_{em}}{c^{2}}}}.
Another way of deriving a type of electromagnetic mass was based on the concept of radiation pressure. In 1900, French polymath Henri Poincaré associated electromagnetic radiation energy with a "fictitious fluid" having momentum and mass
{\displaystyle m_{em}={\frac {E_{em}}{c^{2}}}\,.}
By that, Poincaré tried to save the center of mass theorem in Lorentz's theory, though his treatment led to radiation paradoxes.
Austrian physicist Friedrich Hasenöhrl showed in 1904 that electromagnetic cavity radiation contributes the "apparent mass"
{\displaystyle m_{0}={\frac {4}{3}}{\frac {E_{em}}{c^{2}}}}
to the cavity's mass. He argued that this implies mass dependence on temperature as well.
=== Einstein: mass–energy equivalence ===
Einstein did not write the exact formula E = mc2 in his 1905 Annus Mirabilis paper "Does the Inertia of a Body Depend Upon Its Energy Content?"; rather, the paper states that if a body gives off the energy L by emitting light, its mass diminishes by L/c2. This formulation relates only a change Δm in mass to a change L in energy without requiring the absolute relationship. The relationship convinced him that mass and energy can be seen as two names for the same underlying, conserved physical quantity. He stated that the laws of conservation of energy and conservation of mass are "one and the same". Einstein elaborated in a 1946 essay that "the principle of the conservation of mass… proved inadequate in the face of the special theory of relativity. It was therefore merged with the energy conservation principle—just as, about 60 years before, the principle of the conservation of mechanical energy had been combined with the principle of the conservation of heat [thermal energy]. We might say that the principle of the conservation of energy, having previously swallowed up that of the conservation of heat, now proceeded to swallow that of the conservation of mass—and holds the field alone."
==== Mass–velocity relationship ====
In developing special relativity, Einstein found that the kinetic energy of a moving body is
{\displaystyle E_{k}=m_{0}c^{2}(\gamma -1)=m_{0}c^{2}\left({\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}-1\right),}
with v the velocity, m0 the rest mass, and γ the Lorentz factor.
He included the second term on the right to make sure that for small velocities the energy would be the same as in classical mechanics, thus satisfying the correspondence principle:
{\displaystyle E_{k}={\frac {1}{2}}m_{0}v^{2}+\cdots }
Without this second term, there would be an additional contribution in the energy when the particle is not moving.
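The correspondence-principle claim above can be checked numerically; this is a sketch (not from the source) using an arbitrary test mass and a speed small compared with c:

```python
# Checking that E_k = m0 c^2 (γ − 1) approaches (1/2) m0 v^2 at low speed.
import math

C = 299_792_458.0  # speed of light, m/s

def kinetic_relativistic(m0, v):
    """Relativistic kinetic energy m0 c^2 (γ − 1)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return m0 * C ** 2 * (gamma - 1.0)

def kinetic_classical(m0, v):
    """Newtonian kinetic energy (1/2) m0 v^2."""
    return 0.5 * m0 * v ** 2

m0, v = 1.0, 1000.0  # kg, m/s — slow compared with c
print(kinetic_relativistic(m0, v))  # ≈ 5.0e5 J
print(kinetic_classical(m0, v))     # = 5.0e5 J
```

At v = 1000 m/s the two values agree to better than one part in a billion, consistent with the leading correction 3v2/4c2.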
==== Einstein's view on mass ====
Einstein, following Lorentz and Abraham, used velocity- and direction-dependent mass concepts in his 1905 electrodynamics paper and in another paper in 1906. In Einstein's first 1905 paper on E = mc2, he treated m as what would now be called the rest mass, and it has been noted that in his later years he did not like the idea of "relativistic mass".
In modern physics terminology, relativistic energy is used in lieu of relativistic mass and the term "mass" is reserved for the rest mass. Historically, there has been considerable debate over the use of the concept of "relativistic mass" and the connection of "mass" in relativity to "mass" in Newtonian dynamics. One view is that only rest mass is a viable concept and is a property of the particle; while relativistic mass is a conglomeration of particle properties and properties of spacetime. Another view, attributed to Norwegian physicist Kjell Vøyenli, is that the Newtonian concept of mass as a particle property and the relativistic concept of mass have to be viewed as embedded in their own theories and as having no precise connection.
==== Einstein's 1905 derivation ====
Already in his relativity paper "On the electrodynamics of moving bodies", Einstein derived the correct expression for the kinetic energy of particles:
{\displaystyle E_{k}=mc^{2}\left({\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}-1\right)}.
Now the question remained open as to which formulation applies to bodies at rest. This was tackled by Einstein in his paper "Does the inertia of a body depend upon its energy content?", one of his Annus Mirabilis papers. Here, Einstein used V to represent the speed of light in vacuum and L to represent the energy lost by a body in the form of radiation. Consequently, the equation E = mc2 was not originally written as a formula but as a sentence in German saying that "if a body gives off the energy L in the form of radiation, its mass diminishes by L/V2." A remark placed above it informed that the equation was approximated by neglecting "magnitudes of fourth and higher orders" of a series expansion. Einstein used a body emitting two light pulses in opposite directions, having energies of E0 before and E1 after the emission as seen in its rest frame. As seen from a moving frame, E0 becomes H0 and E1 becomes H1. Einstein obtained, in modern notation:
{\displaystyle \left(H_{0}-E_{0}\right)-\left(H_{1}-E_{1}\right)=E\left({\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}-1\right)}.
He then argued that H − E can only differ from the kinetic energy K by an additive constant, which gives
{\displaystyle K_{0}-K_{1}=E\left({\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}-1\right)}.
Neglecting effects higher than third order in v/c after a Taylor series expansion of the right side of this yields:
{\displaystyle K_{0}-K_{1}={\frac {E}{c^{2}}}{\frac {v^{2}}{2}}.}
Einstein concluded that the emission reduces the body's mass by E/c2, and that the mass of a body is a measure of its energy content.
The correctness of Einstein's 1905 derivation of E = mc2 was criticized by German theoretical physicist Max Planck in 1907, who argued that it is only valid to first approximation. Another criticism was formulated by American physicist Herbert Ives in 1952 and the Israeli physicist Max Jammer in 1961, asserting that Einstein's derivation is based on begging the question. Other scholars, such as American and Chilean philosophers John Stachel and Roberto Torretti, have argued that Ives' criticism was wrong, and that Einstein's derivation was correct. American physics writer Hans Ohanian, in 2008, agreed with Stachel/Torretti's criticism of Ives, though he argued that Einstein's derivation was wrong for other reasons.
==== Relativistic center-of-mass theorem of 1906 ====
Like Poincaré, Einstein concluded in 1906 that the inertia of electromagnetic energy is a necessary condition for the center-of-mass theorem to hold. On this occasion, Einstein referred to Poincaré's 1900 paper and wrote: "Although the merely formal considerations, which we will need for the proof, are already mostly contained in a work by H. Poincaré2, for the sake of clarity I will not rely on that work." In Einstein's more physical, as opposed to formal or mathematical, point of view, there was no need for fictitious masses. He could avoid the perpetual motion problem because, on the basis of the mass–energy equivalence, he could show that the transport of inertia that accompanies the emission and absorption of radiation solves the problem. Poincaré's rejection of the principle of action–reaction can be avoided through Einstein's E = mc2, because mass conservation appears as a special case of the energy conservation law.
==== Further developments ====
There were several further developments in the first decade of the twentieth century. In May 1907, Einstein explained that the expression for energy ε of a moving mass point assumes the simplest form when its expression for the state of rest is chosen to be ε0 = μV2 (where μ is the mass), which is in agreement with the "principle of the equivalence of mass and energy". In addition, Einstein used the formula μ = E0/V2, with E0 being the energy of a system of mass points, to describe the energy and mass increase of that system when the velocity of the differently moving mass points is increased. Max Planck rewrote Einstein's mass–energy relationship as M = (E0 + pV0)/c2 in June 1907, where p is the pressure and V0 the volume, to express the relation between mass, its latent energy, and thermodynamic energy within the body. Subsequently, in October 1907, this was rewritten as M0 = E0/c2 and given a quantum interpretation by German physicist Johannes Stark, who assumed its validity and correctness. In December 1907, Einstein expressed the equivalence in the form M = μ + E0/c2 and concluded: "A mass μ is equivalent, as regards inertia, to a quantity of energy μc2. […] It appears far more natural to consider every inertial mass as a store of energy." American physical chemists Gilbert N. Lewis and Richard C. Tolman used two variations of the formula in 1909: m = E/c2 and m0 = E0/c2, with E being the relativistic energy (the energy of an object when the object is moving), E0 the rest energy (the energy when not moving), m the relativistic mass (the rest mass and the extra mass gained when moving), and m0 the rest mass. The same relations in different notation were used by Lorentz in 1913 and 1914, though he placed the energy on the left-hand side: ε = Mc2 and ε0 = mc2, with ε being the total energy (rest energy plus kinetic energy) of a moving material point, ε0 its rest energy, M the relativistic mass, and m the invariant mass.
In 1911, German physicist Max von Laue gave a more comprehensive proof of M0 = E0/c2 from the stress–energy tensor, which was later generalized by German mathematician Felix Klein in 1918.
Einstein returned to the topic once again after World War II and this time he wrote E = mc2 in the title of his article intended as an explanation for a general reader by analogy.
==== Alternative version ====
An alternative version of Einstein's thought experiment was proposed by American theoretical physicist Fritz Rohrlich in 1990, who based his reasoning on the Doppler effect. Like Einstein, he considered a body at rest with mass M. If the body is examined in a frame moving with nonrelativistic velocity v, it is no longer at rest and in the moving frame it has momentum P = Mv. Then he supposed the body emits two pulses of light to the left and to the right, each carrying an equal amount of energy E/2. In its rest frame, the object remains at rest after the emission since the two beams are equal in strength and carry opposite momentum. However, if the same process is considered in a frame that moves with velocity v to the left, the pulse moving to the left is redshifted, while the pulse moving to the right is blue shifted. The blue light carries more momentum than the red light, so that the momentum of the light in the moving frame is not balanced: the light is carrying some net momentum to the right. The object has not changed its velocity before or after the emission. Yet in this frame it has lost some right-momentum to the light. The only way it could have lost momentum is by losing mass. This also solves Poincaré's radiation paradox. The velocity is small, so the right-moving light is blueshifted by an amount equal to the nonrelativistic Doppler shift factor 1 − v/c. The momentum of the light is its energy divided by c, and it is increased by a factor of v/c. So the right-moving light is carrying an extra momentum ΔP given by:
{\displaystyle \Delta P={v \over c}{E \over 2c}.}
The left-moving light carries a little less momentum, by the same amount ΔP. So the total right-momentum in both light pulses is twice ΔP. This is the right-momentum that the object lost.
{\displaystyle 2\Delta P=v{E \over c^{2}}.}
The momentum of the object in the moving frame after the emission is reduced to this amount:
{\displaystyle P'=Mv-2\Delta P=\left(M-{E \over c^{2}}\right)v.}
So the change in the object's mass is equal to the total energy lost divided by c2. Since any emission of energy can be carried out by a two-step process, where first the energy is emitted as light and then the light is converted to some other form of energy, any emission of energy is accompanied by a loss of mass. Similarly, by considering absorption, a gain in energy is accompanied by a gain in mass.
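The momentum bookkeeping of Rohrlich's argument can be checked numerically; a minimal sketch, with illustrative values for M, E and v that are not from the source:

```python
# Numerical check of the Doppler-based momentum bookkeeping (illustrative values).
c = 3.0e8        # speed of light, m/s
M = 1.0          # rest mass of the emitting body, kg
E = 9.0e10       # total energy carried away by the two light pulses, J
v = 10.0         # nonrelativistic velocity of the moving frame, m/s

# Extra momentum carried by the blueshifted (right-moving) pulse:
dP = (v / c) * (E / (2 * c))
# Net right-momentum lost to the light is twice that:
net = 2 * dP                       # equals v * E / c**2
# Momentum of the object in the moving frame after the emission:
P_after = M * v - net              # equals (M - E/c**2) * v
# The implied mass loss matches E/c**2:
mass_loss = net / v
```

With these numbers the mass loss comes out as E/c² = 10⁻⁶ kg, in line with the conclusion above that any emission of energy is accompanied by a loss of mass.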
=== Radioactivity and nuclear energy ===
It was quickly noted after the discovery of radioactivity in 1896 that the total energy due to radioactive processes is about one million times greater than that involved in any known molecular change, raising the question of where the energy comes from. After eliminating the idea of absorption and emission of some sort of Lesagian ether particles, the existence of a huge amount of latent energy, stored within matter, was proposed by New Zealand physicist Ernest Rutherford and British radiochemist Frederick Soddy in 1903. Rutherford also suggested that this internal energy is stored within normal matter as well. He went on to speculate in 1904: "If it were ever found possible to control at will the rate of disintegration of the radio-elements, an enormous amount of energy could be obtained from a small quantity of matter."
Einstein's equation does not explain the large energies released in radioactive decay, but can be used to quantify them. The theoretical explanation for radioactive decay is given by the nuclear forces responsible for holding atoms together, though these forces were still unknown in 1905. The enormous energy released from radioactive decay had previously been measured by Rutherford and was much more easily measured than the small accompanying change in the gross mass of materials. In principle, Einstein's equation can give these energies by measuring mass differences before and after reactions, but in practice these mass differences were, in 1905, still too small to be measured in bulk. Since radioactive decay energies were easily measured with a calorimeter, it was thought that the corresponding changes in mass might be detectable, providing a check on Einstein's equation itself. Einstein mentions in his 1905 paper that mass–energy equivalence might perhaps be tested with radioactive decay, which was known by then to release enough energy to possibly be "weighed" when missing from the system. However, radioactivity seemed to proceed at its own unalterable pace, and even when simple nuclear reactions became possible using proton bombardment, the idea that these great amounts of usable energy could be liberated at will with any practicality proved difficult to substantiate. Rutherford was reported in 1933 to have declared that this energy could not be exploited efficiently: "Anyone who expects a source of power from the transformation of the atom is talking moonshine."
This outlook changed dramatically in 1932 with the discovery of the neutron and its mass, allowing mass differences for single nuclides and their reactions to be calculated directly, and compared with the sum of masses for the particles that made up their composition. In 1933, the energy released from the reaction of lithium-7 plus protons giving rise to two alpha particles, allowed Einstein's equation to be tested to an error of ±0.5%. However, scientists still did not see such reactions as a practical source of power, due to the energy cost of accelerating reaction particles. After the very public demonstration of huge energies released from nuclear fission after the atomic bombings of Hiroshima and Nagasaki in 1945, the equation E = mc2 became directly linked in the public eye with the power and peril of nuclear weapons. The equation was featured on page 2 of the Smyth Report, the official 1945 release by the US government on the development of the atomic bomb, and by 1946 the equation was linked closely enough with Einstein's work that the cover of Time magazine prominently featured a picture of Einstein next to an image of a mushroom cloud emblazoned with the equation. Einstein himself had only a minor role in the Manhattan Project: he had cosigned a letter to the U.S. president in 1939 urging funding for research into atomic energy, warning that an atomic bomb was theoretically possible. The letter persuaded Roosevelt to devote a significant portion of the wartime budget to atomic research. Without a security clearance, Einstein's only scientific contribution was an analysis of an isotope separation method in theoretical terms. It was inconsequential, on account of Einstein not being given sufficient information to fully work on the problem.
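The 1933 lithium-7 test can be redone with modern atomic masses; a minimal sketch, using approximate standard-table values in unified atomic mass units:

```python
# Mass defect of p + Li-7 -> 2 alpha, with modern atomic masses (u, approximate).
m_H1  = 1.007825    # hydrogen-1
m_Li7 = 7.016003    # lithium-7
m_He4 = 4.002602    # helium-4
u_to_MeV = 931.494  # energy equivalent of one atomic mass unit, MeV

dm = (m_H1 + m_Li7) - 2 * m_He4   # mass lost in the reaction, u
Q = dm * u_to_MeV                 # energy released, about 17.35 MeV
```

The computed Q of roughly 17.3 MeV is the energy shared by the two alpha particles, which Cockcroft and Walton's measurements confirmed to within a fraction of a percent.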
While E = mc2 is useful for understanding the amount of energy potentially released in a fission reaction, it was not strictly necessary to develop the weapon, once the fission process was known, and its energy measured at 200 MeV (which was directly possible, using a quantitative Geiger counter, at that time). The physicist and Manhattan Project participant Robert Serber noted that somehow "the popular notion took hold long ago that Einstein's theory of relativity, in particular his equation E = mc2, plays some essential role in the theory of fission. Einstein had a part in alerting the United States government to the possibility of building an atomic bomb, but his theory of relativity is not required in discussing fission. The theory of fission is what physicists call a non-relativistic theory, meaning that relativistic effects are too small to affect the dynamics of the fission process significantly." There are other views on the equation's importance to nuclear reactions. In late 1938, the Austrian-Swedish and British physicists Lise Meitner and Otto Robert Frisch—while on a winter walk during which they solved the meaning of Hahn's experimental results and introduced the idea that would be called atomic fission—directly used Einstein's equation to help them understand the quantitative energetics of the reaction that overcame the "surface tension-like" forces that hold the nucleus together, and allowed the fission fragments to separate to a configuration from which their charges could force them into an energetic fission. To do this, they used packing fraction, or nuclear binding energy values for elements. These, together with use of E = mc2 allowed them to realize on the spot that the basic fission process was energetically possible.
=== Einstein's equation written ===
According to the Einstein Papers Project at the California Institute of Technology and Hebrew University of Jerusalem, there remain only four known copies of this equation as written by Einstein. One of these is a letter written in German to Ludwik Silberstein, which was kept in Silberstein's archives and, according to RR Auction of Boston, Massachusetts, sold at auction for $1.2 million on May 21, 2021.
== External links ==
Einstein on the Inertia of Energy – MathPages
Einstein on film explaining mass–energy equivalence
Mass and Energy – Conversations About Science with Theoretical Physicist Matt Strassler
The Equivalence of Mass and Energy – Entry in the Stanford Encyclopedia of Philosophy
Merrifield, Michael; Copeland, Ed; Bowley, Roger. "E=mc2 – Mass–Energy Equivalence". Sixty Symbols. Brady Haran for the University of Nottingham.
"An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism" is a fundamental publication by George Green in 1828, where he extends previous work of Siméon Denis Poisson on electricity and magnetism. The work in mathematical analysis, notably including what is now universally known as Green's theorem, is of the greatest importance in all branches of mathematical physics. It contains the first exposition of the theory of potential. In physics, Green's theorem is mostly used to solve two-dimensional flow integrals, stating that the sum of fluid outflows from the points inside a region is equal to the total outflow summed around the enclosing curve. In plane geometry, and in particular area surveying, Green's theorem can be used to determine the area and centroid of plane figures solely by integrating over the perimeter.
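For a polygon, the perimeter integration mentioned above reduces to the shoelace formula, a direct discretisation of Green's theorem; a minimal sketch:

```python
# Area and centroid of a simple polygon from its perimeter alone,
# via Green's theorem (the "shoelace" boundary integral).
def area_centroid(pts):
    # pts: vertices in counter-clockwise order; clockwise gives negative area.
    A = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0   # contribution of one boundary edge
        A  += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    A *= 0.5
    return A, (cx / (6 * A), cy / (6 * A))

# Unit square: area 1, centroid at (0.5, 0.5).
print(area_centroid([(0, 0), (1, 0), (1, 1), (0, 1)]))
```

Only boundary points enter the computation, which is exactly the surveying use of the theorem: no interior measurement is needed.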
It is in this essay that the term 'potential function' first occurs. Herein also his remarkable theorem in pure mathematics, since universally known as Green's theorem, and probably the most important instrument of investigation in the whole range of mathematical physics, made its appearance. We are all now able to understand, in a general way at least, the importance of Green's work, and the progress made since the publication of his essay in 1828. But to fully appreciate his work and subsequent progress one needs to know the outlook for the mathematico-physical sciences as it appeared to Green at this time and to realize his refined sensitiveness in promulgating his discoveries.
== Overview ==
Poisson's electrical and magnetical investigations were generalized and extended in 1828 by George Green. Green's treatment is based on the properties of the function already used by Lagrange, Laplace, and Poisson, which represents the sum of all the electric or magnetic charges in the field, divided by their respective distances from some given point: to this function Green gave the name potential, by which it has always since been known.
In 1828, Green published the paper which is the essay he is most famous for today. When Green published his Essay, it was sold on a subscription basis to 51 people, most of whom were friends and probably could not understand it. The wealthy landowner and mathematician Edward Bromhead bought a copy and encouraged Green to do further work in mathematics. Not believing the offer was sincere, Green did not contact Bromhead for two years.
Upon publishing the work, he first introduced the term 'potential' to denote the result obtained by adding the masses of all the particles of a system, each divided by its distance from a given point; the properties of this function are first considered and applied to the theories of magnetism and electricity. This was followed by two papers communicated by Sir Edward Bromhead to the Cambridge Philosophical Society: (1) 'On the Laws of the Equilibrium of Fluids analogous to the Electric Fluid' (12 Nov. 1832); (2) 'On the Determination of the Attractions of Ellipsoids of Variable Densities' (6 May 1833). Both papers display great analytical power, but are rather curious than practically interesting. Green's 1828 essay was neglected by mathematicians until 1846, and before that time most of its important theorems had been rediscovered by Gauss, Chasles, Sturm, and Thomson. It did influence the work of Lord Kelvin and James Clerk Maxwell.
The self-taught mathematician's essay was one of the greatest advances that were made in the mathematical theory of electricity up to his time. "His researches," as Sir William Thomson has observed, "have led to the elementary proposition which must constitute the legitimate foundation of every perfect mathematical structure that is to be made from the materials furnished in the experimental laws of Coulomb. Not only do they afford a natural and complete explanation of the beautiful quantitative experiments which have been so interesting at all times to practical electricians, but they suggest to the mathematician the simplest and most powerful methods of dealing with problems which, if attacked by the mere force of the old analysis, must have remained forever unsolved."
Near the beginning of the memoir is established the celebrated formula connecting surface and volume integrals, which is now generally called Green's Theorem, and of which Poisson's result on the equivalent surface- and volume-distributions of magnetization is a particular application. By using this theorem to investigate the properties of the potential, Green arrived at many results of remarkable beauty and interest. We need only mention, as an example of the power of his method, the following: Suppose that there is a hollow conducting shell, bounded by two closed surfaces, and that a number of electrified bodies are placed, some within and some without it; and let the inner surface and interior bodies be called the interior system, and the outer surface and exterior bodies be called the exterior system. Then all the electrical phenomena of the interior system, relative to attractions, repulsions, and densities, will be the same as if there were no exterior system, and the inner surface were a perfect conductor, put in communication with the earth; and all those of the exterior system will be the same as if the interior system did not exist, and the outer surface were a perfect conductor, containing a quantity of electricity equal to the whole of that originally contained in the shell itself and in all the interior bodies. It will be evident that electrostatics had by this time attained a state of development in which further progress could be hoped for only in the mathematical superstructure, unless experiment should unexpectedly bring to light phenomena of an entirely new character.
One of the simplest applications of these theorems was to perfect the theory of the Leyden phial, a result which (if we except the peculiar action of the insulating solid medium, since discovered by Faraday) we owe to his genius. He has also shown how an infinite number of forms of conductors may be invented, so that the distribution of electricity in equilibrium on each may be expressible in finite algebraic terms – an immense stride in the science, when we consider that the distribution of electricity on a single spherical conductor, an uninfluenced ellipsoidal conductor, and two spheres mutually influencing one another, were the only cases solved by Poisson, and indeed the only cases conceived to be solvable by mathematical writers.
== Editions ==
Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, Nottingham, 1828.
== See also ==
Mathematical analysis
Vector calculus
Partial differential equation
Science in the medieval Islamic world was the science developed and practised during the Islamic Golden Age under the Abbasid Caliphate of Baghdad, the Umayyads of Córdoba, the Abbadids of Seville, the Samanids, the Ziyarids and the Buyids in Persia and beyond, spanning the period roughly between 786 and 1258. Islamic scientific achievements encompassed a wide range of subject areas, especially astronomy, mathematics, and medicine. Other subjects of scientific inquiry included alchemy and chemistry, botany and agronomy, geography and cartography, ophthalmology, pharmacology, physics, and zoology.
Medieval Islamic science had practical purposes as well as the goal of understanding. For example, astronomy was useful for determining the Qibla, the direction in which to pray, botany had practical application in agriculture, as in the works of Ibn Bassal and Ibn al-'Awwam, and geography enabled Abu Zayd al-Balkhi to make accurate maps. Islamic mathematicians such as Al-Khwarizmi, Avicenna and Jamshīd al-Kāshī made advances in algebra, trigonometry, geometry and Arabic numerals. Islamic doctors described diseases like smallpox and measles, and challenged classical Greek medical theory. Al-Biruni, Avicenna and others described the preparation of hundreds of drugs made from medicinal plants and chemical compounds. Islamic physicists such as Ibn Al-Haytham, Al-Bīrūnī and others studied optics and mechanics as well as astronomy, and criticised Aristotle's view of motion.
During the Middle Ages, Islamic science flourished across a wide area around the Mediterranean Sea and further afield, for several centuries, in a wide range of institutions.
== Context and history ==
The Islamic era began in 622. Islamic armies eventually conquered Arabia, Egypt and Mesopotamia, and successfully displaced the Persian and Byzantine Empires from the region within a few decades. Within a century, Islam had reached the area of present-day Portugal in the west and Central Asia in the east. The Islamic Golden Age (roughly between 786 and 1258) spanned the period of the Abbasid Caliphate (750–1258), with stable political structures and flourishing trade. Major religious and cultural works of the Islamic empire were translated into Arabic and occasionally Persian. Islamic culture inherited Greek, Indic, Assyrian and Persian influences. A new common civilisation formed, based on Islam. An era of high culture and innovation ensued, with rapid growth in population and cities. The Arab Agricultural Revolution in the countryside brought more crops and improved agricultural technology, especially irrigation. This supported the larger population and enabled culture to flourish. From the 9th century onwards, scholars such as Al-Kindi translated Indian, Assyrian, Sasanian (Persian) and Greek knowledge, including the works of Aristotle, into Arabic. These translations supported advances by scientists across the Islamic world.
Islamic science survived the initial Christian reconquest of Spain, including the fall of Seville in 1248, as work continued in the eastern centres (such as in Persia). After the completion of the Spanish reconquest in 1492, the Islamic world went into an economic and cultural decline. The Abbasid caliphate was followed by the Ottoman Empire (c. 1299–1922), centred in Turkey, and the Safavid Empire (1501–1736), centred in Persia, where work in the arts and sciences continued.
== Fields of inquiry ==
Medieval Islamic scientific achievements encompassed a wide range of subject areas, especially mathematics, astronomy, and medicine. Other subjects of scientific inquiry included physics, alchemy and chemistry, ophthalmology, and geography and cartography.
=== Alchemy and chemistry ===
The early Islamic period saw the development of theoretical frameworks in alchemy and chemistry, laying the foundation for later advancements in both fields. The sulfur-mercury theory of metals, first found in Sirr al-khalīqa ("The Secret of Creation", c. 750–850, falsely attributed to Apollonius of Tyana), and in the writings attributed to Jabir ibn Hayyan (written c. 850–950), remained the basis of theories of metallic composition until the 18th century. The Emerald Tablet, a cryptic text that all later alchemists up to and including Isaac Newton saw as the foundation of their art, first occurs in the Sirr al-khalīqa and in one of the works attributed to Jabir. In practical chemistry, the works of Jabir, and those of the Persian alchemist and physician Abu Bakr al-Razi (c. 865–925), contain the earliest systematic classifications of chemical substances. Alchemists were also interested in artificially creating such substances. Jabir describes the synthesis of ammonium chloride (sal ammoniac) from organic substances, and Abu Bakr al-Razi experimented with the heating of ammonium chloride, vitriol, and other salts, which would eventually lead to the discovery of the mineral acids by 13th-century Latin alchemists such as pseudo-Geber.
=== Astronomy and cosmology ===
Astronomy became a major discipline within Islamic science. Astronomers devoted effort both towards understanding the nature of the cosmos and to practical purposes. One application involved determining the Qibla, the direction to face during prayer. Another was astrology, predicting events affecting human life and selecting suitable times for actions such as going to war or founding a city. Al-Battani (850–922) accurately determined the length of the solar year. He contributed to the Tables of Toledo, used by astronomers to predict the movements of the sun, moon and planets across the sky. Copernicus (1473–1543) later used some of Al-Battani's astronomic tables.
Al-Zarqali (1028–1087) developed a more accurate astrolabe, used for centuries afterwards. He constructed a water clock in Toledo, discovered that the Sun's apogee moves slowly relative to the fixed stars, and obtained a good estimate of its rate of change. Nasir al-Din al-Tusi (1201–1274) wrote an important revision to Ptolemy's 2nd-century celestial model. When Tusi became Hulagu's astrologer, he was given an observatory and gained access to Chinese techniques and observations. He developed trigonometry as a separate field, and compiled the most accurate astronomical tables available up to that time.
=== Botany and agronomy ===
The study of the natural world extended to a detailed examination of plants. The work done proved directly useful in the unprecedented growth of pharmacology across the Islamic world. Al-Dinawari (815–896) popularised botany in the Islamic world with his six-volume Kitab al-Nabat (Book of Plants). Only volumes 3 and 5 have survived, with part of volume 6 reconstructed from quoted passages. The surviving text describes 637 plants in alphabetical order from the letters sin to ya, so the whole book must have covered several thousand kinds of plants. Al-Dinawari described the phases of plant growth and the production of flowers and fruit. The thirteenth century encyclopedia compiled by Zakariya al-Qazwini (1203–1283) – ʿAjā'ib al-makhlūqāt (The Wonders of Creation) – contained, among many other topics, both realistic botany and fantastic accounts. For example, he described trees which grew birds on their twigs in place of leaves, but which could only be found in the far-distant British Isles. The use and cultivation of plants was documented in the 11th century by Muhammad bin Ibrāhīm Ibn Bassāl of Toledo in his book Dīwān al-filāha (The Court of Agriculture), and by Ibn al-'Awwam al-Ishbīlī (also called Abū l-Khayr al-Ishbīlī) of Seville in his 12th century book Kitāb al-Filāha (Treatise on Agriculture). Ibn Bassāl had travelled widely across the Islamic world, returning with a detailed knowledge of agronomy that fed into the Arab Agricultural Revolution. His practical and systematic book describes over 180 plants and how to propagate and care for them. It covered leaf- and root-vegetables, herbs, spices and trees.
=== Geography and cartography ===
The spread of Islam across Western Asia and North Africa encouraged an unprecedented growth in trade and travel by land and sea as far away as Southeast Asia, China, much of Africa, Scandinavia and even Iceland. Geographers worked to compile increasingly accurate maps of the known world, starting from many existing but fragmentary sources. Abu Zayd al-Balkhi (850–934), founder of the Balkhī school of cartography in Baghdad, wrote an atlas called Figures of the Regions (Suwar al-aqalim).
Al-Biruni (973–1048) measured the radius of the earth using a new method. It involved observing the height of a mountain at Nandana (now in Pakistan). Al-Idrisi (1100–1166) drew a map of the world for Roger, the Norman King of Sicily (ruled 1105–1154). He also wrote the Tabula Rogeriana (Book of Roger), a geographic study of the peoples, climates, resources and industries of the whole of the world known at that time. The Ottoman admiral Piri Reis (c. 1470–1553) made a map of the New World and West Africa in 1513. He made use of maps from Greece, Portugal, Muslim sources, and perhaps one made by Christopher Columbus. He represented a part of a major tradition of Ottoman cartography.
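Al-Biruni's mountain method needs only the height h of a hill and the dip angle d of the horizon seen from its summit, from which R = h·cos d/(1 − cos d); a minimal sketch with illustrative numbers, not his recorded values:

```python
import math

# Al-Biruni's dip-angle method for the Earth's radius:
# R = h * cos(d) / (1 - cos(d)), where d is the dip of the horizon
# observed from a hill of known height h (illustrative values).
h = 305.0                       # hill height in metres (assumed)
d = math.radians(34 / 60)       # dip of 34 arcminutes (assumed)

R = h * math.cos(d) / (1 - math.cos(d))
print(round(R / 1000), "km")    # roughly 6200-6300 km
```

Because 1 − cos d is tiny for small dips, the result is extremely sensitive to the measured angle, which makes the accuracy al-Biruni achieved all the more striking.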
=== Mathematics ===
Islamic mathematicians gathered, organised and clarified the mathematics they inherited from ancient Egypt, Greece, India, Mesopotamia and Persia, and went on to make innovations of their own. Islamic mathematics covered algebra, geometry and arithmetic. Algebra was mainly used for recreation: it had few practical applications at that time. Geometry was studied at different levels. Some texts contain practical geometrical rules for surveying and for measuring figures. Theoretical geometry was a necessary prerequisite for understanding astronomy and optics, and it required years of concentrated work. Early in the Abbasid caliphate (founded 750), soon after the foundation of Baghdad in 762, some mathematical knowledge was assimilated by al-Mansur's group of scientists from the pre-Islamic Persian tradition in astronomy. Astronomers from India were invited to the court of the caliph in the late eighth century; they explained the rudimentary trigonometrical techniques used in Indian astronomy. Ancient Greek works such as Ptolemy's Almagest and Euclid's Elements were translated into Arabic. By the second half of the ninth century, Islamic mathematicians were already making contributions to the most sophisticated parts of Greek geometry. Islamic mathematics reached its apogee in the Eastern part of the Islamic world between the tenth and twelfth centuries. Most medieval Islamic mathematicians wrote in Arabic, others in Persian.
Al-Khwarizmi (8th–9th centuries) was instrumental in the adoption of the Hindu–Arabic numeral system and the development of algebra, introduced methods of simplifying equations, and used Euclidean geometry in his proofs. He was the first to treat algebra as an independent discipline in its own right, and presented the first systematic solution of linear and quadratic equations.
Ibn Ishaq al-Kindi (801–873) worked on cryptography for the Abbasid Caliphate, and gave the first known recorded explanation of cryptanalysis and the first description of the method of frequency analysis.
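Frequency analysis in al-Kindi's spirit can be sketched for a simple Caesar shift on English text; the message and shift below are hypothetical, and the method assumes the most frequent ciphertext letter encrypts 'e':

```python
from collections import Counter

# Break a Caesar shift by frequency analysis, as al-Kindi described
# for substitution ciphers: the most common ciphertext letter is
# assumed to stand for the most common plaintext letter ('e' in English).
def crack_caesar(ciphertext):
    counts = Counter(c for c in ciphertext if c.isalpha())
    top = counts.most_common(1)[0][0]
    shift = (ord(top) - ord('e')) % 26
    return ''.join(
        chr((ord(c) - ord('a') - shift) % 26 + ord('a')) if c.isalpha() else c
        for c in ciphertext
    )

msg = "we meet at the green gate at seven"      # hypothetical plaintext
enc = ''.join(chr((ord(c) - ord('a') + 3) % 26 + ord('a')) if c.isalpha() else c
              for c in msg)                     # encrypt with shift 3
print(crack_caesar(enc))
```

The heuristic needs enough text for the letter statistics to dominate; on very short messages the most frequent letter may not be 'e' and the recovery fails.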
Avicenna (c. 980–1037) contributed to mathematical techniques such as casting out nines. Thābit ibn Qurra (835–901) calculated the solution to a chessboard problem involving an exponential series.
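If the chessboard problem Thābit treated is the classic doubling of grains on the 64 squares (an assumption; the source does not specify), the exponential series is geometric and easily checked:

```python
# Wheat-and-chessboard doubling: one grain on the first square,
# doubling on each subsequent square, summed over all 64 squares.
total = sum(2**k for k in range(64))
print(total)        # 18446744073709551615, i.e. 2**64 - 1
```

The closed form 2⁶⁴ − 1 is the standard geometric-series identity sum of 2ᵏ for k = 0..n−1 equals 2ⁿ − 1.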
Al-Farabi (c. 870–950) attempted to describe, geometrically, the repeating patterns popular in Islamic decorative motifs in his book Spiritual Crafts and Natural Secrets in the Details of Geometrical Figures. Omar Khayyam (1048–1131), known in the West as a poet, calculated the length of the year to within 5 decimal places, and found geometric solutions to all 13 forms of cubic equations, developing some quadratic equations still in use. Jamshīd al-Kāshī (c. 1380–1429) is credited with several theorems of trigonometry, including the law of cosines, also known as Al-Kashi's Theorem. He has been credited with the invention of decimal fractions, and with a method like Horner's to calculate roots. He calculated π correctly to 17 significant figures.
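Al-Kashi's theorem, the law of cosines, gives the third side of a triangle from two sides and the included angle, c² = a² + b² − 2ab·cos γ; a quick numerical check:

```python
import math

# Al-Kashi's theorem (law of cosines): c^2 = a^2 + b^2 - 2ab*cos(gamma).
def third_side(a, b, gamma_deg):
    g = math.radians(gamma_deg)
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(g))

# With a right angle it reduces to Pythagoras: sides 3 and 4 give about 5.
print(third_side(3, 4, 90))
```

Setting γ = 90° makes the cosine term vanish, which is why the theorem is the natural generalisation of the Pythagorean relation to arbitrary triangles.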
Sometime around the seventh century, Islamic scholars adopted the Hindu–Arabic numeral system, describing their use in a standard type of text fī l-ḥisāb al hindī, (On the numbers of the Indians). A distinctive Western Arabic variant of the Eastern Arabic numerals began to emerge around the 10th century in the Maghreb and Al-Andalus (sometimes called ghubar numerals, though the term is not always accepted), which are the direct ancestor of the modern Arabic numerals used throughout the world.
=== Medicine ===
Islamic society paid careful attention to medicine, following a hadith enjoining the preservation of good health. Its physicians inherited knowledge and traditional medical beliefs from the civilisations of classical Greece, Rome, Syria, Persia and India. These included the writings of Hippocrates such as on the theory of the four humours, and the theories of Galen. al-Razi (c. 865–925) identified smallpox and measles, and recognized fever as a part of the body's defenses. He wrote a 23-volume compendium of Chinese, Indian, Persian, Syriac and Greek medicine. al-Razi questioned the classical Greek medical theory of how the four humours regulate life processes. He challenged Galen's work on several fronts, including the treatment of bloodletting, arguing that it was effective.
al-Zahrawi (936–1013) was a surgeon whose most important surviving work is referred to as al-Tasrif (Medical Knowledge). It is a 30-volume set mainly discussing medical symptoms, treatments, and pharmacology. The last volume, on surgery, describes surgical instruments, supplies, and pioneering procedures. Avicenna (c. 980–1037) wrote the major medical textbook, The Canon of Medicine. Ibn al-Nafis (1213–1288) wrote an influential book on medicine; it largely replaced Avicenna's Canon in the Islamic world. He wrote commentaries on Galen and on Avicenna's works. One of these commentaries, discovered in 1924, described the circulation of blood through the lungs.
=== Optics and ophthalmology ===
Optics developed rapidly in this period. By the ninth century, there were works on physiological, geometrical and physical optics. Topics covered included mirror reflection.
Hunayn ibn Ishaq (809–873) wrote the book Ten Treatises on the Eye; this remained influential in the West until the 17th century.
Abbas ibn Firnas (810–887) developed lenses for magnification and the improvement of vision.
Ibn Sahl (c. 940–1000) discovered the law of refraction known as Snell's law. He used the law to produce the first aspheric lenses, which focused light without geometric aberrations.
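The refraction law Ibn Sahl used, n₁ sin θ₁ = n₂ sin θ₂, can be sketched numerically; the refractive indices below are illustrative:

```python
import math

# Ibn Sahl's law of refraction (Snell's law): n1*sin(t1) = n2*sin(t2).
def refracted_angle(n1, n2, theta1_deg):
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None             # total internal reflection: no refracted ray
    return math.degrees(math.asin(s))

# Air (n = 1.0) into glass (n = 1.5) at 30 degrees incidence:
print(refracted_angle(1.0, 1.5, 30.0))   # about 19.47 degrees
```

Going the other way, from glass into air at a steep enough angle, the formula has no real solution, which is the condition for total internal reflection.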
In the eleventh century Ibn al-Haytham (Alhazen, 965–1040) rejected the Greek ideas about vision, whether the Aristotelian tradition that held that the form of the perceived object entered the eye (but not its matter), or that of Euclid and Ptolemy which held that the eye emitted a ray. Al-Haytham proposed in his Book of Optics that vision occurs by way of light rays forming a cone with its vertex at the center of the eye. He suggested that light was reflected from different surfaces in different directions, thus causing objects to look different. He argued further that the mathematics of reflection and refraction needed to be consistent with the anatomy of the eye. He was also an early proponent of the scientific method, the concept that a hypothesis must be proved by experiments based on confirmable procedures or mathematical evidence, five centuries before Renaissance scientists.
=== Pharmacology ===
Advances in botany and chemistry in the Islamic world encouraged developments in pharmacology. Muhammad ibn Zakarīya Rāzi (Rhazes) (865–915) promoted the medical uses of chemical compounds. Abu al-Qasim al-Zahrawi (Abulcasis) (936–1013) pioneered the preparation of medicines by sublimation and distillation. His Liber servitoris provides instructions for preparing "simples" from which were compounded the complex drugs then used. Sabur Ibn Sahl (died 869) was the first physician to describe a large variety of drugs and remedies for ailments. Al-Muwaffaq, in the 10th century, wrote The foundations of the true properties of Remedies, describing chemicals such as arsenious oxide and silicic acid. He distinguished between sodium carbonate and potassium carbonate, and drew attention to the poisonous nature of copper compounds, especially copper vitriol, and also of lead compounds. Al-Biruni (973–1050) wrote the Kitab al-Saydalah (The Book of Drugs), describing in detail the properties of drugs, the role of pharmacy and the duties of the pharmacist. Ibn Sina (Avicenna) described 700 preparations, their properties, their mode of action and their indications. He devoted a whole volume to simples in The Canon of Medicine. Works by Masawaih al-Mardini (c. 925–1015) and by Ibn al-Wafid (1008–1074) were printed in Latin more than fifty times, appearing as De Medicinis universalibus et particularibus by Mesue the Younger (died 1015) and as the Medicamentis simplicibus by Abenguefit (c. 997 – 1074) respectively. Peter of Abano (1250–1316) translated and added a supplement to the work of al-Mardini under the title De Veneris. Ibn al-Baytar (1197–1248), in his Al-Jami fi al-Tibb, described a thousand simples and drugs based directly on Mediterranean plants collected along the entire coast between Syria and Spain, for the first time exceeding the coverage provided by Dioscorides in classical times. 
Islamic physicians such as Ibn Sina described clinical trials for determining the efficacy of medical drugs and substances.
=== Physics ===
The fields of physics studied in this period, apart from optics and astronomy which are described separately, are aspects of mechanics: statics, dynamics, kinematics and motion. In the sixth century John Philoponus (c. 490 – c. 570) rejected the Aristotelian view of motion. He argued instead that an object acquires an inclination to move when it has a motive power impressed on it. In the eleventh century Ibn Sina adopted roughly the same idea, namely that a moving object has force which is dissipated by external agents like air resistance. Ibn Sina distinguished between "force" and "inclination" (mayl); he claimed that an object gained mayl when the object is in opposition to its natural motion. He concluded that continuation of motion depends on the inclination that is transferred to the object, and that the object remains in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless it is acted upon. That view accords with Newton's first law of motion, on inertia. As a non-Aristotelian suggestion, it was essentially abandoned until it was described as "impetus" by Jean Buridan (c. 1295–1363), who was likely influenced by Ibn Sina's Book of Healing.
In The Shadows, Abū Rayḥān al-Bīrūnī (973–1048) described non-uniform motion as the result of acceleration. Ibn Sina's theory of mayl tried to relate the velocity and weight of a moving object, a precursor of the concept of momentum. Aristotle's theory of motion stated that a constant force produces a uniform motion; Abu'l-Barakāt al-Baghdādī (c. 1080 – 1164/5) disagreed, arguing that velocity and acceleration are two different things, and that force is proportional to acceleration, not to velocity.
The Banu Musa brothers, Jafar-Muhammad, Ahmad and al-Hasan (c. early 9th century) invented automated devices described in their Book of Ingenious Devices. Advances on the subject were also made by al-Jazari and Ibn Ma'ruf.
=== Zoology ===
Many classical works, including those of Aristotle, were transmitted from Greek to Syriac, then to Arabic, then to Latin in the Middle Ages. Aristotle's zoology remained dominant in its field for two thousand years. The Kitāb al-Hayawān (كتاب الحيوان, English: Book of Animals) is a 9th-century Arabic translation of History of Animals: 1–10, On the Parts of Animals: 11–14, and Generation of Animals: 15–19.
The book was mentioned by Al-Kindī (died 850), and commented on by Avicenna (Ibn Sīnā) in his The Book of Healing. Avempace (Ibn Bājja) and Averroes (Ibn Rushd) commented on and criticised On the Parts of Animals and Generation of Animals.
== Significance ==
Muslim scientists helped in laying the foundations for an experimental science with their contributions to the scientific method and their empirical, experimental and quantitative approach to scientific inquiry. In a more general sense, the positive achievement of Islamic science was simply to flourish, for centuries, in a wide range of institutions from observatories to libraries, madrasas to hospitals and courts, both at the height of the Islamic golden age and for some centuries afterwards. It did not lead to a scientific revolution like that in Early modern Europe, but such external comparisons are probably to be rejected as imposing "chronologically and culturally alien standards" on a successful medieval culture.
== See also ==
== References ==
== Notes ==
== Sources ==
Linton, Christopher M. (2004). From Eudoxus to Einstein—A History of Mathematical Astronomy. Cambridge University Press. ISBN 978-0-521-82750-8.
Masood, Ehsan (2009). Science and Islam: A History. Icon Books. ISBN 978-1-785-78202-2.
McClellan, James E. III; Dorn, Harold, eds. (2006). Science and Technology in World History (2 ed.). Johns Hopkins. ISBN 978-0-8018-8360-6.
Morelon, Régis; Rashed, Roshdi (1996). Encyclopedia of the History of Arabic Science. Vol. 3. Routledge. ISBN 978-0-415-12410-2.
Turner, Howard R. (1997). Science in Medieval Islam: An Illustrated Introduction. University of Texas Press. ISBN 978-0-292-78149-8.
== Further reading ==
Al-Daffa, Ali Abdullah; Stroyls, J.J. (1984). Studies in the exact sciences in medieval Islam. Wiley. ISBN 978-0-471-90320-8.
Hogendijk, Jan P.; Sabra, Abdelhamid I. (2003). The Enterprise of Science in Islam: New Perspectives. MIT Press. ISBN 978-0-262-19482-2.
Hill, Donald Routledge (1993). Islamic Science And Engineering. Edinburgh University Press. ISBN 978-0-7486-0455-5.
Huff, Toby (1993). The Rise of Early Modern Science: Islam, China, and the West. Cambridge University Press.
Kennedy, Edward S. (1983). Studies in the Islamic Exact Sciences. Syracuse University Press. ISBN 978-0-8156-6067-5.
Lindberg, D. C.; Shank, M. H., eds. (2013). The Cambridge History of Science. Volume 2: Medieval Science. Cambridge University Press. (chapters 1–5 cover science, mathematics and medicine in Islam)
Morelon, Régis; Rashed, Roshdi (1996). Encyclopedia of the History of Arabic Science. Vol. 2–3. Routledge. ISBN 978-0-415-02063-3.
Saliba, George (2007). Islamic Science and the Making of the European Renaissance. MIT Press. ISBN 978-0-262-19557-7.
== External links ==
"How Greek Science Passed to the Arabs" by De Lacy O'Leary
Saliba, George. "Whose Science is Arabic Science in Renaissance Europe?".
Habibi, Golareh. "Is There Such a Thing as Islamic Science? The Influence of Islam on the World of Science", Science Creative Quarterly.
In Hamiltonian mechanics, a canonical transformation is a change of canonical coordinates (q, p) → (Q, P) that preserves the form of Hamilton's equations. This is sometimes known as form invariance. Although Hamilton's equations are preserved, the transformation need not preserve the explicit form of the Hamiltonian itself. Canonical transformations are useful in their own right, and also form the basis for the Hamilton–Jacobi equations (a useful method for calculating conserved quantities) and Liouville's theorem (itself the basis for classical statistical mechanics).
Since Lagrangian mechanics is based on generalized coordinates, transformations of the coordinates q → Q do not affect the form of Lagrange's equations and, hence, do not affect the form of Hamilton's equations if the momentum is simultaneously changed by a Legendre transformation into
$$P_i = \frac{\partial L}{\partial \dot{Q}_i}\,,$$
where $\{(P_1, Q_1),\ (P_2, Q_2),\ (P_3, Q_3),\ \ldots\}$ are the new coordinates, grouped in canonically conjugate pairs of momenta $P_i$ and corresponding positions $Q_i$, for $i = 1, 2, \ldots, N$, with $N$ being the number of degrees of freedom in both coordinate systems.
Therefore, coordinate transformations (also called point transformations) are a type of canonical transformation. However, the class of canonical transformations is much broader, since the old generalized coordinates, momenta and even time may be combined to form the new generalized coordinates and momenta. Canonical transformations that do not include the time explicitly are called restricted canonical transformations (many textbooks consider only this type).
Modern mathematical descriptions of canonical transformations are considered under the broader topic of symplectomorphism which covers the subject with advanced mathematical prerequisites such as cotangent bundles, exterior derivatives and symplectic manifolds.
== Notation ==
Boldface variables such as q represent a list of N generalized coordinates that need not transform like a vector under rotation and similarly p represents the corresponding generalized momentum, e.g.,
$$\mathbf{q} \equiv (q_1, q_2, \ldots, q_{N-1}, q_N), \qquad \mathbf{p} \equiv (p_1, p_2, \ldots, p_{N-1}, p_N).$$
A dot over a variable or list signifies the time derivative, e.g.,
$$\dot{\mathbf{q}} \equiv \frac{d\mathbf{q}}{dt}$$
and the equalities are read to be satisfied for all coordinates, for example:
$$\dot{\mathbf{p}} = -\frac{\partial f}{\partial \mathbf{q}} \quad \Longleftrightarrow \quad \dot{p}_i = -\frac{\partial f}{\partial q_i} \quad (i = 1, \dots, N).$$
The dot product notation between two lists of the same number of coordinates is a shorthand for the sum of the products of corresponding components, e.g.,
$$\mathbf{p} \cdot \mathbf{q} \equiv \sum_{k=1}^{N} p_k q_k.$$
The dot product (also known as an "inner product") maps the two coordinate lists into one variable representing a single numerical value. The coordinates after transformation are similarly labelled with Q for transformed generalized coordinates and P for transformed generalized momentum.
== Conditions for restricted canonical transformation ==
Restricted canonical transformations are coordinate transformations where transformed coordinates Q and P do not have explicit time dependence, i.e.,
$\mathbf{Q} = \mathbf{Q}(\mathbf{q}, \mathbf{p})$ and $\mathbf{P} = \mathbf{P}(\mathbf{q}, \mathbf{p})$. The functional form of Hamilton's equations is
$$\dot{\mathbf{p}} = -\frac{\partial H}{\partial \mathbf{q}}\,, \qquad \dot{\mathbf{q}} = \frac{\partial H}{\partial \mathbf{p}}$$
In general, a transformation (q, p) → (Q, P) does not preserve the form of Hamilton's equations, but in the absence of time dependence in the transformation, some simplifications are possible. Following the formal definition of a canonical transformation, it can be shown that for this type of transformation the new Hamiltonian (sometimes called the Kamiltonian) can be expressed as:
$$K(\mathbf{Q}, \mathbf{P}, t) = H(\mathbf{q}(\mathbf{Q}, \mathbf{P}), \mathbf{p}(\mathbf{Q}, \mathbf{P}), t) + \frac{\partial G}{\partial t}(t)$$
where the two Hamiltonians differ by the partial time derivative of a function G known as a generator, which reduces to a function of time alone for restricted canonical transformations.
In addition to leaving the form of the Hamiltonian unchanged, the above form also permits the use of the unchanged Hamiltonian in Hamilton's equations of motion:
$$\dot{\mathbf{P}} = -\frac{\partial K}{\partial \mathbf{Q}} = -\left(\frac{\partial H}{\partial \mathbf{Q}}\right)_{\mathbf{Q}, \mathbf{P}, t}, \qquad \dot{\mathbf{Q}} = \frac{\partial K}{\partial \mathbf{P}} = \left(\frac{\partial H}{\partial \mathbf{P}}\right)_{\mathbf{Q}, \mathbf{P}, t}$$
Although the term "canonical transformation" refers to a more general set of transformations of phase space, corresponding to less permissive transformations of the Hamiltonian, the restricted case provides simpler conditions that yield results which can be further generalized. All of the following conditions, with the exception of the bilinear invariance condition, can be generalized to canonical transformations that include time dependence.
=== Indirect conditions ===
Since restricted transformations have no explicit time dependence (by definition), the time derivative of a new generalized coordinate Qm is
$$\dot{Q}_m = \frac{\partial Q_m}{\partial \mathbf{q}} \cdot \dot{\mathbf{q}} + \frac{\partial Q_m}{\partial \mathbf{p}} \cdot \dot{\mathbf{p}} = \frac{\partial Q_m}{\partial \mathbf{q}} \cdot \frac{\partial H}{\partial \mathbf{p}} - \frac{\partial Q_m}{\partial \mathbf{p}} \cdot \frac{\partial H}{\partial \mathbf{q}} = \{Q_m, H\}$$
where {⋅, ⋅} is the Poisson bracket.
Similarly, for the conjugate momentum $P_m$, using the form of the "Kamiltonian" it follows that:
$$\begin{aligned}\frac{\partial K(\mathbf{Q}, \mathbf{P}, t)}{\partial P_m} &= \frac{\partial K(\mathbf{Q}(\mathbf{q}, \mathbf{p}), \mathbf{P}(\mathbf{q}, \mathbf{p}), t)}{\partial \mathbf{q}} \cdot \frac{\partial \mathbf{q}}{\partial P_m} + \frac{\partial K(\mathbf{Q}(\mathbf{q}, \mathbf{p}), \mathbf{P}(\mathbf{q}, \mathbf{p}), t)}{\partial \mathbf{p}} \cdot \frac{\partial \mathbf{p}}{\partial P_m} \\ &= \frac{\partial H(\mathbf{q}, \mathbf{p}, t)}{\partial \mathbf{q}} \cdot \frac{\partial \mathbf{q}}{\partial P_m} + \frac{\partial H(\mathbf{q}, \mathbf{p}, t)}{\partial \mathbf{p}} \cdot \frac{\partial \mathbf{p}}{\partial P_m} \\ &= \frac{\partial H}{\partial \mathbf{q}} \cdot \frac{\partial \mathbf{q}}{\partial P_m} + \frac{\partial H}{\partial \mathbf{p}} \cdot \frac{\partial \mathbf{p}}{\partial P_m}\end{aligned}$$
Due to the form of the Hamiltonian equations of motion,
$$\dot{\mathbf{P}} = -\frac{\partial K}{\partial \mathbf{Q}}\,, \qquad \dot{\mathbf{Q}} = \frac{\partial K}{\partial \mathbf{P}}$$
if the transformation is canonical, the two derived results must be equal, resulting in the equations:
$$\left(\frac{\partial Q_m}{\partial p_n}\right)_{\mathbf{q}, \mathbf{p}} = -\left(\frac{\partial q_n}{\partial P_m}\right)_{\mathbf{Q}, \mathbf{P}}, \qquad \left(\frac{\partial Q_m}{\partial q_n}\right)_{\mathbf{q}, \mathbf{p}} = \left(\frac{\partial p_n}{\partial P_m}\right)_{\mathbf{Q}, \mathbf{P}}$$
The analogous argument for the generalized momenta Pm leads to two other sets of equations:
$$\left(\frac{\partial P_m}{\partial p_n}\right)_{\mathbf{q}, \mathbf{p}} = \left(\frac{\partial q_n}{\partial Q_m}\right)_{\mathbf{Q}, \mathbf{P}}, \qquad \left(\frac{\partial P_m}{\partial q_n}\right)_{\mathbf{q}, \mathbf{p}} = -\left(\frac{\partial p_n}{\partial Q_m}\right)_{\mathbf{Q}, \mathbf{P}}$$
These are the indirect conditions to check whether a given transformation is canonical.
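The indirect conditions can be checked mechanically with a computer algebra system. As a minimal sketch, the following (hypothetical) test case uses the simple exchange transformation Q = p, P = −q, whose inverse is q = −P, p = Q; neither the transformation nor the code is taken from the text above.

```python
# Checking the indirect conditions symbolically for the exchange
# transformation Q = p, P = -q (one degree of freedom).
import sympy as sp

q, p, Q, P = sp.symbols('q p Q P')

# Forward transformation and its inverse.
Q_of = p          # Q(q, p)
P_of = -q         # P(q, p)
q_of = -P         # q(Q, P)
p_of = Q          # p(Q, P)

# First indirect condition: (dQ/dp)_{q,p} = -(dq/dP)_{Q,P}
lhs1 = sp.diff(Q_of, p)
rhs1 = -sp.diff(q_of, P)
assert sp.simplify(lhs1 - rhs1) == 0

# Second indirect condition: (dQ/dq)_{q,p} = (dp/dP)_{Q,P}
lhs2 = sp.diff(Q_of, q)
rhs2 = sp.diff(p_of, P)
assert sp.simplify(lhs2 - rhs2) == 0

print("exchange transformation satisfies the indirect conditions")
```

The same pattern extends to any candidate transformation for which the inverse can be written down explicitly.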
=== Symplectic condition ===
Sometimes the Hamiltonian relations are represented as:
$$\dot{\eta} = J \nabla_\eta H$$
where $J := \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix}$ and $\eta = (q_1, \ldots, q_n, p_1, \ldots, p_n)^T$. Similarly, let $\varepsilon = (Q_1, \ldots, Q_n, P_1, \ldots, P_n)^T$.
From the relation of partial derivatives, converting the $\dot{\eta} = J \nabla_\eta H$ relation in terms of partial derivatives with the new variables gives
$$\dot{\eta} = J (M^T \nabla_\varepsilon H)$$
where $M := \frac{\partial(\mathbf{Q}, \mathbf{P})}{\partial(\mathbf{q}, \mathbf{p})}$. Similarly, for $\dot{\varepsilon}$,
$$\dot{\varepsilon} = M \dot{\eta} = M J M^T \nabla_\varepsilon H$$
Due to the form of the Hamiltonian equations for $\dot{\varepsilon}$,
$$\dot{\varepsilon} = J \nabla_\varepsilon K = J \nabla_\varepsilon H$$
where $\nabla_\varepsilon K = \nabla_\varepsilon H$ can be used due to the form of the Kamiltonian. Equating the two equations gives the symplectic condition:
$$M J M^T = J$$
The left-hand side of the above is called the Poisson matrix of $\varepsilon$, denoted $\mathcal{P}(\varepsilon) = M J M^T$. Similarly, a Lagrange matrix of $\eta$ can be constructed as $\mathcal{L}(\eta) = M^T J M$. It can be shown that the symplectic condition is also equivalent to $M^T J M = J$ by using the property $J^{-1} = -J$. The set of all matrices $M$ which satisfy the symplectic condition forms a symplectic group. The symplectic condition is equivalent to the indirect conditions, as both lead to the equation $\dot{\varepsilon} = J \nabla_\varepsilon H$, which is used in both derivations.
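The symplectic condition lends itself to a direct numerical test: compute the Jacobian M by finite differences and check MJMᵀ = J. As a sketch, the following uses the hypothetical one-degree-of-freedom transformation Q = arctan(q/p), P = (q² + p²)/2 (a polar-style change of variables chosen for illustration, not drawn from the text).

```python
# Numerical check of the symplectic condition M J M^T = J for
# Q = arctan(q/p), P = (q^2 + p^2)/2, with a finite-difference Jacobian.
import numpy as np

def transform(eta):
    q, p = eta
    return np.array([np.arctan2(q, p), 0.5 * (q**2 + p**2)])

def jacobian(f, eta, h=1e-6):
    # Central finite differences; column i holds df/d(eta_i).
    n = len(eta)
    M = np.zeros((n, n))
    for i in range(n):
        d = np.zeros(n)
        d[i] = h
        M[:, i] = (f(eta + d) - f(eta - d)) / (2 * h)
    return M

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # n = 1, so J is 2x2
eta0 = np.array([0.7, 1.3])               # arbitrary phase-space point
M = jacobian(transform, eta0)
assert np.allclose(M @ J @ M.T, J, atol=1e-6)
print("symplectic condition M J M^T = J holds at", eta0)
```

Because the check is pointwise, in practice one would repeat it at several phase-space points.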
=== Invariance of the Poisson bracket ===
The Poisson bracket, which is defined as
$$\{u, v\}_\eta := \sum_{i=1}^{n} \left(\frac{\partial u}{\partial q_i} \frac{\partial v}{\partial p_i} - \frac{\partial u}{\partial p_i} \frac{\partial v}{\partial q_i}\right)$$
can be represented in matrix form as:
$$\{u, v\}_\eta := (\nabla_\eta u)^T J (\nabla_\eta v)$$
Hence, using the partial derivative relations and the symplectic condition gives:
$$\{u, v\}_\eta = (\nabla_\eta u)^T J (\nabla_\eta v) = (M^T \nabla_\varepsilon u)^T J (M^T \nabla_\varepsilon v) = (\nabla_\varepsilon u)^T M J M^T (\nabla_\varepsilon v) = (\nabla_\varepsilon u)^T J (\nabla_\varepsilon v) = \{u, v\}_\varepsilon$$
The symplectic condition can also be recovered by taking $u = \varepsilon_i$ and $v = \varepsilon_j$, which shows that $(M J M^T)_{ij} = J_{ij}$. Thus these conditions are equivalent to the symplectic condition. Furthermore, $\mathcal{P}_{ij}(\varepsilon) = \{\varepsilon_i, \varepsilon_j\}_\eta = (M J M^T)_{ij}$, which is also the result of explicitly calculating the matrix element by expanding it.
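The preservation of the fundamental Poisson brackets is easy to verify symbolically. As a sketch, the following checks {Q, P} = 1 for the hypothetical transformation Q = arctan(q/p), P = (q² + p²)/2, used here purely as an illustrative test case.

```python
# Verifying that the fundamental Poisson brackets are preserved:
# {Q, Q} = {P, P} = 0 and {Q, P} = 1 in the old variables (q, p).
import sympy as sp

q, p = sp.symbols('q p', positive=True)

def poisson(u, v):
    # Poisson bracket with respect to the old variables (q, p), n = 1.
    return sp.diff(u, q) * sp.diff(v, p) - sp.diff(u, p) * sp.diff(v, q)

Q = sp.atan(q / p)
P = (q**2 + p**2) / 2

assert sp.simplify(poisson(Q, Q)) == 0
assert sp.simplify(poisson(P, P)) == 0
assert sp.simplify(poisson(Q, P)) == 1
print("fundamental Poisson brackets are preserved: {Q, P} = 1")
```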
=== Invariance of the Lagrange bracket ===
The Lagrange bracket, which is defined as
$$[u, v]_\eta := \sum_{i=1}^{n} \left(\frac{\partial q_i}{\partial u} \frac{\partial p_i}{\partial v} - \frac{\partial p_i}{\partial u} \frac{\partial q_i}{\partial v}\right)$$
can be represented in matrix form as:
$$[u, v]_\eta := \left(\frac{\partial \eta}{\partial u}\right)^T J \left(\frac{\partial \eta}{\partial v}\right)$$
A similar derivation gives:
$$[u, v]_\varepsilon = (\partial_u \varepsilon)^T J (\partial_v \varepsilon) = (M \partial_u \eta)^T J (M \partial_v \eta) = (\partial_u \eta)^T M^T J M (\partial_v \eta) = (\partial_u \eta)^T J (\partial_v \eta) = [u, v]_\eta$$
The symplectic condition can also be recovered by taking $u = \eta_i$ and $v = \eta_j$, which shows that $(M^T J M)_{ij} = J_{ij}$. Thus these conditions are equivalent to the symplectic condition. Furthermore, $\mathcal{L}_{ij}(\eta) = [\eta_i, \eta_j]_\varepsilon = (M^T J M)_{ij}$, which is also the result of explicitly calculating the matrix element by expanding it.
=== Bilinear invariance conditions ===
This set of conditions applies only to restricted canonical transformations, i.e. canonical transformations that are independent of the time variable.
Consider arbitrary variations of two kinds, in a single pair of generalized coordinate and the corresponding momentum:
$$d\varepsilon = (dq_1, dp_1, 0, 0, \ldots), \qquad \delta\varepsilon = (\delta q_1, \delta p_1, 0, 0, \ldots).$$
The area of the infinitesimal parallelogram is given by:
$$\delta a(12) = dq_1\, \delta p_1 - \delta q_1\, dp_1 = (\delta\varepsilon)^T J\, d\varepsilon.$$
It follows from the symplectic condition $M^T J M = J$ that the infinitesimal area is conserved under canonical transformation:
$$\delta a(12) = (\delta\varepsilon)^T J\, d\varepsilon = (M \delta\eta)^T J\, M d\eta = (\delta\eta)^T M^T J M\, d\eta = (\delta\eta)^T J\, d\eta = \delta A(12).$$
Note that the new coordinates need not be completely oriented in one coordinate-momentum plane.
Hence, the condition is more generally stated as the invariance of the form $(d\varepsilon)^T J\, \delta\varepsilon$ under canonical transformation, expanded as:
$$\sum \delta q \cdot dp - \delta p \cdot dq = \sum \delta Q \cdot dP - \delta P \cdot dQ$$
If the above is to hold for arbitrary variations, it is only possible if the indirect conditions are met.
The form of the expression $v^T J\, w$ is also known as the symplectic product of the vectors $v$ and $w$, and the bilinear invariance condition can be stated as the local conservation of the symplectic product.
== Liouville's theorem ==
The indirect conditions allow us to prove Liouville's theorem, which states that the volume in phase space is conserved under canonical transformations, i.e.,
$$\int \mathrm{d}\mathbf{q}\, \mathrm{d}\mathbf{p} = \int \mathrm{d}\mathbf{Q}\, \mathrm{d}\mathbf{P}$$
By the change-of-variables formula, the latter integral must equal the former times the determinant of the Jacobian M:
$$\int \mathrm{d}\mathbf{Q}\, \mathrm{d}\mathbf{P} = \int \det(M)\, \mathrm{d}\mathbf{q}\, \mathrm{d}\mathbf{p}$$
where $M := \frac{\partial(\mathbf{Q}, \mathbf{P})}{\partial(\mathbf{q}, \mathbf{p})}$.
Exploiting the "division" property of Jacobians yields
$$M \equiv \frac{\partial(\mathbf{Q}, \mathbf{P})}{\partial(\mathbf{q}, \mathbf{P})} \left/ \frac{\partial(\mathbf{q}, \mathbf{p})}{\partial(\mathbf{q}, \mathbf{P})} \right.$$
Eliminating the repeated variables gives
$$M \equiv \frac{\partial(\mathbf{Q})}{\partial(\mathbf{q})} \left/ \frac{\partial(\mathbf{p})}{\partial(\mathbf{P})} \right.$$
Application of the indirect conditions above yields
$$\det(M) = 1.$$
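The determinant condition det(M) = 1 can be confirmed symbolically for a concrete case. As a sketch, the following uses the hypothetical polar-style transformation Q = arctan(q/p), P = (q² + p²)/2, introduced here only as an illustrative example.

```python
# Liouville's theorem in practice: the Jacobian determinant of a
# canonical transformation equals 1, so phase-space volume is conserved.
import sympy as sp

q, p = sp.symbols('q p', positive=True)
Q = sp.atan(q / p)
P = (q**2 + p**2) / 2

# Jacobian M = d(Q, P)/d(q, p) for one degree of freedom.
M = sp.Matrix([[sp.diff(Q, q), sp.diff(Q, p)],
               [sp.diff(P, q), sp.diff(P, p)]])
assert sp.simplify(M.det()) == 1
print("det(M) = 1: phase-space volume is conserved")
```

Note that for one degree of freedom, det(M) = 1 is equivalent to the full symplectic condition; for higher dimensions it is only a necessary consequence of it.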
== Generating function approach ==
To guarantee a valid transformation between (q, p, H) and (Q, P, K), we may resort to a direct generating function approach. Both sets of variables must obey Hamilton's principle: that is, the action integral over the Lagrangians
$$\mathcal{L}_{qp} = \mathbf{p} \cdot \dot{\mathbf{q}} - H(\mathbf{q}, \mathbf{p}, t)$$
and
$$\mathcal{L}_{QP} = \mathbf{P} \cdot \dot{\mathbf{Q}} - K(\mathbf{Q}, \mathbf{P}, t),$$
obtained from the respective Hamiltonians via an "inverse" Legendre transformation, must be stationary in both cases (so that one can use the Euler–Lagrange equations to arrive at Hamilton's equations of motion of the designated form):
$$\delta \int_{t_1}^{t_2} \left[\mathbf{p} \cdot \dot{\mathbf{q}} - H(\mathbf{q}, \mathbf{p}, t)\right] dt = 0$$
$$\delta \int_{t_1}^{t_2} \left[\mathbf{P} \cdot \dot{\mathbf{Q}} - K(\mathbf{Q}, \mathbf{P}, t)\right] dt = 0$$
One way for both variational integral equalities to be satisfied is to have
$$\lambda \left[\mathbf{p} \cdot \dot{\mathbf{q}} - H(\mathbf{q}, \mathbf{p}, t)\right] = \mathbf{P} \cdot \dot{\mathbf{Q}} - K(\mathbf{Q}, \mathbf{P}, t) + \frac{dG}{dt}$$
Lagrangians are not unique: one can always multiply by a constant λ and add a total time derivative dG/dt and obtain the same equations of motion. In general, the scaling factor λ is set equal to one; canonical transformations for which λ ≠ 1 are called extended canonical transformations. The term dG/dt is kept, since otherwise the problem would be rendered trivial and there would not be much freedom for the new canonical variables to differ from the old ones.
Here G is a generating function of one old canonical coordinate (q or p), one new canonical coordinate (Q or P) and (possibly) the time t. Thus, there are four basic types of generating functions (although mixtures of these four types can exist), depending on the choice of variables. As will be shown below, the generating function will define a transformation from old to new canonical coordinates, and any such transformation (q, p) → (Q, P) is guaranteed to be canonical.
The various generating functions and their properties are discussed in detail below:
=== Type 1 generating function ===
The type 1 generating function $G_1$ depends only on the old and new generalized coordinates, $G \equiv G_1(\mathbf{q}, \mathbf{Q}, t)$. To derive the implicit transformation, we expand the defining equation above:
$$\mathbf{p} \cdot \dot{\mathbf{q}} - H(\mathbf{q}, \mathbf{p}, t) = \mathbf{P} \cdot \dot{\mathbf{Q}} - K(\mathbf{Q}, \mathbf{P}, t) + \frac{\partial G_1}{\partial t} + \frac{\partial G_1}{\partial \mathbf{q}} \cdot \dot{\mathbf{q}} + \frac{\partial G_1}{\partial \mathbf{Q}} \cdot \dot{\mathbf{Q}}$$
Since the new and old coordinates are each independent, the following 2N + 1 equations must hold
$$\mathbf{p} = \frac{\partial G_1}{\partial \mathbf{q}}\,, \qquad \mathbf{P} = -\frac{\partial G_1}{\partial \mathbf{Q}}\,, \qquad K = H + \frac{\partial G_1}{\partial t}$$
These equations define the transformation (q, p) → (Q, P) as follows: The first set of N equations, $\mathbf{p} = \frac{\partial G_1}{\partial \mathbf{q}}$, defines relations between the new generalized coordinates Q and the old canonical coordinates (q, p). Ideally, one can invert these relations to obtain formulae for each Qk as a function of the old canonical coordinates. Substitution of these formulae for the Q coordinates into the second set of N equations, $\mathbf{P} = -\frac{\partial G_1}{\partial \mathbf{Q}}$, yields analogous formulae for the new generalized momenta P in terms of the old canonical coordinates (q, p). We then invert both sets of formulae to obtain the old canonical coordinates (q, p) as functions of the new canonical coordinates (Q, P). Substitution of the inverted formulae into the final equation, $K = H + \frac{\partial G_1}{\partial t}$, yields a formula for K as a function of the new canonical coordinates (Q, P).
In practice, this procedure is easier than it sounds, because the generating function is usually simple. For example, let $G_1 \equiv \mathbf{q} \cdot \mathbf{Q}$. This results in swapping the generalized coordinates for the momenta and vice versa:
$$\mathbf{p} = \frac{\partial G_1}{\partial \mathbf{q}} = \mathbf{Q}\,, \qquad \mathbf{P} = -\frac{\partial G_1}{\partial \mathbf{Q}} = -\mathbf{q}$$
and K = H. This example illustrates how independent the coordinates and momenta are in the Hamiltonian formulation; they are equivalent variables.
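The derivation of this example can be reproduced mechanically. As a minimal sketch for one degree of freedom, the partial derivatives of G1 = qQ are computed symbolically and yield the exchange transformation p = Q, P = −q directly.

```python
# Type 1 generating function G1 = q*Q: the defining relations
# p = dG1/dq and P = -dG1/dQ swap coordinates and momenta.
import sympy as sp

q, Q = sp.symbols('q Q')
G1 = q * Q

p = sp.diff(G1, q)    # p = dG1/dq
P = -sp.diff(G1, Q)   # P = -dG1/dQ

assert p == Q
assert P == -q
print("G1 = q*Q swaps coordinates and momenta: p = Q, P = -q")
```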
=== Type 2 generating function ===
The type 2 generating function $G_2(\mathbf{q}, \mathbf{P}, t)$ depends only on the old generalized coordinates and the new generalized momenta, $G \equiv G_2(\mathbf{q}, \mathbf{P}, t) - \mathbf{Q} \cdot \mathbf{P}$, where the $-\mathbf{Q} \cdot \mathbf{P}$ terms represent a Legendre transformation to change the right-hand side of the equation below. To derive the implicit transformation, we expand the defining equation above:
$$\mathbf{p} \cdot \dot{\mathbf{q}} - H(\mathbf{q}, \mathbf{p}, t) = -\mathbf{Q} \cdot \dot{\mathbf{P}} - K(\mathbf{Q}, \mathbf{P}, t) + \frac{\partial G_2}{\partial t} + \frac{\partial G_2}{\partial \mathbf{q}} \cdot \dot{\mathbf{q}} + \frac{\partial G_2}{\partial \mathbf{P}} \cdot \dot{\mathbf{P}}$$
Since the old coordinates and new momenta are each independent, the following 2N + 1 equations must hold
$$\mathbf{p} = \frac{\partial G_2}{\partial \mathbf{q}}\,, \qquad \mathbf{Q} = \frac{\partial G_2}{\partial \mathbf{P}}\,, \qquad K = H + \frac{\partial G_2}{\partial t}$$
These equations define the transformation (q, p) → (Q, P) as follows: The first set of N equations, $\mathbf{p} = \frac{\partial G_2}{\partial \mathbf{q}}$, defines relations between the new generalized momenta P and the old canonical coordinates (q, p). Ideally, one can invert these relations to obtain formulae for each Pk as a function of the old canonical coordinates. Substitution of these formulae for the P coordinates into the second set of N equations, $\mathbf{Q} = \frac{\partial G_2}{\partial \mathbf{P}}$, yields analogous formulae for the new generalized coordinates Q in terms of the old canonical coordinates (q, p). We then invert both sets of formulae to obtain the old canonical coordinates (q, p) as functions of the new canonical coordinates (Q, P). Substitution of the inverted formulae into the final equation, $K = H + \frac{\partial G_2}{\partial t}$, yields a formula for K as a function of the new canonical coordinates (Q, P).
In practice, this procedure is easier than it sounds, because the generating function is usually simple. For example, let
{\textstyle G_{2}\equiv \mathbf {g} (\mathbf {q} ;t)\cdot \mathbf {P} }
where g is a set of N functions. This results in a point transformation of the generalized coordinates
{\textstyle \mathbf {Q} ={\frac {\partial G_{2}}{\partial \mathbf {P} }}=\mathbf {g} (\mathbf {q} ;t)}.
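As a concrete illustration (not from the text), take a single degree of freedom with the hypothetical choice g(q) = q², i.e. G₂ = q²P. Then p = ∂G₂/∂q = 2qP and Q = ∂G₂/∂P = q², so P = p/(2q). A minimal numerical sketch checking that the resulting map is canonical, via a finite-difference Poisson bracket {Q, P}:

```python
# Sketch: verify that G2 = q^2 * P generates a canonical point transformation.
# The choice g(q) = q^2 is an illustrative assumption, not from the text.

def Q(q, p):      # Q = dG2/dP = q^2
    return q * q

def P(q, p):      # invert p = dG2/dq = 2 q P  =>  P = p / (2 q)
    return p / (2.0 * q)

def poisson_bracket(f, g, q, p, h=1e-6):
    """Central-difference Poisson bracket {f, g} with respect to (q, p)."""
    df_dq = (f(q + h, p) - f(q - h, p)) / (2 * h)
    df_dp = (f(q, p + h) - f(q, p - h)) / (2 * h)
    dg_dq = (g(q + h, p) - g(q - h, p)) / (2 * h)
    dg_dp = (g(q, p + h) - g(q, p - h)) / (2 * h)
    return df_dq * dg_dp - df_dp * dg_dq

# A transformation of one degree of freedom is canonical iff {Q, P} = 1
# when the bracket is taken with respect to the old variables (q, p).
bracket = poisson_bracket(Q, P, q=1.3, p=0.7)
print(abs(bracket - 1.0) < 1e-6)
```

The same check works for any point transformation Q = g(q): the momentum P produced by inverting p = (∂g/∂q)P always compensates the stretching of the coordinate.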
=== Type 3 generating function ===
The type 3 generating function
{\displaystyle G_{3}(\mathbf {p} ,\mathbf {Q} ,t)}
depends only on the old generalized momenta and the new generalized coordinates
{\textstyle G\equiv G_{3}(\mathbf {p} ,\mathbf {Q} ,t)+\mathbf {q} \cdot \mathbf {p} }
where the
{\displaystyle \mathbf {q} \cdot \mathbf {p} }
terms represent a Legendre transformation to change the left-hand side of the equation below. To derive the implicit transformation, we expand the defining equation above
{\displaystyle -\mathbf {q} \cdot {\dot {\mathbf {p} }}-H(\mathbf {q} ,\mathbf {p} ,t)=\mathbf {P} \cdot {\dot {\mathbf {Q} }}-K(\mathbf {Q} ,\mathbf {P} ,t)+{\frac {\partial G_{3}}{\partial t}}+{\frac {\partial G_{3}}{\partial \mathbf {p} }}\cdot {\dot {\mathbf {p} }}+{\frac {\partial G_{3}}{\partial \mathbf {Q} }}\cdot {\dot {\mathbf {Q} }}}
Since the new and old coordinates are each independent, the following 2N + 1 equations must hold
{\displaystyle {\begin{aligned}\mathbf {q} &=-{\frac {\partial G_{3}}{\partial \mathbf {p} }}\\\mathbf {P} &=-{\frac {\partial G_{3}}{\partial \mathbf {Q} }}\\K&=H+{\frac {\partial G_{3}}{\partial t}}\end{aligned}}}
These equations define the transformation (q, p) → (Q, P) as follows: The first set of N equations
{\textstyle \mathbf {q} =-{\frac {\partial G_{3}}{\partial \mathbf {p} }}}
defines relations between the new generalized coordinates Q and the old canonical coordinates (q, p). Ideally, one can invert these relations to obtain formulae for each Qk as a function of the old canonical coordinates. Substitution of these formulae for the Q coordinates into the second set of N equations
{\textstyle \mathbf {P} =-{\frac {\partial G_{3}}{\partial \mathbf {Q} }}}
yields analogous formulae for the new generalized momenta P in terms of the old canonical coordinates (q, p). We then invert both sets of formulae to obtain the old canonical coordinates (q, p) as functions of the new canonical coordinates (Q, P). Substitution of the inverted formulae into the final equation
{\textstyle K=H+{\frac {\partial G_{3}}{\partial t}}}
yields a formula for K as a function of the new canonical coordinates (Q, P).
In practice, this procedure is easier than it sounds, because the generating function is usually simple.
=== Type 4 generating function ===
The type 4 generating function
{\displaystyle G_{4}(\mathbf {p} ,\mathbf {P} ,t)}
depends only on the old and new generalized momenta
{\textstyle G\equiv G_{4}(\mathbf {p} ,\mathbf {P} ,t)+\mathbf {q} \cdot \mathbf {p} -\mathbf {Q} \cdot \mathbf {P} }
where the
{\displaystyle \mathbf {q} \cdot \mathbf {p} -\mathbf {Q} \cdot \mathbf {P} }
terms represent a Legendre transformation to change both sides of the equation below. To derive the implicit transformation, we expand the defining equation above
{\displaystyle -\mathbf {q} \cdot {\dot {\mathbf {p} }}-H(\mathbf {q} ,\mathbf {p} ,t)=-\mathbf {Q} \cdot {\dot {\mathbf {P} }}-K(\mathbf {Q} ,\mathbf {P} ,t)+{\frac {\partial G_{4}}{\partial t}}+{\frac {\partial G_{4}}{\partial \mathbf {p} }}\cdot {\dot {\mathbf {p} }}+{\frac {\partial G_{4}}{\partial \mathbf {P} }}\cdot {\dot {\mathbf {P} }}}
Since the new and old coordinates are each independent, the following 2N + 1 equations must hold
{\displaystyle {\begin{aligned}\mathbf {q} &=-{\frac {\partial G_{4}}{\partial \mathbf {p} }}\\\mathbf {Q} &={\frac {\partial G_{4}}{\partial \mathbf {P} }}\\K&=H+{\frac {\partial G_{4}}{\partial t}}\end{aligned}}}
These equations define the transformation (q, p) → (Q, P) as follows: The first set of N equations
{\textstyle \mathbf {q} =-{\frac {\partial G_{4}}{\partial \mathbf {p} }}}
defines relations between the new generalized momenta P and the old canonical coordinates (q, p). Ideally, one can invert these relations to obtain formulae for each Pk as a function of the old canonical coordinates. Substitution of these formulae for the P coordinates into the second set of N equations
{\textstyle \mathbf {Q} ={\frac {\partial G_{4}}{\partial \mathbf {P} }}}
yields analogous formulae for the new generalized coordinates Q in terms of the old canonical coordinates (q, p). We then invert both sets of formulae to obtain the old canonical coordinates (q, p) as functions of the new canonical coordinates (Q, P). Substitution of the inverted formulae into the final equation
{\textstyle K=H+{\frac {\partial G_{4}}{\partial t}}}
yields a formula for K as a function of the new canonical coordinates (Q, P).
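As a concrete special case (an illustrative choice, not from the text), G₄ = pP for one degree of freedom gives q = −∂G₄/∂p = −P and Q = ∂G₄/∂P = p, i.e. the exchange transformation Q = p, P = −q, which swaps the roles of coordinates and momenta. A minimal sanity check:

```python
# Sketch: the type-4 generating function G4 = p * P (an illustrative choice)
# yields q = -dG4/dp = -P and Q = dG4/dP = p, i.e. Q = p, P = -q.

def transform(q, p):
    """Exchange transformation generated by G4 = p * P."""
    return p, -q   # (Q, P)

# {Q, P} = (dQ/dq)(dP/dp) - (dQ/dp)(dP/dq) = 0*0 - 1*(-1) = 1,
# so the exchange map is canonical.
Q, P = transform(q=2.0, p=3.0)
print(Q, P)  # 3.0 -2.0
```

The sign in P = −q is forced: Q = p, P = q would give {Q, P} = −1 and would not be canonical.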
=== Limitations on the four types of generating functions ===
Considering
{\displaystyle G_{2}(\mathbf {q} ,\mathbf {P} ,t)}
as an example, using the generating function of the second kind: {\textstyle {p}_{i}={\frac {\partial G_{2}}{\partial {q}_{i}}}} and {\textstyle {Q}_{i}={\frac {\partial G_{2}}{\partial {P}_{i}}}}
, the first set of equations consisting of variables {\textstyle \mathbf {p} }, {\textstyle \mathbf {q} } and {\textstyle \mathbf {P} } has to be inverted to get {\textstyle \mathbf {P} (\mathbf {q} ,\mathbf {p} )}
. This process is possible when the matrix defined by
{\textstyle a_{ij}={\frac {\partial {p}_{i}(\mathbf {q} ,\mathbf {P} )}{\partial P_{j}}}}
is non-singular, by the inverse function theorem, and can be restated as the following relation.
{\displaystyle \left|{\begin{array}{l l l}{\displaystyle {\frac {\partial ^{2}G_{2}}{\partial P_{1}\partial q_{1}}}}&{\cdots }&{\displaystyle {\frac {\partial ^{2}G_{2}}{\partial P_{1}\partial q_{n}}}}\\{\quad \vdots }&{\ddots }&{\quad \vdots }\\{\displaystyle {\frac {\partial ^{2}G_{2}}{\partial P_{n}\partial q_{1}}}}&{\cdots }&{\displaystyle {\frac {\partial ^{2}G_{2}}{\partial P_{n}\partial q_{n}}}}\end{array}}\right|{\neq 0}}
Hence, restrictions are placed on generating functions to have the matrices:
{\textstyle \left[{\frac {\partial ^{2}G_{1}}{\partial Q_{j}\partial q_{i}}}\right]}, {\textstyle \left[{\frac {\partial ^{2}G_{2}}{\partial P_{j}\partial q_{i}}}\right]}, {\textstyle \left[{\frac {\partial ^{2}G_{3}}{\partial p_{j}\partial Q_{i}}}\right]} and {\textstyle \left[{\frac {\partial ^{2}G_{4}}{\partial p_{j}\partial P_{i}}}\right]}
, being non-singular. These conditions also correspond to local invertibility of the coordinates. From these restrictions, it can be stated that type 1 and type 4 generating functions always have a non-singular
{\textstyle \left[{\frac {\partial Q_{i}(\mathbf {q} ,\mathbf {p} )}{\partial p_{j}}}\right]}
matrix whereas type 2 and type 3 generating functions always have a non-singular
{\textstyle \left[{\frac {\partial P_{i}(\mathbf {q} ,\mathbf {p} )}{\partial p_{j}}}\right]}
matrix. Hence, the canonical transformations resulting from these four generating functions alone are not completely general.
=== Generalized use of generating functions ===
In other words, since (Q, P) and (q, p) are each sets of 2N independent functions, it follows that to have a generating function of the form
{\textstyle G_{1}(\mathbf {q} ,\mathbf {Q} ,t)} and {\displaystyle G_{4}(\mathbf {p} ,\mathbf {P} ,t)} or {\displaystyle G_{2}(\mathbf {q} ,\mathbf {P} ,t)} and {\displaystyle G_{3}(\mathbf {p} ,\mathbf {Q} ,t)}
, the corresponding Jacobian matrices
{\textstyle \left[{\frac {\partial Q_{i}}{\partial p_{j}}}\right]} and {\textstyle \left[{\frac {\partial P_{i}}{\partial p_{j}}}\right]}
are restricted to be non-singular, ensuring that the generating function is a function of 2N + 1 independent variables. However, as a feature of canonical transformations, it is always possible to choose 2N such independent functions from the sets (q, p) or (Q, P) to form a generating function representation of canonical transformations, including the time variable. Hence, it can be proven that every finite canonical transformation can be given in a closed but implicit form that is a variant of the given four simple forms.
== Canonical transformation conditions ==
=== Canonical transformation relations ===
From {\displaystyle K=H+{\frac {\partial G}{\partial t}}}, calculate {\textstyle {\frac {\partial (K-H)}{\partial P}}}:
{\displaystyle {\begin{aligned}\left({\frac {\partial (K-H)}{\partial P}}\right)_{Q,P,t}&={\frac {\partial K}{\partial P}}-{\frac {\partial H}{\partial p}}{\frac {\partial p}{\partial P}}-{\frac {\partial H}{\partial q}}{\frac {\partial q}{\partial P}}-{\frac {\partial H}{\partial t}}\left({\frac {\partial t}{\partial P}}\right)_{Q,P,t}\\&={\dot {Q}}+{\dot {p}}{\frac {\partial q}{\partial P}}-{\dot {q}}{\frac {\partial p}{\partial P}}\\&={\frac {\partial Q}{\partial t}}+{\frac {\partial Q}{\partial q}}\cdot {\dot {q}}+{\frac {\partial Q}{\partial p}}\cdot {\dot {p}}+{\dot {p}}{\frac {\partial q}{\partial P}}-{\dot {q}}{\frac {\partial p}{\partial P}}\\&={\dot {q}}\left({\frac {\partial Q}{\partial q}}-{\frac {\partial p}{\partial P}}\right)+{\dot {p}}\left({\frac {\partial q}{\partial P}}+{\frac {\partial Q}{\partial p}}\right)+{\frac {\partial Q}{\partial t}}\end{aligned}}}
Since the left hand side is
{\textstyle {\frac {\partial (K-H)}{\partial P}}={\frac {\partial }{\partial P}}\left({\frac {\partial G}{\partial t}}\right){\bigg |}_{Q,P,t}}
which is independent of the dynamics of the particles, equating coefficients of
{\textstyle {\dot {q}}}
and
{\textstyle {\dot {p}}}
to zero, canonical transformation rules are obtained. This step is equivalent to equating the left hand side as
{\textstyle {\frac {\partial (K-H)}{\partial P}}={\frac {\partial Q}{\partial t}}}.
Similarly:
{\displaystyle {\begin{aligned}\left({\frac {\partial (K-H)}{\partial Q}}\right)_{Q,P,t}&={\frac {\partial K}{\partial Q}}-{\frac {\partial H}{\partial p}}{\frac {\partial p}{\partial Q}}-{\frac {\partial H}{\partial q}}{\frac {\partial q}{\partial Q}}-{\frac {\partial H}{\partial t}}\left({\frac {\partial t}{\partial Q}}\right)_{Q,P,t}\\&=-{\dot {P}}+{\dot {p}}{\frac {\partial q}{\partial Q}}-{\dot {q}}{\frac {\partial p}{\partial Q}}\\&=-{\frac {\partial P}{\partial t}}-{\frac {\partial P}{\partial q}}\cdot {\dot {q}}-{\frac {\partial P}{\partial p}}\cdot {\dot {p}}+{\dot {p}}{\frac {\partial q}{\partial Q}}-{\dot {q}}{\frac {\partial p}{\partial Q}}\\&=-\left({\dot {q}}\left({\frac {\partial P}{\partial q}}+{\frac {\partial p}{\partial Q}}\right)+{\dot {p}}\left({\frac {\partial P}{\partial p}}-{\frac {\partial q}{\partial Q}}\right)+{\frac {\partial P}{\partial t}}\right)\end{aligned}}}
Similarly the canonical transformation rules are obtained by equating the left hand side as
{\textstyle {\frac {\partial (K-H)}{\partial Q}}=-{\frac {\partial P}{\partial t}}}.
The above two relations can be combined in matrix form as:
{\textstyle J\left(\nabla _{\varepsilon }{\frac {\partial G}{\partial t}}\right)={\frac {\partial \varepsilon }{\partial t}}}
(which will also retain the same form for extended canonical transformations) where the result
{\textstyle {\frac {\partial G}{\partial t}}=K-H}
, has been used. The canonical transformation relations are hence said to be equivalent to
{\textstyle J\left(\nabla _{\varepsilon }{\frac {\partial G}{\partial t}}\right)={\frac {\partial \varepsilon }{\partial t}}}
in this context.
The canonical transformation relations can now be restated to include time dependence:
{\displaystyle {\begin{aligned}\left({\frac {\partial Q_{m}}{\partial p_{n}}}\right)_{\mathbf {q} ,\mathbf {p} ,t}&=-\left({\frac {\partial q_{n}}{\partial P_{m}}}\right)_{\mathbf {Q} ,\mathbf {P} ,t}\\\left({\frac {\partial Q_{m}}{\partial q_{n}}}\right)_{\mathbf {q} ,\mathbf {p} ,t}&=\left({\frac {\partial p_{n}}{\partial P_{m}}}\right)_{\mathbf {Q} ,\mathbf {P} ,t}\end{aligned}}}
{\displaystyle {\begin{aligned}\left({\frac {\partial P_{m}}{\partial p_{n}}}\right)_{\mathbf {q} ,\mathbf {p} ,t}&=\left({\frac {\partial q_{n}}{\partial Q_{m}}}\right)_{\mathbf {Q} ,\mathbf {P} ,t}\\\left({\frac {\partial P_{m}}{\partial q_{n}}}\right)_{\mathbf {q} ,\mathbf {p} ,t}&=-\left({\frac {\partial p_{n}}{\partial Q_{m}}}\right)_{\mathbf {Q} ,\mathbf {P} ,t}\end{aligned}}}
Since
{\textstyle {\frac {\partial (K-H)}{\partial P}}={\frac {\partial Q}{\partial t}}}
and
{\textstyle {\frac {\partial (K-H)}{\partial Q}}=-{\frac {\partial P}{\partial t}}}
, if Q and P do not explicitly depend on time,
{\textstyle K=H+{\frac {\partial G}{\partial t}}(t)}
can be taken. The analysis of restricted canonical transformations is hence consistent with this generalization.
=== Symplectic condition ===
Applying transformation of co-ordinates formula for
{\displaystyle \nabla _{\eta }H=M^{T}\nabla _{\varepsilon }H}
, in Hamiltonian's equations gives:
{\displaystyle {\dot {\eta }}=J\nabla _{\eta }H=J(M^{T}\nabla _{\varepsilon }H)}
Similarly for
{\textstyle {\dot {\varepsilon }}}
:
{\displaystyle {\dot {\varepsilon }}=M{\dot {\eta }}+{\frac {\partial \varepsilon }{\partial t}}=MJM^{T}\nabla _{\varepsilon }H+{\frac {\partial \varepsilon }{\partial t}}}
or:
{\displaystyle {\dot {\varepsilon }}=J\nabla _{\varepsilon }K=J\nabla _{\varepsilon }H+J\nabla _{\varepsilon }\left({\frac {\partial G}{\partial t}}\right)}
where the last terms of each equation cancel due to the
{\textstyle J\left(\nabla _{\varepsilon }{\frac {\partial G}{\partial t}}\right)={\frac {\partial \varepsilon }{\partial t}}}
condition from canonical transformations. This leaves the symplectic relation:
{\textstyle MJM^{T}=J}
which is also equivalent to the condition
{\textstyle M^{T}JM=J}
. It follows from the above two equations that the symplectic condition implies the equation
{\textstyle J\left(\nabla _{\varepsilon }{\frac {\partial G}{\partial t}}\right)={\frac {\partial \varepsilon }{\partial t}}}
, from which the indirect conditions can be recovered. Thus, symplectic conditions and indirect conditions can be said to be equivalent in the context of using generating functions.
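For a single degree of freedom the symplectic condition can be checked by direct matrix arithmetic. A sketch using the exchange transformation Q = p, P = −q as a sample (its Jacobian M happens to coincide with J = [[0, 1], [−1, 0]]):

```python
# Sketch: check the symplectic condition M J M^T = J for n = 1, where
# J = [[0, 1], [-1, 0]] and M is the Jacobian d(Q, P)/d(q, p).
# Sample M: the exchange transformation Q = p, P = -q.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

J = [[0, 1], [-1, 0]]
M = [[0, 1], [-1, 0]]   # dQ/dq = 0, dQ/dp = 1, dP/dq = -1, dP/dp = 0

print(matmul(matmul(M, J), transpose(M)) == J)  # True: M is symplectic
```

By contrast, a pure stretch such as M = diag(2, 1) (rescaling q alone) fails the condition, which is exactly why a point transformation must rescale the conjugate momentum inversely.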
=== Invariance of the Poisson and Lagrange brackets ===
Since
{\textstyle {\mathcal {P}}_{ij}(\varepsilon )=\{\varepsilon _{i},\varepsilon _{j}\}_{\eta }=(MJM^{T})_{ij}=J_{ij}}
and
{\textstyle {\mathcal {L}}_{ij}(\eta )=[\eta _{i},\eta _{j}]_{\varepsilon }=(M^{T}JM)_{ij}=J_{ij}}
where the symplectic condition is used in the last equalities. Using
{\textstyle \{\varepsilon _{i},\varepsilon _{j}\}_{\varepsilon }=[\eta _{i},\eta _{j}]_{\eta }=J_{ij}}
, the equalities
{\textstyle \{\varepsilon _{i},\varepsilon _{j}\}_{\eta }=\{\varepsilon _{i},\varepsilon _{j}\}_{\varepsilon }}
and
{\textstyle [\eta _{i},\eta _{j}]_{\varepsilon }=[\eta _{i},\eta _{j}]_{\eta }}
are obtained which imply the invariance of Poisson and Lagrange brackets.
== Extended canonical transformation ==
=== Canonical transformation relations ===
By solving for:
{\displaystyle \lambda \left[\mathbf {p} \cdot {\dot {\mathbf {q} }}-H(\mathbf {q} ,\mathbf {p} ,t)\right]=\mathbf {P} \cdot {\dot {\mathbf {Q} }}-K(\mathbf {Q} ,\mathbf {P} ,t)+{\frac {dG}{dt}}}
with various forms of generating function, the relation between K and H goes as
{\textstyle {\frac {\partial G}{\partial t}}=K-\lambda H}
instead, which also applies in the
{\textstyle \lambda =1}
case.
All results presented below can also be obtained by replacing
{\textstyle q\rightarrow {\sqrt {\lambda }}q}, {\textstyle p\rightarrow {\sqrt {\lambda }}p} and {\textstyle H\rightarrow {\lambda }H}
from known solutions, since it retains the form of Hamilton's equations. The extended canonical transformations are hence said to be the result of a canonical transformation (
{\textstyle \lambda =1}
) and a trivial canonical transformation (
{\textstyle \lambda \neq 1}
) which has
{\textstyle MJM^{T}=\lambda J}
(for the given example,
{\textstyle M={\sqrt {\lambda }}I}
which satisfies the condition).
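The trivial scaling can be verified directly: with M = √λ·I, the product MJMᵀ equals λJ. A small numerical sketch (λ = 4 is an arbitrary sample value):

```python
import math

# Sketch: for the trivial extended transformation Q = sqrt(lam)*q,
# P = sqrt(lam)*p, the Jacobian is M = sqrt(lam)*I and M J M^T = lam * J.

lam = 4.0                      # arbitrary sample value of lambda
s = math.sqrt(lam)
J = [[0.0, 1.0], [-1.0, 0.0]]
M = [[s, 0.0], [0.0, s]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

MT = [list(r) for r in zip(*M)]
result = matmul(matmul(M, J), MT)
expected = [[lam * x for x in row] for row in J]
print(result == expected)  # True: M J M^T = lambda * J
```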
Using the same steps as in the previous generalization, with
{\textstyle {\frac {\partial G}{\partial t}}=K-\lambda H}
in the general case, and retaining the equation
{\textstyle J\left(\nabla _{\varepsilon }{\frac {\partial G}{\partial t}}\right)={\frac {\partial \varepsilon }{\partial t}}}
, extended canonical transformation partial differential relations are obtained as:
{\displaystyle {\begin{aligned}\left({\frac {\partial Q_{m}}{\partial p_{n}}}\right)_{\mathbf {q} ,\mathbf {p} ,t}&=-\lambda \left({\frac {\partial q_{n}}{\partial P_{m}}}\right)_{\mathbf {Q} ,\mathbf {P} ,t}\\\left({\frac {\partial Q_{m}}{\partial q_{n}}}\right)_{\mathbf {q} ,\mathbf {p} ,t}&=\lambda \left({\frac {\partial p_{n}}{\partial P_{m}}}\right)_{\mathbf {Q} ,\mathbf {P} ,t}\end{aligned}}}
{\displaystyle {\begin{aligned}\left({\frac {\partial P_{m}}{\partial p_{n}}}\right)_{\mathbf {q} ,\mathbf {p} ,t}&=\lambda \left({\frac {\partial q_{n}}{\partial Q_{m}}}\right)_{\mathbf {Q} ,\mathbf {P} ,t}\\\left({\frac {\partial P_{m}}{\partial q_{n}}}\right)_{\mathbf {q} ,\mathbf {p} ,t}&=-\lambda \left({\frac {\partial p_{n}}{\partial Q_{m}}}\right)_{\mathbf {Q} ,\mathbf {P} ,t}\end{aligned}}}
=== Symplectic condition ===
Following the same steps to derive the symplectic conditions, as:
{\displaystyle {\dot {\eta }}=J\nabla _{\eta }H=J(M^{T}\nabla _{\varepsilon }H)}
and
{\displaystyle {\dot {\varepsilon }}=M{\dot {\eta }}+{\frac {\partial \varepsilon }{\partial t}}=MJM^{T}\nabla _{\varepsilon }H+{\frac {\partial \varepsilon }{\partial t}}}
where using
{\textstyle {\frac {\partial G}{\partial t}}=K-\lambda H}
instead gives:
{\displaystyle {\dot {\varepsilon }}=J\nabla _{\varepsilon }K=\lambda J\nabla _{\varepsilon }H+J\nabla _{\varepsilon }\left({\frac {\partial G}{\partial t}}\right)}
The last terms of each equation cancel. Hence the condition for an extended canonical transformation instead becomes:
{\textstyle MJM^{T}=\lambda J}.
=== Poisson and Lagrange brackets ===
The Poisson brackets are changed as follows:
{\displaystyle \{u,v\}_{\eta }=(\nabla _{\eta }u)^{T}J(\nabla _{\eta }v)=(M^{T}\nabla _{\varepsilon }u)^{T}J(M^{T}\nabla _{\varepsilon }v)=(\nabla _{\varepsilon }u)^{T}MJM^{T}(\nabla _{\varepsilon }v)=\lambda (\nabla _{\varepsilon }u)^{T}J(\nabla _{\varepsilon }v)=\lambda \{u,v\}_{\varepsilon }}
whereas, the Lagrange brackets are changed as:
{\displaystyle [u,v]_{\varepsilon }=(\partial _{u}\varepsilon )^{T}\,J\,(\partial _{v}\varepsilon )=(M\,\partial _{u}\eta )^{T}\,J\,(M\,\partial _{v}\eta )=(\partial _{u}\eta )^{T}\,M^{T}JM\,(\partial _{v}\eta )=\lambda (\partial _{u}\eta )^{T}\,J\,(\partial _{v}\eta )=\lambda [u,v]_{\eta }}
Hence, the Poisson bracket scales by the inverse of {\textstyle \lambda } whereas the Lagrange bracket scales by a factor of {\textstyle \lambda }.
== Infinitesimal canonical transformation ==
Consider the canonical transformation that depends on a continuous parameter
{\displaystyle \alpha }
, as follows:
{\displaystyle {\begin{aligned}&Q(q,p,t;\alpha )\quad \quad \quad &Q(q,p,t;0)=q\\&P(q,p,t;\alpha )\quad \quad {\text{with}}\quad &P(q,p,t;0)=p\\\end{aligned}}}
For infinitesimal values of
{\displaystyle \alpha }
, the corresponding transformations are called infinitesimal canonical transformations, also known as differential canonical transformations.
=== Explicit construction ===
Consider the following generating function:
{\displaystyle G_{2}(q,P,t)=qP+\alpha G(q,P,t)}
Since for
{\displaystyle \alpha =0}
,
{\displaystyle G_{2}=qP}
has the resulting canonical transformation,
{\displaystyle Q=q}
and
{\displaystyle P=p}
, this type of generating function can be used for infinitesimal canonical transformation by restricting
{\displaystyle \alpha }
to an infinitesimal value.
From the conditions on generating functions of the second type:
{\displaystyle {\begin{aligned}{p}&={\frac {\partial G_{2}}{\partial {q}}}=P+\alpha {\frac {\partial G}{\partial {q}}}(q,P,t)\\{Q}&={\frac {\partial G_{2}}{\partial {P}}}=q+\alpha {\frac {\partial G}{\partial {P}}}(q,P,t)\\\end{aligned}}}
Since
{\displaystyle P=P(q,p,t;\alpha )}
, changing the variables of the function
{\displaystyle G}
to
{\displaystyle G(q,p,t)}
and neglecting terms of higher order of
{\displaystyle \alpha }
, gives:
{\displaystyle {\begin{aligned}{p}&=P+\alpha {\frac {\partial G}{\partial {q}}}(q,p,t)\\{Q}&=q+\alpha {\frac {\partial G}{\partial p}}(q,p,t)\\\end{aligned}}}
Infinitesimal canonical transformations can also be derived using the matrix form of the symplectic condition. The function
{\displaystyle G(q,p,t)}
is very significant in infinitesimal canonical transformations and is referred to as the generator of infinitesimal canonical transformation.
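To make the construction concrete, one can check numerically that the map Q = q + α ∂G/∂p, P = p − α ∂G/∂q is canonical up to terms of order α². The generator G = q²p below is an arbitrary illustrative choice:

```python
# Sketch: the infinitesimal map Q = q + a*dG/dp, P = p - a*dG/dq is
# canonical only up to O(a^2).  Illustrative generator: G(q, p) = q^2 * p.

def ict(q, p, a):
    dG_dq = 2.0 * q * p          # partial G / partial q
    dG_dp = q * q                # partial G / partial p
    return q + a * dG_dp, p - a * dG_dq

def bracket_QP(q, p, a, h=1e-6):
    """Finite-difference Poisson bracket {Q, P} with respect to (q, p)."""
    dQdq = (ict(q + h, p, a)[0] - ict(q - h, p, a)[0]) / (2 * h)
    dQdp = (ict(q, p + h, a)[0] - ict(q, p - h, a)[0]) / (2 * h)
    dPdq = (ict(q + h, p, a)[1] - ict(q - h, p, a)[1]) / (2 * h)
    dPdp = (ict(q, p + h, a)[1] - ict(q, p - h, a)[1]) / (2 * h)
    return dQdq * dPdp - dQdp * dPdq

# For this generator {Q, P} = (1 + 2aq)(1 - 2aq) = 1 - 4 a^2 q^2,
# so the deviation from a canonical map shrinks quadratically with a.
err = abs(bracket_QP(1.0, 0.5, 1e-3) - 1.0)
print(err < 1e-5)
```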
=== Active and passive transformations ===
In the active view of transformations, the coordinate system is changed without the physical system changing, whereas in the passive view of transformation, the coordinate system is retained and the physical system is said to undergo transformations.
==== Active view of transformation ====
Thus, using the relations from infinitesimal canonical transformations, the change in the system states under active view of the canonical transformation is said to be:
{\displaystyle {\begin{aligned}&\delta q=\alpha {\frac {\partial G}{\partial p}}(q,p,t)\quad {\text{and}}\quad \delta p=-\alpha {\frac {\partial G}{\partial q}}(q,p,t),\\\end{aligned}}}
or as
{\displaystyle \delta \eta =\alpha J\nabla _{\eta }G}
in matrix form.
For any function
{\displaystyle u(\eta )}
, it changes under the active view of the transformation according to:
{\displaystyle \delta u=u(\eta +\delta \eta )-u(\eta )=(\nabla _{\eta }u)^{T}\delta \eta =\alpha (\nabla _{\eta }u)^{T}J(\nabla _{\eta }G)=\alpha \{u,G\}.}
==== Passive view of transformation ====
Considering the change of Hamiltonians in the passive view, i.e., for a fixed point,
{\displaystyle K(Q=q_{0},P=p_{0},t)-H(q=q_{0},p=p_{0},t)=\left(H(q_{0}',p_{0}',t)+{\frac {\partial G_{2}}{\partial t}}\right)-H(q_{0},p_{0},t)=-\delta H+\alpha {\frac {\partial G}{\partial t}}=\alpha \left(\{G,H\}+{\frac {\partial G}{\partial t}}\right)=\alpha {\frac {dG}{dt}}}
where
{\textstyle (q=q_{0}',p=p_{0}')}
are mapped to the point,
{\textstyle (Q=q_{0},P=p_{0})}
by the infinitesimal canonical transformation, and a similar change of variables for
{\displaystyle G(q,P,t)}
to
{\displaystyle G(q,p,t)}
is considered up to first order of
{\displaystyle \alpha }
. Hence, if the Hamiltonian is invariant under infinitesimal canonical transformations, its generator is a constant of motion.
=== Generators of dynamical symmetry transformations ===
Consider the transformation where the change of coordinates also depends on the generalized velocities.
{\displaystyle {\begin{aligned}q^{r}\to q^{r}+\delta q^{r}\\\delta q^{r}=\epsilon \phi ^{r}(q,{\dot {q}},t)\\\end{aligned}}}
If the above is a dynamical symmetry, then the Lagrangian changes by:
{\displaystyle \delta L=\epsilon {\frac {d}{dt}}F(q,{\dot {q}},t)}
and the new Lagrangian is said to be dynamically equivalent to the old Lagrangian, as it ensures that the resulting equations of motion are the same. The changes in the generalized velocity and momentum terms can be derived as:
{\displaystyle {\begin{aligned}p={\frac {\partial L}{\partial {\dot {q}}}},\quad &{\dot {q}}={\frac {dq}{dt}}\\\delta p_{r}={\frac {\partial ^{2}L}{\partial q^{s}\partial {\dot {q}}^{r}}}\delta q^{s}+{\frac {\partial ^{2}L}{\partial {\dot {q}}^{s}\partial {\dot {q}}^{r}}}\delta {\dot {q}}^{s},\quad &\delta {\dot {q}}^{r}=\epsilon {\frac {\partial \phi ^{r}}{\partial q^{s}}}{\dot {q}}^{s}+\epsilon {\frac {\partial \phi ^{r}}{\partial {\dot {q}}^{s}}}{\ddot {q}}^{s}+\epsilon {\frac {\partial \phi ^{r}}{\partial t}}\\\end{aligned}}}
==== Generator of transformation ====
Using the change in Lagrangian property of a dynamical symmetry:
{\displaystyle {\frac {d}{dt}}F={\frac {\partial F}{\partial q^{r}}}{\dot {q}}^{r}+{\frac {\partial F}{\partial {\dot {q}}^{r}}}{\ddot {q}}^{r}+{\frac {\partial F}{\partial t}}={\frac {\delta L}{\epsilon }}=\left({\frac {\partial L}{\partial q^{r}}}\phi ^{r}+{\frac {\partial L}{\partial {\dot {q}}^{r}}}{\frac {\partial \phi ^{r}}{\partial t}}\right)+p_{s}{\frac {\partial \phi ^{s}}{\partial q^{r}}}{\dot {q}}^{r}+p_{s}{\frac {\partial \phi ^{s}}{\partial {\dot {q}}^{r}}}{\ddot {q}}^{r}}
Since the
{\displaystyle {\ddot {q}}}
terms appear only once on either side, their coefficients must be equal for this to be true, giving the relation:
{\textstyle p_{s}{\frac {\partial \phi ^{s}}{\partial {\dot {q}}^{r}}}={\frac {\partial F}{\partial {\dot {q}}^{r}}}}
using which, it can be shown that
{\displaystyle \{q^{r},\epsilon (p_{s}\phi ^{s}-F)\}=\delta q^{r},\quad \{p_{r},\epsilon (p_{s}\phi ^{s}-F)\}=\delta p_{r}+\epsilon \left({\frac {\partial L}{\partial q^{s}}}-{\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}^{s}}}\right){\frac {\partial \phi ^{s}}{\partial {\dot {q}}^{r}}}}
Hence, the term
{\displaystyle p\phi -F}
generates the canonical dynamical symmetry transformation if either the Euler–Lagrange relation gives zero, or if
{\displaystyle {\frac {\partial \phi _{s}}{\partial {\dot {q}}^{r}}}=0\,\forall s,r}
which is an infinitesimal point transformation. Note that in the point transformation condition, the quantity generates the transformation regardless of whether the Euler–Lagrange equations are satisfied, and since it does not depend on the dynamics of the problem, the relation is said to be purely kinematic.
==== Noether Invariant ====
Using the Euler–Lagrange relation for the provided Lagrangian, the invariants of motion can be derived as:
{\displaystyle \delta L-\epsilon {\frac {d}{dt}}F(q,{\dot {q}},t)=\epsilon \phi {\cancelto {=0}{\left({\frac {\partial }{\partial q}}-{\frac {d}{dt}}{\frac {\partial }{\partial {\dot {q}}}}\right)L}}+\epsilon {\frac {d}{dt}}\left(\phi {\frac {\partial }{\partial {\dot {q}}}}L-F\right)=\epsilon {\frac {d}{dt}}\left(\phi {\frac {\partial }{\partial {\dot {q}}}}L-F\right)=0}
Hence
{\displaystyle \left(\phi {\frac {\partial }{\partial {\dot {q}}}}L-F\right)=p\phi -F}
is a constant of motion. Hence, the derived Noether invariant also generates the same symmetry transformation as shown previously.
=== Examples of ICT ===
==== Time evolution ====
Taking
{\displaystyle G(q,p,t)=H(q,p,t)}
and
{\displaystyle \alpha =dt}
, then
{\displaystyle \delta \eta =(J\nabla _{\eta }H)dt={\dot {\eta }}dt=d\eta }
. Thus the continuous application of such a transformation maps the coordinates
{\displaystyle \eta (\tau )}
to
{\displaystyle \eta (\tau +t)}
. Hence if the Hamiltonian is time translation invariant, i.e. does not have explicit time dependence, its value is conserved for the motion.
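As a minimal sketch of this, take the harmonic oscillator H = (q² + p²)/2 (unit mass and frequency, an assumption for illustration). The ICT δη = (J∇H) dt reads dq = p dt, dp = −q dt, and iterating it many times approximates the exact time evolution, which for this Hamiltonian is a rotation in phase space:

```python
import math

# Sketch: the Hamiltonian as the generator of time evolution.
# For H = (q^2 + p^2)/2 the ICT delta(eta) = J grad H dt reads
#   dq = +p dt,  dp = -q dt,
# and repeated application approaches the exact phase-space rotation
#   q(t) = q0 cos t + p0 sin t,  p(t) = p0 cos t - q0 sin t.

q, p = 1.0, 0.0
t, n = 1.0, 200000
dt = t / n
for _ in range(n):
    q, p = q + p * dt, p - q * dt   # one infinitesimal canonical step

q_exact = math.cos(t)               # q0 = 1, p0 = 0
p_exact = -math.sin(t)
print(abs(q - q_exact) < 1e-4, abs(p - p_exact) < 1e-4)
```

The first-order stepping incurs an O(dt) error per unit time, so the agreement improves as the step is refined, exactly as "continuous application" suggests.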
==== Translation ====
Taking {\displaystyle G(q,p,t)=p_{k}} gives {\displaystyle \delta p_{i}=0} and {\displaystyle \delta q_{i}=\alpha \delta _{ik}}. Hence, the canonical momentum generates a shift in the corresponding generalized coordinate and, if the Hamiltonian is translation invariant, the momentum is a constant of motion.
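The generator G = p_k can be written out in the matrix form δη = α J∇G used earlier. The sketch below assumes two degrees of freedom with η = (q₁, q₂, p₁, p₂) and k = 1, and confirms that only q₁ is shifted:

```python
# Sketch of the translation ICT in matrix form, delta(eta) = alpha * J grad G,
# for two degrees of freedom with eta = (q1, q2, p1, p2) and generator G = p1.

def grad_G(eta):
    # G(q, p) = p1, so the gradient in (q1, q2, p1, p2) is (0, 0, 1, 0)
    return [0.0, 0.0, 1.0, 0.0]

def J_times(v):
    # J = [[0, I], [-I, 0]] acting on v = (vq1, vq2, vp1, vp2)
    return [v[2], v[3], -v[0], -v[1]]

eta = [1.0, 2.0, 3.0, 4.0]
alpha = 0.1
delta = [alpha * x for x in J_times(grad_G(eta))]
new_eta = [a + d for a, d in zip(eta, delta)]
print(new_eta)   # only q1 shifts by alpha: [1.1, 2.0, 3.0, 4.0]
```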
==== Rotation ====
Consider an orthogonal system for an N-particle system:
{\displaystyle {\begin{array}{l}{\mathbf {q} =\left(x_{1},y_{1},z_{1},\ldots ,x_{n},y_{n},z_{n}\right),}\\{\mathbf {p} =\left(p_{1x},p_{1y},p_{1z},\ldots ,p_{nx},p_{ny},p_{nz}\right).}\end{array}}}
Choosing the generator to be
{\displaystyle G=L_{z}=\sum _{i=1}^{n}\left(x_{i}p_{iy}-y_{i}p_{ix}\right)}
and the infinitesimal value {\displaystyle \alpha =\delta \phi }, the change in the coordinates is given for x by:
{\displaystyle {\begin{array}{c}\delta x_{i}=\{x_{i},G\}\delta \phi =\displaystyle \sum _{j}\{x_{i},x_{j}p_{jy}-y_{j}p_{jx}\}\delta \phi =\displaystyle \sum _{j}(\underbrace {\{x_{i},x_{j}p_{jy}\}} _{=0}-\{x_{i},y_{j}p_{jx}\})\delta \phi \\{=\displaystyle -\sum _{j}y_{j}\underbrace {\{x_{i},p_{jx}\}} _{=\delta _{ij}}\delta \phi =-y_{i}\delta \phi }\end{array}}}
and similarly for y:
{\displaystyle {\begin{array}{c}\delta y_{i}=\{y_{i},G\}\delta \phi =\displaystyle \sum _{j}\{y_{i},x_{j}p_{jy}-y_{j}p_{jx}\}\delta \phi =\displaystyle \sum _{j}(\{y_{i},x_{j}p_{jy}\}-\underbrace {\{y_{i},y_{j}p_{jx}\}} _{=0})\delta \phi \\{=\displaystyle \sum _{j}x_{j}\underbrace {\{y_{i},p_{jy}\}} _{=\delta _{ij}}\delta \phi =x_{i}\delta \phi \,,}\end{array}}}
whereas the z component of all particles is unchanged:
{\textstyle \delta z_{i}=\left\{z_{i},G\right\}\delta \phi =\sum _{j}\left\{z_{i},x_{j}p_{jy}-y_{j}p_{jx}\right\}\delta \phi =0}.
These transformations correspond, to first order, to a rotation about the z axis by angle {\displaystyle \delta \phi }. Hence, repeated application of the infinitesimal canonical transformation generates a rotation of the system of particles about the z axis. If the Hamiltonian is invariant under rotation about the z axis, the generator, the component of angular momentum along the axis of rotation, is an invariant of motion.
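The first-order transformation derived above, δx = −y δφ, δy = x δφ, δz = 0, can be iterated numerically to build up a finite rotation. A minimal sketch (single particle; rotation angle π/2 split into many infinitesimal steps) checks the result against the exact rotation:

```python
import math

# Sketch: repeated application of the infinitesimal rotation generated by
# G = L_z, using delta(x) = -y dphi, delta(y) = x dphi, delta(z) = 0,
# approximates a finite rotation about the z axis.

x, y, z = 1.0, 0.0, 0.5
phi, n = math.pi / 2, 100000
dphi = phi / n
for _ in range(n):
    x, y = x - y * dphi, y + x * dphi   # z is unchanged by the generator

# exact rotation of (1, 0) by pi/2 is (0, 1); z stays fixed
print(abs(x) < 1e-4, abs(y - 1.0) < 1e-4, z == 0.5)
```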
== One-parameter subgroup of canonical transformations ==
Allowing the values of {\displaystyle \alpha } to take a continuous range of values in
{\displaystyle {\begin{aligned}&Q(q,p,t;\alpha )\quad \quad \quad &Q(q,p,t;0)=q\\&P(q,p,t;\alpha )\quad \quad {\text{with}}\quad &P(q,p,t;0)=p\\\end{aligned}}}
which can be expressed as {\displaystyle \epsilon ^{\mu }(\eta ,t;\alpha )} where {\displaystyle \epsilon ^{\mu }(\eta ,t;0)=\eta ^{\mu }}.
A one-parameter subgroup of canonical transformations is one whose generator produces coordinates satisfying
{\displaystyle \epsilon ^{\mu }(\epsilon (\eta ,t;\alpha _{1});\alpha _{2})=\epsilon ^{\mu }(\eta ,t;\alpha _{1}+\alpha _{2})}
i.e. the composition of two canonical transformations with parameters {\displaystyle \alpha _{1}} and {\displaystyle \alpha _{2}} is the same as a single canonical transformation with parameter {\displaystyle \alpha _{1}+\alpha _{2}}.
The condition on transformations of the one-parameter subgroup kind can be expressed equivalently as a differential equation:
{\displaystyle \delta \epsilon ^{\mu }(\eta ,t;\alpha )=\delta \alpha \{\epsilon ^{\mu },G\}=\delta \alpha J^{\mu \nu }{\frac {\partial G}{\partial \epsilon ^{\nu }}}(\epsilon (\eta ,t;\alpha ),t)\implies {\frac {d\epsilon ^{\mu }(\eta ,t;\alpha )}{d\alpha }}=J^{\mu \nu }{\frac {\partial G}{\partial \epsilon ^{\nu }}}(\epsilon (\eta ,t;\alpha ),t)}
for all {\displaystyle \eta }, given that the generator has no explicit dependence on {\displaystyle \alpha }
. The condition {\displaystyle \epsilon ^{\mu }(\epsilon (\eta ,t;\alpha _{1});\alpha _{2})=\epsilon ^{\mu }(\eta ,t;\alpha _{1}+\alpha _{2})} can be recovered from this differential equation: it is trivially satisfied when {\displaystyle \alpha _{2}=0}, which serves as the initial value, and both sides obey differential equations of the same form, so the relation follows from the uniqueness of solutions with given initial values. Hence one-parameter subgroups of canonical transformations are extensions of infinitesimal canonical transformations to finite values of {\displaystyle \alpha }, using the same functional form of the generator independent of the parameter {\displaystyle \alpha }.
As a consequence of the generator having no explicit dependence on {\displaystyle \alpha }, the generator is also implicitly independent of {\displaystyle \alpha }:
{\displaystyle {\frac {dG(\epsilon (\eta ;\alpha ),t)}{d\alpha }}=\{G,G\}=0,\,\forall \alpha \implies G(\epsilon (\eta ;\alpha ),t)=G(\eta ,t)}
This can be used to express the differential equation as:
{\displaystyle {\frac {d\epsilon ^{\mu }(\eta ,t;\alpha )}{d\alpha }}=\{\epsilon ^{\mu }(\eta ,t;\alpha ),G(\eta ,t)\}_{\eta }=:-{\tilde {G}}\epsilon ^{\mu }}
where the linear differential operator is defined as {\displaystyle {\tilde {G}}:=(\nabla _{\eta }G)^{T}J\nabla _{\eta }}.
=== Active view of transformation ===
Upon iteratively solving the differential equation, the solution follows as:
{\displaystyle \epsilon (\eta ,t;\alpha )=\eta +\alpha \{\eta ,G(\eta ,t)\}+{\frac {1}{2!}}\alpha ^{2}\{\{\eta ,G(\eta ,t)\},G(\eta ,t)\}+\cdots =e^{-\alpha {\tilde {G}}}\eta }
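For the rotation generator of the earlier example, this Lie series can be summed directly, since the nested Poisson brackets of (x, y) with L_z act as a fixed linear map {(x, y), L_z} = (−y, x). The sketch below sums the series numerically and compares it with the closed-form rotation by α:

```python
import math

# Sketch: the Lie series eta + alpha{eta, G} + alpha^2/2! {{eta, G}, G} + ...
# for the rotation generator G = L_z. The nested brackets of (x, y) act as
# the linear map A(x, y) = (-y, x), so the series is the matrix exponential
# of alpha * A, i.e. a rotation by alpha.

def bracket_with_G(v):
    # {(x, y), L_z} = (-y, x), from the canonical brackets computed earlier
    return (-v[1], v[0])

def lie_series(v, alpha, terms=30):
    out = [0.0, 0.0]
    coeff, cur = 1.0, list(v)
    for k in range(terms):
        out[0] += coeff * cur[0]
        out[1] += coeff * cur[1]
        cur = list(bracket_with_G(cur))
        coeff *= alpha / (k + 1)      # coeff_k = alpha^k / k!
    return out

alpha = 0.7
x, y = lie_series((1.0, 0.0), alpha)
# exact finite rotation of (1, 0) by alpha is (cos(alpha), sin(alpha))
print(abs(x - math.cos(alpha)) < 1e-12, abs(y - math.sin(alpha)) < 1e-12)
```

Truncating the series at 30 terms is far more than enough here; the remainder falls off factorially in the number of terms.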
The change in function values obeys
{\displaystyle {\frac {df(\epsilon (\eta ;\alpha ),t)}{d\alpha }}=\{f(\epsilon (\eta ;\alpha ),t),G(\eta ,t)\}_{\eta }=:-{\tilde {G}}f(\epsilon (\eta ;\alpha ),t)}
By integrating in repeated steps and using {\displaystyle \epsilon (\eta ,t;0)=\eta }, we similarly get
{\displaystyle f(e^{-\alpha {\tilde {G}}}\eta ,t)=f(\epsilon (\eta ;\alpha ),t)=f(\eta ,t)+\alpha \{f(\eta ,t),G(\eta ,t)\}+{\frac {1}{2!}}\alpha ^{2}\{\{f(\eta ,t),G(\eta ,t)\},G(\eta ,t)\}+\cdots =e^{-\alpha {\tilde {G}}}f(\eta ,t)}
=== Passive view of transformation ===
A change in a function can be invoked while preserving its values on the same physical states in phase space:
{\displaystyle f(\epsilon ,t)=f(\epsilon (\eta ;\alpha ),t)=f'(\epsilon (\eta ;\alpha +\delta \alpha ),t)=f'(\epsilon ',t)}
which can be expressed, up to first order, as:
{\displaystyle \delta 'f=f'(\epsilon )-f(\epsilon )=f'(\epsilon )-f'(\epsilon ')\approx f(\epsilon (\eta ;\alpha -\delta \alpha ))-f(\epsilon (\eta ;\alpha ))=-\delta \alpha \{f,G\}}
Including the change in the function as an explicit dependence on the parameter of transformation {\displaystyle \alpha }, it can be expressed as {\displaystyle f(\epsilon ,t;\alpha )}, explicitly dependent on {\displaystyle \alpha } such that
{\displaystyle {\frac {\partial f(\epsilon ,t;\alpha )}{\partial \alpha }}=-\{f,G\}}
which indicates that the function transforms oppositely to the coordinates, preserving a well-defined mapping from a physical point in phase space to its scalar value. It is also possible for functions to transform without preserving their values on the same physical states in phase space; for example, the Hamiltonian, whose explicit dependence on the canonical transformation can differ from the above form, restated from its previous derivation as
{\displaystyle {\frac {\partial H(\epsilon ,t;\alpha )}{\partial \alpha }}={\frac {dG}{dt}}}
which is similar to the previous relation but also accounts for any explicit time dependence of the generator. Hence, if the Hamiltonian is invariant in the passive view under infinitesimal canonical transformations, its generator is a constant of motion.
== Motion as canonical transformation ==
Motion itself (or, equivalently, a shift in the time origin) is a canonical transformation. If
{\displaystyle \mathbf {Q} (t)\equiv \mathbf {q} (t+\tau )} and {\displaystyle \mathbf {P} (t)\equiv \mathbf {p} (t+\tau )}, then Hamilton's principle is automatically satisfied:
{\displaystyle \delta \int _{t_{1}}^{t_{2}}\left[\mathbf {P} \cdot {\dot {\mathbf {Q} }}-K(\mathbf {Q} ,\mathbf {P} ,t)\right]dt=\delta \int _{t_{1}+\tau }^{t_{2}+\tau }\left[\mathbf {p} \cdot {\dot {\mathbf {q} }}-H(\mathbf {q} ,\mathbf {p} ,t+\tau )\right]dt=0}
since a valid trajectory {\displaystyle (\mathbf {q} (t),\mathbf {p} (t))} always satisfies Hamilton's principle, regardless of the endpoints.
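This can be checked numerically: the Jacobian M of the time-τ flow map must satisfy MᵀJM = J, which for one degree of freedom reduces to det M = 1. The sketch below assumes a pendulum, H = p²/2 − cos q (unit mass, length, and gravity), integrates with RK4, and estimates the Jacobian by central finite differences:

```python
import math

# Numerical sketch: the time-tau flow map of a pendulum, H = p^2/2 - cos(q),
# is a canonical transformation, so its Jacobian M should satisfy M^T J M = J.
# The Jacobian is estimated by finite differences of RK4-integrated orbits.

def flow(q, p, tau=1.0, n=2000):
    dt = tau / n
    for _ in range(n):
        # classic RK4 for q' = p, p' = -sin(q)
        k1 = (p, -math.sin(q))
        k2 = (p + dt/2*k1[1], -math.sin(q + dt/2*k1[0]))
        k3 = (p + dt/2*k2[1], -math.sin(q + dt/2*k2[0]))
        k4 = (p + dt*k3[1], -math.sin(q + dt*k3[0]))
        q += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        p += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return q, p

q0, p0, h = 0.3, 0.8, 1e-5
Qq = (flow(q0 + h, p0)[0] - flow(q0 - h, p0)[0]) / (2*h)
Qp = (flow(q0, p0 + h)[0] - flow(q0, p0 - h)[0]) / (2*h)
Pq = (flow(q0 + h, p0)[1] - flow(q0 - h, p0)[1]) / (2*h)
Pp = (flow(q0, p0 + h)[1] - flow(q0, p0 - h)[1]) / (2*h)

# For one degree of freedom, M^T J M = J reduces to det(M) = 1.
det_M = Qq * Pp - Qp * Pq
print(abs(det_M - 1.0) < 1e-6)
```

The residual is dominated by the finite-difference and integrator errors, not by any failure of symplecticity of the exact flow.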
== Examples ==
The translation {\displaystyle \mathbf {Q} (\mathbf {q} ,\mathbf {p} )=\mathbf {q} +\mathbf {a} ,\mathbf {P} (\mathbf {q} ,\mathbf {p} )=\mathbf {p} +\mathbf {b} }, where {\displaystyle \mathbf {a} ,\mathbf {b} } are two constant vectors, is a canonical transformation. Indeed, the Jacobian matrix is the identity, which is symplectic: {\displaystyle I^{\text{T}}JI=J}.
Setting {\displaystyle \mathbf {x} =(q,p)} and {\displaystyle \mathbf {X} =(Q,P)}, the transformation {\displaystyle \mathbf {X} (\mathbf {x} )=R\mathbf {x} }, where {\displaystyle R\in SO(2)} is a rotation matrix of order 2, is canonical. Keeping in mind that special orthogonal matrices obey {\displaystyle R^{\text{T}}R=I}, it is easy to see that the Jacobian is symplectic. However, this example only works in dimension 2: {\displaystyle SO(2)} is the only special orthogonal group in which every matrix is symplectic. Note that the rotation here acts on {\displaystyle (q,p)} and not on {\displaystyle q} and {\displaystyle p} independently, so it is not the same as a physical rotation of an orthogonal spatial coordinate system.
The transformation {\displaystyle (Q(q,p),P(q,p))=(q+f(p),p)}, where {\displaystyle f(p)} is an arbitrary function of {\displaystyle p}, is canonical. The Jacobian matrix is indeed given by
{\displaystyle {\frac {\partial X}{\partial x}}={\begin{bmatrix}1&f'(p)\\0&1\end{bmatrix}}}
which is symplectic.
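The symplectic condition MᵀJM = J for the three Jacobians above (identity, SO(2) rotation, and the shear with f′(p)) can be verified directly. The sketch below assumes one degree of freedom with J = [[0, 1], [-1, 0]] and an arbitrary illustrative value f′(p) = 2.5:

```python
import math

# Sketch: checking the symplectic condition M^T J M = J for the Jacobians of
# the three examples above (translation, SO(2) rotation, shear Q = q + f(p)),
# in the one-degree-of-freedom case with J = [[0, 1], [-1, 0]].

J = [[0.0, 1.0], [-1.0, 0.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def is_symplectic(M, tol=1e-12):
    MJM = mat_mul(transpose(M), mat_mul(J, M))
    return all(abs(MJM[i][j] - J[i][j]) < tol
               for i in range(2) for j in range(2))

theta = 0.4
identity = [[1.0, 0.0], [0.0, 1.0]]                      # translation
rotation = [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta), math.cos(theta)]]          # R in SO(2)
shear = [[1.0, 2.5], [0.0, 1.0]]                         # f'(p) = 2.5, say

print(is_symplectic(identity), is_symplectic(rotation), is_symplectic(shear))
```

For 2×2 matrices MᵀJM = (det M) J, so the check is equivalent to det M = 1, which all three Jacobians satisfy.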
== Modern mathematical description ==
In mathematical terms, canonical coordinates are any coordinates on the phase space (cotangent bundle) of the system that allow the canonical one-form to be written as
{\displaystyle \sum _{i}p_{i}\,dq^{i}}
up to a total differential (exact form). The change of variable between one set of canonical coordinates and another is a canonical transformation. The index of the generalized coordinates q is written here as a superscript ({\displaystyle q^{i}}), not as a subscript as done above ({\displaystyle q_{i}}). The superscript conveys the contravariant transformation properties of the generalized coordinates, and does not mean that the coordinate is being raised to a power. Further details may be found at the symplectomorphism article.
== History ==
The first major application of the canonical transformation was in 1846, by Charles Delaunay, in the study of the Earth-Moon-Sun system. This work resulted in the publication of a pair of large volumes as Mémoires by the French Academy of Sciences, in 1860 and 1867.
== See also ==
Symplectomorphism
Hamilton–Jacobi equation
Liouville's theorem (Hamiltonian)
Mathieu transformation
Linear canonical transformation
== Notes ==
== References ==
Goldstein, Herbert; Poole, Charles P.; Safko, John L. (2007). Classical mechanics (3rd ed.). Upper Saddle River, N.J: Pearson [u.a.] ISBN 978-0-321-18897-7.
Landau, L. D.; Lifshitz, E. M. (1975) [1939]. Mechanics. Translated by Bell, S. J.; Sykes, J. B. (3rd ed.). Amsterdam: Elsevier. ISBN 978-0-7506-28969.
Giacaglia, Georgio Eugenio Oscare (1972). Perturbation Methods in Non-Linear Systems. New York: Springer-Verlag. ISBN 3-540-90054-3. LCCN 72-87714.
Lanczos, Cornelius (2012-04-24). The Variational Principles of Mechanics. Courier Corporation. ISBN 978-0-486-13470-3.
Lurie, Anatolii I. (2002). Analytical Mechanics (1st ed.). Springer-Verlag Berlin. ISBN 978-3-642-53650-2.
Gupta, Praveen P.; Gupta, Sanjay (2008). Rigid Dynamics (10th ed.). Krishna Prakashan Media.
Johns, Oliver Davis (2005). Analytical Mechanics for Relativity and Quantum Mechanics. Oxford University Press. ISBN 978-0-19-856726-4.
Lemos, Nivaldo A (2018). Analytical mechanics. Cambridge University Press. ISBN 978-1-108-41658-0.
Hand, Louis N.; Finch, Janet D. (1999). Analytical Mechanics (1st ed.). Cambridge University Press. ISBN 978-0521573276.
Sudarshan, E C George; Mukunda, N (2010). Classical Dynamics: A Modern Perspective. Wiley. ISBN 9780471835400.
Atomic, molecular, and optical physics (AMO) is the study of matter–matter and light–matter interactions, at the scale of one or a few atoms and energy scales around several electron volts. The three areas are closely interrelated. AMO theory includes classical, semi-classical and quantum treatments. Typically, the theory and applications of emission, absorption, scattering of electromagnetic radiation (light) from excited atoms and molecules, analysis of spectroscopy, generation of lasers and masers, and the optical properties of matter in general, fall into these categories.
== Atomic and molecular physics ==
Atomic physics is the subfield of AMO that studies atoms as an isolated system of electrons and an atomic nucleus, while molecular physics is the study of the physical properties of molecules. The term atomic physics is often associated with nuclear power and nuclear bombs, due to the synonymous use of atomic and nuclear in standard English. However, physicists distinguish between atomic physics — which deals with the atom as a system consisting of a nucleus and electrons — and nuclear physics, which considers atomic nuclei alone. The important experimental techniques are the various types of spectroscopy. Molecular physics, while closely related to atomic physics, also overlaps greatly with theoretical chemistry, physical chemistry and chemical physics.
Both subfields are primarily concerned with electronic structure and the dynamical processes by which these arrangements change. Generally this work involves using quantum mechanics. For molecular physics, this approach is known as quantum chemistry. One important aspect of molecular physics is that the essential atomic orbital theory in the field of atomic physics expands to the molecular orbital theory. Molecular physics is concerned with atomic processes in molecules, but it is additionally concerned with effects due to the molecular structure. In addition to the electronic excitation states known from atoms, molecules are able to rotate and to vibrate. These rotations and vibrations are quantized; there are discrete energy levels. The smallest energy differences exist between different rotational states, therefore pure rotational spectra are in the far infrared region (about 30 - 150 μm wavelength) of the electromagnetic spectrum. Vibrational spectra are in the near infrared (about 1 - 5 μm) and spectra resulting from electronic transitions are mostly in the visible and ultraviolet regions. From measured rotational and vibrational spectra, properties of molecules such as the distance between the nuclei can be calculated.
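The quoted wavelength ranges can be converted into photon energies via E = hc/λ to make the hierarchy of rotational, vibrational and electronic energy scales explicit. A small sketch (SI constants; energies reported in eV; the visible range 0.4 - 0.7 μm is an assumed illustrative window for electronic transitions):

```python
# Sketch: converting the wavelength ranges quoted above into photon energies
# via E = h*c / lambda, to see the hierarchy of rotational < vibrational <
# electronic energy scales. Constants are in SI units; energies in eV.

h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # J per eV

def photon_energy_eV(wavelength_m):
    return h * c / wavelength_m / eV

# rotational: ~30-150 um; vibrational: ~1-5 um; visible: ~0.4-0.7 um
for name, lo, hi in [("rotational", 30e-6, 150e-6),
                     ("vibrational", 1e-6, 5e-6),
                     ("electronic (visible)", 0.4e-6, 0.7e-6)]:
    print(f"{name}: {photon_energy_eV(hi):.4f} - {photon_energy_eV(lo):.4f} eV")
```

Rotational transitions come out in the meV range, vibrational in the tenths of an eV, and electronic at a few eV, consistent with the "several electron volts" scale of AMO physics quoted above.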
As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified.
== Optical physics ==
Optical physics is the study of the generation of electromagnetic radiation, the properties of that radiation, and the interaction of that radiation with matter, especially its manipulation and control. It differs from general optics and optical engineering in that it is focused on the discovery and application of new phenomena. There is no strong distinction, however, between optical physics, applied optics, and optical engineering, since the devices of optical engineering and the applications of applied optics are necessary for basic research in optical physics, and that research leads to the development of new devices and applications. Often the same people are involved in both the basic research and the applied technology development, for example the experimental demonstration of electromagnetically induced transparency by S. E. Harris and of slow light by Harris and Lene Vestergaard Hau.
Researchers in optical physics use and develop light sources that span the electromagnetic spectrum from microwaves to X-rays. The field includes the generation and detection of light, linear and nonlinear optical processes, and spectroscopy. Lasers and laser spectroscopy have transformed optical science. Major study in optical physics is also devoted to quantum optics and coherence, and to femtosecond optics. In optical physics, support is also provided in areas such as the nonlinear response of isolated atoms to intense, ultra-short electromagnetic fields, the atom-cavity interaction at high fields, and quantum properties of the electromagnetic field.
Other important areas of research include the development of novel optical techniques for nano-optical measurements, diffractive optics, low-coherence interferometry, optical coherence tomography, and near-field microscopy. Research in optical physics places an emphasis on ultrafast optical science and technology. The applications of optical physics create advancements in communications, medicine, manufacturing, and even entertainment.
== History ==
One of the earliest steps towards atomic physics was the recognition that matter was composed of atoms, in modern terms the basic unit of a chemical element. This theory was developed by John Dalton in the early 19th century. At this stage, it was not clear what atoms were, although they could be described and classified by their observable properties in bulk, summarized by the developing periodic table of John Newlands and Dmitri Mendeleyev around the mid to late 19th century.
Later, the connection between atomic physics and optical physics became apparent through the discovery of spectral lines and attempts to describe the phenomenon, notably by Joseph von Fraunhofer, Fresnel, and others in the 19th century.
From that time to the 1920s, physicists were seeking to explain atomic spectra and blackbody radiation. One attempt to explain hydrogen spectral lines was the Bohr atom model.
Experiments involving electromagnetic radiation and matter, such as the photoelectric effect, the Compton effect, and the spectrum of sunlight due to the then-unknown element helium, together with the limitation of the Bohr model to hydrogen and numerous other reasons, led to an entirely new mathematical model of matter and light: quantum mechanics.
=== Classical oscillator model of matter ===
Early models to explain the origin of the index of refraction treated an electron in an atomic system classically according to the model of Paul Drude and Hendrik Lorentz. The theory was developed to attempt to provide an origin for the wavelength-dependent refractive index n of a material. In this model, incident electromagnetic waves forced an electron bound to an atom to oscillate. The amplitude of the oscillation would then have a relationship to the frequency of the incident electromagnetic wave and the resonant frequencies of the oscillator. The superposition of these emitted waves from many oscillators would then lead to a wave which moved more slowly.
=== Early quantum model of matter and light ===
Max Planck derived a formula to describe the electromagnetic field inside a box when in thermal equilibrium in 1900.
His model consisted of a superposition of standing waves. In one dimension, the box has length L, and only sinusoidal waves of wavenumber
{\displaystyle k={\frac {n\pi }{L}}}
can occur in the box, where n is a positive integer (mathematically denoted by {\displaystyle \scriptstyle n\in \mathbb {N} _{1}}). The equation describing these standing waves is given by:
{\displaystyle E=E_{0}\sin \left({\frac {n\pi }{L}}x\right)\,\!}
where E0 is the magnitude of the electric field amplitude, and E is the magnitude of the electric field at position x. From this basis, Planck's law was derived.
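The allowed modes can be sketched directly: for a box of length L (here L = 1 in arbitrary units, an assumption for illustration), the wavenumbers k = nπ/L correspond to wavelengths λ = 2L/n, and each standing wave E₀ sin(nπx/L) vanishes at both walls:

```python
import math

# Sketch of the allowed standing-wave modes in a one-dimensional box of
# length L: wavenumbers k = n*pi/L, hence wavelengths lambda = 2L/n, and the
# field E = E0 sin(n*pi*x/L) vanishing at both walls (x = 0 and x = L).

L = 1.0

def k_n(n):
    return n * math.pi / L

def field(n, x, E0=1.0):
    return E0 * math.sin(n * math.pi * x / L)

wavelengths = [2 * math.pi / k_n(n) for n in range(1, 5)]   # lambda = 2L/n
print(wavelengths)
boundary_ok = all(abs(field(n, 0.0)) < 1e-12 and abs(field(n, L)) < 1e-12
                  for n in range(1, 5))
print(boundary_ok)
```

Only this discrete set of modes fits the boundary conditions, which is the "discrete set of specific standing waves" referred to below.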
In 1911, Ernest Rutherford concluded, based on alpha particle scattering, that an atom has a central pointlike proton. He also thought that an electron would still be attracted to the proton by Coulomb's law, which he had verified still held at small scales. As a result, he believed that electrons revolved around the proton. Niels Bohr, in 1913, combined the Rutherford model of the atom with the quantisation ideas of Planck. Only specific and well-defined orbits of the electron could exist, which also do not radiate light. In jumping between orbits the electron would emit or absorb light corresponding to the difference in energy of the orbits. His prediction of the energy levels was then consistent with observation.
These results, based on a discrete set of specific standing waves, were inconsistent with the continuous classical oscillator model.
Work by Albert Einstein in 1905 on the photoelectric effect led to the association of a light wave of frequency {\displaystyle \nu } with a photon of energy {\displaystyle h\nu }. In 1917 Einstein extended Bohr's model by introducing the three processes of stimulated emission, spontaneous emission and absorption of electromagnetic radiation.
== Modern treatments ==
The largest steps towards the modern treatment were the formulation of quantum mechanics with the matrix mechanics approach by Werner Heisenberg and the discovery of the Schrödinger equation by Erwin Schrödinger.
There are a variety of semi-classical treatments within AMO. Which aspects of the problem are treated quantum mechanically and which are treated classically is dependent on the specific problem at hand. The semi-classical approach is ubiquitous in computational work within AMO, largely due to the large decrease in computational cost and complexity associated with it.
For matter under the action of a laser, a fully quantum mechanical treatment of the atomic or molecular system is combined with a classical treatment of the electromagnetic field. Since the field is treated classically, this approach cannot deal with spontaneous emission. The semi-classical treatment is valid for most systems, particularly those under the action of high-intensity laser fields. The distinction between optical physics and quantum optics is the use of semi-classical and fully quantum treatments respectively.
Within collision dynamics and using the semi-classical treatment, the internal degrees of freedom may be treated quantum mechanically, whilst the relative motion of the quantum systems under consideration is treated classically. When considering medium- to high-speed collisions, the nuclei can be treated classically while the electron is treated quantum mechanically. In low-speed collisions the approximation fails.
Classical Monte Carlo methods for the dynamics of electrons can be described as semi-classical in that the initial conditions are calculated using a fully quantum treatment, but all further treatment is classical.
== Isolated atoms and molecules ==
Atomic, Molecular and Optical physics frequently considers atoms and molecules in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons, whilst molecular models are typically concerned with molecular hydrogen and its molecular hydrogen ion. It is concerned with processes such as ionization, above threshold ionization and excitation by photons or collisions with atomic particles.
While modelling atoms in isolation may not seem realistic, if one considers molecules in a gas or plasma then the time-scales for molecule-molecule interactions are huge in comparison to the atomic and molecular processes that we are concerned with. This means that the individual molecules can be treated as if each were in isolation for the vast majority of the time. By this consideration atomic and molecular physics provides the underlying theory in plasma physics and atmospheric physics even though both deal with huge numbers of molecules.
== Electronic configuration ==
Electrons form notional shells around the nucleus. These are naturally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically other electrons).
Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy. The atom is said to have undergone the process of ionization.
In the event that the electron absorbs a quantity of energy less than the binding energy, it may transition to an excited state or to a virtual state. After a statistically sufficient quantity of time, an electron in an excited state will undergo a transition to a lower state via spontaneous emission. The change in energy between the two energy levels must be accounted for (conservation of energy). In a neutral atom, the system will emit a photon of the difference in energy. However, if the lower state is in an inner shell, a phenomenon known as the Auger effect may take place where the energy is transferred to another bound electron, causing it to go into the continuum. This allows one to multiply ionize an atom with a single photon.
There are strict selection rules as to the electronic configurations that can be reached by excitation by light—however there are no such rules for excitation by collision processes.
== See also ==
== Notes ==
== References ==
== External links ==
ScienceDirect - Advances In Atomic, Molecular, and Optical Physics
Journal of Physics B: Atomic, Molecular and Optical Physics
=== Institutions ===
American Physical Society - Division of Atomic, Molecular & Optical Physics
European Physical Society - Atomic, Molecular & Optical Physics Division
National Science Foundation - Atomic, Molecular and Optical Physics
MIT-Harvard Center for Ultracold Atoms
Stanford QFARM Initiative for Quantum Science & Engineering
JILA - Atomic and Molecular Physics
Joint Quantum Institute at University of Maryland and NIST
ORNL Physics Division
Queen's University Belfast - Center for Theoretical, Atomic, Molecular and Optical Physics,
University of California, Berkeley - Atomic, Molecular and Optical Physics
Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering.
"Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology.
== Examples of research and development areas ==
Accelerator physics
Acoustics
Atmospheric physics
Biophysics
Brain–computer interfacing
Chemistry
Chemical physics
Differentiable programming
Artificial intelligence
Scientific computing
Engineering physics
Chemical engineering
Electrical engineering
Electronics engineering
Computer science & engineering
Artificial intelligence
Machine learning
Deep learning
Reinforcement learning
Power engineering
Power electronics
Control engineering
Materials science and engineering
Metamaterials
Nanotechnology
Semiconductors
Thin films
Mechanical engineering
Aerospace engineering
Astrodynamics
Electromagnetic propulsion
Fluid mechanics
Military engineering
Lidar
Radar
Sonar
Stealth technology
Nuclear engineering
Fission reactors
Fusion reactors
Optical engineering
Photonics
Cavity optomechanics
Lasers
Photonic crystals
Geophysics
Materials physics
Medical physics
Health physics
Radiation dosimetry
Medical imaging
Magnetic resonance imaging
Radiation therapy
Microscopy
Scanning probe microscopy
Atomic force microscopy
Scanning tunneling microscopy
Scanning electron microscopy
Transmission electron microscopy
Nuclear physics
Fission
Fusion
Optical physics
Nonlinear optics
Quantum optics
Plasma physics
Quantum technology
Quantum computing
Quantum cryptography
Renewable energy
Space physics
Spectroscopy
== See also ==
Applied science
Applied mathematics
Engineering
Engineering Physics
High Technology
== References ==
Mathematical Methods of Classical Mechanics is a textbook by mathematician Vladimir I. Arnold. It was originally written in Russian, and later translated into English by A. Weinstein and K. Vogtmann. It is aimed at graduate students.
== Contents ==
Part I: Newtonian Mechanics
Chapter 1: Experimental Facts
Chapter 2: Investigation of the Equations of Motion
Part II: Lagrangian Mechanics
Chapter 3: Variational Principles
Chapter 4: Lagrangian Mechanics on Manifolds
Chapter 5: Oscillations
Chapter 6: Rigid Bodies
Part III: Hamiltonian Mechanics
Chapter 7: Differential forms
Chapter 8: Symplectic Manifolds
Chapter 9: Canonical Formalism
Chapter 10: Introduction to Perturbation Theory
Appendices
Riemannian curvature
Geodesics of left-invariant metrics on Lie groups and the hydrodynamics of ideal fluids
Symplectic structures on algebraic manifolds
Contact structures
Dynamical systems with symmetries
Normal forms of quadratic Hamiltonians
Normal forms of Hamiltonian systems near stationary points and closed trajectories
Theory of perturbations of conditionally periodic motion and Kolmogorov's theorem
Poincaré's geometric theorem, its generalizations and applications
Multiplicities of characteristic frequencies, and ellipsoids depending on parameters
Short wave asymptotics
Lagrangian singularities
The Korteweg–de Vries equation
Poisson structures
On elliptic coordinates
Singularities of ray systems
== Russian original and translations ==
The original Russian first edition Математические методы классической механики was published in 1974 by Наука. A second edition was published in 1979, and a third in 1989. The book has since been translated into a number of other languages, including French, German, Japanese and Mandarin.
== Reviews ==
The Bulletin of the American Mathematical Society said, "The [book] under review [...] written by a distinguished mathematician [...is one of] the first textbooks [to] successfully to present to students of mathematics and physics, [sic] classical mechanics in a modern setting."
A book review in the journal Celestial Mechanics said, "In summary, the author has succeeded in producing a mathematical synthesis of the science of dynamics. The book is well presented and beautifully translated [...] Arnold's book is pure poetry; one does not simply read it, one enjoys it."
== See also ==
List of textbooks in classical and quantum mechanics
== References ==
== Bibliography ==
Arnold, Vladimir I. (16 May 1989) [First published in 1974]. Mathematical Methods of Classical Mechanics Математические методы классической механики. Graduate Texts in Mathematics. Vol. 60. Translated by Vogtmann, Karen; Weinstein, Alan D. (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-96890-2. OCLC 18681352. | Wikipedia/Mathematical_Methods_of_Classical_Mechanics |
The symmetry of a physical system is a physical or mathematical feature of the system (observed or intrinsic) that is preserved or remains unchanged under some transformation.
A family of particular transformations may be continuous (such as rotation of a circle) or discrete (e.g., reflection of a bilaterally symmetric figure, or rotation of a regular polygon). Continuous and discrete transformations give rise to corresponding types of symmetries. Continuous symmetries can be described by Lie groups while discrete symmetries are described by finite groups (see Symmetry group).
These two concepts, Lie and finite groups, are the foundation for the fundamental theories of modern physics. Symmetries are frequently amenable to mathematical formulations such as group representations and can, in addition, be exploited to simplify many problems.
Arguably the most important example of a symmetry in physics is that the speed of light has the same value in all frames of reference, which is described in special relativity by a group of transformations of the spacetime known as the Poincaré group. Another important example is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations, which is an important idea in general relativity.
== As a kind of invariance ==
Invariance is specified mathematically by transformations that leave some property (e.g. quantity) unchanged. This idea can apply to basic real-world observations. For example, temperature may be homogeneous throughout a room. Since the temperature does not depend on the position of an observer within the room, we say that the temperature is invariant under a shift in an observer's position within the room.
Similarly, a uniform sphere rotated about its center will appear exactly as it did before the rotation. The sphere is said to exhibit spherical symmetry. A rotation about any axis of the sphere will preserve the shape of its surface from any given vantage point.
=== Invariance in force ===
The above ideas lead to the useful idea of invariance when discussing observed physical symmetry; this can be applied to symmetries in forces as well.
For example, an electric field due to an electrically charged wire of infinite length is said to exhibit cylindrical symmetry, because the electric field strength at a given distance r from the wire will have the same magnitude at each point on the surface of a cylinder (whose axis is the wire) with radius r. Rotating the wire about its own axis does not change its position or charge density, hence it will preserve the field. The field strength at a rotated position is the same. This is not true in general for an arbitrary system of charges.
In Newton's theory of mechanics, given two bodies, each with mass m, starting at the origin and moving along the x-axis in opposite directions, one with speed v1 and the other with speed v2, the total kinetic energy of the system (as calculated from an observer at the origin) is 1/2m(v1² + v2²) and remains the same if the velocities are interchanged. The total kinetic energy is preserved under a reflection in the y-axis.
The last example above illustrates another way of expressing symmetries, namely through the equations that describe some aspect of the physical system. The above example shows that the total kinetic energy will be the same if v1 and v2 are interchanged.
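As an illustrative numerical check (not from the original text, with arbitrarily chosen values), the invariance of the kinetic energy under interchanging v1 and v2, and under the reflection v → −v, can be verified directly:

```python
# Total kinetic energy of two equal masses moving along the x-axis.
def kinetic_energy(m, v1, v2):
    return 0.5 * m * (v1**2 + v2**2)

m, v1, v2 = 2.0, 3.0, -5.0
ke = kinetic_energy(m, v1, v2)

# Interchanging the velocities leaves the total unchanged...
assert kinetic_energy(m, v2, v1) == ke
# ...as does a reflection in the y-axis (v -> -v for both bodies).
assert kinetic_energy(m, -v1, -v2) == ke
```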
== Local and global ==
Symmetries may be broadly classified as global or local. A global symmetry is one that keeps a property invariant for a transformation that is applied simultaneously at all points of spacetime, whereas a local symmetry is one that keeps a property invariant when a possibly different symmetry transformation is applied at each point of spacetime; specifically, a local symmetry transformation is parameterised by the spacetime coordinates, whereas a global symmetry is not. Since a constant transformation is a special case, a theory with a local symmetry also possesses the corresponding global symmetry. Local symmetries play an important role in physics as they form the basis for gauge theories.
== Continuous ==
The two examples of rotational symmetry described above – spherical and cylindrical – are each instances of continuous symmetry. These are characterised by invariance following a continuous change in the geometry of the system. For example, the wire may be rotated through any angle about its axis and the field strength will be the same on a given cylinder. Mathematically, continuous symmetries are described by transformations that change continuously as a function of their parameterization. An important subclass of continuous symmetries in physics are spacetime symmetries.
=== Spacetime ===
Continuous spacetime symmetries are symmetries involving transformations of space and time. These may be further classified as spatial symmetries, involving only the spatial geometry associated with a physical system; temporal symmetries, involving only changes in time; or spatio-temporal symmetries, involving changes in both space and time.
Time translation: A physical system may have the same features over a certain interval of time Δt; this is expressed mathematically as invariance under the transformation t → t + a for any real parameters t and t + a in the interval. For example, in classical mechanics, a particle solely acted upon by gravity will have gravitational potential energy mgh when suspended from a height h above the Earth's surface. Assuming no change in the height of the particle, this will be the total gravitational potential energy of the particle at all times. In other words, by considering the state of the particle at some time t0 and also at t0 + a, the particle's total gravitational potential energy will be preserved.
Spatial translation: These spatial symmetries are represented by transformations of the form {\displaystyle {\vec {r}}\rightarrow {\vec {r}}+{\vec {a}}} and describe those situations where a property of the system does not change with a continuous change in location. For example, the temperature in a room may be independent of where the thermometer is located in the room.
Spatial rotation: These spatial symmetries are classified as proper rotations and improper rotations. The former are just the 'ordinary' rotations; mathematically, they are represented by square matrices with unit determinant. The latter are represented by square matrices with determinant −1 and consist of a proper rotation combined with a spatial reflection (inversion). For example, a sphere has proper rotational symmetry. Other types of spatial rotations are described in the article Rotation symmetry.
Poincaré transformations: These are spatio-temporal symmetries which preserve distances in Minkowski spacetime, i.e. they are isometries of Minkowski space. They are studied primarily in special relativity. Those isometries that leave the origin fixed are called Lorentz transformations and give rise to the symmetry known as Lorentz covariance.
Projective symmetries: These are spatio-temporal symmetries which preserve the geodesic structure of spacetime. They may be defined on any smooth manifold, but find many applications in the study of exact solutions in general relativity.
Inversion transformations: These are spatio-temporal symmetries which generalise Poincaré transformations to include other conformal one-to-one transformations on the space-time coordinates. Lengths are not invariant under inversion transformations but there is a cross-ratio on four points that is invariant.
Mathematically, spacetime symmetries are usually described by smooth vector fields on a smooth manifold. The underlying local diffeomorphisms associated with the vector fields correspond more directly to the physical symmetries, but the vector fields themselves are more often used when classifying the symmetries of the physical system.
Some of the most important vector fields are Killing vector fields which are those spacetime symmetries that preserve the underlying metric structure of a manifold. In rough terms, Killing vector fields preserve the distance between any two points of the manifold and often go by the name of isometries.
== Discrete ==
A discrete symmetry is a symmetry that describes non-continuous changes in a system. For example, a square possesses discrete rotational symmetry, as only rotations by multiples of right angles will preserve the square's original appearance. Discrete symmetries sometimes involve some type of 'swapping', these swaps usually being called reflections or interchanges.
Time reversal: Many laws of physics describe real phenomena when the direction of time is reversed. Mathematically, this is represented by the transformation {\displaystyle t\,\rightarrow -t}. For example, Newton's second law of motion still holds if, in the equation {\displaystyle F\,=m{\ddot {r}}}, {\displaystyle t} is replaced by {\displaystyle -t}. This may be illustrated by recording the motion of an object thrown up vertically (neglecting air resistance) and then playing it back. The object will follow the same parabolic trajectory through the air, whether the recording is played normally or in reverse. Thus, position is symmetric with respect to the instant that the object is at its maximum height.
Spatial inversion: These are represented by transformations of the form {\displaystyle {\vec {r}}\,\rightarrow -{\vec {r}}} and indicate an invariance property of a system when the coordinates are 'inverted'. Stated another way, these are symmetries between a certain object and its mirror image.
Glide reflection: These are represented by a composition of a translation and a reflection. These symmetries occur in some crystals and in some planar symmetries, known as wallpaper symmetries.
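The time-reversal example above can be sketched numerically: for a vertical throw, the height z(t) = v0·t − g·t²/2 is symmetric about the apex time v0/g, so the recording looks the same played forward or backward. A minimal illustration with arbitrarily chosen values:

```python
# Height of a projectile thrown straight up (no air resistance):
# z(t) = v0*t - 0.5*g*t**2.  The apex is reached at t_apex = v0/g,
# and the trajectory is symmetric about that instant, illustrating
# the time-reversal symmetry of the motion.
g, v0 = 9.81, 20.0
t_apex = v0 / g

def height(t):
    return v0 * t - 0.5 * g * t**2

# Heights at equal times before and after the apex coincide.
for s in (0.1, 0.5, 1.0):
    assert abs(height(t_apex + s) - height(t_apex - s)) < 1e-9
```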
=== C, P, and T ===
The Standard Model of particle physics has three related natural near-symmetries. These state that the universe in which we live should be indistinguishable from one where a certain type of change is introduced.
C-symmetry (charge symmetry), a universe where every particle is replaced with its antiparticle.
P-symmetry (parity symmetry), a universe where everything is mirrored along the three physical axes. This excludes weak interactions as demonstrated by Chien-Shiung Wu.
T-symmetry (time reversal symmetry), a universe where the direction of time is reversed. T-symmetry is counterintuitive (the future and the past are not symmetrical) but explained by the fact that the Standard Model describes local properties, not global ones like entropy. To properly reverse the direction of time, one would have to put the Big Bang and the resulting low-entropy state in the "future". Since we perceive the "past" ("future") as having lower (higher) entropy than the present, the inhabitants of this hypothetical time-reversed universe would perceive the future in the same way as we perceive the past, and vice versa.
These symmetries are near-symmetries because each is broken in the present-day universe. However, the Standard Model predicts that the combination of the three (that is, the simultaneous application of all three transformations) must be a symmetry, called CPT symmetry. CP violation, the violation of the combination of C- and P-symmetry, is necessary for the presence of significant amounts of baryonic matter in the universe. CP violation is a fruitful area of current research in particle physics.
=== Supersymmetry ===
A type of symmetry known as supersymmetry has been used to try to make theoretical advances in the Standard Model. Supersymmetry is based on the idea that there is another physical symmetry beyond those already developed in the Standard Model, specifically a symmetry between bosons and fermions. Supersymmetry asserts that each type of boson has, as a supersymmetric partner, a fermion, called a superpartner, and vice versa. Supersymmetry has not yet been experimentally verified: no known particle has the correct properties to be a superpartner of any other known particle. Currently, the LHC is preparing for a run that tests supersymmetry.
== Generalized symmetries ==
Generalized symmetries encompass a number of recently recognized generalizations of the concept of a global symmetry. These include higher form symmetries, higher group symmetries, non-invertible symmetries, and subsystem symmetries.
== Mathematics of physical symmetry ==
The transformations describing physical symmetries typically form a mathematical group. Group theory is an important area of mathematics for physicists.
Continuous symmetries are specified mathematically by continuous groups (called Lie groups). Many physical symmetries are isometries and are specified by symmetry groups. Sometimes this term is used for more general types of symmetries. The set of all proper rotations (about any angle) through any axis of a sphere form a Lie group called the special orthogonal group SO(3). (The '3' refers to the three-dimensional space of an ordinary sphere.) Thus, the symmetry group of the sphere with proper rotations is SO(3). Any rotation preserves distances on the surface of the ball. The set of all Lorentz transformations form a group called the Lorentz group (this may be generalised to the Poincaré group).
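As a small sketch of the SO(3) discussion above (illustrative values, plain Python), a proper rotation has determinant +1 and preserves distances; the following checks both properties for a rotation about the z-axis:

```python
import math

# A proper rotation about the z-axis: an element of SO(3).
def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(mat, v):
    return [sum(mat[i][j] * v[j] for j in range(3)) for i in range(3)]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

R = rot_z(0.7)
v = [1.0, 2.0, 3.0]
w = apply(R, v)

norm = lambda u: math.sqrt(sum(x * x for x in u))
assert abs(det3(R) - 1.0) < 1e-12       # unit determinant: a proper rotation
assert abs(norm(w) - norm(v)) < 1e-12   # lengths (hence distances) preserved
```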
Discrete groups describe discrete symmetries. For example, the symmetries of an equilateral triangle are characterized by the symmetric group S3.
A type of physical theory based on local symmetries is called a gauge theory and the symmetries natural to such a theory are called gauge symmetries. Gauge symmetries in the Standard Model, used to describe three of the fundamental interactions, are based on the SU(3) × SU(2) × U(1) group. (Roughly speaking, the symmetries of the SU(3) group describe the strong force, the SU(2) group describes the weak interaction and the U(1) group describes the electromagnetic force.)
Also, the reduction by symmetry of the energy functional under the action by a group and spontaneous symmetry breaking of transformations of symmetric groups appear to elucidate topics in particle physics (for example, the unification of electromagnetism and the weak force in physical cosmology).
=== Conservation laws and symmetry ===
The symmetry properties of a physical system are intimately related to the conservation laws characterizing that system. Noether's theorem gives a precise description of this relation. The theorem states that each continuous symmetry of a physical system implies that some physical property of that system is conserved. Conversely, each conserved quantity has a corresponding symmetry. For example, spatial translation symmetry (i.e. homogeneity of space) gives rise to conservation of (linear) momentum, and temporal translation symmetry (i.e. homogeneity of time) gives rise to conservation of energy.
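Noether's theorem can be illustrated numerically: in a rotationally symmetric potential, angular momentum should be conserved along a trajectory. The sketch below (a velocity-Verlet integrator with an arbitrarily chosen harmonic potential; not from the original text) checks this:

```python
# Velocity-Verlet integration of a particle in the rotationally
# symmetric potential V(r) = 0.5*k*r**2.  By Noether's theorem the
# angular momentum L = x*vy - y*vx is conserved along the motion.
k, dt = 1.0, 1e-3
x, y, vx, vy = 1.0, 0.0, 0.0, 0.7

def accel(x, y):
    # Central force -k*r, directed toward the origin.
    return -k * x, -k * y

L0 = x * vy - y * vx
ax, ay = accel(x, y)
for _ in range(10000):
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    x += dt * vx;        y += dt * vy           # drift
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick

# Angular momentum is conserved to numerical roundoff.
assert abs((x * vy - y * vx) - L0) < 1e-9
```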
The following table summarizes some fundamental symmetries and the associated conserved quantity.
== Mathematics ==
A continuous symmetry in physics can be specified by showing how a very small (infinitesimal) transformation affects the various particle fields. The commutator of two of these infinitesimal transformations is equivalent to a third infinitesimal transformation of the same kind; hence they form a Lie algebra.
A general coordinate transformation described by the general field {\displaystyle h(x)} (also known as a diffeomorphism) has the following infinitesimal effect on a scalar field {\displaystyle \phi (x)}, spinor field {\displaystyle \psi (x)} or vector field {\displaystyle A(x)} (using the Einstein summation convention):
{\displaystyle \delta \phi (x)=h^{\mu }(x)\partial _{\mu }\phi (x)}
{\displaystyle \delta \psi ^{\alpha }(x)=h^{\mu }(x)\partial _{\mu }\psi ^{\alpha }(x)+\partial _{\mu }h_{\nu }(x)\sigma _{\mu \nu }^{\alpha \beta }\psi ^{\beta }(x)}
{\displaystyle \delta A_{\mu }(x)=h^{\nu }(x)\partial _{\nu }A_{\mu }(x)+A_{\nu }(x)\partial _{\mu }h^{\nu }(x)}
Without gravity only the Poincaré symmetries are preserved, which restricts {\displaystyle h(x)} to be of the form:
{\displaystyle h^{\mu }(x)=M^{\mu \nu }x_{\nu }+P^{\mu }}
where M is an antisymmetric matrix (giving the Lorentz and rotational symmetries) and P is a general vector (giving the translational symmetries). Other symmetries affect multiple fields simultaneously. For example, local gauge transformations apply to both a vector and spinor field:
{\displaystyle \delta \psi ^{\alpha }(x)=\lambda (x)\,\tau ^{\alpha \beta }\psi ^{\beta }(x)}
{\displaystyle \delta A_{\mu }(x)=\partial _{\mu }\lambda (x),}
where {\displaystyle \tau } are the generators of a particular Lie group. So far the transformations on the right have only included fields of the same type. Supersymmetries are defined according to how they mix fields of different types.
Another symmetry which is part of some theories of physics and not in others is scale invariance, which involves Weyl transformations of the following kind:
{\displaystyle \delta \phi (x)=\Omega (x)\phi (x)}
If the fields have this symmetry then it can be shown that the field theory is almost certainly conformally invariant as well. This means that in the absence of gravity h(x) would be restricted to the form:
{\displaystyle h^{\mu }(x)=M^{\mu \nu }x_{\nu }+P^{\mu }+Dx_{\mu }+K^{\mu }|x|^{2}-2K^{\nu }x_{\nu }x_{\mu },}
with D generating scale transformations and K generating special conformal transformations. For example, N = 4 supersymmetric Yang–Mills theory has this symmetry while general relativity does not, although other theories of gravity, such as conformal gravity, do. The 'action' of a field theory is an invariant under all the symmetries of the theory. Much of modern theoretical physics is concerned with speculating on the various symmetries the Universe may have and finding the invariants with which to construct field theories as models.
In string theories, since a string can be decomposed into an infinite number of particle fields, the symmetries on the string world sheet are equivalent to special transformations which mix an infinite number of fields.
== See also ==
== References ==
=== General readers ===
=== Technical readers ===
== External links ==
The Feynman Lectures on Physics Vol. I Ch. 52: Symmetry in Physical Laws
Stanford Encyclopedia of Philosophy: "Symmetry"—by K. Brading and E. Castellani.
Pedagogic Aids to Quantum Field Theory Click on link to Chapter 6: Symmetry, Invariance, and Conservation for a simplified, step-by-step introduction to symmetry in physics. | Wikipedia/Symmetry_(physics) |
Quantum statistical mechanics is statistical mechanics applied to quantum mechanical systems.
== Expectation ==
In quantum mechanics a statistical ensemble (probability distribution over possible quantum states) is described by a density operator S, which is a non-negative, self-adjoint, trace-class operator of trace 1 on the Hilbert space H describing the quantum system.
From classical probability theory, we know that the expectation of a random variable X is defined by its distribution DX by
{\displaystyle \mathbb {E} (X)=\int _{\mathbb {R} }\lambda \,d\operatorname {D} _{X}(\lambda ),}
assuming, of course, that the random variable is integrable or that the random variable is non-negative. Similarly, let A be an observable of a quantum mechanical system. A is given by a densely defined self-adjoint operator on H. The spectral measure of A, defined by
{\displaystyle A=\int _{\mathbb {R} }\lambda \,d\operatorname {E} _{A}(\lambda ),}
uniquely determines A and conversely, is uniquely determined by A. EA is a Boolean homomorphism from the Borel subsets of R into the lattice Q of self-adjoint projections of H. In analogy with probability theory, given a state S, we introduce the distribution of A under S which is the probability measure defined on the Borel subsets of R by
{\displaystyle \operatorname {D} _{A}(U)=\operatorname {Tr} (\operatorname {E} _{A}(U)S).}
Similarly, the expected value of A is defined in terms of the probability distribution DA by
{\displaystyle \mathbb {E} (A)=\int _{\mathbb {R} }\lambda \,d\operatorname {D} _{A}(\lambda ).}
Note that this expectation is relative to the mixed state S which is used in the definition of DA.
Remark. For technical reasons, one needs to consider separately the positive and negative parts of A defined by the Borel functional calculus for unbounded operators.
One can easily show:
{\displaystyle \mathbb {E} (A)=\operatorname {Tr} (AS)=\operatorname {Tr} (SA).}
The trace of an operator A is written as follows:
{\displaystyle \operatorname {Tr} (A)=\sum _{m}\langle m|A|m\rangle .}
Note that if S is a pure state corresponding to the vector {\displaystyle \psi }, then:
{\displaystyle \mathbb {E} (A)=\langle \psi |A|\psi \rangle .}
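For a finite-dimensional sketch of the identity above (arbitrarily chosen entries, plain Python), the trace formula Tr(AS) can be checked against ⟨ψ|A|ψ⟩ for a pure state:

```python
# For the pure state S = |psi><psi|, the ensemble expectation
# Tr(AS) reduces to <psi|A|psi>.  A 2x2 illustration:
psi = [3 / 5, 4 / 5]                   # normalized state vector
A = [[1.0, 2.0], [2.0, -1.0]]          # a self-adjoint observable

# Density operator S = |psi><psi|
S = [[psi[i] * psi[j] for j in range(2)] for i in range(2)]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

trace_AS = sum(matmul(A, S)[i][i] for i in range(2))
bra_ket = sum(psi[i] * A[i][j] * psi[j] for i in range(2) for j in range(2))

assert abs(trace_AS - bra_ket) < 1e-12
```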
== Von Neumann entropy ==
Of particular significance for describing randomness of a state is the von Neumann entropy of S formally defined by
{\displaystyle \operatorname {H} (S)=-\operatorname {Tr} (S\log _{2}S).}
Actually, the operator S log2 S is not necessarily trace-class. However, if S is a non-negative self-adjoint operator not of trace class, we define Tr(S) = +∞. Note also that any density operator S can be diagonalized; that is, it can be represented in some orthonormal basis by a (possibly infinite) matrix of the form
{\displaystyle {\begin{bmatrix}\lambda _{1}&0&\cdots &0&\cdots \\0&\lambda _{2}&\cdots &0&\cdots \\\vdots &\vdots &\ddots &\\0&0&&\lambda _{n}&\\\vdots &\vdots &&&\ddots \end{bmatrix}}}
and we define
{\displaystyle \operatorname {H} (S)=-\sum _{i}\lambda _{i}\log _{2}\lambda _{i}.}
The convention is that {\displaystyle \;0\log _{2}0=0}, since an event with probability zero should not contribute to the entropy. This value is an extended real number (that is, in [0, ∞]) and this is clearly a unitary invariant of S.
Remark. It is indeed possible that H(S) = +∞ for some density operator S. In fact, let T be the diagonal matrix
{\displaystyle T={\begin{bmatrix}{\frac {1}{2(\log _{2}2)^{2}}}&0&\cdots &0&\cdots \\0&{\frac {1}{3(\log _{2}3)^{2}}}&\cdots &0&\cdots \\\vdots &\vdots &\ddots &\\0&0&&{\frac {1}{n(\log _{2}n)^{2}}}&\\\vdots &\vdots &&&\ddots \end{bmatrix}}}
T is a non-negative trace-class operator, and one can show that T log2 T is not trace-class.
In analogy with classical entropy (notice the similarity in the definitions), H(S) measures the amount of randomness in the state S. The more dispersed the eigenvalues are, the larger the system entropy. For a system in which the space H is finite-dimensional, entropy is maximized for the states S which in diagonal form have the representation
{\displaystyle {\begin{bmatrix}{\frac {1}{n}}&0&\cdots &0\\0&{\frac {1}{n}}&\dots &0\\\vdots &\vdots &\ddots &\vdots \\0&0&\cdots &{\frac {1}{n}}\end{bmatrix}}}
For such an S, H(S) = log2 n. The state S is called the maximally mixed state.
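The entropy formula in diagonal form lends itself to a quick numerical sketch (illustrative values): a pure state has zero entropy, while the maximally mixed state on an n-dimensional space attains log2 n:

```python
import math

# Von Neumann entropy from the eigenvalues of a diagonalized density
# operator: H(S) = -sum_i lambda_i log2 lambda_i, with 0 log2 0 = 0.
def von_neumann_entropy(eigenvalues):
    return -sum(p * math.log2(p) for p in eigenvalues if p > 0)

# A pure state (one eigenvalue equal to 1) has zero entropy...
assert von_neumann_entropy([1.0, 0.0, 0.0, 0.0]) == 0.0

# ...while the maximally mixed state on an n-dimensional space
# attains the maximum value log2 n.
n = 4
mixed = [1.0 / n] * n
assert abs(von_neumann_entropy(mixed) - math.log2(n)) < 1e-12
```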
Recall that a pure state is one of the form
{\displaystyle S=|\psi \rangle \langle \psi |,}
for ψ a vector of norm 1.
S is a pure state if and only if its diagonal form has exactly one non-zero entry, which is a 1.
Entropy can be used as a measure of quantum entanglement.
== Gibbs canonical ensemble ==
Consider an ensemble of systems described by a Hamiltonian H with average energy E. If H has pure-point spectrum and the eigenvalues {\displaystyle E_{n}} of H go to +∞ sufficiently fast, e−rH will be a non-negative trace-class operator for every positive r.
The Gibbs canonical ensemble is described by the state
{\displaystyle S={\frac {\mathrm {e} ^{-\beta H}}{\operatorname {Tr} (\mathrm {e} ^{-\beta H})}}.}
where β is such that the ensemble average of energy satisfies
{\displaystyle \operatorname {Tr} (SH)=E}
and
{\displaystyle \operatorname {Tr} (\mathrm {e} ^{-\beta H})=\sum _{n}\mathrm {e} ^{-\beta E_{n}}=Z(\beta )}
This is called the partition function; it is the quantum mechanical version of the canonical partition function of classical statistical mechanics. The probability that a system chosen at random from the ensemble will be in a state corresponding to energy eigenvalue {\displaystyle E_{m}} is
{\displaystyle {\mathcal {P}}(E_{m})={\frac {\mathrm {e} ^{-\beta E_{m}}}{\sum _{n}\mathrm {e} ^{-\beta E_{n}}}}.}
The Gibbs canonical ensemble maximizes the von Neumann entropy of the state subject to the condition that the average energy is fixed.
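A minimal numerical sketch of the Gibbs canonical state, for a Hamiltonian given by its eigenvalues (arbitrarily chosen energies and β, not from the original text):

```python
import math

# Gibbs state in the energy eigenbasis: p_n = exp(-beta*E_n) / Z(beta).
energies = [0.0, 1.0, 2.0]   # illustrative eigenvalues E_n of H
beta = 1.3                   # illustrative inverse temperature

Z = sum(math.exp(-beta * E) for E in energies)       # partition function Z(beta)
p = [math.exp(-beta * E) / Z for E in energies]      # occupation probabilities

# The density operator has unit trace...
assert abs(sum(p) - 1.0) < 1e-12
# ...and the ensemble average energy Tr(SH) lies in the spectrum's range.
E_avg = sum(pi * Ei for pi, Ei in zip(p, energies))
assert 0.0 < E_avg < max(energies)
```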
== Grand canonical ensemble ==
For open systems where the energy and numbers of particles may fluctuate, the system is described by the grand canonical ensemble, described by the density matrix
{\displaystyle \rho ={\frac {\mathrm {e} ^{\beta (\sum _{i}\mu _{i}N_{i}-H)}}{\operatorname {Tr} \left(\mathrm {e} ^{\beta (\sum _{i}\mu _{i}N_{i}-H)}\right)}}.}
where the N1, N2, ... are the particle number operators for the different species of particles that are exchanged with the reservoir. Note that this is a density matrix including many more states (of varying N) compared to the canonical ensemble.
The grand partition function is
{\displaystyle {\mathcal {Z}}(\beta ,\mu _{1},\mu _{2},\cdots )=\operatorname {Tr} (\mathrm {e} ^{\beta (\sum _{i}\mu _{i}N_{i}-H)})}
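As a worked special case (not from the original text), for a single fermionic mode of energy ε the trace in the grand partition function runs over the occupations N ∈ {0, 1}, giving Z = 1 + e^{β(μ−ε)} and the Fermi–Dirac mean occupation:

```python
import math

# Grand partition function for one fermionic mode of energy eps:
# the trace over occupations N in {0, 1} is a two-term sum,
#   Z = 1 + exp(beta*(mu - eps)),
# and the mean occupation <N> reproduces the Fermi-Dirac distribution.
beta, mu, eps = 2.0, 0.5, 1.0        # illustrative values

Z = 1.0 + math.exp(beta * (mu - eps))
mean_N = math.exp(beta * (mu - eps)) / Z

# The same number from the Fermi-Dirac formula 1/(exp(beta*(eps-mu)) + 1):
fermi_dirac = 1.0 / (math.exp(beta * (eps - mu)) + 1.0)
assert abs(mean_N - fermi_dirac) < 1e-12
```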
== See also ==
Quantum thermodynamics
Thermal quantum field theory
Stochastic thermodynamics
Abstract Wiener space
== Further reading == | Wikipedia/Quantum_statistical_mechanics |
The Journal of Mathematical Physics is a peer-reviewed journal published monthly by the American Institute of Physics and devoted to the publication of papers in mathematical physics. The journal was first published bimonthly beginning in January 1960; it became a monthly publication in 1963. The current editor is Jan Philip Solovej of the University of Copenhagen. Its 2018 impact factor is 1.355.
== Abstracting and indexing ==
This journal is indexed by the following services:
== References ==
== External links ==
Journal of Mathematical Physics online
=== Archival collections ===
Journal of Mathematical Physics referee files 1985-2005, Niels Bohr Library & Archives | Wikipedia/Journal_of_Mathematical_Physics |
This timeline lists significant discoveries in physics and the laws of nature, including experimental discoveries, theoretical proposals that were confirmed experimentally, and theories that have significantly influenced current thinking in modern physics. Such discoveries are often a multi-step, multi-person process. Multiple discovery sometimes occurs when multiple research groups discover the same phenomenon at about the same time, and scientific priority is often disputed. The listings below include some of the most significant people and ideas by date of publication or experiment.
== Antiquity ==
624–546 BCE – Thales of Miletus: Introduced natural philosophy
610–546 BCE – Anaximander: Concept of Earth floating in space
460–370 BCE – Democritus: Atomism via thought experiment
384–322 BCE – Aristotle: Aristotelian physics, earliest effective theory of physics
c. 300 BCE – Euclid: Euclidean geometry
c. 250 BCE – Archimedes: Archimedes' principle
310–230 BCE – Aristarchus: Proposed heliocentrism
276–194 BCE – Eratosthenes: Circumference of the Earth measured
190–150 BCE – Seleucus: Support of heliocentrism based on reasoning
220–150 BCE – Apollonius and Hipparchus: Invention of the astrolabe
205–86 BCE – Hipparchus or unknown: Antikythera mechanism, an analog computer of planetary motions
129 BCE – Hipparchus: Hipparchus star catalog of the entire sky and precession of the equinoxes
60 CE – Hero of Alexandria: Catoptrics: Hero's principle of the shortest path of light
c. 150 CE – Ptolemy: The Ptolemaic model standardized geocentrism
== Middle Ages ==
500 CE – John Philoponus: Theory of impetus
984 CE – Ibn Sahl: Law of refraction
1010 – Ibn al-Haytham (Alhazen): Optics, finite speed of light
c. 1030 – Ibn Sina (Avicenna): Concept of force
c. 1050 – al-Biruni: Speed of light is much larger than speed of sound
c. 1100 – Al-Baghdadi: Theory of motion with distinction between velocity and acceleration
== 16th century ==
1514 – Nicolaus Copernicus: Heliocentrism
1586 – Simon Stevin: Delft tower experiment
== 17th century ==
1608 – Earliest known telescopes
1609, 1619 – Kepler: Kepler's laws of planetary motion
1610 – Galileo Galilei: discovered the Galilean moons of Jupiter
1613 – Galileo Galilei: Inertia
1621 – Willebrord Snellius: Snell's law
1632 – Galileo Galilei: The Galilean principle (the laws of motion are the same in all inertial frames)
1660 – Blaise Pascal: Pascal's law
1660 – Robert Hooke: Hooke's law
1662 – Robert Boyle: Boyle's law
1663 – Otto von Guericke: first electrostatic generator
1676 – Ole Rømer: Rømer's determination of the speed of light from observations of the moons of Jupiter
1678 – Christiaan Huygens mathematical wave theory of light, published in his Treatise on Light
1687 – Isaac Newton: Newton's laws of motion, and Newton's law of universal gravitation
== 18th century ==
1738 – Daniel Bernoulli: First model of the kinetic theory of gases
1745–46 – Ewald Georg von Kleist and Pieter van Musschenbroek: discovery of the Leyden jar
1752 – Benjamin Franklin: kite experiment
1760 – Joseph-Louis Lagrange: Lagrangian mechanics
1782 – Antoine Lavoisier: conservation of mass
1785 – Charles-Augustin de Coulomb: Coulomb's inverse-square law for electric charges confirmed
1800 – Alessandro Volta: discovery of voltaic pile
== 19th century ==
1800 - William Herschel: Infrared light
1801 – Thomas Young: Wave theory of light
1801 - Johann Wilhelm Ritter: Ultraviolet light
1803 – John Dalton: Atomic theory of matter
1806 – Thomas Young: Kinetic energy
1814 – Augustin-Jean Fresnel: Wave theory of light, optical interference
1820 – André-Marie Ampère, Jean-Baptiste Biot, and Félix Savart: Evidence for electromagnetic interactions (Biot–Savart law)
1822 – Joseph Fourier: Heat equation
1824 – Nicolas Léonard Sadi Carnot: Ideal gas cycle analysis (Carnot cycle), internal combustion engine
1826 – Ampère's circuital law
1827 – Georg Ohm: Electrical resistance
1831 – Michael Faraday: Faraday's law of induction
1833 – William Rowan Hamilton: Hamiltonian mechanics
1838 – Michael Faraday: Lines of force
1838 – Wilhelm Eduard Weber and Carl Friedrich Gauss: Earth's magnetic field
1842–43 – William Thomson, 1st Baron Kelvin and Julius von Mayer: Conservation of energy
1842 – Christian Doppler: Doppler effect
1845 – Michael Faraday: Faraday rotation (interaction of light and magnetic field)
1847 – Hermann von Helmholtz & James Prescott Joule: Conservation of energy
1850–51 – William Thomson, 1st Baron Kelvin & Rudolf Clausius: Second law of thermodynamics
1857 – Rudolf Clausius: Introduced translational, rotational, and vibrational molecular motions
1857 – Rudolf Clausius: Introduced the concept of mean free path
1860 – James Clerk Maxwell: Introduced statistical mechanics with the Maxwell distribution
1861 – Gustav Kirchhoff: Black body
1861–62 – Maxwell's equations
1863 – Rudolf Clausius: Entropy
1864 – James Clerk Maxwell: A Dynamical Theory of the Electromagnetic Field (electromagnetic radiation)
1867 – James Clerk Maxwell: On the Dynamical Theory of Gases (kinetic theory of gases)
1871–89 – Ludwig Boltzmann & Josiah Willard Gibbs: Statistical mechanics (Boltzmann equation, 1872)
1873 – Maxwell: A Treatise on Electricity and Magnetism
1884 – Boltzmann derives Stefan radiation law
1887 – Michelson–Morley experiment
1887 – Heinrich Rudolf Hertz: Electromagnetic waves
1888 – Johannes Rydberg: Rydberg formula
1889, 1892 – Lorentz-FitzGerald contraction
1893 – Wilhelm Wien: Wien's displacement law for black-body radiation
1895 – Wilhelm Röntgen: X-rays
1896 – Henri Becquerel: Radioactivity
1896 – Pieter Zeeman: Zeeman effect
1897 – J. J. Thomson: Electron discovered
1900 – Max Planck: Formula for black-body radiation – the quanta solution to radiation ultraviolet catastrophe
1900 - Paul Villard: Gamma rays
== 20th century ==
1904 – J. J. Thomson: Plum pudding model of the atom
1905 – Albert Einstein: Special relativity, proposes light quantum (later named photon) to explain the photoelectric effect, Brownian motion, Mass–energy equivalence
1908 – Hermann Minkowski: Minkowski space
1911 – Ernest Rutherford: Discovery of the atomic nucleus (Rutherford model)
1911 – Kamerlingh Onnes: Superconductivity
1912 - Victor Francis Hess: Cosmic rays
1913 – Niels Bohr: Bohr model of the atom
1915 – Albert Einstein: General relativity
1915 – Emmy Noether: Noether's theorem relates symmetries to conservation laws.
1916 – Schwarzschild metric modeling gravity outside a large sphere
1917 - Ernest Rutherford: Proton proved
1919 – Arthur Eddington: Light bending confirmed – evidence for general relativity
1919–1926 – Kaluza–Klein theory proposing unification of gravity and electromagnetism
1922 – Alexander Friedmann proposes expanding universe
1922–37 – Friedmann–Lemaître–Robertson–Walker metric cosmological model
1923 – Stern–Gerlach experiment
1923 – Edwin Hubble: Galaxies discovered
1923 – Arthur Compton: Particle nature of photons confirmed by observation of photon momentum
1924 – Bose–Einstein statistics
1924 – Louis de Broglie: De Broglie wave
1925 – Werner Heisenberg: Matrix mechanics
1925–27 – Niels Bohr & Max Planck: Quantum mechanics
1925 – Stellar structure understood
1926 – Fermi-Dirac Statistics
1926 – Erwin Schrödinger: Schrödinger Equation
1927 – Werner Heisenberg: Uncertainty principle
1927 – Georges Lemaître: Big Bang
1927 – Paul Dirac: Dirac equation
1927 – Max Born: Born rule
1928 – Paul Dirac proposes the antiparticle
1929 – Edwin Hubble: Expansion of the universe confirmed
1932 – Carl David Anderson: Antimatter (positrons) discovered
1932 – James Chadwick: Neutron discovered
1933 – Ernst Ruska: Invention of the electron microscope
1935 – Subrahmanyan Chandrasekhar: Chandrasekhar limit for black hole collapse
1937 - Majorana particle, hypothesized as a fermion that is its own antiparticle.
1937 – Muon discovered by Carl David Anderson and Seth Neddermeyer
1938 – Pyotr Kapitsa: Superfluidity discovered
1938 – Otto Hahn, Lise Meitner and Fritz Strassmann: Nuclear fission discovered
1938–39 – Stellar fusion explains energy production in stars
1939 – Uranium fission discovered
1941 – Feynman path integral
1944 – Theory of magnetism in 2D: Ising model
1947 – C.F. Powell, Giuseppe Occhialini, César Lattes: Pion discovered
1948 – Richard Feynman, Shinichiro Tomonaga, Julian Schwinger, Freeman Dyson: Quantum electrodynamics
1948 – Invention of the maser and laser by Charles Townes
1948 – Feynman diagrams
1955 - Emilio Segrè and Owen Chamberlain: Antiproton discovered
1956 – Bruce Cork: Antineutron discovered
1956 – Electron neutrino discovered
1956–57 – Parity violation proved by Chien-Shiung Wu
1957 - Many-worlds, also called the relative state formulation or the Everett interpretation.
1957 – BCS theory explaining superconductivity
1959–60 – Role of topology in quantum physics predicted and confirmed
1962 – Murray Gell-Mann and Yuval Ne'eman: SU(3) theory of strong interactions
1962 – Muon neutrino discovered
1963 – Chien-Shiung Wu confirms the conserved vector current theory for weak interactions
1963 – Murray Gell-Mann and George Zweig: Quarks predicted
1964 – Bell's Theorem initiates quantitative study of quantum entanglement
1964 - First black hole, Cygnus X-1, discovered
1964 – CP violation discovered by James Cronin and Val Fitch.
1965 – Arno Penzias and Robert Wilson: Cosmic Microwave Background (CMB) discovered
1967 – Unification of weak interaction and electromagnetism (electroweak theory)
1967 – Solar neutrino problem found
1967 – Pulsars (rotating neutron stars) discovered
1968 – Experimental evidence for quarks found
1968 – Vera Rubin: Dark matter theories
1970–73 – Standard Model of elementary particles invented
1971 – Helium 3 superfluidity
1971–75 – Michael Fisher, Kenneth G. Wilson, and Leo Kadanoff: Renormalization group
1972 – Jacob Bekenstein: Black Hole Entropy suggested
1974 – Stephen Hawking: Black hole radiation (Hawking radiation) predicted
1974 – Charmed quark discovered
1975 – Tau lepton found
1975 – Abraham Pais and Sam Treiman: Introduction of the Standard Model of particle physics term
1977 – Bottom quark found
1977 – Anderson localization recognised (Nobel Prize in 1977: Philip W. Anderson, Nevill Mott, John H. Van Vleck)
1980 – Strangeness as a signature of quark-gluon plasma predicted
1980 – Richard Feynman proposes quantum computing
1980 – Quantum Hall effect
1981 – Alan Guth: Theory of cosmic inflation proposed
1981 – Fractional quantum Hall effect discovered
1982 – Aspect experiment confirms violations of Bell's inequalities
1983 – Simulated annealing
1984 – W and Z bosons directly observed
1984 – First laboratory implementation of quantum cryptography
1987 – High-temperature superconductivity discovered in 1986, awarded Nobel prize in 1987 (J. Georg Bednorz and K. Alexander Müller)
1989–98 – Quantum annealing
1993 – Quantum teleportation of unknown states proposed
1994 – Shor's algorithm discovered, initiating the serious study of quantum computation
1994–97 – Matrix models/M-theory
1995 – Eric Cornell, Carl Wieman and Wolfgang Ketterle: Bose–Einstein condensate observed
1995 – Top quark discovered
1995–2000 – Econophysics and Kinetic exchange models of markets
1997 – Juan Maldacena proposed the AdS/CFT correspondence
1998 – Accelerating expansion of the universe discovered by the Supernova Cosmology Project and the High-Z Supernova Search Team
1998 – Atmospheric neutrino oscillation established
1999 – Lene Vestergaard Hau: Slow light experimentally demonstrated
2000 – Quark-gluon plasma found
2000 – Tau neutrino found
== 21st century ==
2001 – Solar neutrino oscillation observed, resolving the solar neutrino problem
2003 – WMAP observations of cosmic microwave background
2004 – Exceptional properties of graphene discovered
2007 – Giant magnetoresistance recognized (Nobel prize, Albert Fert and Peter Grünberg)
2008 – First artificial production of antimatter (positrons), by the LLNL
2008 – 16-year study of stellar orbits around Sagittarius A* provides strong evidence for a supermassive black hole at the centre of the Milky Way galaxy
2009 – Planck begins observations of cosmic microwave background
2012 – Higgs boson found by the Compact Muon Solenoid and ATLAS experiments at the Large Hadron Collider
2015 – Gravitational waves are observed
2016 – Topological order – topological phase transitions and order – recognized (Nobel prize, David J. Thouless, F. Duncan M. Haldane and J. Michael Kosterlitz)
2019 – First image of a black hole
2023 – Experimental evidence of the stochastic gravitational-wave background
2023 – First "image" of the Milky Way in neutrinos instead of light
== See also ==
Physics
List of timelines
List of unsolved problems in physics
== References ==
Chemical physics is a branch of physics that studies chemical processes from a physical point of view. It focuses on understanding the physical properties and behavior of chemical systems, using principles from both physics and chemistry. This field investigates physicochemical phenomena using techniques from atomic and molecular physics and condensed matter physics.
The United States Department of Education defines chemical physics as "A program that focuses on the scientific study of structural phenomena combining the disciplines of physical chemistry and atomic/molecular physics. Includes instruction in heterogeneous structures, alignment and surface phenomena, quantum theory, mathematical physics, statistical and classical mechanics, chemical kinetics, and laser physics."
== Distinction between chemical physics and physical chemistry ==
While at the interface of physics and chemistry, chemical physics is distinct from physical chemistry: it focuses on using physical theories, such as quantum mechanics, statistical mechanics, and molecular dynamics, to understand and explain chemical phenomena at the microscopic level. Physical chemistry, by contrast, covers a broader range of topics, including thermodynamics, kinetics, and spectroscopy, studies the physical nature of chemical processes, and often links macroscopic and microscopic chemical behavior. The boundary between the two fields remains blurred, as they share common ground, and scientists often practice in both during their research given the significant overlap in topics and techniques. Journals like PCCP (Physical Chemistry Chemical Physics) cover research in both areas, highlighting this overlap.
== History ==
The term "chemical physics" in its modern sense was first used by the German scientist A. Eucken, who published "A Course in Chemical Physics" in 1930. Prior to this, in 1927, the publication "Electronic Chemistry" by V. N. Kondrat'ev, N. N. Semenov, and Iu. B. Khariton hinted at the meaning of "chemical physics" through its title. The Institute of Chemical Physics of the Academy of Sciences of the USSR was established in 1931. In the United States, "The Journal of Chemical Physics" has been published since 1933.
In 1964, the General Electric Foundation established the Irving Langmuir Award in Chemical Physics to honor outstanding achievements in the field of chemical physics. Named after the Nobel Laureate Irving Langmuir, the award recognizes significant contributions to understanding chemical phenomena through physics principles, impacting areas such as surface chemistry and quantum mechanics.
== What chemical physicists do ==
Chemical physicists investigate the structure and dynamics of ions, free radicals, polymers, clusters, and molecules. Their research includes studying the quantum mechanical aspects of chemical reactions, solvation processes, and the energy flow within and between molecules, and nanomaterials such as quantum dots. Experiments in chemical physics typically involve using spectroscopic methods to understand hydrogen bonding, electron transfer, the formation and dissolution of chemical bonds, chemical reactions, and the formation of nanoparticles.
The research objectives in the theoretical aspect of chemical physics are to understand how chemical structures and reactions work at the quantum mechanical level. This field also aims to clarify how ions and radicals behave and react in the gas phase and to develop precise approximations that simplify the computation of the physics of chemical phenomena.
Chemical physicists are looking for answers to such questions as:
Can we experimentally test quantum mechanical predictions of the vibrations and rotations of simple molecules? Or even those of complex molecules (such as proteins)?
Can we develop more accurate methods for calculating the electronic structure and properties of molecules?
Can we understand chemical reactions from first principles?
Why do quantum dots start blinking (in a pattern suggesting fractal kinetics) after absorbing photons?
How do chemical reactions really take place?
What is the step-by-step process that occurs when an isolated molecule becomes solvated? Or when a whole ensemble of molecules becomes solvated?
Can we use the properties of negative ions to determine molecular structures, understand the dynamics of chemical reactions, or explain photodissociation?
Why does a stream of soft x-rays knock enough electrons out of the atoms in a xenon cluster to cause the cluster to explode?
== Journals ==
The Journal of Chemical Physics
Journal of Physical Chemistry Letters
Journal of Physical Chemistry A
Journal of Physical Chemistry B
Journal of Physical Chemistry C
Physical Chemistry Chemical Physics
Chemical Physics Letters
Chemical Physics
ChemPhysChem
Molecular Physics (journal)
== See also ==
Intermolecular force
Molecular dynamics
Quantum chemistry
Solid-state physics or Condensed matter physics
Surface science
Van der Waals molecule
== References ==
Maxwell's equations, or Maxwell–Heaviside equations, are a set of coupled partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, and electric and magnetic circuits.
The equations provide a mathematical model for electric, optical, and radio technologies, such as power generation, electric motors, wireless communication, lenses, radar, etc. They describe how electric and magnetic fields are generated by charges, currents, and changes of the fields. The equations are named after the physicist and mathematician James Clerk Maxwell, who, in 1861 and 1862, published an early form of the equations that included the Lorentz force law. Maxwell first used the equations to propose that light is an electromagnetic phenomenon. The modern form of the equations in their most common formulation is credited to Oliver Heaviside.
Maxwell's equations may be combined to demonstrate how fluctuations in electromagnetic fields (waves) propagate at a constant speed in vacuum, c (299792458 m/s). Known as electromagnetic radiation, these waves occur at various wavelengths to produce a spectrum of radiation from radio waves to gamma rays.
In partial differential equation form and a coherent system of units, Maxwell's microscopic equations can be written as (top to bottom: Gauss's law, Gauss's law for magnetism, Faraday's law, Ampère-Maxwell law)
{\displaystyle {\begin{aligned}\nabla \cdot \mathbf {E} \,\,\,&={\frac {\rho }{\varepsilon _{0}}}\\\nabla \cdot \mathbf {B} \,\,\,&=0\\\nabla \times \mathbf {E} &=-{\frac {\partial \mathbf {B} }{\partial t}}\\\nabla \times \mathbf {B} &=\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\end{aligned}}}
With E the electric field, B the magnetic field, ρ the electric charge density, and J the current density. Here ε0 is the vacuum permittivity and μ0 the vacuum permeability.
The equations have two major variants:
The microscopic equations have universal applicability but are unwieldy for common calculations. They relate the electric and magnetic fields to total charge and total current, including the complicated charges and currents in materials at the atomic scale.
The macroscopic equations define two new auxiliary fields that describe the large-scale behaviour of matter without having to consider atomic-scale charges and quantum phenomena like spins. However, their use requires experimentally determined parameters for a phenomenological description of the electromagnetic response of materials.
The term "Maxwell's equations" is often also used for equivalent alternative formulations. Versions of Maxwell's equations based on the electric and magnetic scalar potentials are preferred for explicitly solving the equations as a boundary value problem, analytical mechanics, or for use in quantum mechanics. The covariant formulation (on spacetime rather than space and time separately) makes the compatibility of Maxwell's equations with special relativity manifest. Maxwell's equations in curved spacetime, commonly used in high-energy and gravitational physics, are compatible with general relativity. In fact, Albert Einstein developed special and general relativity to accommodate the invariant speed of light, a consequence of Maxwell's equations, with the principle that only relative movement has physical consequences.
The publication of the equations marked the unification of a theory for previously separately described phenomena: magnetism, electricity, light, and associated radiation.
Since the mid-20th century, it has been understood that Maxwell's equations do not give an exact description of electromagnetic phenomena, but are instead a classical limit of the more precise theory of quantum electrodynamics.
== History of the equations ==
== Conceptual descriptions ==
=== Gauss's law ===
Gauss's law describes the relationship between an electric field and electric charges: an electric field points away from positive charges and towards negative charges, and the net outflow of the electric field through a closed surface is proportional to the enclosed charge, including bound charge due to polarization of material. The coefficient of the proportion is the permittivity of free space.
=== Gauss's law for magnetism ===
Gauss's law for magnetism states that electric charges have no magnetic analogues, called magnetic monopoles; no north or south magnetic poles exist in isolation. Instead, the magnetic field of a material is attributed to a dipole, and the net outflow of the magnetic field through a closed surface is zero. Magnetic dipoles may be represented as loops of current or inseparable pairs of equal and opposite "magnetic charges". Precisely, the total magnetic flux through a Gaussian surface is zero, and the magnetic field is a solenoidal vector field.
=== Faraday's law ===
The Maxwell–Faraday version of Faraday's law of induction describes how a time-varying magnetic field corresponds to the negative curl of an electric field. In integral form, it states that the work per unit charge required to move a charge around a closed loop equals the rate of change of the magnetic flux through the enclosed surface.
The electromagnetic induction is the operating principle behind many electric generators: for example, a rotating bar magnet creates a changing magnetic field and generates an electric field in a nearby wire.
=== Ampère–Maxwell law ===
The original law of Ampère states that magnetic fields relate to electric current. Maxwell's addition states that magnetic fields also relate to changing electric fields, which Maxwell called displacement current. The integral form states that electric and displacement currents are associated with a proportional magnetic field along any enclosing curve.
Maxwell's modification of Ampère's circuital law is important because the laws of Ampère and Gauss must otherwise be adjusted for static fields. As a consequence, it predicts that a rotating magnetic field occurs with a changing electric field. A further consequence is the existence of self-sustaining electromagnetic waves which travel through empty space.
The speed calculated for electromagnetic waves, which could be predicted from experiments on charges and currents, matches the speed of light; indeed, light is one form of electromagnetic radiation (as are X-rays, radio waves, and others). Maxwell understood the connection between electromagnetic waves and light in 1861, thereby unifying the theories of electromagnetism and optics.
== Formulation in terms of electric and magnetic fields (microscopic or in vacuum version) ==
In the electric and magnetic field formulation there are four equations that determine the fields for given charge and current distribution. A separate law of nature, the Lorentz force law, describes how the electric and magnetic fields act on charged particles and currents. By convention, a version of this law in the original equations by Maxwell is no longer included. The vector calculus formalism below, the work of Oliver Heaviside, has become standard. It is rotationally invariant, and therefore mathematically more transparent than Maxwell's original 20 equations in x, y and z components. The relativistic formulations are more symmetric and Lorentz invariant. For the same equations expressed using tensor calculus or differential forms (see § Alternative formulations).
The differential and integral formulations are mathematically equivalent; both are useful. The integral formulation relates fields within a region of space to fields on the boundary and can often be used to simplify and directly calculate fields from symmetric distributions of charges and currents. On the other hand, the differential equations are purely local and are a more natural starting point for calculating the fields in more complicated (less symmetric) situations, for example using finite element analysis.
=== Key to the notation ===
Symbols in bold represent vector quantities, and symbols in italics represent scalar quantities, unless otherwise indicated.
The equations introduce the electric field, E, a vector field, and the magnetic field, B, a pseudovector field, each generally having a time and location dependence.
The sources are
the total electric charge density (total charge per unit volume), ρ, and
the total electric current density (total current per unit area), J.
The universal constants appearing in the equations (the first two ones explicitly only in the SI formulation) are:
the permittivity of free space, ε0, and
the permeability of free space, μ0, and
the speed of light, c = 1/√(ε0μ0).
==== Differential equations ====
In the differential equations,
the nabla symbol, ∇, denotes the three-dimensional gradient operator, del,
the ∇⋅ symbol (pronounced "del dot") denotes the divergence operator,
the ∇× symbol (pronounced "del cross") denotes the curl operator.
==== Integral equations ====
In the integral equations,
Ω is any volume with closed boundary surface ∂Ω, and
Σ is any surface with closed boundary curve ∂Σ,
The equations are a little easier to interpret with time-independent surfaces and volumes. Time-independent surfaces and volumes are "fixed" and do not change over a given time interval. For example, since the surface is time-independent, we can bring the differentiation under the integral sign in Faraday's law:
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\iint _{\Sigma }\mathbf {B} \cdot \mathrm {d} \mathbf {S} =\iint _{\Sigma }{\frac {\partial \mathbf {B} }{\partial t}}\cdot \mathrm {d} \mathbf {S} \,,}
Maxwell's equations can be formulated with possibly time-dependent surfaces and volumes by using the differential version and using Gauss' and Stokes' theorems as appropriate.
∯∂Ω is a surface integral over the boundary surface ∂Ω, with the loop indicating the surface is closed,
∭Ω is a volume integral over the volume Ω,
∮∂Σ is a line integral around the boundary curve ∂Σ, with the loop indicating the curve is closed,
∬Σ is a surface integral over the surface Σ.
The total electric charge Q enclosed in Ω is the volume integral over Ω of the charge density ρ (see the "macroscopic formulation" section below):
{\displaystyle Q=\iiint _{\Omega }\rho \ \mathrm {d} V,}
where dV is the volume element.
The net magnetic flux ΦB is the surface integral of the magnetic field B passing through a fixed surface, Σ:
{\displaystyle \Phi _{B}=\iint _{\Sigma }\mathbf {B} \cdot \mathrm {d} \mathbf {S} ,}
The net electric flux ΦE is the surface integral of the electric field E passing through Σ:
{\displaystyle \Phi _{E}=\iint _{\Sigma }\mathbf {E} \cdot \mathrm {d} \mathbf {S} ,}
The net electric current I is the surface integral of the electric current density J passing through Σ:
{\displaystyle I=\iint _{\Sigma }\mathbf {J} \cdot \mathrm {d} \mathbf {S} ,}
where dS denotes the differential vector element of surface area S, normal to surface Σ. (Vector area is sometimes denoted by A rather than S, but this conflicts with the notation for magnetic vector potential).
=== Formulation in the SI ===
=== Formulation in the Gaussian system ===
The definitions of charge, electric field, and magnetic field can be altered to simplify theoretical calculation, by absorbing dimensioned factors of ε0 and μ0 into the units (and thus redefining these). With a corresponding change in the values of the quantities for the Lorentz force law this yields the same physics, i.e. trajectories of charged particles, or work done by an electric motor. These definitions are often preferred in theoretical and high energy physics where it is natural to take the electric and magnetic field with the same units, to simplify the appearance of the electromagnetic tensor: the Lorentz covariant object unifying electric and magnetic field would then contain components with uniform unit and dimension.: vii Such modified definitions are conventionally used with the Gaussian (CGS) units. Using these definitions, colloquially "in Gaussian units",
the Maxwell equations become:
The equations simplify slightly when a system of quantities is chosen in which the speed of light, c, is used for nondimensionalization, so that, for example, seconds and lightseconds are interchangeable, and c = 1.
Further changes are possible by absorbing factors of 4π. This process, called rationalization, affects whether Coulomb's law or Gauss's law includes such a factor (see Heaviside–Lorentz units, used mainly in particle physics).
== Relationship between differential and integral formulations ==
The equivalence of the differential and integral formulations is a consequence of the Gauss divergence theorem and the Kelvin–Stokes theorem.
=== Flux and divergence ===
According to the (purely mathematical) Gauss divergence theorem, the electric flux through the
boundary surface ∂Ω can be rewritten as
{\displaystyle {\vphantom {\oint }}_{\scriptstyle \partial \Omega }\mathbf {E} \cdot \mathrm {d} \mathbf {S} =\iiint _{\Omega }\nabla \cdot \mathbf {E} \,\mathrm {d} V}
The integral version of Gauss's equation can thus be rewritten as
{\displaystyle \iiint _{\Omega }\left(\nabla \cdot \mathbf {E} -{\frac {\rho }{\varepsilon _{0}}}\right)\,\mathrm {d} V=0}
Since Ω is arbitrary (e.g. an arbitrary small ball with arbitrary center), this is satisfied if and only if the integrand is zero everywhere. This is
the differential equations formulation of Gauss equation up to a trivial rearrangement.
Similarly rewriting the magnetic flux in Gauss's law for magnetism in integral form gives
{\displaystyle {\vphantom {\oint }}_{\scriptstyle \partial \Omega }\mathbf {B} \cdot \mathrm {d} \mathbf {S} =\iiint _{\Omega }\nabla \cdot \mathbf {B} \,\mathrm {d} V=0.}
which is satisfied for all Ω if and only if
{\displaystyle \nabla \cdot \mathbf {B} =0} everywhere.
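This flux–divergence equivalence can be sanity-checked numerically (an illustrative sketch; the field E = (x, y, z) is chosen here for convenience and is not from the article). Its divergence is 3 everywhere, so the volume integral over the unit ball, estimated by Monte Carlo, should match the exact flux 4π through the unit sphere:

```python
import numpy as np

# Monte Carlo check of the divergence theorem for the illustrative field
# E = (x, y, z): div E = 3, so the volume integral over the unit ball is
# 3 * (4/3) * pi = 4 * pi, which equals the flux of E through the unit
# sphere (E . n = 1 on the sphere, times surface area 4 * pi).
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200_000, 3))
inside = (pts ** 2).sum(axis=1) <= 1.0
ball_volume = 8.0 * inside.mean()      # cube volume 8 times the hit fraction
volume_integral = 3.0 * ball_volume    # integral of div E over the ball
flux = 4.0 * np.pi                     # exact flux through the boundary sphere
print(volume_integral, flux)           # both close to 12.566
```

The agreement improves as the sample count grows, mirroring the statement that the identity holds for any volume Ω.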
=== Circulation and curl ===
By the Kelvin–Stokes theorem we can rewrite the line integrals of the fields around the closed boundary curve ∂Σ to an integral of the "circulation of the fields" (i.e. their curls) over a surface it bounds, i.e.
{\displaystyle \oint _{\partial \Sigma }\mathbf {B} \cdot \mathrm {d} {\boldsymbol {\ell }}=\iint _{\Sigma }(\nabla \times \mathbf {B} )\cdot \mathrm {d} \mathbf {S} ,}
Hence the Ampère–Maxwell law, the modified version of Ampère's circuital law, in integral form can be rewritten as
{\displaystyle \iint _{\Sigma }\left(\nabla \times \mathbf {B} -\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\right)\cdot \mathrm {d} \mathbf {S} =0.}
Since Σ can be chosen arbitrarily, e.g. as an arbitrary small, arbitrary oriented, and arbitrary centered disk, we conclude that the integrand is zero if and only if the Ampère–Maxwell law in differential equations form is satisfied.
The equivalence of Faraday's law in differential and integral form follows likewise.
The line integrals and curls are analogous to quantities in classical fluid dynamics: the circulation of a fluid is the line integral of the fluid's flow velocity field around a closed loop, and the vorticity of the fluid is the curl of the velocity field.
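The Kelvin–Stokes equivalence admits the same kind of numerical check (again a sketch with an illustrative field, not one from the article). For B = (−y, x, 0) the curl is (0, 0, 2), so the circulation around the unit circle and the flux of the curl through the unit disk should both equal 2π:

```python
import numpy as np

# Circulation of B = (-y, x, 0) around the unit circle in the z = 0 plane,
# approximated by a Riemann sum. On the circle, the tangent is (-sin, cos, 0),
# so B . dl = (sin^2 + cos^2) dtheta = dtheta, and the integral is 2 * pi.
n = 100_000
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
integrand = np.sin(theta) ** 2 + np.cos(theta) ** 2   # B . tangent at each point
circulation = integrand.sum() * (2.0 * np.pi / n)
flux_of_curl = 2.0 * np.pi   # curl B = (0, 0, 2); flux = 2 * area of unit disk
print(circulation, flux_of_curl)   # both ~6.2832
```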
== Charge conservation ==
The invariance of charge can be derived as a corollary of Maxwell's equations. The left-hand side of the Ampère–Maxwell law has zero divergence by the div–curl identity. Expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law gives:
{\displaystyle 0=\nabla \cdot (\nabla \times \mathbf {B} )=\nabla \cdot \left(\mu _{0}\left(\mathbf {J} +\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}\right)\right)=\mu _{0}\left(\nabla \cdot \mathbf {J} +\varepsilon _{0}{\frac {\partial }{\partial t}}\nabla \cdot \mathbf {E} \right)=\mu _{0}\left(\nabla \cdot \mathbf {J} +{\frac {\partial \rho }{\partial t}}\right)}
i.e.,
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \mathbf {J} =0.}
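The div–curl identity underlying this derivation, ∇·(∇×B) = 0, can be verified symbolically (an illustrative SymPy sketch; the field below is arbitrary and not taken from the article):

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

# The divergence of a curl vanishes identically for any smooth vector field.
# Check the identity on an arbitrary-looking smooth field B.
N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
B = (x**2 * y) * N.i + sp.sin(y * z) * N.j + z * sp.exp(x) * N.k
print(sp.simplify(divergence(curl(B))))  # 0
```

The terms cancel exactly because mixed partial derivatives commute, which is the same fact the continuity equation above relies on.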
By the Gauss divergence theorem, this means the rate of change of charge in a fixed volume equals the net current flowing through the boundary:
{\displaystyle {\frac {d}{dt}}Q_{\Omega }={\frac {d}{dt}}\iiint _{\Omega }\rho \,\mathrm {d} V=-{\vphantom {\oint }}_{\scriptstyle \partial \Omega }\mathbf {J} \cdot \mathrm {d} \mathbf {S} =-I_{\partial \Omega }.}
In particular, in an isolated system the total charge is conserved.
== Vacuum equations, electromagnetic waves and speed of light ==
In a region with no charges (ρ = 0) and no currents (J = 0), such as in vacuum, Maxwell's equations reduce to:
{\displaystyle {\begin{aligned}\nabla \cdot \mathbf {E} &=0,&\nabla \times \mathbf {E} +{\frac {\partial \mathbf {B} }{\partial t}}=0,\\\nabla \cdot \mathbf {B} &=0,&\nabla \times \mathbf {B} -\mu _{0}\varepsilon _{0}{\frac {\partial \mathbf {E} }{\partial t}}=0.\end{aligned}}}
Taking the curl (∇×) of the curl equations, and using the curl of the curl identity we obtain
{\displaystyle {\begin{aligned}\mu _{0}\varepsilon _{0}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} =0,\\\mu _{0}\varepsilon _{0}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}-\nabla ^{2}\mathbf {B} =0.\end{aligned}}}
The quantity
{\displaystyle \mu _{0}\varepsilon _{0}}
has the dimension (T/L)2. Defining
{\displaystyle c=(\mu _{0}\varepsilon _{0})^{-1/2}}
, the equations above have the form of the standard wave equations
{\displaystyle {\begin{aligned}{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {E} }{\partial t^{2}}}-\nabla ^{2}\mathbf {E} =0,\\{\frac {1}{c^{2}}}{\frac {\partial ^{2}\mathbf {B} }{\partial t^{2}}}-\nabla ^{2}\mathbf {B} =0.\end{aligned}}}
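The derivation can be spot-checked symbolically: a plane wave travelling at speed c should satisfy the one-dimensional wave equation identically. A small SymPy sketch (the cosine wave profile is an illustrative choice):

```python
import sympy as sp

x, t = sp.symbols('x t')
c, k = sp.symbols('c k', positive=True)

# A plane wave travelling in the +x direction at speed c.
E = sp.cos(k * (x - c * t))

# Substitute into (1/c^2) d^2E/dt^2 - d^2E/dx^2; it should vanish identically.
wave_residual = sp.diff(E, t, 2) / c**2 - sp.diff(E, x, 2)
assert sp.simplify(wave_residual) == 0
```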
Already during Maxwell's lifetime, it was found that the known values for
{\displaystyle \varepsilon _{0}}
and
{\displaystyle \mu _{0}}
give
{\displaystyle c\approx 2.998\times 10^{8}~{\text{m/s}}}
, then already known to be the speed of light in free space. This led him to propose that light and radio waves are propagating electromagnetic waves, a proposal since amply confirmed. In the old SI system of units, the values of
{\displaystyle \mu _{0}=4\pi \times 10^{-7}}
and
{\displaystyle c=299\,792\,458~{\text{m/s}}}
are defined constants (which means that by definition
{\displaystyle \varepsilon _{0}=8.854\,187\,8...\times 10^{-12}~{\text{F/m}}}
) that define the ampere and the metre. In the new SI system, only c keeps its defined value, and the elementary charge gets a defined value.
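The old-SI relationship between the three constants can be checked directly; a minimal Python sketch using the defined values quoted above:

```python
import math

# Old-SI defined constants:
mu_0 = 4 * math.pi * 1e-7        # H/m (defined)
c = 299_792_458                  # m/s (defined)

# epsilon_0 then follows from c = 1/sqrt(mu_0 * epsilon_0):
eps_0 = 1 / (mu_0 * c**2)
print(eps_0)                     # ≈ 8.8541878e-12 F/m

# Round trip: recover c from the two derived constants.
assert abs(1 / math.sqrt(mu_0 * eps_0) - c) < 1e-6
```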
In materials with relative permittivity, εr, and relative permeability, μr, the phase velocity of light becomes
{\displaystyle v_{\text{p}}={\frac {1}{\sqrt {\mu _{0}\mu _{\text{r}}\varepsilon _{0}\varepsilon _{\text{r}}}}},}
which is usually less than c.
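A small sketch of this formula; the relative permittivity used (εr = 2.25, typical of optical glass) is an illustrative assumption:

```python
import math

c = 299_792_458.0                     # speed of light in vacuum, m/s
mu_0 = 4 * math.pi * 1e-7             # vacuum permeability, H/m
eps_0 = 1 / (mu_0 * c**2)             # vacuum permittivity, F/m

def phase_velocity(eps_r: float, mu_r: float = 1.0) -> float:
    """Phase velocity v_p = 1/sqrt(mu_0*mu_r*eps_0*eps_r)."""
    return 1 / math.sqrt(mu_0 * mu_r * eps_0 * eps_r)

# Illustrative value: eps_r = 2.25 gives a refractive index sqrt(2.25) = 1.5,
# so the phase velocity is c/1.5, i.e. about two thirds of c.
v_glass = phase_velocity(2.25)
assert abs(v_glass - c / 1.5) < 1.0
```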
In addition, E and B are perpendicular to each other and to the direction of wave propagation, and are in phase with each other. A sinusoidal plane wave is one special solution of these equations. Maxwell's equations explain how these waves can physically propagate through space. The changing magnetic field creates a changing electric field through Faraday's law. In turn, that electric field creates a changing magnetic field through Maxwell's modification of Ampère's circuital law. This perpetual cycle allows these waves, now known as electromagnetic radiation, to move through space at velocity c.
== Macroscopic formulation ==
The above equations are the microscopic version of Maxwell's equations, expressing the electric and the magnetic fields in terms of the (possibly atomic-level) charges and currents present. This is sometimes called the "general" form, but the macroscopic version below is equally general, the difference being one of bookkeeping.
The microscopic version is sometimes called "Maxwell's equations in vacuum": this refers to the fact that the material medium is not built into the structure of the equations, but appears only in the charge and current terms. The microscopic version was introduced by Lorentz, who tried to use it to derive the macroscopic properties of bulk matter from its microscopic constituents.
"Maxwell's macroscopic equations", also known as Maxwell's equations in matter, are more similar to those that Maxwell introduced himself.
In the macroscopic equations, the influence of bound charge Qb and bound current Ib is incorporated into the displacement field D and the magnetizing field H, while the equations depend only on the free charges Qf and free currents If. This reflects a splitting of the total electric charge Q and current I (and their densities ρ and J) into free and bound parts:
{\displaystyle {\begin{aligned}Q&=Q_{\text{f}}+Q_{\text{b}}=\iiint _{\Omega }\left(\rho _{\text{f}}+\rho _{\text{b}}\right)\,\mathrm {d} V=\iiint _{\Omega }\rho \,\mathrm {d} V,\\I&=I_{\text{f}}+I_{\text{b}}=\iint _{\Sigma }\left(\mathbf {J} _{\text{f}}+\mathbf {J} _{\text{b}}\right)\cdot \mathrm {d} \mathbf {S} =\iint _{\Sigma }\mathbf {J} \cdot \mathrm {d} \mathbf {S} .\end{aligned}}}
The cost of this splitting is that the additional fields D and H need to be determined through phenomenological constitutive equations relating these fields to the electric field E and the magnetic field B, together with the bound charge and current.
See below for a detailed description of the differences between the microscopic equations, dealing with total charge and current including material contributions, useful in air/vacuum, and the macroscopic equations, dealing with free charge and current, practical to use within materials.
=== Bound charge and current ===
When an electric field is applied to a dielectric material its molecules respond by forming microscopic electric dipoles – their atomic nuclei move a tiny distance in the direction of the field, while their electrons move a tiny distance in the opposite direction. This produces a macroscopic bound charge in the material even though all of the charges involved are bound to individual molecules. For example, if every molecule responds the same, similar to that shown in the figure, these tiny movements of charge combine to produce a layer of positive bound charge on one side of the material and a layer of negative charge on the other side. The bound charge is most conveniently described in terms of the polarization P of the material, its dipole moment per unit volume. If P is uniform, a macroscopic separation of charge is produced only at the surfaces where P enters and leaves the material. For non-uniform P, a charge is also produced in the bulk.
Somewhat similarly, in all materials the constituent atoms exhibit magnetic moments that are intrinsically linked to the angular momentum of the components of the atoms, most notably their electrons. The connection to angular momentum suggests the picture of an assembly of microscopic current loops. Outside the material, an assembly of such microscopic current loops is not different from a macroscopic current circulating around the material's surface, despite the fact that no individual charge is traveling a large distance. These bound currents can be described using the magnetization M.
The very complicated and granular bound charges and bound currents, therefore, can be represented on the macroscopic scale in terms of P and M, which average these charges and currents on a sufficiently large scale so as not to see the granularity of individual atoms, but also sufficiently small that they vary with location in the material. As such, Maxwell's macroscopic equations ignore many details on a fine scale that can be unimportant to understanding matters on a gross scale by calculating fields that are averaged over some suitable volume.
=== Auxiliary fields, polarization and magnetization ===
The definitions of the auxiliary fields are:
{\displaystyle {\begin{aligned}\mathbf {D} (\mathbf {r} ,t)&=\varepsilon _{0}\mathbf {E} (\mathbf {r} ,t)+\mathbf {P} (\mathbf {r} ,t),\\\mathbf {H} (\mathbf {r} ,t)&={\frac {1}{\mu _{0}}}\mathbf {B} (\mathbf {r} ,t)-\mathbf {M} (\mathbf {r} ,t),\end{aligned}}}
where P is the polarization field and M is the magnetization field, which are defined in terms of microscopic bound charges and bound currents respectively. The macroscopic bound charge density ρb and bound current density Jb in terms of polarization P and magnetization M are then defined as
{\displaystyle {\begin{aligned}\rho _{\text{b}}&=-\nabla \cdot \mathbf {P} ,\\\mathbf {J} _{\text{b}}&=\nabla \times \mathbf {M} +{\frac {\partial \mathbf {P} }{\partial t}}.\end{aligned}}}
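These definitions can be checked numerically on a simple example: for a polarization that grows linearly along x, the bound charge density is the constant −∇·P. A Python/NumPy sketch with illustrative values:

```python
import numpy as np

# A sample polarization field P = (p0*x, 0, 0): dipole moment density growing
# linearly along x (illustrative values, not a specific material).
p0, n, h = 3.0, 32, 0.1
ax = np.arange(n) * h
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
Px, Py, Pz = p0 * x, np.zeros_like(x), np.zeros_like(x)

# Bound charge density rho_b = -div P; here div P = p0 everywhere,
# and finite differences are exact for a linear field.
rho_b = -(np.gradient(Px, h, axis=0)
          + np.gradient(Py, h, axis=1)
          + np.gradient(Pz, h, axis=2))
assert np.allclose(rho_b, -p0)
```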
If we define the total, bound, and free charge and current density by
{\displaystyle {\begin{aligned}\rho &=\rho _{\text{b}}+\rho _{\text{f}},\\\mathbf {J} &=\mathbf {J} _{\text{b}}+\mathbf {J} _{\text{f}},\end{aligned}}}
and use the defining relations above to eliminate D and H, the "macroscopic" Maxwell's equations reproduce the "microscopic" equations.
=== Constitutive relations ===
In order to apply 'Maxwell's macroscopic equations', it is necessary to specify the relations between displacement field D and the electric field E, as well as the magnetizing field H and the magnetic field B. Equivalently, we have to specify the dependence of the polarization P (hence the bound charge) and the magnetization M (hence the bound current) on the applied electric and magnetic field. The equations specifying this response are called constitutive relations. For real-world materials, the constitutive relations are rarely simple, except approximately, and usually determined by experiment. See the main article on constitutive relations for a fuller description.
For materials without polarization and magnetization, the constitutive relations are (by definition):
{\displaystyle \mathbf {D} =\varepsilon _{0}\mathbf {E} ,\quad \mathbf {H} ={\frac {1}{\mu _{0}}}\mathbf {B} ,}
where ε0 is the permittivity of free space and μ0 the permeability of free space. Since there is no bound charge, the total and the free charge and current are equal.
An alternative viewpoint on the microscopic equations is that they are the macroscopic equations together with the statement that vacuum behaves like a perfect linear "material" without additional polarization and magnetization.
More generally, for linear materials the constitutive relations are:
{\displaystyle \mathbf {D} =\varepsilon \mathbf {E} ,\quad \mathbf {H} ={\frac {1}{\mu }}\mathbf {B} ,}
where ε is the permittivity and μ the permeability of the material. For the displacement field D the linear approximation is usually excellent, because for all but the most extreme electric fields or temperatures obtainable in the laboratory (high-power pulsed lasers) the interatomic electric fields of materials, of the order of 10^11 V/m, are much higher than the external field. For the magnetizing field
{\displaystyle \mathbf {H} }
, however, the linear approximation can break down in common materials like iron, leading to phenomena such as hysteresis. Even the linear case can have various complications.
For homogeneous materials, ε and μ are constant throughout the material, while for inhomogeneous materials they depend on location within the material (and perhaps time).
For isotropic materials, ε and μ are scalars, while for anisotropic materials (e.g. due to crystal structure) they are tensors.
Materials are generally dispersive, so ε and μ depend on the frequency of any incident EM waves.
Even more generally, in the case of non-linear materials (see for example nonlinear optics), D and P are not necessarily proportional to E, similarly H or M is not necessarily proportional to B. In general D and H depend on both E and B, on location and time, and possibly other physical quantities.
In applications one also has to describe how the free currents and charge density behave in terms of E and B possibly coupled to other physical quantities like pressure, and the mass, number density, and velocity of charge-carrying particles. E.g., the original equations given by Maxwell (see History of Maxwell's equations) included Ohm's law in the form
{\displaystyle \mathbf {J} _{\text{f}}=\sigma \mathbf {E} .}
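A minimal numerical illustration of this local form of Ohm's law; the conductivity is the textbook value for copper, and the applied field is an arbitrary illustrative choice:

```python
# Ohm's law in local form: J_f = sigma * E.
sigma_cu = 5.96e7        # conductivity of copper, S/m (textbook value)
E = 0.5                  # applied field, V/m (illustrative)
J = sigma_cu * E         # free current density, A/m^2
assert J == 2.98e7       # 5.96e7 * 0.5
```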
== Alternative formulations ==
Following are some of the several other mathematical formalisms of Maxwell's equations, with the columns separating the two homogeneous Maxwell equations from the two inhomogeneous ones. Each formulation has versions directly in terms of the electric and magnetic fields, and indirectly in terms of the electric potential φ and the vector potential A. Potentials were introduced as a convenient way to solve the homogeneous equations, but it was thought that all observable physics was contained in the electric and magnetic fields (or relativistically, the Faraday tensor). The potentials play a central role in quantum mechanics, however, and act quantum mechanically with observable consequences even when the electric and magnetic fields vanish (Aharonov–Bohm effect).
Each table describes one formalism. See the main article for details of each formulation.
The direct spacetime formulations make manifest that the Maxwell equations are relativistically invariant, where space and time are treated on equal footing. Because of this symmetry, the electric and magnetic fields are treated on equal footing and are recognized as components of the Faraday tensor. This reduces the four Maxwell equations to two, which simplifies the equations, although we can no longer use the familiar vector formulation. Maxwell equations in formulations that do not treat space and time manifestly on the same footing have Lorentz invariance as a hidden symmetry. This was a major source of inspiration for the development of relativity theory. Indeed, even the formulation that treats space and time separately is not a non-relativistic approximation and describes the same physics by simply renaming variables. For this reason the relativistic invariant equations are usually called the Maxwell equations as well.
In the tensor calculus formulation, the electromagnetic tensor Fαβ is an antisymmetric covariant order 2 tensor; the four-potential, Aα, is a covariant vector; the current, Jα, is a vector; the square brackets, [ ], denote antisymmetrization of indices; ∂α is the partial derivative with respect to the coordinate, xα. In Minkowski space coordinates are chosen with respect to an inertial frame; (xα) = (ct, x, y, z), so that the metric tensor used to raise and lower indices is ηαβ = diag(1, −1, −1, −1). The d'Alembert operator on Minkowski space is ◻ = ∂α∂α as in the vector formulation. In general spacetimes, the coordinate system xα is arbitrary, the covariant derivative ∇α, the Ricci tensor, Rαβ and raising and lowering of indices are defined by the Lorentzian metric, gαβ and the d'Alembert operator is defined as ◻ = ∇α∇α. The topological restriction is that the second real cohomology group of the space vanishes (see the differential form formulation for an explanation). This is violated for Minkowski space with a line removed, which can model a (flat) spacetime with a point-like monopole on the complement of the line.
In the differential form formulation on arbitrary spacetimes, F = 1/2Fαβdxα ∧ dxβ is the electromagnetic tensor considered as a 2-form, A = Aαdxα is the potential 1-form,
{\displaystyle J=-J_{\alpha }{\star }\mathrm {d} x^{\alpha }}
is the current 3-form, d is the exterior derivative, and
{\displaystyle {\star }}
is the Hodge star on forms defined (up to its orientation, i.e. its sign) by the Lorentzian metric of spacetime. In the special case of 2-forms such as F, the Hodge star
{\displaystyle {\star }}
depends on the metric tensor only for its local scale. This means that, as formulated, the differential form field equations are conformally invariant, but the Lorenz gauge condition breaks conformal invariance. The operator
{\displaystyle \Box =(-{\star }\mathrm {d} {\star }\mathrm {d} -\mathrm {d} {\star }\mathrm {d} {\star })}
is the d'Alembert–Laplace–Beltrami operator on 1-forms on an arbitrary Lorentzian spacetime. The topological condition is again that the second real cohomology group is trivial (meaning that it vanishes). By the isomorphism with the second de Rham cohomology this condition means that every closed 2-form is exact.
Other formalisms include the geometric algebra formulation and a matrix representation of Maxwell's equations. Historically, a quaternionic formulation was used.
== Solutions ==
Maxwell's equations are partial differential equations that relate the electric and magnetic fields to each other and to the electric charges and currents. Often, the charges and currents are themselves dependent on the electric and magnetic fields via the Lorentz force equation and the constitutive relations. These all form a set of coupled partial differential equations which are often very difficult to solve: the solutions encompass all the diverse phenomena of classical electromagnetism. Some general remarks follow.
As for any differential equation, boundary conditions and initial conditions are necessary for a unique solution. For example, even with no charges and no currents anywhere in spacetime, there are the obvious solutions for which E and B are zero or constant, but there are also non-trivial solutions corresponding to electromagnetic waves. In some cases, Maxwell's equations are solved over the whole of space, and boundary conditions are given as asymptotic limits at infinity. In other cases, Maxwell's equations are solved in a finite region of space, with appropriate conditions on the boundary of that region, for example an artificial absorbing boundary representing the rest of the universe, or periodic boundary conditions, or walls that isolate a small region from the outside world (as with a waveguide or cavity resonator).
Jefimenko's equations (or the closely related Liénard–Wiechert potentials) are the explicit solution to Maxwell's equations for the electric and magnetic fields created by any given distribution of charges and currents. They assume specific initial conditions to obtain the so-called "retarded solution", where the only fields present are the ones created by the charges. However, Jefimenko's equations are unhelpful in situations when the charges and currents are themselves affected by the fields they create.
Numerical methods for differential equations can be used to compute approximate solutions of Maxwell's equations when exact solutions are impossible. These include the finite element method and finite-difference time-domain method. For more details, see Computational electromagnetics.
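As an illustration of the finite-difference time-domain idea, here is a minimal 1D Yee-scheme sketch in normalized units (c = 1, Δx = 1); the grid size, time step, and soft Gaussian source are illustrative assumptions, not a production solver:

```python
import numpy as np

def fdtd_1d(nx=200, nt=180, src=50, dt=0.5):
    """Leapfrog update of Faraday's and the Ampere-Maxwell laws in 1D."""
    Ez = np.zeros(nx)          # electric field at integer grid points
    Hy = np.zeros(nx - 1)      # magnetic field at half-integer points
    for n in range(nt):
        Hy += dt * np.diff(Ez)             # Faraday's law on the staggered grid
        Ez[1:-1] += dt * np.diff(Hy)       # Ampere-Maxwell law (PEC boundaries)
        Ez[src] += np.exp(-((n - 30) / 10) ** 2)  # soft Gaussian source
    return Ez

Ez = fdtd_1d()
assert np.all(np.isfinite(Ez))        # Courant number 0.5 keeps the scheme stable
assert np.max(np.abs(Ez[70:])) > 0.1  # the pulse has propagated away from the source
```

With Courant number cΔt/Δx = 0.5 the scheme is stable, and the injected pulse travels away from the source at the grid speed of light; fixing the endpoint fields at zero acts as perfectly conducting walls.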
== Overdetermination of Maxwell's equations ==
Maxwell's equations seem overdetermined, in that they involve six unknowns (the three components of E and B) but eight equations (one for each of the two Gauss's laws, three vector components each for Faraday's and Ampère's circuital laws). (The currents and charges are not unknowns, being freely specifiable subject to charge conservation.) This is related to a certain limited kind of redundancy in Maxwell's equations: It can be proven that any system satisfying Faraday's law and Ampère's circuital law automatically also satisfies the two Gauss's laws, as long as the system's initial condition does, and assuming conservation of charge and the nonexistence of magnetic monopoles.
This explanation was first introduced by Julius Adams Stratton in 1941.
Although it is possible to simply ignore the two Gauss's laws in a numerical algorithm (apart from the initial conditions), the imperfect precision of the calculations can lead to ever-increasing violations of those laws. By introducing dummy variables characterizing these violations, the four equations become not overdetermined after all. The resulting formulation can lead to more accurate algorithms that take all four laws into account.
Both identities
{\displaystyle \nabla \cdot \nabla \times \mathbf {B} \equiv 0,\quad \nabla \cdot \nabla \times \mathbf {E} \equiv 0}
, which reduce the eight equations to six independent ones, are the true reason for the overdetermination.
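The two identities can themselves be verified symbolically for a generic smooth vector field; a SymPy sketch:

```python
import sympy as sp

# Symbolic check of the identity div(curl F) = 0 for a generic smooth field F,
# which is what makes the two Gauss's laws redundant given the curl equations.
x, y, z = sp.symbols('x y z')
Fx, Fy, Fz = [sp.Function(name)(x, y, z) for name in ('Fx', 'Fy', 'Fz')]

curl_F = (sp.diff(Fz, y) - sp.diff(Fy, z),
          sp.diff(Fx, z) - sp.diff(Fz, x),
          sp.diff(Fy, x) - sp.diff(Fx, y))
div_curl_F = (sp.diff(curl_F[0], x)
              + sp.diff(curl_F[1], y)
              + sp.diff(curl_F[2], z))
assert sp.simplify(div_curl_F) == 0   # mixed partial derivatives commute
```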
Equivalently, the overdetermination can be viewed as implying conservation of electric and magnetic charge, since these conservation laws are required in the derivation described above but are also implied by the two Gauss's laws.
For linear algebraic equations, one can make 'nice' rules to rewrite the equations and unknowns, and the equations can be linearly dependent. But in differential equations, and especially partial differential equations (PDEs), one needs appropriate boundary conditions, which depend in not so obvious ways on the equations. Even more, if one rewrites the equations in terms of the vector and scalar potentials, then they are underdetermined because of gauge freedom.
== Maxwell's equations as the classical limit of QED ==
Maxwell's equations and the Lorentz force law (along with the rest of classical electromagnetism) are extraordinarily successful at explaining and predicting a variety of phenomena. However, they do not account for quantum effects, and so their domain of applicability is limited. Maxwell's equations are thought of as the classical limit of quantum electrodynamics (QED).
Some observed electromagnetic phenomena cannot be explained with Maxwell's equations if the sources of the electromagnetic fields are the classical distributions of charge and current. These include photon–photon scattering and many other phenomena related to photons or virtual photons, "nonclassical light" and quantum entanglement of electromagnetic fields (see Quantum optics). E.g. quantum cryptography cannot be described by Maxwell theory, not even approximately. The approximate nature of Maxwell's equations becomes more and more apparent when going into the extremely strong field regime (see Euler–Heisenberg Lagrangian) or to extremely small distances.
Finally, Maxwell's equations cannot explain any phenomenon involving individual photons interacting with quantum matter, such as the photoelectric effect, Planck's law, the Duane–Hunt law, and single-photon light detectors. However, many such phenomena may be explained using a halfway theory of quantum matter coupled to a classical electromagnetic field, either as an external field or with the expectation value of the charge and current density on the right-hand side of Maxwell's equations. This is known as semiclassical theory or self-field QED and was initially discovered by de Broglie and Schrödinger and later fully developed by E. T. Jaynes and A. O. Barut.
== Variations ==
Popular variations on the Maxwell equations as a classical theory of electromagnetic fields are relatively scarce because the standard equations have stood the test of time remarkably well.
=== Magnetic monopoles ===
Maxwell's equations posit that there is electric charge, but no magnetic charge (also called magnetic monopoles), in the universe. Indeed, magnetic charge has never been observed, despite extensive searches, and may not exist. If magnetic monopoles did exist, both Gauss's law for magnetism and Faraday's law would need to be modified, and the resulting four equations would be fully symmetric under the interchange of electric and magnetic fields.
== See also ==
== Explanatory notes ==
== References ==
== Further reading ==
Imaeda, K. (1995), "Biquaternionic Formulation of Maxwell's Equations and their Solutions", in Ablamowicz, Rafał; Lounesto, Pertti (eds.), Clifford Algebras and Spinor Structures, Springer, pp. 265–280, doi:10.1007/978-94-015-8422-7_16, ISBN 978-90-481-4525-6
=== Historical publications ===
On Faraday's Lines of Force – 1855/56. Maxwell's first paper (Part 1 & 2) – Compiled by Blaze Labs Research (PDF).
On Physical Lines of Force – 1861. Maxwell's 1861 paper describing magnetic lines of force – Predecessor to 1873 Treatise.
James Clerk Maxwell, "A Dynamical Theory of the Electromagnetic Field", Philosophical Transactions of the Royal Society of London 155, 459–512 (1865). (This article accompanied a December 8, 1864 presentation by Maxwell to the Royal Society.)
A Dynamical Theory Of The Electromagnetic Field – 1865. Maxwell's 1865 paper describing his 20 equations, link from Google Books.
J. Clerk Maxwell (1873), "A Treatise on Electricity and Magnetism":
Maxwell, J. C., "A Treatise on Electricity And Magnetism" – Volume 1 – 1873 – Posner Memorial Collection – Carnegie Mellon University.
Maxwell, J. C., "A Treatise on Electricity And Magnetism" – Volume 2 – 1873 – Posner Memorial Collection – Carnegie Mellon University.
Developments before the theory of relativity
Larmor, Joseph (1897). "On a dynamical theory of the electric and luminiferous medium. Part 3, Relations with material media". Phil. Trans. R. Soc. 190: 205–300.
Lorentz, Hendrik (1899). "Simplified theory of electrical and optical phenomena in moving systems". Proc. Acad. Science Amsterdam. I: 427–443.
Lorentz, Hendrik (1904). "Electromagnetic phenomena in a system moving with any velocity less than that of light". Proc. Acad. Science Amsterdam. IV: 669–678.
Henri Poincaré (1900) "La théorie de Lorentz et le Principe de Réaction" (in French), Archives Néerlandaises, V, 253–278.
Henri Poincaré (1902) "La Science et l'Hypothèse" (in French).
Henri Poincaré (1905) "Sur la dynamique de l'électron" (in French), Comptes Rendus de l'Académie des Sciences, 140, 1504–1508.
Catt, Walton and Davidson. "The History of Displacement Current" Archived 2008-05-06 at the Wayback Machine. Wireless World, March 1979.
== External links ==
"Maxwell equations", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
maxwells-equations.com — An intuitive tutorial of Maxwell's equations.
The Feynman Lectures on Physics Vol. II Ch. 18: The Maxwell Equations
Wikiversity Page on Maxwell's Equations
=== Modern treatments ===
Electromagnetism (ch. 11), B. Crowell, Fullerton College
Lecture series: Relativity and electromagnetism, R. Fitzpatrick, University of Texas at Austin
Electromagnetic waves from Maxwell's equations on Project PHYSNET.
MIT Video Lecture Series (36 × 50 minute lectures) (in .mp4 format) – Electricity and Magnetism Taught by Professor Walter Lewin.
=== Other ===
Silagadze, Z. K. (2002). "Feynman's derivation of Maxwell equations and extra dimensions". Annales de la Fondation Louis de Broglie. 27: 241–256. arXiv:hep-ph/0106235.
Nature Milestones: Photons – Milestone 2 (1861) Maxwell's equations
Methoden der mathematischen Physik (Methods of Mathematical Physics) is a 1924 book, in two volumes totalling around 1000 pages, published under the names of Richard Courant and David Hilbert. It was a comprehensive treatment of the "methods of mathematical physics" of the time. The second volume is devoted to the theory of partial differential equations. It contains presages of the finite element method, on which Courant would work subsequently, and which would eventually become basic to numerical analysis.
The material of the book was worked up from the content of Hilbert's lectures. While Courant played the major editorial role, many at the University of Göttingen were involved in the writing-up, and in that sense it was a collective production.
On its appearance in 1924 it apparently had little direct connection to the quantum theory questions at the centre of the theoretical physics of the time. That changed within two years, since the formulation of the Schrödinger equation made the Hilbert–Courant techniques of immediate relevance to the new wave mechanics.
There was a second edition (1931/37), a wartime edition in the USA (1943), and a third German edition (1968). The English version Methods of Mathematical Physics (1953) was revised by Courant, and the second volume had extensive work done on it by the faculty of the Courant Institute. The books quickly gained a reputation as classics, and are among the most highly referenced books in advanced mathematical physics courses.
== References ==
Constance Reid (1986) Hilbert-Courant (separate biographies bound as one volume)
Courant, R.; Hilbert, D. (1953), Methods of mathematical physics, vol. I, New York, NY: Interscience Publishers, ISBN 0-471-50447-5, MR 0065391
Courant, R.; Hilbert, D. (1962), Methods of mathematical physics, vol. II, New York, NY: Interscience Publishers, doi:10.1002/9783527617234, ISBN 0-471-50439-4, MR 0140802
Methoden der mathematischen Physik online reproduction of 1924 German edition.
In philosophy, the philosophy of physics deals with conceptual and interpretational issues in physics, many of which overlap with research done by certain kinds of theoretical physicists. Historically, philosophers of physics have engaged with questions such as the nature of space, time, matter and the laws that govern their interactions, as well as the epistemological and ontological basis of the theories used by practicing physicists. The discipline draws upon insights from various areas of philosophy, including metaphysics, epistemology, and philosophy of science, while also engaging with the latest developments in theoretical and experimental physics.
Contemporary work focuses on issues at the foundations of the three pillars of modern physics:
Quantum mechanics: Interpretations of quantum theory, including the nature of quantum states, the measurement problem, and the role of observers. Implications of entanglement, nonlocality, and the quantum-classical relationship are also explored.
Relativity: Conceptual foundations of special and general relativity, including the nature of spacetime, simultaneity, causality, and determinism. Compatibility with quantum mechanics, gravitational singularities, and philosophical implications of cosmology are also investigated.
Statistical mechanics: Relationship between microscopic and macroscopic descriptions, interpretation of probability, origin of irreversibility and the arrow of time. Foundations of thermodynamics, role of information theory in understanding entropy, and implications for explanation and reduction in physics.
Other areas of focus include the nature of physical laws, symmetries, and conservation principles; the role of mathematics; and philosophical implications of emerging fields like quantum gravity, quantum information, and complex systems. Philosophers of physics have argued that conceptual analysis clarifies foundations, interprets implications, and guides theory development in physics.
== Philosophy of space and time ==
The existence and nature of space and time (or space-time) are central topics in the philosophy of physics. Issues include (1) whether space and time are fundamental or emergent, and (2) how space and time are operationally different from one another.
=== Time ===
In classical mechanics, time is taken to be a fundamental quantity (that is, a quantity which cannot be defined in terms of other quantities). However, certain theories such as loop quantum gravity claim that spacetime is emergent. As Carlo Rovelli, one of the founders of loop quantum gravity, has said: "No more fields on spacetime: just fields on fields". Time is defined via measurement—by its standard time interval. Currently, the standard time interval (called the "conventional second", or simply "second") is defined as the duration of 9,192,631,770 periods of the radiation corresponding to a hyperfine transition of the caesium-133 atom (ISO 31-1). What time is and how it works follows from this definition. Time can then be combined mathematically with the fundamental quantities of space and mass to define concepts such as velocity, momentum, energy, and fields.
Both Isaac Newton and Galileo Galilei, as well as most people up until the 20th century, thought that time was the same for everyone everywhere. The modern conception of time is based on Albert Einstein's theory of relativity and Hermann Minkowski's spacetime, in which rates of time run differently in different inertial frames of reference, and space and time are merged into spacetime. Einstein's general relativity as well as the redshift of the light from receding distant galaxies indicate that the entire Universe and possibly space-time itself began about 13.8 billion years ago in the Big Bang. Einstein's theory of special relativity mostly (though not universally) made theories of time where there is something metaphysically special about the present seem much less plausible, as the reference-frame-dependence of time seems to not allow the idea of a privileged present moment.
=== Space ===
Space is one of the few fundamental quantities in physics, meaning that it cannot be defined via other quantities because there is nothing more fundamental known at present. Thus, similar to the definition of other fundamental quantities (like time and mass), space is defined via measurement. Currently, the standard space interval, called a standard metre or simply metre, is defined as the distance traveled by light in a vacuum during a time interval of 1/299792458 of a second (exact).
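Because both the second and the metre are defined by exact conventional constants, relations between them reduce to exact arithmetic. A small illustrative sketch (not part of the article; it simply replays the two SI definitions using Python's exact fractions):

```python
from fractions import Fraction

# SI definitions, exact by convention:
CS_HZ = 9_192_631_770   # caesium-133 hyperfine frequency: periods per second
C = 299_792_458         # speed of light in vacuum: metres per second

# Duration light needs to cross one metre, per the metre definition:
t_metre = Fraction(1, C)            # exactly 1/299792458 s

# Number of caesium periods elapsing while light travels one metre:
periods_per_metre = CS_HZ * t_metre
print(float(periods_per_metre))     # roughly 30.66 caesium periods per metre
```

The point of the sketch is only that, given these definitions, "how far" and "how long" are both traced back to counting oscillations of the same atomic clock.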
In classical physics, space is a three-dimensional Euclidean space where any position can be described using three coordinates and parameterised by time. Special and general relativity use four-dimensional spacetime rather than three-dimensional space; and currently there are many speculative theories which use more than three spatial dimensions.
== Philosophy of quantum mechanics ==
Quantum mechanics is a large focus of contemporary philosophy of physics, specifically concerning the correct interpretation of quantum mechanics. Very broadly, much of the philosophical work that is done in quantum theory is trying to make sense of superposition states: the property that particles seem to not just be in one determinate position at one time, but are somewhere 'here', and also 'there' at the same time. Such a radical view turns many common sense metaphysical ideas on their head. Much of contemporary philosophy of quantum mechanics aims to make sense of what the very empirically successful formalism of quantum mechanics tells us about the physical world.
=== Uncertainty principle ===
The uncertainty principle is a mathematical relation asserting a fundamental limit to the accuracy with which any pair of conjugate variables, e.g. position and momentum, can be simultaneously measured. In the formalism of operator notation, this limit is expressed in terms of the commutator of the variables' corresponding operators.
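For position and momentum the relation reads σx·σp ≥ ħ/2, and the bound is saturated by the harmonic-oscillator ground state. A small numerical check in a truncated ladder-operator basis (an illustrative sketch, not part of the article; units with ħ = m = ω = 1, and the dimension N is an arbitrary choice):

```python
import numpy as np

N = 20                                        # truncated Fock-space dimension (assumption)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator
x = (a + a.T) / np.sqrt(2)                    # position operator
p = (a - a.T) / (1j * np.sqrt(2))             # momentum operator

psi = np.zeros(N)
psi[0] = 1.0                                  # harmonic-oscillator ground state

def variance(op, state):
    mean = state @ op @ state
    return ((state @ op @ op @ state) - mean**2).real

sx = np.sqrt(variance(x, psi))
sp = np.sqrt(variance(p, psi))
print(sx * sp)   # approximately 0.5, i.e. hbar/2: the bound is saturated
```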
The uncertainty principle arose as an answer to the question: How does one measure the location of an electron around a nucleus if an electron is a wave? When quantum mechanics was developed, it was seen to be a relation between the classical and quantum descriptions of a system using wave mechanics.
=== "Locality" and hidden variables ===
Bell's theorem is a term encompassing a number of closely related results in physics, all of which determine that quantum mechanics is incompatible with local hidden-variable theories given some basic assumptions about the nature of measurement. "Local" here refers to the principle of locality, the idea that a particle can only be influenced by its immediate surroundings, and that interactions mediated by physical fields cannot propagate faster than the speed of light. "Hidden variables" are putative properties of quantum particles that are not included in the theory but nevertheless affect the outcome of experiments. In the words of physicist John Stewart Bell, for whom this family of results is named, "If [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local."
The term is broadly applied to a number of different derivations, the first of which was introduced by Bell in a 1964 paper titled "On the Einstein Podolsky Rosen Paradox". Bell's paper was a response to a 1935 thought experiment that Albert Einstein, Boris Podolsky and Nathan Rosen proposed, arguing that quantum physics is an "incomplete" theory. By 1935, it was already recognized that the predictions of quantum physics are probabilistic. Einstein, Podolsky and Rosen presented a scenario that involves preparing a pair of particles such that the quantum state of the pair is entangled, and then separating the particles to an arbitrarily large distance. The experimenter has a choice of possible measurements that can be performed on one of the particles. When they choose a measurement and obtain a result, the quantum state of the other particle apparently collapses instantaneously into a new state depending upon that result, no matter how far away the other particle is. This suggests that either the measurement of the first particle somehow also influenced the second particle faster than the speed of light, or that the entangled particles had some unmeasured property which pre-determined their final quantum states before they were separated. Therefore, assuming locality, quantum mechanics must be incomplete, as it cannot give a complete description of the particle's true physical characteristics. In other words, quantum particles, like electrons and photons, must carry some property or attributes not included in quantum theory, and the uncertainties in quantum theory's predictions would then be due to ignorance or unknowability of these properties, later termed "hidden variables".
Bell carried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. This constraint would later be named the Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles can carry non-classical correlations no matter how widely they ever become separated.
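The quantitative content of the constraint can be illustrated with the CHSH form of the inequality: any local hidden-variable theory obeys |S| ≤ 2 for the CHSH combination of correlations, while quantum mechanics predicts, for a spin-singlet pair, correlations E(a, b) = −cos(a − b) that reach |S| = 2√2. A hedged sketch (the measurement angles are the standard choice for the maximal violation):

```python
import math

def E(a, b):
    # Quantum-mechanical correlation of spin measurements along directions
    # at angles a and b on the two halves of a singlet pair.
    return -math.cos(a - b)

# Settings giving the maximal quantum violation (the Tsirelson bound):
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))     # 2*sqrt(2), roughly 2.83, exceeding the local bound of 2
```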
Multiple variations on Bell's theorem were put forward in the following years, introducing other closely related conditions generally known as Bell (or "Bell-type") inequalities. The first rudimentary experiment designed to test Bell's theorem was performed in 1972 by John Clauser and Stuart Freedman. More advanced experiments, known collectively as Bell tests, have been performed many times since. To date, Bell tests have consistently found that physical systems obey quantum mechanics and violate Bell inequalities; which is to say that the results of these experiments are incompatible with any local hidden variable theory.
The exact nature of the assumptions required to prove a Bell-type constraint on correlations has been debated by physicists and by philosophers. While the significance of Bell's theorem is not in doubt, its full implications for the interpretation of quantum mechanics remain unresolved.
=== Interpretations of quantum mechanics ===
In March 1927, working in Niels Bohr's institute, Werner Heisenberg formulated the principle of uncertainty thereby laying the foundation of what became known as the Copenhagen interpretation of quantum mechanics. Heisenberg had been studying the papers of Paul Dirac and Pascual Jordan. He discovered a problem with measurement of basic variables in the equations. His analysis showed that uncertainties, or imprecisions, always turned up if one tried to measure the position and the momentum of a particle at the same time. Heisenberg concluded that these uncertainties or imprecisions in the measurements were not the fault of the experimenter, but fundamental in nature and are inherent mathematical properties of operators in quantum mechanics arising from definitions of these operators.
The Copenhagen interpretation is somewhat loosely defined, as many physicists and philosophers of physics have advanced similar but not identical views of quantum mechanics. It is principally associated with Heisenberg and Bohr, despite their philosophical differences. Features common to Copenhagen-type interpretations include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states that objects have certain pairs of complementary properties that cannot all be observed or measured simultaneously. Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object, except according to the results of its measurement. Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of any arbitrary factors in the physicist's mind.
The many-worlds interpretation of quantum mechanics by Hugh Everett III holds that the wave-function of a quantum system makes literal claims about the reality of that physical system. It denies wavefunction collapse, and claims that superposition states should be interpreted literally as describing a reality of many worlds in which objects are located, not simply as indicating the indeterminacy of those variables. This is sometimes argued as a corollary of scientific realism, which states that scientific theories aim to give us literally true descriptions of the world.
One issue for the Everett interpretation is the role that probability plays on this account. The Everettian account is completely deterministic, whereas probability seems to play an ineliminable role in quantum mechanics. Contemporary Everettians have argued that one can get an account of probability that follows the Born rule through certain decision-theoretic proofs, but there is as yet no consensus about whether any of these proofs are successful.
Physicist Roland Omnès noted that it is impossible to experimentally differentiate between Everett's view, which says that the wave-function decoheres into distinct worlds, each of which exists equally, and the more traditional view that says that a decoherent wave-function leaves only one unique real result. Hence, the dispute between the two views represents a great "chasm": "Every characteristic of reality has reappeared in its reconstruction by our theoretical model; every feature except one: the uniqueness of facts."
== Philosophy of thermal and statistical physics ==
The philosophy of thermal and statistical physics is concerned with the foundational issues and conceptual implications of thermodynamics and statistical mechanics. These branches of physics deal with the macroscopic behavior of systems comprising a large number of microscopic entities, such as particles, and the nature of laws that emerge from these systems like irreversibility and entropy. Interest of philosophers in statistical mechanics first arose from the observation of an apparent conflict between the time-reversal symmetry of fundamental physical laws and the irreversibility observed in thermodynamic processes, known as the arrow of time problem. Philosophers have sought to understand how the asymmetric behavior of macroscopic systems, such as the tendency of heat to flow from hot to cold bodies, can be reconciled with the time-symmetric laws governing the motion of individual particles.
Another key issue is the interpretation of probability in statistical mechanics, which is primarily concerned with the question of whether probabilities in statistical mechanics are epistemic, reflecting our lack of knowledge about the precise microstate of a system, or ontic, representing an objective feature of the physical world. The epistemic interpretation, also known as the subjective or Bayesian view, holds that probabilities in statistical mechanics are a measure of our ignorance about the exact state of a system. According to this view, we resort to probabilistic descriptions only due to the practical impossibility of knowing the precise properties of all its micro-constituents, like the positions and momenta of particles. As such, the probabilities are not objective features of the world but rather arise from our ignorance. In contrast, the ontic interpretation, also called the objective or frequentist view, asserts that probabilities in statistical mechanics are real, physical properties of the system itself. Proponents of this view argue that the probabilistic nature of statistical mechanics is not merely a reflection of our ignorance but an intrinsic feature of the physical world, and that even if we had complete knowledge of the microstate of a system, the macroscopic behavior would still be best described by probabilistic laws.
== History ==
=== Aristotelian physics ===
Aristotelian physics viewed the universe as a sphere with a center. Matter, composed of the classical elements (earth, water, air, and fire), sought to move down towards the center of the universe, the center of the Earth, or up, away from it. Things in the aether, such as the Moon, the Sun, planets, or stars, circled the center of the universe. Movement is defined as change in place, i.e. space.
=== Newtonian physics ===
The implicit axioms of Aristotelian physics with respect to movement of matter in space were superseded in Newtonian physics by Newton's first law of motion: "Every body perseveres in its state either of rest or of uniform motion in a straight line, except insofar as it is compelled to change its state by impressed forces."
"Every body" includes the Moon and an apple, and all types of matter: air as well as water, stones, or even a flame. Nothing has a natural or inherent motion. Absolute space is three-dimensional Euclidean space, infinite and without a center. Being "at rest" means being at the same place in absolute space over time. The topology and affine structure of space must permit movement in a straight line at a uniform velocity; thus both space and time must have definite, stable dimensions.
=== Leibniz ===
Gottfried Wilhelm Leibniz, 1646–1716, was a contemporary of Newton. He contributed a fair amount to the statics and dynamics emerging around him, often disagreeing with Descartes and Newton. He devised a new theory of motion (dynamics) based on kinetic energy and potential energy, which posited space as relative, whereas Newton was thoroughly convinced that space was absolute. An important example of Leibniz's mature physical thinking is his Specimen Dynamicum of 1695.
Until the discovery of subatomic particles and the quantum mechanics governing them, many of Leibniz's speculative ideas about aspects of nature not reducible to statics and dynamics made little sense.
He anticipated Albert Einstein by arguing, against Newton, that space, time and motion are relative, not absolute: "As for my own opinion, I have said more than once, that I hold space to be something merely relative, as time is, that I hold it to be an order of coexistences, as time is an order of successions."
== See also ==
== References ==
== Further reading ==
David Albert, 1994. Quantum Mechanics and Experience. Harvard Univ. Press.
John D. Barrow and Frank J. Tipler, 1986. The Cosmological Anthropic Principle. Oxford Univ. Press.
Beisbart, C. and S. Hartmann, eds., 2011. "Probabilities in Physics". Oxford Univ. Press.
John S. Bell, 2004 (1987), Speakable and Unspeakable in Quantum Mechanics. Cambridge Univ. Press.
David Bohm, 1980. Wholeness and the Implicate Order. Routledge.
Nick Bostrom, 2002. Anthropic Bias: Observation Selection Effects in Science and Philosophy. Routledge.
Thomas Brody, 1993. The Philosophy Behind Physics. Ed. by Luis de la Peña and Peter E. Hodgson. Springer. ISBN 3-540-55914-0
Harvey Brown, 2005. Physical Relativity. Space-time structure from a dynamical perspective. Oxford Univ. Press.
Butterfield, J., and John Earman, eds., 2007. Philosophy of Physics, Parts A and B. Elsevier.
Craig Callender and Nick Huggett, 2001. Physics Meets Philosophy at the Planck Scale. Cambridge Univ. Press.
David Deutsch, 1997. The Fabric of Reality. London: The Penguin Press.
Bernard d'Espagnat, 1989. Reality and the Physicist. Cambridge Univ. Press. Trans. of Une incertaine réalité; le monde quantique, la connaissance et la durée.
--------, 1995. Veiled Reality. Addison-Wesley.
--------, 2006. On Physics and Philosophy. Princeton Univ. Press.
Roland Omnès, 1994. The Interpretation of Quantum Mechanics. Princeton Univ. Press.
--------, 1999. Quantum Philosophy. Princeton Univ. Press.
Huw Price, 1996. Time's Arrow and Archimedes's Point. Oxford Univ. Press.
Lawrence Sklar, 1992. Philosophy of Physics. Westview Press. ISBN 0-8133-0625-6, ISBN 978-0-8133-0625-4
Victor Stenger, 2000. Timeless Reality. Prometheus Books.
Carl Friedrich von Weizsäcker, 1980. The Unity of Nature. Farrar Straus & Giroux.
Werner Heisenberg, 1971. Physics and Beyond: Encounters and Conversations. Harper & Row (World Perspectives series), 1971.
William Berkson, 1974. Fields of Force. Routledge and Kegan Paul, London. ISBN 0-7100-7626-6
Encyclopædia Britannica, Philosophy of Physics, David Z. Albert
== External links ==
Stanford Encyclopedia of Philosophy:
"Absolute and Relational Theories of Space and Motion"—Nick Huggett and Carl Hoefer
"Being and Becoming in Modern Physics"—Steven Savitt
"Boltzmann's Work in Statistical Physics"—Jos Uffink
"Conventionality of Simultaneity"—Allen Janis
"Early Philosophical Interpretations of General Relativity"—Thomas A. Ryckman
"Everett's Relative-State Formulation of Quantum Mechanics"—Jeffrey A. Barrett
"Experiments in Physics"—Allan Franklin
"Holism and Nonseparability in Physics"—Richard Healey
"Intertheory Relations in Physics"—Robert Batterman
"Naturalism"—David Papineau
"Philosophy of Statistical Mechanics"—Lawrence Sklar
"Physicalism"—Daniel Stoljar
"Quantum Mechanics"—Jenann Ismael
"Reichenbach's Common Cause Principle"—Frank Arntzenius
"Structural Realism"—James Ladyman
"Structuralism in Physics"—Heinz-Juergen Schmidt
"Supertasks"—JB Manchak and Bryan Roberts
"Symmetry and Symmetry Breaking"—Katherine Brading and Elena Castellani
"Thermodynamic Asymmetry in Time"—Craig Callender
"Time"—Ned Markosian
"Time Machines" —John Earman, Chris Wüthrich, and JB Manchak
"Uncertainty principle"—Jan Hilgevoord and Jos Uffink
"The Unity of Science"—Jordi Cat | Wikipedia/Philosophy_of_physics |
Particle physics or high-energy physics is the study of fundamental particles and forces that constitute matter and radiation. The field also studies combinations of elementary particles up to the scale of protons and neutrons, while the study of combinations of protons and neutrons is called nuclear physics.
The fundamental particles in the universe are classified in the Standard Model as fermions (matter particles) and bosons (force-carrying particles). There are three generations of fermions, although ordinary matter is made only from the first fermion generation. The first generation consists of up and down quarks which form protons and neutrons, and electrons and electron neutrinos. The three fundamental interactions known to be mediated by bosons are electromagnetism, the weak interaction, and the strong interaction.
Quarks cannot exist on their own but form hadrons. Hadrons that contain an odd number of quarks are called baryons and those that contain an even number are called mesons. Two baryons, the proton and the neutron, make up most of the mass of ordinary matter. Mesons are unstable and the longest-lived last for only a few hundredths of a microsecond. They occur after collisions between particles made of quarks, such as fast-moving protons and neutrons in cosmic rays. Mesons are also produced in cyclotrons or other particle accelerators.
Particles have corresponding antiparticles with the same mass but with opposite electric charges. For example, the antiparticle of the electron is the positron: the electron has a negative electric charge, while the positron has a positive one. These antiparticles can theoretically form a corresponding form of matter called antimatter. Some particles, such as the photon, are their own antiparticle.
These elementary particles are excitations of the quantum fields that also govern their interactions. The dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model. The reconciliation of gravity with current particle physics theory remains unsolved; many theories have addressed this problem, such as loop quantum gravity, string theory and supersymmetry.
Experimental particle physics is the study of these particles in radioactive processes and in particle accelerators such as the Large Hadron Collider. Theoretical particle physics is the study of these particles in the context of cosmology and quantum theory. The two are closely interrelated: the Higgs boson was postulated theoretically before being confirmed by experiments.
== History ==
The idea that all matter is fundamentally composed of elementary particles dates from at least the 6th century BC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. The word atom, after the Greek word atomos meaning "indivisible", has since then denoted the smallest particle of a chemical element, but physicists later discovered that atoms are not, in fact, the fundamental particles of nature, but are conglomerates of even smaller particles, such as the electron. The early 20th century explorations of nuclear physics and quantum physics led to proofs of nuclear fission in 1939 by Lise Meitner (based on experiments by Otto Hahn), and nuclear fusion by Hans Bethe in that same year; both discoveries also led to the development of nuclear weapons. Bethe's 1947 calculation of the Lamb shift is credited with having "opened the way to the modern era of particle physics".
Throughout the 1950s and 1960s, a bewildering variety of particles was found in collisions of particles from beams of increasingly high energy. It was referred to informally as the "particle zoo". Important discoveries such as the CP violation by James Cronin and Val Fitch brought new questions to matter-antimatter imbalance. After the formulation of the Standard Model during the 1970s, physicists clarified the origin of the particle zoo. The large number of particles was explained as combinations of a (relatively) small number of more fundamental particles and framed in the context of quantum field theories. This reclassification marked the beginning of modern particle physics.
== Standard Model ==
The current state of the classification of all elementary particles is explained by the Standard Model, which gained widespread acceptance in the mid-1970s after experimental confirmation of the existence of quarks. It describes the strong, weak, and electromagnetic fundamental interactions, using mediating gauge bosons. The species of gauge bosons are eight gluons, W−, W+ and Z bosons, and the photon. The Standard Model also contains 24 fundamental fermions (12 particles and their associated anti-particles), which are the constituents of all matter. Finally, the Standard Model also predicted the existence of a type of boson known as the Higgs boson. On 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson.
The Standard Model, as currently formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature and that a more fundamental theory awaits discovery (See Theory of Everything). In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model, since neutrinos do not have mass in the Standard Model.
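The count of 61 elementary particles can be reproduced by simple bookkeeping (an illustrative tally, not part of the article; it follows the usual convention of counting each quark colour state separately and treating the photon, Z, Higgs, and gluons as their own antiparticles):

```python
quarks   = 6 * 3 * 2  # 6 flavours x 3 colour states x particle/antiparticle = 36
leptons  = 6 * 2      # 6 flavours x particle/antiparticle                   = 12
gluons   = 8          # eight colour states
photon   = 1
w_bosons = 2          # W+ and W- are each other's antiparticles
z_boson  = 1
higgs    = 1

total = quarks + leptons + gluons + photon + w_bosons + z_boson + higgs
print(total)  # 61
```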
== Subatomic particles ==
Modern particle physics research is focused on subatomic particles, including atomic constituents such as electrons, protons, and neutrons (protons and neutrons are composite particles called baryons, made of quarks), and on particles produced by radioactive and scattering processes, such as photons, neutrinos, and muons, as well as a wide range of exotic particles. All particles and their interactions observed to date can be described almost entirely by the Standard Model.
Dynamics of particles are also governed by quantum mechanics; they exhibit wave–particle duality, displaying particle-like behaviour under certain experimental conditions and wave-like behaviour in others. In more technical terms, they are described by quantum state vectors in a Hilbert space, which is also treated in quantum field theory. Following the convention of particle physicists, the term elementary particles is applied to those particles that are, according to current understanding, presumed to be indivisible and not composed of other particles.
=== Quarks and leptons ===
Ordinary matter is made from first-generation quarks (up, down) and leptons (electron, electron neutrino). Collectively, quarks and leptons are called fermions, because they have half-integer quantum spin; quarks and leptons each carry spin 1/2. This causes the fermions to obey the Pauli exclusion principle, where no two particles may occupy the same quantum state. Quarks have fractional elementary electric charge (−1/3 or +2/3) and leptons have whole-numbered electric charge (0 or −1). Quarks also have color charge, which is labeled arbitrarily, with no correlation to actual light color, as red, green and blue. Because the interactions between quarks store energy that converts into other particles when the quarks are pulled far enough apart, quarks cannot be observed independently. This is called color confinement.
There are three known generations of quarks (up and down, strange and charm, top and bottom) and leptons (electron and its neutrino, muon and its neutrino, tau and its neutrino), with strong indirect evidence that a fourth generation of fermions does not exist.
=== Bosons ===
Bosons are the mediators or carriers of fundamental interactions, such as electromagnetism, the weak interaction, and the strong interaction. Electromagnetism is mediated by the photon, the quantum of light. The weak interaction is mediated by the W and Z bosons. The strong interaction is mediated by the gluon, which can link quarks together to form composite particles. Due to the aforementioned color confinement, gluons are never observed independently. The Higgs boson gives mass to the W and Z bosons via the Higgs mechanism; the gluon and photon are massless. All bosons have an integer quantum spin (0 or 1) and can occupy the same quantum state.
=== Antiparticles and color charge ===
Most of the aforementioned particles have corresponding antiparticles, which compose antimatter. Normal particles have positive lepton or baryon number, while antiparticles have negative values of these numbers. Most properties of corresponding particles and antiparticles are the same, with a few reversed; for instance, the electron's antiparticle, the positron, has opposite charge. To differentiate between particles and antiparticles, a plus or minus sign is added in superscript: the electron and the positron are denoted e− and e+. However, when a particle has a charge of 0 (equal to that of its antiparticle), the antiparticle is denoted with a line above the symbol: an electron neutrino is written νe, whereas its antineutrino is written ν̄e. When a particle and an antiparticle interact with each other, they annihilate and convert into other particles. Some particles, such as the photon or gluon, have no antiparticles.
Quarks and gluons additionally have color charges, which influence the strong interaction. Quarks' color charges are called red, green and blue (though the particles themselves have no physical color), while antiquarks' are called antired, antigreen and antiblue. Gluons come in eight color states, reflecting the gauge symmetry SU(3) that governs the quarks' interactions in composite particles.
=== Composite ===
The neutrons and protons in the atomic nuclei are baryons – the neutron is composed of two down quarks and one up quark, and the proton is composed of two up quarks and one down quark. A baryon is composed of three quarks, and a meson is composed of two quarks (one normal, one anti). Baryons and mesons are collectively called hadrons. Quarks inside hadrons are governed by the strong interaction, thus are subject to quantum chromodynamics (color charges). The bound quarks must have a total color charge that is neutral, or "white", by analogy with mixing the primary colors. More exotic hadrons can have other types, arrangements or numbers of quarks (tetraquark, pentaquark).
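The quark content fixes each baryon's electric charge: the fractional quark charges must sum to a whole number. A small sketch using exact fractions (illustrative only, not part of the article):

```python
from fractions import Fraction

# Quark electric charges in units of the elementary charge:
charge = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

proton  = ["u", "u", "d"]   # two up quarks, one down quark
neutron = ["u", "d", "d"]   # one up quark, two down quarks

q_proton = sum(charge[q] for q in proton)
q_neutron = sum(charge[q] for q in neutron)
print(q_proton, q_neutron)   # 1 0
```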
An atom is made from protons, neutrons and electrons. By modifying the particles inside a normal atom, exotic atoms can be formed. A simple example is hydrogen-4.1, a helium-4 atom in which one of the electrons is replaced by a muon.
=== Hypothetical ===
The graviton is a hypothetical particle that can mediate the gravitational interaction, but it has not been detected or completely reconciled with current theories. Many other hypothetical particles have been proposed to address the limitations of the Standard Model. Notably, supersymmetric particles aim to solve the hierarchy problem, axions address the strong CP problem, and various other particles are proposed to explain the origins of dark matter and dark energy.
== Experimental laboratories ==
The world's major particle physics laboratories are:
Brookhaven National Laboratory (Long Island, New York, United States). Its main facility is the Relativistic Heavy Ion Collider (RHIC), which collides heavy ions such as gold ions and polarized protons. It is the world's first heavy ion collider, and the world's only polarized proton collider.
Budker Institute of Nuclear Physics (Novosibirsk, Russia). Its main projects are now the electron–positron colliders VEPP-2000, operated since 2006, and VEPP-4, which started experiments in 1994. Earlier facilities include the first electron–electron beam–beam collider VEP-1, which conducted experiments from 1964 to 1968; the electron–positron collider VEPP-2, operated from 1965 to 1974; and its successor VEPP-2M, which performed experiments from 1974 to 2000.
CERN (European Organization for Nuclear Research) (Franco-Swiss border, near Geneva, Switzerland). Its main project is now the Large Hadron Collider (LHC), which had its first beam circulation on 10 September 2008, and is now the world's most energetic collider of protons. It also became the most energetic collider of heavy ions after it began colliding lead ions. Earlier facilities include the Large Electron–Positron Collider (LEP), which was stopped on 2 November 2000 and then dismantled to give way for LHC; and the Super Proton Synchrotron, which is being reused as a pre-accelerator for the LHC and for fixed-target experiments.
DESY (Deutsches Elektronen-Synchrotron) (Hamburg, Germany). Its main facility was the Hadron Elektron Ring Anlage (HERA), which collided electrons and positrons with protons. The accelerator complex is now focused on the production of synchrotron radiation with PETRA III, FLASH and the European XFEL.
Fermi National Accelerator Laboratory (Fermilab) (Batavia, Illinois, United States). Its main facility until 2011 was the Tevatron, which collided protons and antiprotons and was the highest-energy particle collider on earth until the Large Hadron Collider surpassed it on 29 November 2009.
Institute of High Energy Physics (IHEP) (Beijing, China). IHEP manages a number of China's major particle physics facilities, including the Beijing Electron–Positron Collider II (BEPC II), the Beijing Spectrometer (BES), the Beijing Synchrotron Radiation Facility (BSRF), the International Cosmic-Ray Observatory at Yangbajing in Tibet, the Daya Bay Reactor Neutrino Experiment, the China Spallation Neutron Source, the Hard X-ray Modulation Telescope (HXMT), and the Accelerator-driven Sub-critical System (ADS), as well as the Jiangmen Underground Neutrino Observatory (JUNO).
KEK (Tsukuba, Japan). It is the home of a number of experiments such as the K2K experiment and its successor, the T2K neutrino oscillation experiment, as well as Belle II, an experiment measuring the CP violation of B mesons.
SLAC National Accelerator Laboratory (Menlo Park, California, United States). Its 2-mile-long linear particle accelerator began operating in 1962 and was the basis for numerous electron and positron collision experiments until 2008. Since then, the linear accelerator has been used for the Linac Coherent Light Source X-ray laser as well as for advanced accelerator design research. SLAC staff continue to participate in developing and building many particle detectors around the world.
== Theory ==
Theoretical particle physics attempts to develop the models, theoretical framework, and mathematical tools to understand current experiments and make predictions for future experiments (see also theoretical physics). There are several major interrelated efforts being made in theoretical particle physics today.
One important branch attempts to better understand the Standard Model and its tests. Theorists make quantitative predictions of observables at collider and astronomical experiments, which, along with experimental measurements, are used to extract the parameters of the Standard Model with less uncertainty. This work probes the limits of the Standard Model and therefore expands scientific understanding of nature's building blocks. Those efforts are made challenging by the difficulty of calculating high-precision quantities in quantum chromodynamics. Some theorists working in this area use the tools of perturbative quantum field theory and effective field theory, referring to themselves as phenomenologists. Others make use of lattice field theory and call themselves lattice theorists.
Another major effort is in model building where model builders develop ideas for what physics may lie beyond the Standard Model (at higher energies or smaller distances). This work is often motivated by the hierarchy problem and is constrained by existing experimental data. It may involve work on supersymmetry, alternatives to the Higgs mechanism, extra spatial dimensions (such as the Randall–Sundrum models), Preon theory, combinations of these, or other ideas. Vanishing-dimensions theory is a particle physics theory suggesting that systems with higher energy have a smaller number of dimensions.
A third major effort in theoretical particle physics is string theory. String theorists attempt to construct a unified description of quantum mechanics and general relativity by building a theory based on small strings, and branes rather than particles. If the theory is successful, it may be considered a "Theory of Everything", or "TOE".
There are also other areas of work in theoretical particle physics ranging from particle cosmology to loop quantum gravity.
== Practical applications ==
In principle, all physics (and practical applications developed therefrom) can be derived from the study of fundamental particles. In practice, even if "particle physics" is taken to mean only "high-energy atom smashers", many technologies have been developed during these pioneering investigations that later find wide uses in society. Particle accelerators are used to produce medical isotopes for research and treatment (for example, isotopes used in PET imaging), or used directly in external beam radiotherapy. The development of superconductors has been pushed forward by their use in particle physics. The World Wide Web and touchscreen technology were initially developed at CERN. Additional applications are found in medicine, national security, industry, computing, science, and workforce development, illustrating a long and growing list of beneficial practical applications with contributions from particle physics.
== Future ==
Major efforts to look for physics beyond the Standard Model include the Future Circular Collider proposed for CERN and the Particle Physics Project Prioritization Panel (P5) in the US that will update the 2014 P5 study that recommended the Deep Underground Neutrino Experiment, among other experiments.
== See also ==
== References ==
== External links ==
Materials science is an interdisciplinary field of researching and discovering materials. Materials engineering is an engineering field of finding uses for materials in other fields and industries.
The intellectual origins of materials science stem from the Age of Enlightenment, when researchers began to use analytical thinking from chemistry, physics, and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. Materials science still incorporates elements of physics, chemistry, and engineering. As such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study.
Materials scientists emphasize understanding how the history of a material (processing) influences its structure, and thus the material's properties and performance. The understanding of processing–structure–properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology, biomaterials, and metallurgy.
Materials science is also an important part of forensic engineering and failure analysis – investigating materials, products, structures or components, which fail or do not function as intended, causing personal injury or damage to property. Such investigations are key to understanding, for example, the causes of various aviation accidents and incidents.
== History ==
The material of choice of a given era is often a defining point. Eras such as the Stone Age, Bronze Age, Iron Age, and Steel Age are historic, if arbitrary, examples. Originally deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from the use of fire. A major breakthrough in the understanding of materials occurred in the late 19th century, when the American scientist Josiah Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science were products of the Space Race: the understanding and engineering of the metallic alloys, and silica and carbon materials, used in building space vehicles enabling the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics, semiconductors, and biomaterials.
Before the 1960s (and in some cases decades after), many eventual materials science departments were metallurgy or ceramics engineering departments, reflecting the 19th and early 20th-century emphasis on metals and ceramics. The growth of materials science in the United States was catalyzed in part by the Advanced Research Projects Agency, which funded a series of university-hosted laboratories in the early 1960s, "to expand the national program of basic research and training in the materials sciences." In comparison with mechanical engineering, the nascent materials science field focused on addressing materials from the macro-level and on the approach that materials are designed on the basis of knowledge of behavior at the microscopic level. Due to the expanded knowledge of the link between atomic and molecular processes as well as the overall properties of materials, the design of materials came to be based on specific desired properties. The materials science field has since broadened to include every class of materials, including ceramics, polymers, semiconductors, magnetic materials, biomaterials, and nanomaterials, generally classified into three distinct groups: ceramics, metals, and polymers. A prominent change in materials science during recent decades has been the active use of computer simulations to find new materials, predict properties, and understand phenomena.
== Fundamentals ==
A material is defined as a substance (most often a solid, but other condensed phases can be included) that is intended to be used for certain applications. There are a myriad of materials around us; they can be found in anything from buildings and cars to spacecraft. New and advanced materials that are being developed include nanomaterials, biomaterials, and energy materials, to name a few.
The basis of materials science is studying the interplay between the structure of materials, the processing methods to make that material, and the resulting material properties. The complex combination of these produces the performance of a material in a specific application. Many features across many length scales impact material performance, from the constituent chemical elements to its microstructure and macroscopic features from processing. Together with the laws of thermodynamics and kinetics, materials scientists aim to understand and improve materials.
=== Structure ===
Structure is one of the most important components of the field of materials science. The very definition of the field holds that it is concerned with the investigation of "the relationships that exist between the structures and properties of materials". Materials science examines the structure of materials from the atomic scale, all the way up to the macro scale. Characterization is the way materials scientists examine the structure of a material. This involves methods such as diffraction with X-rays, electrons or neutrons, and various forms of spectroscopy and chemical analysis such as Raman spectroscopy, energy-dispersive spectroscopy, chromatography, thermal analysis, electron microscope analysis, etc.
Structure is studied in the following levels.
==== Atomic structure ====
Atomic structure deals with the atoms of the materials, and how they are arranged to give rise to molecules, crystals, etc. Much of the electrical, magnetic and chemical properties of materials arise from this level of structure. The length scales involved are in angstroms (Å). The chemical bonding and atomic arrangement (crystallography) are fundamental to studying the properties and behavior of any material.
===== Bonding =====
To obtain a full understanding of the material structure and how it relates to its properties, the materials scientist must study how the different atoms, ions and molecules are arranged and bonded to each other. This involves the study and use of quantum chemistry or quantum physics. Solid-state physics, solid-state chemistry and physical chemistry are also involved in the study of bonding and structure.
===== Crystallography =====
Crystallography is the science that examines the arrangement of atoms in crystalline solids. Crystallography is a useful tool for materials scientists. One of the fundamental concepts regarding the crystal structure of a material is the unit cell, which is the smallest unit of a crystal lattice (space lattice) that repeats to make up the macroscopic crystal structure. The most common structural lattices include parallelepiped and hexagonal types. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically, because the natural shapes of crystals reflect the atomic structure. Further, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Examples of crystal defects include dislocations (both edge and screw), vacancies, and self-interstitials, among other linear, planar, and three-dimensional types of defects. Mostly, materials do not occur as a single crystal, but in polycrystalline form, as an aggregate of small crystals or grains with different orientations. Because of this, the powder diffraction method, which uses diffraction patterns of polycrystalline samples with a large number of crystals, plays an important role in structural determination. Most materials have a crystalline structure, but some important materials do not exhibit regular crystal structure. Polymers display varying degrees of crystallinity, and many are completely non-crystalline. Glass, some ceramics, and many natural materials are amorphous, not possessing any long-range order in their atomic arrangements. The study of polymers combines elements of chemical and statistical thermodynamics to give thermodynamic and mechanical descriptions of physical properties.
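As a worked example of the unit-cell concept, the volume of a general (triclinic) cell follows from its six lattice parameters. The sketch below uses the standard crystallographic volume formula; the function name and the silicon example value are illustrative choices of ours.

```python
import math

def unit_cell_volume(a, b, c, alpha, beta, gamma):
    """Volume of a triclinic unit cell from its lattice parameters.

    a, b, c: edge lengths (e.g., in angstroms);
    alpha, beta, gamma: inter-axial angles in degrees.
    """
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    # Standard formula: V = abc * sqrt(1 - ca^2 - cb^2 - cg^2 + 2*ca*cb*cg)
    return a * b * c * math.sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)

# For a cubic cell the formula reduces to a^3, e.g. silicon's
# conventional cell with a = 5.431 angstroms:
print(unit_cell_volume(5.431, 5.431, 5.431, 90, 90, 90))  # ~160.19 cubic angstroms
```

For a hexagonal cell (gamma = 120°) the same formula reduces to a²c·√3/2, so one function covers all seven crystal systems.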
==== Nanostructure ====
Materials whose atoms and molecules form constituents on the nanoscale (i.e., they form nanostructures) are called nanomaterials. Nanomaterials are the subject of intense research in the materials science community due to the unique properties that they exhibit.
Nanostructure deals with objects and structures that are in the 1–100 nm range. In many materials, atoms or molecules agglomerate to form objects at the nanoscale. This causes many interesting electrical, magnetic, optical, and mechanical properties.
In describing nanostructures, it is necessary to differentiate between the number of dimensions on the nanoscale.
Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm.
Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater.
Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) often are used synonymously although UFP can reach into the micrometre range. The term 'nanostructure' is often used, when referring to magnetic technology. Nanoscale structure in biology is often called ultrastructure.
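The dimensionality convention above can be expressed as a small classifier: count how many of an object's spatial extents fall in the nanoscale range. This is a minimal sketch; the function name and return labels are ours, and the 0.1–100 nm bounds are taken directly from the text.

```python
def nanoscale_dimensions(dims_nm, lower=0.1, upper=100.0):
    """Count how many of an object's three extents lie on the nanoscale.

    dims_nm: the object's spatial extents in nanometers.
    By the convention in the text: 1 -> nanotextured surface,
    2 -> nanotube-like object, 3 -> nanoparticle.
    """
    return sum(1 for d in dims_nm if lower <= d <= upper)

print(nanoscale_dimensions((5, 1e6, 1e6)))  # 1: only the thickness is nanoscale
print(nanoscale_dimensions((2, 2, 1e4)))    # 2: a thin tube of much greater length
print(nanoscale_dimensions((30, 30, 30)))   # 3: a spherical nanoparticle
```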
==== Microstructure ====
Microstructure is defined as the structure of a prepared surface or thin foil of material as revealed by a microscope above 25× magnification. It deals with objects from 100 nm to a few cm. The microstructure of a material (which can be broadly classified into metallic, polymeric, ceramic and composite) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behavior, wear resistance, and so on. Most of the traditional materials (such as metals and ceramics) are microstructured.
The manufacture of a perfect crystal of a material is physically impossible. For example, any crystalline material will contain defects such as precipitates, grain boundaries (Hall–Petch relationship), vacancies, interstitial atoms or substitutional atoms. The microstructure of materials reveals these larger defects and advances in simulation have allowed an increased understanding of how defects can be used to enhance material properties.
==== Macrostructure ====
Macrostructure is the appearance of a material on the scale of millimeters to meters; it is the structure of the material as seen with the naked eye.
=== Properties ===
Materials exhibit myriad properties, including the following.
Mechanical properties, see Strength of materials
Chemical properties, see Chemistry
Electrical properties, see Electricity
Thermal properties, see Thermodynamics
Optical properties, see Optics and Photonics
Magnetic properties, see Magnetism
The properties of a material determine its usability and hence its engineering application.
=== Processing ===
Synthesis and processing involves the creation of a material with the desired micro-nanostructure. A material cannot be used in industry if no economically viable production method for it has been developed. Therefore, developing processing methods for materials that are reasonably effective and cost-efficient is vital to the field of materials science. Different materials require different processing or synthesis methods. For example, the processing of metals has historically defined eras such as the Bronze Age and Iron Age and is studied under the branch of materials science named physical metallurgy. Chemical and physical methods are also used to synthesize other materials such as polymers, ceramics, semiconductors, and thin films. As of the early 21st century, new methods are being developed to synthesize nanomaterials such as graphene.
=== Thermodynamics ===
Thermodynamics is concerned with heat and temperature and their relation to energy and work. It defines macroscopic variables, such as internal energy, entropy, and pressure, that partly describe a body of matter or radiation. It states that the behavior of those variables is subject to general constraints common to all materials. These general constraints are expressed in the four laws of thermodynamics. Thermodynamics describes the bulk behavior of the body, not the microscopic behaviors of the very large numbers of its microscopic constituents, such as molecules. The behavior of these microscopic particles is described by, and the laws of thermodynamics are derived from, statistical mechanics.
The study of thermodynamics is fundamental to materials science. It forms the foundation to treat general phenomena in materials science and engineering, including chemical reactions, magnetism, polarizability, and elasticity. It explains fundamental tools such as phase diagrams and concepts such as phase equilibrium.
=== Kinetics ===
Chemical kinetics is the study of the rates at which systems that are out of equilibrium change under the influence of various forces. When applied to materials science, it deals with how a material changes with time (moves from non-equilibrium to equilibrium state) due to application of a certain field. It details the rate of various processes evolving in materials including shape, size, composition and structure. Diffusion is important in the study of kinetics as this is the most common mechanism by which materials undergo change. Kinetics is essential in processing of materials because, among other things, it details how the microstructure changes with application of heat.
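The temperature dependence of diffusion described above is commonly captured by an Arrhenius relation, D = D₀·exp(−Q/RT). The sketch below is illustrative only: the relation is standard, but the numeric values are rough textbook magnitudes (of the order of carbon diffusing in gamma-iron), not data from this article.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def diffusion_coefficient(d0, q, t):
    """Arrhenius temperature dependence of a diffusion coefficient.

    d0: pre-exponential factor (m^2/s); q: activation energy (J/mol);
    t: absolute temperature (K).
    """
    return d0 * math.exp(-q / (R * t))

# Illustrative magnitudes; the point is the steep temperature dependence
# that makes heat treatment so effective at changing microstructure.
d_900 = diffusion_coefficient(2.3e-5, 148e3, 900)
d_1100 = diffusion_coefficient(2.3e-5, 148e3, 1100)
print(d_1100 / d_900)  # a 200 K increase speeds diffusion by tens of times
```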
== Research ==
Materials science is a highly active area of research. Together with materials science departments, physics, chemistry, and many engineering departments are involved in materials research. Materials research covers a broad range of topics; the following non-exhaustive list highlights a few important research areas.
=== Nanomaterials ===
Nanomaterials describe, in principle, materials of which a single unit is sized (in at least one dimension) between 1 and 1000 nanometers (10−9 meter), but is usually 1 nm – 100 nm. Nanomaterials research takes a materials science based approach to nanotechnology, using advances in materials metrology and synthesis, which have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, or mechanical properties. The field of nanomaterials is loosely organized, like the traditional field of chemistry, into organic (carbon-based) nanomaterials, such as fullerenes, and inorganic nanomaterials based on other elements, such as silicon. Examples of nanomaterials include fullerenes, carbon nanotubes, nanocrystals, etc.
=== Biomaterials ===
A biomaterial is any matter, surface, or construct that interacts with biological systems. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering, and materials science.
Biomaterials can be derived either from nature or synthesized in a laboratory using a variety of chemical approaches using metallic components, polymers, bioceramics, or composite materials. They are often intended or adapted for medical applications, such as biomedical devices which perform, augment, or replace a natural function. Such functions may be benign, like being used for a heart valve, or may be bioactive with a more interactive functionality such as hydroxylapatite-coated hip implants. Biomaterials are also used every day in dental applications, surgery, and drug delivery. For example, a construct with impregnated pharmaceutical products can be placed into the body, which permits the prolonged release of a drug over an extended period of time. A biomaterial may also be an autograft, allograft or xenograft used as an organ transplant material.
=== Electronic, optical, and magnetic ===
Semiconductors, metals, and ceramics are used today to form highly complex systems, such as integrated electronic circuits, optoelectronic devices, and magnetic and optical mass storage media. These materials form the basis of our modern computing world, and hence research into these materials is of vital importance.
Semiconductors are a traditional example of these types of materials. They are materials that have properties that are intermediate between conductors and insulators. Their electrical conductivities are very sensitive to the concentration of impurities, which allows the use of doping to achieve desirable electronic properties. Hence, semiconductors form the basis of the traditional computer.
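The sensitivity of conductivity to impurity concentration can be made quantitative with the standard relation σ = q(n·μₙ + p·μₚ). The snippet below is a sketch: the formula is textbook semiconductor physics, but the mobility and carrier-density numbers are rough room-temperature magnitudes for silicon, chosen by us for illustration.

```python
Q = 1.602e-19  # elementary charge, C

def conductivity(n, p, mu_n=0.135, mu_p=0.048):
    """Semiconductor conductivity (S/m) from carrier densities (per m^3).

    Default mobilities are rough room-temperature values for silicon,
    in m^2/(V*s); all numbers here are illustrative.
    """
    return Q * (n * mu_n + p * mu_p)

# Intrinsic silicon (~1e16 carriers/m^3) versus n-type doping at 1e22 /m^3:
sigma_intrinsic = conductivity(1.0e16, 1.0e16)
sigma_doped = conductivity(1.0e22, 0)
print(sigma_doped / sigma_intrinsic)  # doping raises conductivity enormously
```

Even a dopant fraction of roughly one atom in ten million shifts the conductivity by several orders of magnitude, which is why doping gives such fine control over electronic properties.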
This field also includes new areas of research such as superconducting materials, spintronics, metamaterials, etc. The study of these materials involves knowledge of materials science and solid-state physics or condensed matter physics.
=== Computational materials science ===
With continuing increases in computing power, simulating the behavior of materials has become possible. This enables materials scientists to understand behavior and mechanisms, design new materials, and explain properties formerly poorly understood. Efforts surrounding integrated computational materials engineering are now focusing on combining computational methods with experiments to drastically reduce the time and effort to optimize materials properties for a given application. This involves simulating materials at all length scales, using methods such as density functional theory, molecular dynamics, Monte Carlo, dislocation dynamics, phase field, finite element, and many more.
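Of the simulation methods listed above, Monte Carlo is perhaps the simplest to sketch. The toy below runs a Metropolis Monte Carlo simulation of a small 2D Ising model, a minimal stand-in for a magnetic material; it is a pedagogical sketch with parameters of our choosing, not production materials-science code.

```python
import math
import random

def metropolis_ising(n=10, temperature=2.0, steps=20000, seed=0):
    """Minimal Metropolis Monte Carlo for an n x n Ising model.

    Units: coupling J = kB = 1. Returns the magnetization per spin
    after `steps` single-spin update attempts.
    """
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        # Sum the four nearest neighbors with periodic boundaries.
        nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
              + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        dE = 2 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
        # Metropolis rule: always accept downhill moves, uphill with
        # probability exp(-dE / T).
        if dE <= 0 or rng.random() < math.exp(-dE / temperature):
            spins[i][j] = -spins[i][j]
    return sum(sum(row) for row in spins) / n**2

# Well below the 2D critical temperature (~2.27 in these units), the
# lattice tends to order, so |magnetization| is typically near 1.
print(metropolis_ising(temperature=1.0))
```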
== Industry ==
Radical materials advances can drive the creation of new products or even new industries, but stable industries also employ materials scientists to make incremental improvements and troubleshoot issues with currently used materials. Industrial applications of materials science include materials design, cost-benefit tradeoffs in industrial production of materials, processing methods (casting, rolling, welding, ion implantation, crystal growth, thin-film deposition, sintering, glassblowing, etc.), and analytic methods (characterization methods such as electron microscopy, X-ray diffraction, calorimetry, nuclear microscopy (HEFIB), Rutherford backscattering, neutron diffraction, small-angle X-ray scattering (SAXS), etc.).
Besides material characterization, the material scientist or engineer also deals with extracting materials and converting them into useful forms. Thus ingot casting, foundry methods, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials engineer. Often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced. For example, steels are classified based on 1/10 and 1/100 weight percentages of the carbon and other alloying elements they contain. Thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced.
Solid materials are generally grouped into three basic classifications: ceramics, metals, and polymers. This broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. An item that is often made from each of these materials types is the beverage container. The material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. Ceramic (glass) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. Metal (aluminum alloy) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. However, the cans are opaque, expensive to produce, and are easily dented and punctured. Polymers (polyethylene plastic) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass.
=== Ceramics and glasses ===
Another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. Many ceramics and glasses exhibit covalent or ionic-covalent bonding with SiO2 (silica) as a fundamental building block. Ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. The vast majority of commercial glasses contain a metal oxide fused with silica. At the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. Windowpanes and eyeglasses are important examples. Fibers of glass are also used for long-range telecommunication and optical transmission. Scratch resistant Corning Gorilla Glass is a well-known example of the application of materials science to drastically improve the properties of common components.
Engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. Alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. Hot pressing provides higher density material. Chemical vapor deposition can place a film of a ceramic on another material. Cermets are ceramic particles containing some metals. The wear resistance of tools is derived from cemented carbides with the metal phase of cobalt and nickel typically added to modify properties.
Ceramics can be significantly strengthened for engineering applications using the principle of crack deflection. This process involves the strategic addition of second-phase particles within a ceramic matrix, optimizing their shape, size, and distribution to direct and control crack propagation. This approach enhances fracture toughness, paving the way for the creation of advanced, high-performance ceramics in various industries.
=== Composites ===
Another application of materials science in industry is making composite materials. These are structured materials composed of two or more macroscopic phases.
Applications range from structural elements such as steel-reinforced concrete, to the thermal insulating tiles, which play a key and integral role in NASA's Space Shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re-entry into the Earth's atmosphere. One example is reinforced carbon–carbon (RCC), the light gray material, which withstands re-entry temperatures up to 1,510 °C (2,750 °F) and protects the Space Shuttle's wing leading edges and nose cap. RCC is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate is pyrolyzed to convert the resin to carbon, impregnated with furfuryl alcohol in a vacuum chamber, and cured and pyrolyzed again to convert the furfuryl alcohol to carbon. To provide oxidation resistance for reusability, the outer layers of the RCC are converted to silicon carbide.
Other examples can be seen in the "plastic" casings of television sets, cell-phones and so on. These plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene (ABS) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for added strength, bulk, or electrostatic dispersion. These additions may be termed reinforcing fibers, or dispersants, depending on their purpose.
=== Polymers ===
Polymers are chemical compounds made up of a large number of identical components linked together like chains. Polymers are the raw materials (the resins) used to make what are commonly called plastics and rubber. Plastics and rubber are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. Plastics in former and in current widespread use include polyethylene, polypropylene, polyvinyl chloride (PVC), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. Rubbers include natural rubber, styrene-butadiene rubber, chloroprene, and butadiene rubber. Plastics are generally classified as commodity, specialty and engineering plastics.
Polyvinyl chloride (PVC) is widely used, inexpensive, and annual production quantities are large. It lends itself to a vast array of applications, from artificial leather to electrical insulation and cabling, packaging, and containers. Its fabrication and processing are simple and well-established. The versatility of PVC is due to the wide range of plasticisers and other additives that it accepts. The term "additives" in polymer science refers to the chemicals and compounds added to the polymer base to modify its material properties.
Polycarbonate would be normally considered an engineering plastic (other examples include PEEK, ABS). Such plastics are valued for their superior strengths and other special material properties. They are usually not used for disposable applications, unlike commodity plastics.
Specialty plastics are materials with unique characteristics, such as ultra-high strength, electrical conductivity, electro-fluorescence, high thermal stability, etc.
The dividing lines between the various types of plastics are based not on material but rather on properties and applications. For example, polyethylene (PE) is a cheap, low friction polymer commonly used to make disposable bags for shopping and trash, and is considered a commodity plastic, whereas medium-density polyethylene (MDPE) is used for underground gas and water pipes, and another variety called ultra-high-molecular-weight polyethylene (UHMWPE) is an engineering plastic which is used extensively as the glide rails for industrial equipment and the low-friction socket in implanted hip joints.
=== Metal alloys ===
The alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steels) make up the largest proportion of metals today both by quantity and commercial value.
Iron alloyed with various proportions of carbon gives low, mid and high carbon steels. An iron-carbon alloy is only considered steel if the carbon level is between 0.01% and 2.00% by weight. For steels, the hardness and tensile strength of the steel is related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. Heat treatment processes such as quenching and tempering can significantly change these properties, however. In contrast, certain metal alloys exhibit unique properties where their size and density remain unchanged across a range of temperatures. Cast iron is defined as an iron–carbon alloy with more than 2.00%, but less than 6.67% carbon. Stainless steel is defined as a regular steel alloy with greater than 10% by weight alloying content of chromium. Nickel and molybdenum are typically also added in stainless steels.
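The carbon-content ranges quoted above can be expressed as a simple classifier. This is a minimal sketch with thresholds taken directly from the text; the function name and the fallback label are ours.

```python
def classify_iron_carbon_alloy(carbon_wt_pct):
    """Classify an iron-carbon alloy by carbon content in weight percent.

    Per the definitions in the text: steel is 0.01-2.00 wt% carbon;
    cast iron is more than 2.00 but less than 6.67 wt% carbon.
    """
    if 0.01 <= carbon_wt_pct <= 2.00:
        return "steel"
    if 2.00 < carbon_wt_pct < 6.67:
        return "cast iron"
    return "outside the usual steel/cast-iron ranges"

print(classify_iron_carbon_alloy(0.4))  # steel (a typical mid-carbon steel)
print(classify_iron_carbon_alloy(3.5))  # cast iron
```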
Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper alloys have been known since the Bronze Age, while the alloys of the other three metals were developed comparatively recently; because of the chemical reactivity of these metals, the electrolytic extraction processes they require are themselves relatively recent. The alloys of aluminium, titanium and magnesium are also known and valued for their high strength-to-weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. These materials are ideal for situations where high strength-to-weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications.
=== Semiconductors ===
A semiconductor is a material that has a resistivity between that of a conductor and an insulator. Modern day electronics run on semiconductors, and the industry had an estimated US$530 billion market in 2021. The electronic properties of a semiconductor can be greatly altered by intentionally introducing impurities, in a process referred to as doping. Semiconductor materials are used to build diodes, transistors, light-emitting diodes (LEDs), and analog and digital electric circuits, among their many uses. Semiconductor devices have replaced thermionic devices like vacuum tubes in most applications. Semiconductor devices are manufactured both as single discrete devices and as integrated circuits (ICs), which consist of anywhere from a few to millions of devices manufactured and interconnected on a single semiconductor substrate.
Of all the semiconductors in use today, silicon makes up the largest portion both by quantity and commercial value. Monocrystalline silicon is used to produce the wafers used in the semiconductor and electronics industry. Gallium arsenide (GaAs) is the second most popular semiconductor in use. Due to its higher electron mobility and saturation velocity compared to silicon, it is a material of choice for high-speed electronics applications. These superior properties are compelling reasons to use GaAs circuitry in mobile phones, satellite communications, microwave point-to-point links and higher frequency radar systems. Other semiconductor materials include germanium, silicon carbide, and gallium nitride, each with its own range of applications.
== Relation with other fields ==
Materials science evolved, starting in the 1950s, from the recognition that to create, discover and design new materials, one had to approach the subject in a unified manner. Thus, materials science and engineering emerged in many ways: renaming and/or combining existing metallurgy and ceramics engineering departments; splitting from existing solid state physics research (itself growing into condensed matter physics); pulling in relatively new polymer engineering and polymer science; recombining from the previous, as well as chemistry, chemical engineering, mechanical engineering, and electrical engineering; and more.
The field of materials science and engineering is important both from a scientific perspective and for applications. Materials are of the utmost importance for engineers (and other applied disciplines), because the usage of appropriate materials is crucial when designing systems. As a result, materials science is an increasingly important part of an engineer's education.
Materials physics is the use of physics to describe the physical properties of materials. It is a synthesis of physical sciences such as chemistry, solid mechanics, solid state physics, and materials science. Materials physics is considered a subset of condensed matter physics and applies fundamental condensed matter concepts to complex multiphase media, including materials of technological interest. Current fields that materials physicists work in include electronic, optical, and magnetic materials, novel materials and structures, quantum phenomena in materials, nonequilibrium physics, and soft condensed matter physics. New experimental and computational tools are constantly improving how materials systems are modeled and studied, and are themselves active areas in which materials physicists work.
The field is inherently interdisciplinary, and materials scientists or engineers must be aware of, and make use of, the methods of the physicist, chemist and engineer. In turn, fields such as life sciences and archaeology can inspire the development of new materials and processes, in bioinspired and paleoinspired approaches; thus, there remain close relationships with these fields. Conversely, many physicists, chemists and engineers find themselves working in materials science due to the significant overlaps between the fields.
== Emerging technologies ==
== Subdisciplines ==
The main branches of materials science stem from the four main classes of materials: ceramics, metals, polymers and composites.
Ceramic engineering
Metallurgy
Polymer science and engineering
Composite engineering
There are additionally broadly applicable, materials-independent endeavors.
Materials characterization (spectroscopy, microscopy, diffraction)
Computational materials science
Materials informatics and selection
There are also relatively broad focuses across materials on specific phenomena and techniques.
Crystallography
Surface science
Tribology
Microelectronics
== Related or interdisciplinary fields ==
Condensed matter physics, solid-state physics and solid-state chemistry
Nanotechnology
Mineralogy
Supramolecular chemistry
Biomaterials science
== Professional societies ==
American Ceramic Society
ASM International
Association for Iron and Steel Technology
Materials Research Society
The Minerals, Metals & Materials Society
== See also ==
== References ==
=== Citations ===
=== Bibliography ===
Ashby, Michael; Hugh Shercliff; David Cebon (2007). Materials: engineering, science, processing and design (1st ed.). Butterworth-Heinemann. ISBN 978-0-7506-8391-3.
Askeland, Donald R.; Pradeep P. Phulé (2005). The Science & Engineering of Materials (5th ed.). Thomson-Engineering. ISBN 978-0-534-55396-8.
Callister, Jr., William D. (2000). Materials Science and Engineering – An Introduction (5th ed.). John Wiley and Sons. ISBN 978-0-471-32013-5.
Eberhart, Mark (2003). Why Things Break: Understanding the World by the Way It Comes Apart. Harmony. ISBN 978-1-4000-4760-4.
Gaskell, David R. (1995). Introduction to the Thermodynamics of Materials (4th ed.). Taylor and Francis Publishing. ISBN 978-1-56032-992-3.
González-Viñas, W. & Mancini, H.L. (2004). An Introduction to Materials Science. Princeton University Press. ISBN 978-0-691-07097-1.
Gordon, James Edward (1984). The New Science of Strong Materials or Why You Don't Fall Through the Floor (eissue ed.). Princeton University Press. ISBN 978-0-691-02380-9.
Mathews, F.L. & Rawlings, R.D. (1999). Composite Materials: Engineering and Science. Boca Raton: CRC Press. ISBN 978-0-8493-0621-1.
Lewis, P.R.; Reynolds, K. & Gagg, C. (2003). Forensic Materials Engineering: Case Studies. Boca Raton: CRC Press. ISBN 9780849311826.
Wachtman, John B. (1996). Mechanical Properties of Ceramics. New York: Wiley-Interscience, John Wiley & Son's. ISBN 978-0-471-13316-2.
Walker, P., ed. (1993). Chambers Dictionary of Materials Science and Technology. Chambers Publishing. ISBN 978-0-550-13249-9.
Mahajan, S. (2015). "The role of materials science in the evolution of microelectronics". MRS Bulletin. 12 (40): 1079–1088. Bibcode:2015MRSBu..40.1079M. doi:10.1557/mrs.2015.276.
== Further reading ==
Timeline of Materials Science Archived 2011-07-27 at the Wayback Machine at The Minerals, Metals & Materials Society (TMS) – accessed March 2007
Burns, G.; Glazer, A.M. (1990). Space Groups for Scientists and Engineers (2nd ed.). Boston: Academic Press, Inc. ISBN 978-0-12-145761-7.
Cullity, B.D. (1978). Elements of X-Ray Diffraction (2nd ed.). Reading, Massachusetts: Addison-Wesley Publishing Company. ISBN 978-0-534-55396-8.
Giacovazzo, C; Monaco HL; Viterbo D; Scordari F; Gilli G; Zanotti G; Catti M (1992). Fundamentals of Crystallography. Oxford: Oxford University Press. ISBN 978-0-19-855578-0.
Green, D.J.; Hannink, R.; Swain, M.V. (1989). Transformation Toughening of Ceramics. Boca Raton: CRC Press. ISBN 978-0-8493-6594-2.
Lovesey, S. W. (1984). Theory of Neutron Scattering from Condensed Matter; Volume 1: Neutron Scattering. Oxford: Clarendon Press. ISBN 978-0-19-852015-3.
Lovesey, S. W. (1984). Theory of Neutron Scattering from Condensed Matter; Volume 2: Condensed Matter. Oxford: Clarendon Press. ISBN 978-0-19-852017-7.
O'Keeffe, M.; Hyde, B.G. (1996). "Crystal Structures; I. Patterns and Symmetry". Zeitschrift für Kristallographie – Crystalline Materials. 212 (12). Washington, DC: Mineralogical Society of America, Monograph Series: 899. Bibcode:1997ZK....212..899K. doi:10.1524/zkri.1997.212.12.899. ISBN 978-0-939950-40-9.
Squires, G.L. (1996). Introduction to the Theory of Thermal Neutron Scattering (2nd ed.). Mineola, New York: Dover Publications Inc. ISBN 978-0-486-69447-4.
Young, R.A., ed. (1993). The Rietveld Method. Oxford: Oxford University Press & International Union of Crystallography. ISBN 978-0-19-855577-3.
== External links ==
MS&T conference organized by the main materials societies
MIT OpenCourseWare for MSE
The calculus of variations (or variational calculus) is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals: mappings from a set of functions to the real numbers. Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–Lagrange equation of the calculus of variations.
A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as geodesics. A related problem is posed by Fermat's principle: light follows the path of shortest optical length connecting two points, which depends upon the material of the medium. One corresponding concept in mechanics is the principle of least/stationary action.
Many important problems involve functions of several variables. Solutions of boundary value problems for the Laplace equation satisfy the Dirichlet's principle. Plateau's problem requires finding a surface of minimal area that spans a given contour in space: a solution can often be found by dipping a frame in soapy water. Although such experiments are relatively easy to perform, their mathematical formulation is far from simple: there may be more than one locally minimizing surface, and they may have non-trivial topology.
== History ==
The calculus of variations began with the work of Isaac Newton, such as Newton's minimal resistance problem, which he formulated and solved in 1685 and later published in his Principia in 1687. It was the first problem in the field to be formulated and correctly solved, and was also one of the most difficult problems tackled by variational methods prior to the twentieth century. It was followed by the brachistochrone curve problem raised by Johann Bernoulli (1696), which was similar to one raised by Galileo Galilei in 1638, though Galilei neither solved the problem explicitly nor used methods based on calculus. Johann Bernoulli solved the problem using the principle of least time rather than the calculus of variations, whereas Newton solved it in 1697; through his work on these two problems, Newton pioneered the field. The problem immediately occupied the attention of Jacob Bernoulli and the Marquis de l'Hôpital, but Leonhard Euler first elaborated the subject, beginning in 1733. Joseph-Louis Lagrange was influenced by Euler's work to contribute greatly to the theory. After Euler saw the 1755 work of the 19-year-old Lagrange, he dropped his own partly geometric approach in favor of Lagrange's purely analytic approach and renamed the subject the calculus of variations in his 1756 lecture Elementa Calculi Variationum.
Adrien-Marie Legendre (1786) laid down a method, not entirely satisfactory, for the discrimination of maxima and minima. Isaac Newton and Gottfried Leibniz also gave some early attention to the subject. To this discrimination Vincenzo Brunacci (1810), Carl Friedrich Gauss (1829), Siméon Poisson (1831), Mikhail Ostrogradsky (1834), and Carl Jacobi (1837) have been among the contributors. An important general work is that of Pierre Frédéric Sarrus (1842), which was condensed and improved by Augustin-Louis Cauchy (1844). Other valuable treatises and memoirs have been written by Strauch (1849), John Hewitt Jellett (1850), Otto Hesse (1857), Alfred Clebsch (1858), and Lewis Buffett Carll (1885), but perhaps the most important work of the century is that of Karl Weierstrass. His celebrated course on the theory is epoch-making, and it may be asserted that he was the first to place it on a firm and unquestionable foundation. The 20th and 23rd Hilbert problems, published in 1900, encouraged further development.
In the 20th century David Hilbert, Oskar Bolza, Gilbert Ames Bliss, Emmy Noether, Leonida Tonelli, Henri Lebesgue and Jacques Hadamard among others made significant contributions. Marston Morse applied calculus of variations in what is now called Morse theory. Lev Pontryagin, Ralph Rockafellar and F. H. Clarke developed new mathematical tools for the calculus of variations in optimal control theory. The dynamic programming of Richard Bellman is an alternative to the calculus of variations.
== Extrema ==
The calculus of variations is concerned with the maxima or minima (collectively called extrema) of functionals. A functional maps functions to scalars, so functionals have been described as "functions of functions." Functionals have extrema with respect to the elements $y$ of a given function space defined over a given domain. A functional $J[y]$ is said to have an extremum at the function $f$ if $\Delta J = J[y] - J[f]$ has the same sign for all $y$ in an arbitrarily small neighborhood of $f.$ The function $f$ is called an extremal function or extremal. The extremum $J[f]$ is called a local maximum if $\Delta J \leq 0$ everywhere in an arbitrarily small neighborhood of $f,$ and a local minimum if $\Delta J \geq 0$ there. For a function space of continuous functions, extrema of corresponding functionals are called strong extrema or weak extrema, depending on whether the first derivatives of the continuous functions are respectively all continuous or not.
Both strong and weak extrema of functionals are for a space of continuous functions but strong extrema have the additional requirement that the first derivatives of the functions in the space be continuous. Thus a strong extremum is also a weak extremum, but the converse may not hold. Finding strong extrema is more difficult than finding weak extrema. An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation.
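The idea of a functional as a "function of functions" can be made concrete with a few lines of code: the sketch below approximates the illustrative functional $J[y] = \int_0^1 y(x)^2\,dx$ by the midpoint rule, mapping each input function to a single scalar (the functional and grid size are arbitrary choices for demonstration):

```python
import math

def J(y, n=100_000):
    """Approximate the functional J[y] = integral_0^1 y(x)^2 dx with the
    midpoint rule: a scalar computed from an entire function."""
    h = 1.0 / n
    return sum(y((i + 0.5) * h) ** 2 for i in range(n)) * h

# Different input functions map to different scalars.
print(J(lambda x: x))            # exact value is 1/3
print(J(lambda x: math.sin(x)))  # exact value is 1/2 - sin(2)/4
```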
== Euler–Lagrange equation ==
Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions for which the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation.
Consider the functional
$$J[y] = \int_{x_1}^{x_2} L\left(x, y(x), y'(x)\right)\,dx,$$
where $x_1, x_2$ are constants, $y(x)$ is twice continuously differentiable, $y'(x) = \frac{dy}{dx},$ and $L\left(x, y(x), y'(x)\right)$ is twice continuously differentiable with respect to its arguments $x,$ $y,$ and $y'.$

If the functional $J[y]$ attains a local minimum at $f,$ and $\eta(x)$ is an arbitrary function that has at least one derivative and vanishes at the endpoints $x_1$ and $x_2,$ then for any number $\varepsilon$ close to 0,
$$J[f] \leq J[f + \varepsilon \eta]\,.$$
The term $\varepsilon \eta$ is called the variation of the function $f$ and is denoted by $\delta f.$

Substituting $f + \varepsilon \eta$ for $y$ in the functional $J[y],$ the result is a function of $\varepsilon,$
$$\Phi(\varepsilon) = J[f + \varepsilon \eta]\,.$$
Since the functional $J[y]$ has a minimum for $y = f,$ the function $\Phi(\varepsilon)$ has a minimum at $\varepsilon = 0$ and thus,
$$\Phi'(0) \equiv \left.\frac{d\Phi}{d\varepsilon}\right|_{\varepsilon = 0} = \int_{x_1}^{x_2} \left.\frac{dL}{d\varepsilon}\right|_{\varepsilon = 0} dx = 0\,.$$
Taking the total derivative of $L\left[x, y, y'\right],$ where $y = f + \varepsilon \eta$ and $y' = f' + \varepsilon \eta'$ are considered as functions of $\varepsilon$ rather than $x,$ yields
$$\frac{dL}{d\varepsilon} = \frac{\partial L}{\partial y}\frac{dy}{d\varepsilon} + \frac{\partial L}{\partial y'}\frac{dy'}{d\varepsilon}$$
and because $\frac{dy}{d\varepsilon} = \eta$ and $\frac{dy'}{d\varepsilon} = \eta',$
$$\frac{dL}{d\varepsilon} = \frac{\partial L}{\partial y}\eta + \frac{\partial L}{\partial y'}\eta'.$$
Therefore,
$$\begin{aligned}\int_{x_1}^{x_2} \left.\frac{dL}{d\varepsilon}\right|_{\varepsilon = 0} dx &= \int_{x_1}^{x_2} \left(\frac{\partial L}{\partial f}\eta + \frac{\partial L}{\partial f'}\eta'\right)\,dx \\ &= \int_{x_1}^{x_2} \frac{\partial L}{\partial f}\eta\,dx + \left.\frac{\partial L}{\partial f'}\eta\right|_{x_1}^{x_2} - \int_{x_1}^{x_2} \eta\,\frac{d}{dx}\frac{\partial L}{\partial f'}\,dx \\ &= \int_{x_1}^{x_2} \left(\frac{\partial L}{\partial f}\eta - \eta\,\frac{d}{dx}\frac{\partial L}{\partial f'}\right)\,dx\end{aligned}$$
where $L\left[x, y, y'\right] \to L\left[x, f, f'\right]$ when $\varepsilon = 0,$ and we have used integration by parts on the second term. The second term on the second line vanishes because $\eta = 0$ at $x_1$ and $x_2$ by definition. Also, as previously mentioned, the left side of the equation is zero, so that
$$\int_{x_1}^{x_2} \eta(x)\left(\frac{\partial L}{\partial f} - \frac{d}{dx}\frac{\partial L}{\partial f'}\right)\,dx = 0\,.$$
According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e.
$$\frac{\partial L}{\partial f} - \frac{d}{dx}\frac{\partial L}{\partial f'} = 0,$$
which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of $J[f]$ and is denoted $\delta J/\delta f(x).$

In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function $f(x).$ The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum $J[f].$ A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum.
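The vanishing of $\Phi'(0)$ at an extremal can be checked numerically. The sketch below uses the arc-length Lagrangian $L = \sqrt{1 + y'^2}$, a straight-line extremal, and a perturbation $\eta$ vanishing at the endpoints, estimating $\Phi'(0)$ with a central difference (the particular $f$, $\eta$, and grid are illustrative assumptions):

```python
import math

def J(dy, x1=0.0, x2=1.0, n=20_000):
    """Midpoint-rule approximation of J[y] = integral of sqrt(1 + y'(x)^2) dx,
    which only needs the derivative of the trial function."""
    h = (x2 - x1) / n
    return sum(math.sqrt(1.0 + dy(x1 + (i + 0.5) * h) ** 2) for i in range(n)) * h

df   = lambda x: 2.0                             # extremal f(x) = 2x, so f' = 2
deta = lambda x: math.pi * math.cos(math.pi * x) # eta = sin(pi x) vanishes at 0 and 1

def Phi(eps):
    return J(lambda x: df(x) + eps * deta(x))

eps = 1e-4
dPhi0 = (Phi(eps) - Phi(-eps)) / (2 * eps)  # central-difference estimate of Phi'(0)
print(dPhi0)  # close to 0: the first variation vanishes at the extremal
```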
=== Example ===
In order to illustrate this process, consider the problem of finding the extremal function $y = f(x),$ which is the shortest curve that connects two points $\left(x_1, y_1\right)$ and $\left(x_2, y_2\right).$ The arc length of the curve is given by
$$A[y] = \int_{x_1}^{x_2} \sqrt{1 + [y'(x)]^2}\,dx\,,$$
with
$$y'(x) = \frac{dy}{dx}\,, \quad y_1 = f(x_1)\,, \quad y_2 = f(x_2)\,.$$
Note that assuming $y$ to be a function of $x$ loses generality; ideally both should be functions of some other parameter. This approach is useful solely for instructive purposes.
The Euler–Lagrange equation will now be used to find the extremal function $f(x)$ that minimizes the functional $A[y]:$
$$\frac{\partial L}{\partial f} - \frac{d}{dx}\frac{\partial L}{\partial f'} = 0$$
with
$$L = \sqrt{1 + [f'(x)]^2}\,.$$
Since $f$ does not appear explicitly in $L,$ the first term in the Euler–Lagrange equation vanishes for all $f(x)$ and thus,
$$\frac{d}{dx}\frac{\partial L}{\partial f'} = 0\,.$$
Substituting for $L$ and taking the derivative,
$$\frac{d}{dx}\,\frac{f'(x)}{\sqrt{1 + [f'(x)]^2}} = 0\,.$$
Thus
$$\frac{f'(x)}{\sqrt{1 + [f'(x)]^2}} = c\,,$$
for some constant $c.$ Then
$$\frac{[f'(x)]^2}{1 + [f'(x)]^2} = c^2\,,$$
where $0 \leq c^2 < 1.$ Solving, we get
$$[f'(x)]^2 = \frac{c^2}{1 - c^2},$$
which implies that
$$f'(x) = m$$
is a constant and therefore that the shortest curve that connects two points $\left(x_1, y_1\right)$ and $\left(x_2, y_2\right)$ is
$$f(x) = mx + b \qquad \text{with}\ \ m = \frac{y_2 - y_1}{x_2 - x_1} \quad \text{and} \quad b = \frac{x_2 y_1 - x_1 y_2}{x_2 - x_1},$$
and we have thus found the extremal function $f(x)$ that minimizes the functional $A[y]$ so that $A[f]$ is a minimum. The equation for a straight line is $y = mx + b.$ In other words, the shortest distance between two points is a straight line.
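This conclusion can be sanity-checked numerically by comparing the straight line with a perturbed curve through the same endpoints (the perturbation below is an arbitrary illustrative choice):

```python
import math

def arc_length(dy, n=100_000):
    """Midpoint-rule approximation of the arc length
    integral of sqrt(1 + y'(x)^2) over [0, 1]."""
    h = 1.0 / n
    return sum(math.sqrt(1.0 + dy((i + 0.5) * h) ** 2) for i in range(n)) * h

# Straight line through (0,0) and (1,1): y = x, exact length sqrt(2).
straight = arc_length(lambda x: 1.0)
# Bowed curve through the same endpoints: y = x + 0.5*sin(pi x).
bowed = arc_length(lambda x: 1.0 + 0.5 * math.pi * math.cos(math.pi * x))

print(straight, bowed)  # the straight line is shorter
```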
== Beltrami's identity ==
In physics problems it may be the case that
$$\frac{\partial L}{\partial x} = 0,$$
meaning the integrand is a function of $f(x)$ and $f'(x)$ but $x$ does not appear separately. In that case, the Euler–Lagrange equation can be simplified to the Beltrami identity
$$L - f'\frac{\partial L}{\partial f'} = C\,,$$
where $C$ is a constant. The left hand side is the Legendre transformation of $L$ with respect to $f'(x).$

The intuition behind this result is that, if the variable $x$ is actually time, then the statement $\frac{\partial L}{\partial x} = 0$ implies that the Lagrangian is time-independent. By Noether's theorem, there is an associated conserved quantity. In this case, this quantity is the Hamiltonian, the Legendre transform of the Lagrangian, which (often) coincides with the energy of the system. This is (minus) the constant in Beltrami's identity.
== Euler–Poisson equation ==
If $S$ depends on higher derivatives of $y(x),$ that is, if
$$S = \int_a^b f(x, y(x), y'(x), \dots, y^{(n)}(x))\,dx,$$
then $y$ must satisfy the Euler–Poisson equation,
$$\frac{\partial f}{\partial y} - \frac{d}{dx}\left(\frac{\partial f}{\partial y'}\right) + \dots + (-1)^n \frac{d^n}{dx^n}\left[\frac{\partial f}{\partial y^{(n)}}\right] = 0.$$
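For the special case $f = (y'')^2$ the Euler–Poisson equation reduces to $y'''' = 0$, so cubic polynomials are extremals. The sketch below checks this numerically by showing that the first variation of $S[y] = \int_0^1 (y'')^2\,dx$ vanishes at a cubic, for a perturbation whose value and first derivative vanish at both endpoints (the particular cubic, perturbation, and grid are illustrative assumptions):

```python
import math

def S(ddy, n=20_000):
    """Midpoint-rule approximation of S[y] = integral_0^1 (y''(x))^2 dx,
    which only needs the second derivative of the trial function."""
    h = 1.0 / n
    return sum(ddy((i + 0.5) * h) ** 2 for i in range(n)) * h

ddy = lambda x: 6.0 * x  # extremal candidate y = x^3, so y'' = 6x and y'''' = 0
# Perturbation eta = sin^2(pi x): eta and eta' both vanish at x = 0 and x = 1.
ddeta = lambda x: 2.0 * math.pi ** 2 * math.cos(2.0 * math.pi * x)

def Phi(eps):
    return S(lambda x: ddy(x) + eps * ddeta(x))

eps = 1e-4
dPhi0 = (Phi(eps) - Phi(-eps)) / (2 * eps)  # central-difference Phi'(0)
print(dPhi0)  # close to 0: the cubic satisfies the Euler-Poisson equation
```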
== Du Bois-Reymond's theorem ==
The discussion thus far has assumed that extremal functions possess two continuous derivatives, although the existence of the integral $J$ requires only first derivatives of trial functions. The condition that the first variation vanishes at an extremal may be regarded as a weak form of the Euler–Lagrange equation. The theorem of Du Bois-Reymond asserts that this weak form implies the strong form. If $L$ has continuous first and second derivatives with respect to all of its arguments, and if
$$\frac{\partial^2 L}{\partial f'^2} \neq 0,$$
then $f$ has two continuous derivatives, and it satisfies the Euler–Lagrange equation.
== Lavrentiev phenomenon ==
Hilbert was the first to give good conditions for the Euler–Lagrange equations to give a stationary solution. Within a convex area and a positive thrice differentiable Lagrangian the solutions are composed of a countable collection of sections that either go along the boundary or satisfy the Euler–Lagrange equations in the interior.
However, Lavrentiev in 1926 showed that there are circumstances where there is no optimum solution, but one can be approached arbitrarily closely by increasing numbers of sections. The Lavrentiev Phenomenon identifies a difference in the infimum of a minimization problem across different classes of admissible functions. For instance, the following problem, presented by Manià in 1934:
$$L[x] = \int_0^1 (x^3 - t)^2\, x'^6\, dt,$$
$$A = \{x \in W^{1,1}(0,1) : x(0) = 0,\ x(1) = 1\}.$$
Clearly, $x(t) = t^{\frac{1}{3}}$ minimizes the functional, but we find that any function $x \in W^{1,\infty}$ gives a value bounded away from the infimum.
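Manià's example can be explored numerically. For the minimizer $x(t) = t^{1/3}$ the integrand vanishes identically, so $L[x] = 0$; but for a Lipschitz approximation that is linear on $[0, \delta]$ and equal to $t^{1/3}$ beyond, the functional grows without bound as $\delta \to 0$ (the specific approximating family and quadrature are illustrative assumptions):

```python
def mania(delta, n=200_000):
    """Evaluate Mania's functional L[x] = integral_0^1 (x^3 - t)^2 x'(t)^6 dt
    for the Lipschitz trial function x(t) = t / delta**(2/3) on [0, delta]
    and x(t) = t**(1/3) on [delta, 1], where the integrand vanishes."""
    slope = delta ** (-2.0 / 3.0)
    h = delta / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x = slope * t
        total += (x ** 3 - t) ** 2 * slope ** 6 * h
    return total

for delta in (0.1, 0.01, 0.001):
    print(delta, mania(delta))  # grows like (8/105)/delta as delta shrinks
```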
Examples (in one dimension) are traditionally manifested across $W^{1,1}$ and $W^{1,\infty},$ but Ball and Mizel procured the first functional that displayed Lavrentiev's Phenomenon across $W^{1,p}$ and $W^{1,q}$ for $1 \leq p < q < \infty.$
There are several results that give criteria under which the phenomenon does not occur, for instance 'standard growth', a Lagrangian with no dependence on the second variable, or an approximating sequence satisfying Cesari's Condition (D); but results are often particular, and applicable only to a small class of functionals.
Connected with the Lavrentiev Phenomenon is the repulsion property: any functional displaying Lavrentiev's Phenomenon will display the weak repulsion property.
== Functions of several variables ==
For example, if $\varphi(x, y)$ denotes the displacement of a membrane above the domain $D$ in the $x, y$ plane, then its potential energy is proportional to its surface area:
$$U[\varphi] = \iint_D \sqrt{1 + \nabla\varphi \cdot \nabla\varphi}\,dx\,dy.$$
Plateau's problem consists of finding a function that minimizes the surface area while assuming prescribed values on the boundary of $D$; the solutions are called minimal surfaces. The Euler–Lagrange equation for this problem is nonlinear:
$$\varphi_{xx}(1 + \varphi_y^2) + \varphi_{yy}(1 + \varphi_x^2) - 2\varphi_x \varphi_y \varphi_{xy} = 0.$$
See Courant (1950) for details.
=== Dirichlet's principle ===
It is often sufficient to consider only small displacements of the membrane, whose energy difference from no displacement is approximated by
V
[
φ
]
=
1
2
∬
D
∇
φ
⋅
∇
φ
d
x
d
y
.
{\displaystyle V[\varphi ]={\frac {1}{2}}\iint _{D}\nabla \varphi \cdot \nabla \varphi \,dx\,dy.}
The functional
V
{\displaystyle V}
is to be minimized among all trial functions
φ
{\displaystyle \varphi }
that assume prescribed values on the boundary of
D
{\displaystyle D}
. If
u
{\displaystyle u}
is the minimizing function and
v
{\displaystyle v}
is an arbitrary smooth function that vanishes on the boundary of
D
{\displaystyle D}
, then the first variation of
V
[
u
+
ε
v
]
{\displaystyle V[u+\varepsilon v]}
must vanish:
d
d
ε
V
[
u
+
ε
v
]
|
ε
=
0
=
∬
D
∇
u
⋅
∇
v
d
x
d
y
=
0.
{\displaystyle \left.{\frac {d}{d\varepsilon }}V[u+\varepsilon v]\right|_{\varepsilon =0}=\iint _{D}\nabla u\cdot \nabla v\,dx\,dy=0.}
Provided that
u
{\displaystyle u}
has two derivatives, we may apply the divergence theorem to obtain
∬
D
∇
⋅
(
v
∇
u
)
d
x
d
y
=
∬
D
∇
u
⋅
∇
v
+
v
∇
⋅
∇
u
d
x
d
y
=
∫
C
v
∂
u
∂
n
d
s
,
{\displaystyle \iint _{D}\nabla \cdot (v\nabla u)\,dx\,dy=\iint _{D}\nabla u\cdot \nabla v+v\nabla \cdot \nabla u\,dx\,dy=\int _{C}v{\frac {\partial u}{\partial n}}\,ds,}
where
C
{\displaystyle C}
is the boundary of
D
,
{\displaystyle D,}
s
{\displaystyle s}
is arclength along
C
{\displaystyle C}
and
∂
u
/
∂
n
{\displaystyle \partial u/\partial n}
is the normal derivative of
u
{\displaystyle u}
on
C
.
{\displaystyle C.}
Since
v
{\displaystyle v}
vanishes on
C
{\displaystyle C}
and the first variation vanishes, the result is
∬
D
v
∇
⋅
∇
u
d
x
d
y
=
0
{\displaystyle \iint _{D}v\nabla \cdot \nabla u\,dx\,dy=0}
for all smooth functions
v
{\displaystyle v}
that vanish on the boundary of
D
{\displaystyle D}
. The proof for the case of one dimensional integrals may be adapted to this case to show that
∇
⋅
∇
u
=
0
{\displaystyle \nabla \cdot \nabla u=0}
in
D
.
{\displaystyle D.}
The difficulty with this reasoning is the assumption that the minimizing function
u
{\displaystyle u}
must have two derivatives. Riemann argued that the existence of a smooth minimizing function was assured by the connection with the physical problem: membranes do indeed assume configurations with minimal potential energy. Riemann named this idea the Dirichlet principle in honor of his teacher Peter Gustav Lejeune Dirichlet. However Weierstrass gave an example of a variational problem with no solution: minimize
W
[
φ
]
=
∫
−
1
1
(
x
φ
′
)
2
d
x
{\displaystyle W[\varphi ]=\int _{-1}^{1}(x\varphi ')^{2}\,dx}
among all functions φ that satisfy φ(−1) = −1 and φ(1) = 1. W can be made arbitrarily small by choosing piecewise linear functions that make a transition between −1 and 1 in a small neighborhood of the origin. However, there is no function that makes W = 0.
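The collapse of W can be checked numerically. The sketch below (plain NumPy, illustrative only) evaluates W[φ] for the piecewise-linear trial functions φ(x) = clip(x/δ, −1, 1); the exact value for this family is 2δ/3, so W → 0 as δ → 0 even though no admissible function attains W = 0.

```python
import numpy as np

def W(delta, num=200001):
    # Piecewise-linear trial function phi(x) = clip(x/delta, -1, 1),
    # which satisfies phi(-1) = -1 and phi(1) = 1.
    x = np.linspace(-1.0, 1.0, num)
    phi = np.clip(x / delta, -1.0, 1.0)
    dphi = np.gradient(phi, x)                          # numerical phi'
    f = (x * dphi) ** 2
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))  # trapezoid rule

for delta in (0.5, 0.1, 0.01):
    print(delta, W(delta))   # exact value is 2*delta/3, so W -> 0 with delta
```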
Eventually it was shown that Dirichlet's principle is valid, but it requires a sophisticated application of the regularity theory for elliptic partial differential equations; see Jost and Li–Jost (1998).
=== Generalization to other boundary value problems ===
A more general expression for the potential energy of a membrane is
{\displaystyle V[\varphi ]=\iint _{D}\left[{\frac {1}{2}}\nabla \varphi \cdot \nabla \varphi +f(x,y)\varphi \right]\,dx\,dy+\int _{C}\left[{\frac {1}{2}}\sigma (s)\varphi ^{2}+g(s)\varphi \right]\,ds.}
This corresponds to an external force density f(x, y) in D, an external force g(s) on the boundary C, and elastic forces with modulus σ(s) acting on C. The function that minimizes the potential energy with no restriction on its boundary values will be denoted by u. Provided that f and g are continuous, regularity theory implies that the minimizing function u will have two derivatives. In taking the first variation, no boundary condition need be imposed on the increment v. The first variation of V[u + εv] is given by
{\displaystyle \iint _{D}\left[\nabla u\cdot \nabla v+fv\right]\,dx\,dy+\int _{C}\left[\sigma uv+gv\right]\,ds=0.}
If we apply the divergence theorem, the result is
{\displaystyle \iint _{D}\left[-v\nabla \cdot \nabla u+vf\right]\,dx\,dy+\int _{C}v\left[{\frac {\partial u}{\partial n}}+\sigma u+g\right]\,ds=0.}
If we first set v = 0 on C, the boundary integral vanishes, and we conclude as before that
{\displaystyle -\nabla \cdot \nabla u+f=0}
in D. Then if we allow v to assume arbitrary boundary values, this implies that u must satisfy the boundary condition
{\displaystyle {\frac {\partial u}{\partial n}}+\sigma u+g=0,}
on C. This boundary condition is a consequence of the minimizing property of u: it is not imposed beforehand. Such conditions are called natural boundary conditions.
The preceding reasoning is not valid if σ vanishes identically on C. In such a case, we could allow a trial function φ ≡ c, where c is a constant. For such a trial function,
{\displaystyle V[c]=c\left[\iint _{D}f\,dx\,dy+\int _{C}g\,ds\right].}
By appropriate choice of c, V can assume any value unless the quantity inside the brackets vanishes. Therefore, the variational problem is meaningless unless
{\displaystyle \iint _{D}f\,dx\,dy+\int _{C}g\,ds=0.}
This condition implies that net external forces on the system are in equilibrium. If these forces are in equilibrium, then the variational problem has a solution, but it is not unique, since an arbitrary constant may be added. Further details and examples are in Courant and Hilbert (1953).
== Eigenvalue problems ==
Both one-dimensional and multi-dimensional eigenvalue problems can be formulated as variational problems.
=== Sturm–Liouville problems ===
The Sturm–Liouville eigenvalue problem involves a general quadratic form
{\displaystyle Q[y]=\int _{x_{1}}^{x_{2}}\left[p(x)y'(x)^{2}+q(x)y(x)^{2}\right]\,dx,}
where y is restricted to functions that satisfy the boundary conditions
{\displaystyle y(x_{1})=0,\quad y(x_{2})=0.}
Let R be a normalization integral
{\displaystyle R[y]=\int _{x_{1}}^{x_{2}}r(x)y(x)^{2}\,dx.}
The functions p(x) and r(x) are required to be everywhere positive and bounded away from zero. The primary variational problem is to minimize the ratio Q/R among all y satisfying the endpoint conditions, which is equivalent to minimizing Q[y] under the constraint that R[y] is constant. It is shown below that the Euler–Lagrange equation for the minimizing u is
{\displaystyle -(pu')'+qu-\lambda ru=0,}
where λ is the quotient
{\displaystyle \lambda ={\frac {Q[u]}{R[u]}}.}
It can be shown (see Gelfand and Fomin 1963) that the minimizing u has two derivatives and satisfies the Euler–Lagrange equation. The associated λ will be denoted by λ1; it is the lowest eigenvalue for this equation and boundary conditions. The associated minimizing function will be denoted by u1(x). This variational characterization of eigenvalues leads to the Rayleigh–Ritz method: choose an approximating u as a linear combination of basis functions (for example trigonometric functions) and carry out a finite-dimensional minimization among such linear combinations. This method is often surprisingly accurate.
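As an illustration of the Rayleigh–Ritz method, the sketch below (plain NumPy; the polynomial basis is my own illustrative choice) approximates the lowest eigenvalue of −u″ = λu on [0, π] with u(0) = u(π) = 0, i.e. p = r = 1 and q = 0, whose exact lowest eigenvalue is λ1 = 1. Stationary points of Q/R over the span of the basis solve the generalized matrix eigenproblem Ac = λBc.

```python
import numpy as np

# Rayleigh-Ritz for -u'' = lambda * u on [0, pi], u(0) = u(pi) = 0.
L = np.pi
x = np.linspace(0.0, L, 4001)

def trap(f):
    # Trapezoid rule on the grid x.
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

# Polynomial basis functions vanishing at both endpoints, with their derivatives.
basis  = [x * (L - x), x**2 * (L - x)**2, x * (L - x) * (2*x - L)]
dbasis = [L - 2*x, 2*x*(L - x)*(L - 2*x), 6*L*x - 6*x**2 - L**2]

n = len(basis)
A = np.array([[trap(dbasis[i] * dbasis[j]) for j in range(n)] for i in range(n)])  # from Q
B = np.array([[trap(basis[i] * basis[j]) for j in range(n)] for i in range(n)])    # from R

# Stationary points of Q/R over span(basis) solve A c = lambda B c.
eigvals = np.linalg.eigvals(np.linalg.solve(B, A))
lam1 = min(eigvals.real)
print(lam1)   # ~ 1.0000 (exact lowest eigenvalue is 1)
```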
The next smallest eigenvalue and eigenfunction can be obtained by minimizing Q under the additional constraint
{\displaystyle \int _{x_{1}}^{x_{2}}r(x)u_{1}(x)y(x)\,dx=0.}
This procedure can be extended to obtain the complete sequence of eigenvalues and eigenfunctions for the problem.
The variational problem also applies to more general boundary conditions. Instead of requiring that y vanish at the endpoints, we may impose no condition at the endpoints and set
{\displaystyle Q[y]=\int _{x_{1}}^{x_{2}}\left[p(x)y'(x)^{2}+q(x)y(x)^{2}\right]\,dx+a_{1}y(x_{1})^{2}+a_{2}y(x_{2})^{2},}
where a1 and a2 are arbitrary. If we set y = u + εv, the first variation for the ratio Q/R is
{\displaystyle V_{1}={\frac {2}{R[u]}}\left(\int _{x_{1}}^{x_{2}}\left[p(x)u'(x)v'(x)+q(x)u(x)v(x)-\lambda r(x)u(x)v(x)\right]\,dx+a_{1}u(x_{1})v(x_{1})+a_{2}u(x_{2})v(x_{2})\right),}
where λ is given by the ratio Q[u]/R[u] as previously. After integration by parts,
{\displaystyle {\frac {R[u]}{2}}V_{1}=\int _{x_{1}}^{x_{2}}v(x)\left[-(pu')'+qu-\lambda ru\right]\,dx+v(x_{1})[-p(x_{1})u'(x_{1})+a_{1}u(x_{1})]+v(x_{2})[p(x_{2})u'(x_{2})+a_{2}u(x_{2})].}
If we first require that v vanish at the endpoints, the first variation will vanish for all such v only if
{\displaystyle -(pu')'+qu-\lambda ru=0\quad {\hbox{for}}\quad x_{1}<x<x_{2}.}
If u satisfies this condition, then the first variation will vanish for arbitrary v only if
{\displaystyle -p(x_{1})u'(x_{1})+a_{1}u(x_{1})=0,\quad {\hbox{and}}\quad p(x_{2})u'(x_{2})+a_{2}u(x_{2})=0.}
These latter conditions are the natural boundary conditions for this problem, since they are not imposed on trial functions for the minimization, but are instead a consequence of the minimization.
=== Eigenvalue problems in several dimensions ===
Eigenvalue problems in higher dimensions are defined in analogy with the one-dimensional case. For example, given a domain D with boundary B in three dimensions we may define
{\displaystyle Q[\varphi ]=\iiint _{D}p(X)\nabla \varphi \cdot \nabla \varphi +q(X)\varphi ^{2}\,dx\,dy\,dz+\iint _{B}\sigma (S)\varphi ^{2}\,dS,}
and
{\displaystyle R[\varphi ]=\iiint _{D}r(X)\varphi (X)^{2}\,dx\,dy\,dz.}
Let u be the function that minimizes the quotient Q[φ]/R[φ], with no condition prescribed on the boundary B. The Euler–Lagrange equation satisfied by u is
{\displaystyle -\nabla \cdot (p(X)\nabla u)+q(X)u-\lambda r(X)u=0,}
where
{\displaystyle \lambda ={\frac {Q[u]}{R[u]}}.}
The minimizing u must also satisfy the natural boundary condition
{\displaystyle p(S){\frac {\partial u}{\partial n}}+\sigma (S)u=0,}
on the boundary B.
This result depends upon the regularity theory for elliptic partial differential equations; see Jost and Li–Jost (1998) for details. Many extensions, including completeness results, asymptotic properties of the eigenvalues and results concerning the nodes of the eigenfunctions are in Courant and Hilbert (1953).
== Applications ==
=== Optics ===
Fermat's principle states that light takes a path that (locally) minimizes the optical length between its endpoints. If the x-coordinate is chosen as the parameter along the path, and y = f(x) along the path, then the optical length is given by
{\displaystyle A[f]=\int _{x_{0}}^{x_{1}}n(x,f(x)){\sqrt {1+f'(x)^{2}}}\,dx,}
where the refractive index n(x, y) depends upon the material. If we try f(x) = f0(x) + εf1(x) then the first variation of A (the derivative of A with respect to ε) is
{\displaystyle \delta A[f_{0},f_{1}]=\int _{x_{0}}^{x_{1}}\left[{\frac {n(x,f_{0})f_{0}'(x)f_{1}'(x)}{\sqrt {1+f_{0}'(x)^{2}}}}+n_{y}(x,f_{0})f_{1}{\sqrt {1+f_{0}'(x)^{2}}}\right]dx.}
After integration by parts of the first term within brackets, we obtain the Euler–Lagrange equation
{\displaystyle -{\frac {d}{dx}}\left[{\frac {n(x,f_{0})f_{0}'}{\sqrt {1+f_{0}'^{2}}}}\right]+n_{y}(x,f_{0}){\sqrt {1+f_{0}'(x)^{2}}}=0.}
The light rays may be determined by integrating this equation. This formalism is used in the context of Lagrangian optics and Hamiltonian optics.
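For a stratified medium n = n(y), the Euler–Lagrange equation above reduces to f″ = (n′(y)/n(y))(1 + f′²), and the Beltrami identity yields the invariant n(y)/√(1 + f′²) along each ray. The sketch below (the linear profile n(y) = 1 + 0.1y is my own illustrative assumption) integrates this ordinary differential equation with a hand-rolled Runge–Kutta step and checks that the invariant is conserved.

```python
import numpy as np

# Trace a ray through a medium with height-dependent index n(y) = 1 + 0.1*y.
def n(y):  return 1.0 + 0.1 * y
def dn(y): return 0.1

def rhs(state):
    # state = (y, y'); Euler-Lagrange equation for n = n(y).
    y, yp = state
    return np.array([yp, dn(y) / n(y) * (1.0 + yp**2)])

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * h * k1)
    k3 = rhs(state + 0.5 * h * k2)
    k4 = rhs(state + h * k3)
    return state + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

state = np.array([0.0, 0.5])   # y(0) = 0, f'(0) = 0.5
h, steps = 0.01, 500           # integrate over 0 <= x <= 5
inv0 = n(state[0]) / np.sqrt(1 + state[1]**2)   # Beltrami invariant at start
for _ in range(steps):
    state = rk4_step(state, h)
inv1 = n(state[0]) / np.sqrt(1 + state[1]**2)   # invariant at the end
print(inv0, inv1)   # the invariant is conserved along the ray
```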
==== Snell's law ====
There is a discontinuity of the refractive index when light enters or leaves a lens. Let
{\displaystyle n(x,y)={\begin{cases}n_{(-)}&{\text{if}}\quad x<0,\\n_{(+)}&{\text{if}}\quad x>0,\end{cases}}}
where n(−) and n(+) are constants. Then the Euler–Lagrange equation holds as before in the region where x < 0 or x > 0, and in fact the path is a straight line there, since the refractive index is constant. At x = 0, f must be continuous, but f′ may be discontinuous. After integration by parts in the separate regions and using the Euler–Lagrange equations, the first variation takes the form
{\displaystyle \delta A[f_{0},f_{1}]=f_{1}(0)\left[n_{(-)}{\frac {f_{0}'(0^{-})}{\sqrt {1+f_{0}'(0^{-})^{2}}}}-n_{(+)}{\frac {f_{0}'(0^{+})}{\sqrt {1+f_{0}'(0^{+})^{2}}}}\right].}
The factor multiplying n(−) is the sine of the angle of the incident ray with the x axis, and the factor multiplying n(+) is the sine of the angle of the refracted ray with the x axis. Snell's law for refraction requires that these terms be equal. As this calculation demonstrates, Snell's law is equivalent to vanishing of the first variation of the optical path length.
==== Fermat's principle in three dimensions ====
It is expedient to use vector notation: let X = (x1, x2, x3), let t be a parameter, let X(t) be the parametric representation of a curve C, and let Ẋ(t) be its tangent vector. The optical length of the curve is given by
{\displaystyle A[C]=\int _{t_{0}}^{t_{1}}n(X){\sqrt {{\dot {X}}\cdot {\dot {X}}}}\,dt.}
Note that this integral is invariant with respect to changes in the parametric representation of C. The Euler–Lagrange equations for a minimizing curve have the symmetric form
{\displaystyle {\frac {d}{dt}}P={\sqrt {{\dot {X}}\cdot {\dot {X}}}}\,\nabla n,}
where
{\displaystyle P={\frac {n(X){\dot {X}}}{\sqrt {{\dot {X}}\cdot {\dot {X}}}}}.}
It follows from the definition that P satisfies
{\displaystyle P\cdot P=n(X)^{2}.}
Therefore, the integral may also be written as
{\displaystyle A[C]=\int _{t_{0}}^{t_{1}}P\cdot {\dot {X}}\,dt.}
This form suggests that if we can find a function ψ whose gradient is given by P, then the integral A is given by the difference of ψ at the endpoints of the interval of integration. Thus the problem of studying the curves that make the integral stationary can be related to the study of the level surfaces of ψ. In order to find such a function, we turn to the wave equation, which governs the propagation of light. This formalism is used in the context of Lagrangian optics and Hamiltonian optics.
===== Connection with the wave equation =====
The wave equation for an inhomogeneous medium is
{\displaystyle u_{tt}=c^{2}\nabla \cdot \nabla u,}
where c is the velocity, which generally depends upon X. Wave fronts for light are characteristic surfaces for this partial differential equation: they satisfy
{\displaystyle \varphi _{t}^{2}=c(X)^{2}\,\nabla \varphi \cdot \nabla \varphi .}
We may look for solutions in the form
{\displaystyle \varphi (t,X)=t-\psi (X).}
In that case, ψ satisfies
{\displaystyle \nabla \psi \cdot \nabla \psi =n^{2},}
where n = 1/c. According to the theory of first-order partial differential equations, if P = ∇ψ, then P satisfies
{\displaystyle {\frac {dP}{ds}}=n\,\nabla n,}
along a system of curves (the light rays) that are given by
{\displaystyle {\frac {dX}{ds}}=P.}
These equations for solution of a first-order partial differential equation are identical to the Euler–Lagrange equations if we make the identification
{\displaystyle {\frac {ds}{dt}}={\frac {\sqrt {{\dot {X}}\cdot {\dot {X}}}}{n}}.}
We conclude that the function ψ is the value of the minimizing integral A as a function of the upper end point. That is, when a family of minimizing curves is constructed, the values of the optical length satisfy the characteristic equation corresponding to the wave equation. Hence, solving the associated partial differential equation of first order is equivalent to finding families of solutions of the variational problem. This is the essential content of the Hamilton–Jacobi theory, which applies to more general variational problems.
=== Mechanics ===
In classical mechanics, the action, S, is defined as the time integral of the Lagrangian, L. The Lagrangian is the difference of energies, L = T − U, where T is the kinetic energy of a mechanical system and U its potential energy. Hamilton's principle (or the action principle) states that the motion of a conservative holonomic (integrable constraints) mechanical system is such that the action integral
{\displaystyle S=\int _{t_{0}}^{t_{1}}L(x,{\dot {x}},t)\,dt}
is stationary with respect to variations in the path x(t).
The Euler–Lagrange equations for this system are known as Lagrange's equations:
{\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {x}}}}={\frac {\partial L}{\partial x}},}
and they are equivalent to Newton's equations of motion (for such systems).
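A discretized version of Hamilton's principle can be checked directly. The sketch below (plain NumPy, with my own illustrative discretization) evaluates the action of a harmonic oscillator with m = k = 1 along the true path x(t) = cos t and along perturbed paths sharing the same endpoints; over a short enough interval the true path minimizes the action.

```python
import numpy as np

# Discretized action S = integral of (1/2 m x'^2 - 1/2 k x^2) dt for a
# harmonic oscillator with m = k = 1, over the interval [0, 1].
m = k = 1.0
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]

def action(x):
    v = np.diff(x) / dt               # velocities at the midpoints
    xm = 0.5 * (x[1:] + x[:-1])       # positions at the midpoints
    return np.sum((0.5 * m * v**2 - 0.5 * k * xm**2) * dt)

x_true = np.cos(t)                    # solves Lagrange's equation, x'' = -x
S_true = action(x_true)
for eps in (0.1, 0.01):
    bump = eps * np.sin(np.pi * t / t[-1])   # vanishes at both endpoints
    assert action(x_true + bump) > S_true    # admissible perturbations raise S
print(S_true)   # ~ -0.227 (the exact value is -sin(2)/4 for the true path)
```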
The conjugate momenta p are defined by
{\displaystyle p={\frac {\partial L}{\partial {\dot {x}}}}.}
For example, if
{\displaystyle T={\frac {1}{2}}m{\dot {x}}^{2},}
then
{\displaystyle p=m{\dot {x}}.}
Hamiltonian mechanics results if the conjugate momenta are introduced in place of ẋ by a Legendre transformation of the Lagrangian L into the Hamiltonian H defined by
{\displaystyle H(x,p,t)=p\,{\dot {x}}-L(x,{\dot {x}},t).}
The Hamiltonian is the total energy of the system: H = T + U.
Analogy with Fermat's principle suggests that solutions of Lagrange's equations (the particle trajectories) may be described in terms of level surfaces of some function of X. This function is a solution of the Hamilton–Jacobi equation:
{\displaystyle {\frac {\partial \psi }{\partial t}}+H\left(x,{\frac {\partial \psi }{\partial x}},t\right)=0.}
=== Further applications ===
Further applications of the calculus of variations include the following:
The derivation of the catenary shape
Solution to Newton's minimal resistance problem
Solution to the brachistochrone problem
Solution to the tautochrone problem
Solution to isoperimetric problems
Calculating geodesics
Finding minimal surfaces and solving Plateau's problem
Optimal control
Analytical mechanics, or reformulations of Newton's laws of motion, most notably Lagrangian and Hamiltonian mechanics;
Geometric optics, especially Lagrangian and Hamiltonian optics;
Variational method (quantum mechanics), one way of finding approximations to the lowest energy eigenstate or ground state, and some excited states;
Variational Bayesian methods, a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning;
Variational methods in general relativity, a family of techniques using calculus of variations to solve problems in Einstein's general theory of relativity;
Finite element method is a variational method for finding numerical solutions to boundary-value problems in differential equations;
Total variation denoising, an image processing method for filtering high variance or noisy signals.
== Variations and sufficient condition for a minimum ==
Calculus of variations is concerned with variations of functionals, which are small changes in the functional's value due to small changes in the function that is its argument. The first variation is defined as the linear part of the change in the functional, and the second variation is defined as the quadratic part.
For example, if J[y] is a functional with the function y = y(x) as its argument, and there is a small change in its argument from y to y + h, where h = h(x) is a function in the same function space as y, then the corresponding change in the functional is
{\displaystyle \Delta J[h]=J[y+h]-J[y].}
The functional J[y] is said to be differentiable if
{\displaystyle \Delta J[h]=\varphi [h]+\varepsilon \|h\|,}
where φ[h] is a linear functional, ‖h‖ is the norm of h, and ε → 0 as ‖h‖ → 0.
The linear functional φ[h] is the first variation of J[y] and is denoted by
{\displaystyle \delta J[h]=\varphi [h].}
The functional J[y] is said to be twice differentiable if
{\displaystyle \Delta J[h]=\varphi _{1}[h]+\varphi _{2}[h]+\varepsilon \|h\|^{2},}
where φ1[h] is a linear functional (the first variation), φ2[h] is a quadratic functional, and ε → 0 as ‖h‖ → 0.
The quadratic functional φ2[h] is the second variation of J[y] and is denoted by
{\displaystyle \delta ^{2}J[h]=\varphi _{2}[h].}
The second variation δ2J[h] is said to be strongly positive if
{\displaystyle \delta ^{2}J[h]\geq k\|h\|^{2},}
for all h and for some constant k > 0.
Using the above definitions, especially the definitions of first variation, second variation, and strongly positive, the following sufficient condition for a minimum of a functional can be stated.
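The decomposition ΔJ = φ1[h] + φ2[h] can be seen concretely for J[y] = ∫₀¹ y′(x)² dx, where the expansion ΔJ = ε ∫ 2y′h′ dx + ε² ∫ h′² dx is exact. The sketch below (plain NumPy; the particular y and h are my own illustrative choices) evaluates both sides.

```python
import numpy as np

# First and second variation of J[y] = integral of y'(x)^2 on [0, 1],
# at y(x) = x^2, probed with the perturbation h(x) = sin(pi x).
x = np.linspace(0.0, 1.0, 20001)

def trap(f):
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

def J(y):
    return trap(np.gradient(y, x) ** 2)

y = x**2
h = np.sin(np.pi * x)
phi1 = trap(2 * np.gradient(y, x) * np.gradient(h, x))  # first variation, = -8/pi
phi2 = trap(np.gradient(h, x) ** 2)                     # second variation, = pi^2/2

for eps in (0.1, 0.01):
    dJ = J(y + eps * h) - J(y)
    print(eps, dJ, eps * phi1 + eps**2 * phi2)   # the two columns agree
```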
== See also ==
== Notes ==
== References ==
== Further reading ==
Benesova, B. and Kruzik, M.: "Weak Lower Semicontinuity of Integral Functionals and Applications". SIAM Review 59(4) (2017), 703–766.
Bolza, O.: Lectures on the Calculus of Variations. Chelsea Publishing Company, 1904, available on Digital Mathematics library. 2nd edition republished in 1961, paperback in 2005, ISBN 978-1-4181-8201-4.
Cassel, Kevin W.: Variational Methods with Applications in Science and Engineering, Cambridge University Press, 2013.
Clegg, J.C.: Calculus of Variations, Interscience Publishers Inc., 1968.
Courant, R.: Dirichlet's principle, conformal mapping and minimal surfaces. Interscience, 1950.
Dacorogna, Bernard: "Introduction" Introduction to the Calculus of Variations, 3rd edition. 2014, World Scientific Publishing, ISBN 978-1-78326-551-0.
Elsgolc, L.E.: Calculus of Variations, Pergamon Press Ltd., 1962.
Forsyth, A.R.: Calculus of Variations, Dover, 1960.
Fox, Charles: An Introduction to the Calculus of Variations, Dover Publ., 1987.
Giaquinta, Mariano; Hildebrandt, Stefan: Calculus of Variations I and II, Springer-Verlag, ISBN 978-3-662-03278-7 and ISBN 978-3-662-06201-2
Jost, J. and X. Li-Jost: Calculus of Variations. Cambridge University Press, 1998.
Lebedev, L.P. and Cloud, M.J.: The Calculus of Variations and Functional Analysis with Optimal Control and Applications in Mechanics, World Scientific, 2003, pages 1–98.
Logan, J. David: Applied Mathematics, 3rd edition. Wiley-Interscience, 2006
Pike, Ralph W. "Chapter 8: Calculus of Variations". Optimization for Engineering Systems. Louisiana State University. Archived from the original on 2007-07-05.
Roubicek, T.: "Calculus of variations". Chap.17 in: Mathematical Tools for Physicists. (Ed. M. Grinfeld) J. Wiley, Weinheim, 2014, ISBN 978-3-527-41188-7, pp. 551–588.
Sagan, Hans: Introduction to the Calculus of Variations, Dover, 1992.
Weinstock, Robert: Calculus of Variations with Applications to Physics and Engineering, Dover, 1974 (reprint of 1952 ed.).
== External links ==
Variational calculus. Encyclopedia of Mathematics.
calculus of variations. PlanetMath.
Calculus of Variations. MathWorld.
Calculus of variations. Example problems.
Mathematics - Calculus of Variations and Integral Equations. Lectures on YouTube.
Selected papers on Geodesic Fields. Part I, Part II.
Classical physics refers to physics theories that are non-quantum or both non-quantum and non-relativistic, depending on the context. In historical discussions, classical physics refers to pre-1900 physics, while modern physics refers to post-1900 physics, which incorporates elements of quantum mechanics and relativity. However, relativity is based on classical field theory rather than quantum field theory and is often categorized with "classical physics."
== Overview ==
Classical theory has at least two distinct meanings in physics. It can include all those areas of physics that do not make use of quantum mechanics, which includes classical mechanics (using any of the Newtonian, Lagrangian, or Hamiltonian formulations), as well as classical electrodynamics and relativity. Alternatively, the term can refer to theories that are neither quantum nor relativistic.
Depending on point of view, among the branches of theory sometimes included in classical physics are:
Classical mechanics
Newton's laws of motion
Classical Lagrangian and Hamiltonian formalisms
Classical electrodynamics (Maxwell's equations)
Classical thermodynamics
== Comparison with modern physics ==
In contrast to classical physics, "modern physics" is usually used to refer to the revolutionary changes brought about by quantum physics and the theory of relativity.
A physical system can be described by classical physics when it satisfies conditions such that the laws of classical physics are approximately valid.
In practice, physical objects ranging from those larger than atoms and molecules, to objects in the macroscopic and astronomical realm, can be well-described (understood) with classical mechanics. Beginning at the atomic level and lower, the laws of classical physics break down and generally do not provide a correct description of nature. Electromagnetic fields and forces can be described well by classical electrodynamics at length scales and field strengths large enough that quantum mechanical effects are negligible. Unlike quantum physics, classical physics is generally characterized by the principle of complete determinism, although deterministic interpretations of quantum mechanics do exist.
From the point of view of classical physics as being non-relativistic physics, the predictions of general and special relativity are significantly different from those of classical theories, particularly concerning the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light. Traditionally, light was reconciled with classical mechanics by assuming the existence of a stationary medium through which light propagated, the luminiferous aether, which was later shown not to exist.
== Comparison to quantum physics ==
Mathematically, quantum physics equations are those containing the Planck constant. According to the correspondence principle and Ehrenfest's theorem, as a system becomes larger or more massive the classical dynamics tends to emerge, with some exceptions, such as superfluidity. This is why we can usually ignore quantum mechanics when dealing with everyday objects and the classical description will suffice. Decoherence is the field of research concerned with the discovery of how the laws of quantum physics give rise to classical physics.
== See also ==
Glossary of classical physics
Semiclassical physics
== References ==
Within the atmospheric sciences, atmospheric physics is the application of physics to the study of the atmosphere. Atmospheric physicists attempt to model Earth's atmosphere and the atmospheres of the other planets using fluid flow equations, radiation budget, and energy transfer processes in the atmosphere (as well as how these tie into boundary systems such as the oceans). In order to model weather systems, atmospheric physicists employ elements of scattering theory, wave propagation models, cloud physics, statistical mechanics and spatial statistics which are highly mathematical and related to physics. It has close links to meteorology and climatology and also covers the design and construction of instruments for studying the atmosphere and the interpretation of the data they provide, including remote sensing instruments. At the dawn of the space age and the introduction of sounding rockets, aeronomy became a subdiscipline concerning the upper layers of the atmosphere, where dissociation and ionization are important.
== Remote sensing ==
Remote sensing is the small or large-scale acquisition of information of an object or phenomenon, by the use of either recording or real-time sensing device(s) that is not in physical or intimate contact with the object (such as by way of aircraft, spacecraft, satellite, buoy, or ship). In practice, remote sensing is the stand-off collection through the use of a variety of devices for gathering information on a given object or area which gives more information than sensors at individual sites might convey. Thus, Earth observation or weather satellite collection platforms, ocean and atmospheric observing weather buoy platforms, monitoring of a pregnancy via ultrasound, magnetic resonance imaging (MRI), positron-emission tomography (PET), and space probes are all examples of remote sensing. In modern usage, the term generally refers to the use of imaging sensor technologies including but not limited to the use of instruments aboard aircraft and spacecraft, and is distinct from other imaging-related fields such as medical imaging.
There are two kinds of remote sensing. Passive sensors detect natural radiation that is emitted or reflected by the object or surrounding area being observed. Reflected sunlight is the most common source of radiation measured by passive sensors. Examples of passive remote sensors include film photography, infrared, charge-coupled devices, and radiometers. Active collection, on the other hand, emits energy in order to scan objects and areas, whereupon a sensor then detects and measures the radiation that is reflected or backscattered from the target. Radar, lidar, and SODAR are examples of active remote sensing techniques used in atmospheric physics, where the time delay between emission and return is measured, establishing the location, height, speed and direction of an object.
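The time-delay principle behind active sensors reduces to a one-line range calculation (function name and the example delay below are my own, for illustration): the pulse travels out and back at the speed of light, so the one-way range is half the round trip.

```python
# Range of a target from the round-trip delay of an active sensor's pulse.
C = 299_792_458.0   # speed of light in vacuum, m/s

def range_from_delay(delay_s):
    # The pulse travels out and back, so the one-way range is half the round trip.
    return C * delay_s / 2.0

print(range_from_delay(66.7e-6))   # a 66.7 microsecond echo -> roughly 10 km
```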
Remote sensing makes it possible to collect data on dangerous or inaccessible areas. Remote sensing applications include monitoring deforestation in areas such as the Amazon Basin, the effects of climate change on glaciers and Arctic and Antarctic regions, and depth sounding of coastal and ocean depths. Military collection during the Cold War made use of stand-off collection of data about dangerous border areas. Remote sensing also replaces costly and slow data collection on the ground, ensuring in the process that areas or objects are not disturbed.
Orbital platforms collect and transmit data from different parts of the electromagnetic spectrum, which in conjunction with larger scale aerial or ground-based sensing and analysis, provides researchers with enough information to monitor trends such as El Niño and other natural long and short term phenomena. Other uses include different areas of the earth sciences such as natural resource management, agricultural fields such as land usage and conservation, and national security and overhead, ground-based and stand-off collection on border areas.
== Radiation ==
Atmospheric physicists typically divide radiation into solar radiation (emitted by the sun) and terrestrial radiation (emitted by Earth's surface and atmosphere).
Solar radiation contains a variety of wavelengths. Visible light has wavelengths between 0.4 and 0.7 micrometers. Shorter wavelengths are known as the ultraviolet (UV) part of the spectrum, while longer wavelengths are grouped into the infrared portion of the spectrum. Ozone is most effective in absorbing radiation around 0.25 micrometers, where UV-C rays lie in the spectrum. This increases the temperature of the nearby stratosphere. Snow reflects 88% of UV rays, while sand reflects 12%, and water reflects only 4% of incoming UV radiation. The more glancing the angle between the atmosphere and the sun's rays, the more likely that energy will be reflected or absorbed by the atmosphere.
Terrestrial radiation is emitted at much longer wavelengths than solar radiation. This is because Earth is much colder than the sun. Radiation is emitted by Earth across a range of wavelengths, as formalized in Planck's law. The wavelength of maximum energy is around 10 micrometers.
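The statement that terrestrial emission peaks near 10 micrometers can be checked with Wien's displacement law, which follows from Planck's law: the wavelength of maximum emission is inversely proportional to temperature. A small sketch (the temperatures are illustrative round values):

```python
# Wien's displacement law: lambda_max = b / T.
WIEN_B = 2.897771955e-3  # Wien displacement constant, m*K

def peak_wavelength_um(temperature_k: float) -> float:
    """Wavelength of maximum spectral emission, in micrometers."""
    return WIEN_B / temperature_k * 1e6

print(peak_wavelength_um(5778.0))  # Sun (~5778 K): ~0.50 um, visible light
print(peak_wavelength_um(288.0))   # Earth (mean surface ~288 K): ~10.1 um, thermal infrared
```

The two results show why solar radiation is concentrated in the visible and terrestrial radiation in the infrared.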
== Cloud physics ==
Cloud physics is the study of the physical processes that lead to the formation, growth and precipitation of clouds. Clouds are composed of microscopic droplets of water (warm clouds), tiny crystals of ice, or both (mixed phase clouds). Under suitable conditions, the droplets combine to form precipitation and may fall to the earth. The precise mechanics of how a cloud forms and grows is not completely understood, but scientists have developed theories explaining the structure of clouds by studying the microphysics of individual droplets. Advances in radar and satellite technology have also allowed the precise study of clouds on a large scale.
== Atmospheric electricity ==
Atmospheric electricity is the term given to the electrostatics and electrodynamics of the atmosphere (or, more broadly, the atmosphere of any planet). The Earth's surface, the ionosphere, and the atmosphere together are known as the global atmospheric electrical circuit. A lightning discharge carries about 30,000 amperes at up to 100 million volts, and emits light, radio waves, X-rays and even gamma rays. Plasma temperatures in lightning can approach 28,000 kelvins and electron densities may exceed 10²⁴ m⁻³.
== Atmospheric tide ==
The largest-amplitude atmospheric tides are mostly generated in the troposphere and stratosphere when the atmosphere is periodically heated as water vapour and ozone absorb solar radiation during the day. The tides generated are then able to propagate away from these source regions and ascend into the mesosphere and thermosphere. Atmospheric tides can be measured as regular fluctuations in wind, temperature, density and pressure. Although atmospheric tides share much in common with ocean tides they have two key distinguishing features:
i) Atmospheric tides are primarily excited by the Sun's heating of the atmosphere whereas ocean tides are primarily excited by the Moon's gravitational field. This means that most atmospheric tides have periods of oscillation related to the 24-hour length of the solar day whereas ocean tides have longer periods of oscillation related to the lunar day (time between successive lunar transits) of about 24 hours 51 minutes.
ii) Atmospheric tides propagate in an atmosphere where density varies significantly with height. A consequence of this is that their amplitudes naturally increase exponentially as the tide ascends into progressively more rarefied regions of the atmosphere (for an explanation of this phenomenon, see below). In contrast, the density of the oceans varies only slightly with depth and so there the tides do not necessarily vary in amplitude with depth.
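Point ii) can be illustrated with a back-of-the-envelope sketch. Assuming an isothermal atmosphere with an assumed density scale height of about 7 km, and an undamped wave whose kinetic energy density (proportional to density times velocity amplitude squared) is conserved, the velocity amplitude grows as exp(z/2H):

```python
# Assumption-laden illustration, not the article's own calculation: for an
# undamped wave, rho * u**2 is conserved, so the velocity amplitude scales as
# rho**-0.5.  With density falling off as exp(-z/H), amplitude grows as exp(z/(2H)).
import math

SCALE_HEIGHT_KM = 7.0  # typical atmospheric density scale height (assumed value)

def amplitude_growth(z_km: float) -> float:
    """Factor by which a tide's velocity amplitude grows from the ground to height z."""
    return math.exp(z_km / (2.0 * SCALE_HEIGHT_KM))

# Growth from the surface to the mesosphere (~80 km):
print(amplitude_growth(80.0))  # a factor of ~300
```

This exponential growth is why a tide that is a tiny pressure ripple at the surface can dominate the motion of the mesosphere.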
Note that although solar heating is responsible for the largest-amplitude atmospheric tides, the gravitational fields of the Sun and Moon also raise tides in the atmosphere, with the lunar gravitational atmospheric tidal effect being significantly greater than its solar counterpart.
At ground level, atmospheric tides can be detected as regular but small oscillations in surface pressure with periods of 24 and 12 hours. Daily pressure maxima occur at 10 a.m. and 10 p.m. local time, while minima occur at 4 a.m. and 4 p.m. local time. The absolute maximum occurs at 10 a.m. while the absolute minimum occurs at 4 p.m. However, at greater heights the amplitudes of the tides can become very large. In the mesosphere (heights of ~ 50 – 100 km) atmospheric tides can reach amplitudes of more than 50 m/s and are often the most significant part of the motion of the atmosphere.
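The surface-pressure signature described above can be mimicked by superposing a dominant 12-hour harmonic and a weaker 24-hour harmonic. The amplitudes and phases below are assumed purely for illustration, chosen so that the extremes fall where the text places them:

```python
# Illustrative superposition (amplitudes/phases assumed, not measured): a
# dominant semidiurnal harmonic peaking at 10:00 and 22:00 plus a weaker
# diurnal harmonic yields an absolute maximum near 10 a.m. and an absolute
# minimum near 4 p.m. local time.
import math

def surface_pressure_anomaly(t_hours: float) -> float:
    semidiurnal = 1.0 * math.cos(2 * math.pi * (t_hours - 10.0) / 12.0)  # hPa
    diurnal     = 0.3 * math.cos(2 * math.pi * (t_hours - 7.0) / 24.0)   # hPa
    return semidiurnal + diurnal

hours = [h / 10.0 for h in range(240)]  # sample one day at 6-minute resolution
p = [surface_pressure_anomaly(h) for h in hours]
print(hours[p.index(max(p))])  # absolute maximum near 10 (10 a.m.)
print(hours[p.index(min(p))])  # absolute minimum near 16 (4 p.m.)
```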
== Aeronomy ==
Aeronomy is the science of the upper region of the atmosphere, where dissociation and ionization are important. The term aeronomy was introduced by Sydney Chapman in 1960. Today, the term also includes the science of the corresponding regions of the atmospheres of other planets. Research in aeronomy requires access to balloons, satellites, and sounding rockets which provide valuable data about this region of the atmosphere. Atmospheric tides play an important role in interacting with both the lower and upper atmosphere. Amongst the phenomena studied are upper-atmospheric lightning discharges, such as luminous events called red sprites, sprite halos, blue jets, and elves.
== Centers of research ==
In the UK, atmospheric studies are underpinned by the Met Office, the Natural Environment Research Council and the Science and Technology Facilities Council. Divisions of the U.S. National Oceanic and Atmospheric Administration (NOAA) oversee research projects and weather modeling involving atmospheric physics. The US National Astronomy and Ionosphere Center also carries out studies of the high atmosphere. In Belgium, the Belgian Institute for Space Aeronomy studies the atmosphere and outer space. In France, several public and private entities research the atmosphere, for example Météo-France and several laboratories of the national scientific research centre (such as the laboratories in the IPSL group).
== See also ==
== References ==
== Further reading ==
J. V. Iribarne, H. R. Cho, Atmospheric Physics, D. Reidel Publishing Company, 1980.
== External links ==
Media related to Atmospheric physics at Wikimedia Commons
Non-equilibrium thermodynamics is a branch of thermodynamics that deals with physical systems that are not in thermodynamic equilibrium but can be described in terms of macroscopic quantities (non-equilibrium state variables) that represent an extrapolation of the variables used to specify the system in thermodynamic equilibrium. Non-equilibrium thermodynamics is concerned with transport processes and with the rates of chemical reactions.
Almost all systems found in nature are not in thermodynamic equilibrium, for they are changing or can be triggered to change over time, and are continuously and discontinuously subject to flux of matter and energy to and from other systems and to chemical reactions. Many systems and processes can, however, be considered to be in equilibrium locally, thus allowing description by currently known equilibrium thermodynamics. Nevertheless, some natural systems and processes remain beyond the scope of equilibrium thermodynamic methods due to the existence of non-variational dynamics, where the concept of free energy is lost.
The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. One fundamental difference between equilibrium thermodynamics and non-equilibrium thermodynamics lies in the behaviour of inhomogeneous systems, which require for their study knowledge of rates of reaction which are not considered in equilibrium thermodynamics of homogeneous systems. This is discussed below. Another fundamental and very important difference is the difficulty in defining entropy at an instant of time in macroscopic terms for systems not in thermodynamic equilibrium. However, it can be done locally, and the macroscopic entropy will then be given by the integral of the locally defined entropy density. It has been found that many systems far outside global equilibrium still obey the concept of local equilibrium.
== Scope ==
=== Difference between equilibrium and non-equilibrium thermodynamics ===
A profound difference separates equilibrium from non-equilibrium thermodynamics. Equilibrium thermodynamics ignores the time-courses of physical processes. In contrast, non-equilibrium thermodynamics attempts to describe their time-courses in continuous detail.
Equilibrium thermodynamics restricts its considerations to processes that have initial and final states of thermodynamic equilibrium; the time-courses of processes are deliberately ignored. Non-equilibrium thermodynamics, on the other hand, attempting to describe continuous time-courses, needs its state variables to have a very close connection with those of equilibrium thermodynamics. This conceptual issue is overcome under the assumption of local equilibrium, which entails that the relationships that hold between macroscopic state variables at equilibrium hold locally, also outside equilibrium. Throughout the past decades, the assumption of local equilibrium has been tested, and found to hold, under increasingly extreme conditions, such as in the shock front of violent explosions, on reacting surfaces, and under extreme thermal gradients.
Thus, non-equilibrium thermodynamics provides a consistent framework for modelling not only the initial and final states of a system, but also the evolution of the system in time. Together with the concept of entropy production, this provides a powerful tool in process optimisation, and provides a theoretical foundation for exergy analysis.
=== Non-equilibrium state variables ===
The suitable relationship that defines non-equilibrium thermodynamic state variables is as follows. When the system is in local equilibrium, non-equilibrium state variables are such that they can be measured locally with sufficient accuracy by the same techniques as are used to measure thermodynamic state variables, or by corresponding time and space derivatives, including fluxes of matter and energy. In general, non-equilibrium thermodynamic systems are spatially and temporally non-uniform, but their non-uniformity still has a sufficient degree of smoothness to support the existence of suitable time and space derivatives of non-equilibrium state variables.
Because of the spatial non-uniformity, non-equilibrium state variables that correspond to extensive thermodynamic state variables have to be defined as spatial densities of the corresponding extensive equilibrium state variables. When the system is in local equilibrium, intensive non-equilibrium state variables, for example temperature and pressure, correspond closely with equilibrium state variables. It is necessary that measuring probes be small enough, and respond rapidly enough, to capture relevant non-uniformity. Further, the non-equilibrium state variables are required to be mathematically functionally related to one another in ways that suitably resemble corresponding relations between equilibrium thermodynamic state variables. In reality, these requirements, although strict, have been shown to be fulfilled even under extreme conditions, such as during phase transitions, at reacting interfaces, and in plasma droplets surrounded by ambient air. There are, however, situations where there are appreciable non-linear effects even at the local scale.
== Overview ==
Some concepts of particular importance for non-equilibrium thermodynamics include time rate of dissipation of energy (Rayleigh 1873, Onsager 1931), time rate of entropy production (Onsager 1931), thermodynamic fields, dissipative structure, and non-linear dynamical structure.
One problem of interest is the thermodynamic study of non-equilibrium steady states, in which entropy production and some flows are non-zero, but there is no time variation of physical variables.
One initial approach to non-equilibrium thermodynamics is sometimes called 'classical irreversible thermodynamics'. There are other approaches to non-equilibrium thermodynamics, for example extended irreversible thermodynamics, and generalized thermodynamics, but they are hardly touched on in the present article.
=== Quasi-radiationless non-equilibrium thermodynamics of matter in laboratory conditions ===
According to Wildt (see also Essex), current versions of non-equilibrium thermodynamics ignore radiant heat; they can do so because they refer to laboratory quantities of matter under laboratory conditions with temperatures well below those of stars. At laboratory temperatures, in laboratory quantities of matter, thermal radiation is weak and can practically be ignored. But, for example, atmospheric physics is concerned with large amounts of matter, occupying cubic kilometers, that, taken as a whole, are not within the range of laboratory quantities; then thermal radiation cannot be ignored.
=== Local equilibrium thermodynamics ===
The terms 'classical irreversible thermodynamics' and 'local equilibrium thermodynamics' are sometimes used to refer to a version of non-equilibrium thermodynamics that demands certain simplifying assumptions, as follows. The assumptions have the effect of making each very small volume element of the system effectively homogeneous, or well-mixed, or without an effective spatial structure. Even within the thought-frame of classical irreversible thermodynamics, care is needed in choosing the independent variables for systems. In some writings, it is assumed that the intensive variables of equilibrium thermodynamics are sufficient as the independent variables for the task (such variables are considered to have no 'memory', and do not show hysteresis); in particular, local flow intensive variables are not admitted as independent variables; local flows are considered as dependent on quasi-static local intensive variables.
Also it is assumed that the local entropy density is the same function of the other local intensive variables as in equilibrium; this is called the local thermodynamic equilibrium assumption (see also Keizer (1987)). Radiation is ignored because it is transfer of energy between regions, which can be remote from one another. In the classical irreversible thermodynamic approach, there is allowed spatial variation from infinitesimal volume element to adjacent infinitesimal volume element, but it is assumed that the global entropy of the system can be found by simple spatial integration of the local entropy density. This approach assumes spatial and temporal continuity and even differentiability of locally defined intensive variables such as temperature and internal energy density. While these demands may appear severely constrictive, it has been found that the assumptions of local equilibrium hold for a wide variety of systems, including reacting interfaces, on the surfaces of catalysts, in confined systems such as zeolites, under temperature gradients as large as 10¹² K m⁻¹, and even in shock fronts moving at up to six times the speed of sound.
In other writings, local flow variables are considered; these might be considered as classical by analogy with the time-invariant long-term time-averages of flows produced by endlessly repeated cyclic processes; examples with flows are in the thermoelectric phenomena known as the Seebeck and the Peltier effects, considered by Kelvin in the nineteenth century and by Lars Onsager in the twentieth. These effects occur at metal junctions, which were originally effectively treated as two-dimensional surfaces, with no spatial volume, and no spatial variation.
==== Local equilibrium thermodynamics with materials with "memory" ====
A further extension of local equilibrium thermodynamics is to allow that materials may have "memory", so that their constitutive equations depend not only on present values but also on past values of local equilibrium variables. Thus time comes into the picture more deeply than for time-dependent local equilibrium thermodynamics with memoryless materials, but fluxes are not independent variables of state.
=== Extended irreversible thermodynamics ===
Extended irreversible thermodynamics is a branch of non-equilibrium thermodynamics that goes outside the restriction to the local equilibrium hypothesis. The space of state variables is enlarged by including the fluxes of mass, momentum and energy, and possibly higher-order fluxes.
The formalism is well suited for describing high-frequency processes and small-length-scale phenomena in materials.
== Basic concepts ==
There are many examples of stationary non-equilibrium systems, some very simple, like a system confined between two thermostats at different temperatures, or the ordinary Couette flow, a fluid enclosed between two flat walls moving in opposite directions and defining non-equilibrium conditions at the walls. Laser action is also a non-equilibrium process, but it depends on departure from local thermodynamic equilibrium and is thus beyond the scope of classical irreversible thermodynamics; here a strong temperature difference is maintained between two molecular degrees of freedom (in a molecular laser, vibrational and rotational molecular motion). The requirement for two component 'temperatures' in one small region of space precludes local thermodynamic equilibrium, which demands that only one temperature be needed. Damping of acoustic perturbations or shock waves are non-stationary non-equilibrium processes. Driven complex fluids, turbulent systems and glasses are other examples of non-equilibrium systems.
The mechanics of macroscopic systems depends on a number of extensive quantities. It should be stressed that all systems are permanently interacting with their surroundings, thereby causing unavoidable fluctuations of extensive quantities. Equilibrium conditions of thermodynamic systems are related to the maximum property of the entropy. If the only extensive quantity that is allowed to fluctuate is the internal energy, all the other ones being kept strictly constant, the temperature of the system is measurable and meaningful. The system's properties are then most conveniently described using the thermodynamic potential Helmholtz free energy (A = U - TS), a Legendre transformation of the energy. If, next to fluctuations of the energy, the macroscopic dimensions (volume) of the system are left fluctuating, we use the Gibbs free energy (G = U + PV - TS), where the system's properties are determined both by the temperature and by the pressure.
Non-equilibrium systems are much more complex and they may undergo fluctuations of more extensive quantities. The boundary conditions impose on them particular intensive variables, like temperature gradients or distorted collective motions (shear motions, vortices, etc.), often called thermodynamic forces. While free energies are very useful in equilibrium thermodynamics, it must be stressed that there is no general law defining stationary non-equilibrium properties of the energy as the second law of thermodynamics does for the entropy in equilibrium thermodynamics. That is why in such cases a more generalized Legendre transformation should be considered. This is the extended Massieu potential.
By definition, the entropy $S$ is a function of the collection of extensive quantities $E_{i}$. Each extensive quantity has a conjugate intensive variable $I_{i}$ (a restricted definition of intensive variable is used here), so that:

$$I_{i}=\frac{\partial S}{\partial E_{i}}.$$
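As a concrete check of this conjugacy (an ideal-gas illustration, not part of the text itself), one can take the Sackur–Tetrode entropy of a monatomic ideal gas and verify numerically that the intensity conjugate to the internal energy $U$ is $1/T$:

```python
# Numerical sketch (assumed ideal-gas example): for a monatomic ideal gas with
# entropy S(U, V, N) given by the Sackur-Tetrode equation, the intensive
# variable conjugate to the internal energy U is dS/dU = 1/T.
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
M = 6.64e-27         # mass of a helium atom, kg (assumed gas)

def entropy(u, v, n):
    """Sackur-Tetrode entropy of a monatomic ideal gas."""
    return n * K_B * (math.log((v / n) * (4 * math.pi * M * u / (3 * n * H**2))**1.5) + 2.5)

n, v, t = 1e22, 1e-3, 300.0
u = 1.5 * n * K_B * t  # internal energy of the gas at temperature t
du = u * 1e-6
# Central finite difference for the conjugate intensity dS/dU:
ds_du = (entropy(u + du, v, n) - entropy(u - du, v, n)) / (2 * du)
print(ds_du, 1.0 / t)  # the two values agree: dS/dU = 1/T
```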
We then define the extended Massieu function as follows:

$$k_{\rm B}M=S-\sum _{i}(I_{i}E_{i}),$$

where $k_{\rm B}$ is the Boltzmann constant, whence

$$k_{\rm B}\,dM=\sum _{i}(E_{i}\,dI_{i}).$$
The independent variables are the intensities.
Intensities are global values, valid for the system as a whole. When boundaries impose on the system different local conditions (e.g. temperature differences), there are intensive variables representing the average value and others representing gradients or higher moments. The latter are the thermodynamic forces driving fluxes of extensive properties through the system.
It may be shown that the Legendre transformation changes the maximum condition of the entropy (valid at equilibrium) into a minimum condition of the extended Massieu function for stationary states, no matter whether at equilibrium or not.
== Stationary states, fluctuations, and stability ==
In thermodynamics one is often interested in a stationary state of a process, allowing that the stationary state include the occurrence of unpredictable and experimentally unreproducible fluctuations in the state of the system. The fluctuations are due to the system's internal sub-processes and to exchange of matter or energy with the system's surroundings that create the constraints that define the process.
If the stationary state of the process is stable, then the unreproducible fluctuations involve local transient decreases of entropy. The reproducible response of the system is then to increase the entropy back to its maximum by irreversible processes: the fluctuation cannot be reproduced with a significant level of probability. Fluctuations about stable stationary states are extremely small except near critical points (Kondepudi and Prigogine 1998, page 323). The stable stationary state has a local maximum of entropy and is locally the most reproducible state of the system. There are theorems about the irreversible dissipation of fluctuations. Here 'local' means local with respect to the abstract space of thermodynamic coordinates of state of the system.
If the stationary state is unstable, then any fluctuation will almost surely trigger the virtually explosive departure of the system from the unstable stationary state. This can be accompanied by increased export of entropy.
== Local thermodynamic equilibrium ==
The scope of present-day non-equilibrium thermodynamics does not cover all physical processes. A condition for the validity of many studies in non-equilibrium thermodynamics of matter is that they deal with what is known as local thermodynamic equilibrium.
=== Ponderable matter ===
Local thermodynamic equilibrium of matter (see also Keizer (1987)) means that conceptually, for study and analysis, the system can be spatially and temporally divided into 'cells' or 'micro-phases' of small (infinitesimal) size, in which classical thermodynamical equilibrium conditions for matter are fulfilled to good approximation. These conditions are unfulfilled, for example, in very rarefied gases, in which molecular collisions are infrequent; in the boundary layers of a star, where radiation is passing energy to space; and for interacting fermions at very low temperature, where dissipative processes become ineffective. When these 'cells' are defined, one admits that matter and energy may pass freely between contiguous 'cells', slowly enough to leave the 'cells' in their respective individual local thermodynamic equilibria with respect to intensive variables.
One can think here of two 'relaxation times' separated by order of magnitude. The longer relaxation time is of the order of magnitude of times taken for the macroscopic dynamical structure of the system to change. The shorter is of the order of magnitude of times taken for a single 'cell' to reach local thermodynamic equilibrium. If these two relaxation times are not well separated, then the classical non-equilibrium thermodynamical concept of local thermodynamic equilibrium loses its meaning and other approaches have to be proposed, see for instance Extended irreversible thermodynamics. For example, in the atmosphere, the speed of sound is much greater than the wind speed; this favours the idea of local thermodynamic equilibrium of matter for atmospheric heat transfer studies at altitudes below about 60 km where sound propagates, but not above 100 km, where, because of the paucity of intermolecular collisions, sound does not propagate.
=== Milne's definition in terms of radiative equilibrium ===
Edward A. Milne, thinking about stars, gave a definition of 'local thermodynamic equilibrium' in terms of the thermal radiation of the matter in each small local 'cell'. He defined 'local thermodynamic equilibrium' in a 'cell' by requiring that it macroscopically absorb and spontaneously emit radiation as if it were in radiative equilibrium in a cavity at the temperature of the matter of the 'cell'. Then it strictly obeys Kirchhoff's law of equality of radiative emissivity and absorptivity, with a black body source function. The key to local thermodynamic equilibrium here is that the rate of collisions of ponderable matter particles such as molecules should far exceed the rates of creation and annihilation of photons.
== Entropy in evolving systems ==
It is pointed out by W.T. Grandy Jr that entropy, though it may be defined for a non-equilibrium system, is, when strictly considered, only a macroscopic quantity that refers to the whole system; it is not a dynamical variable and in general does not act as a local potential that describes local physical forces. Under special circumstances, however, one can metaphorically think as if the thermal variables behaved like local physical forces. The approximation that constitutes classical irreversible thermodynamics is built on this metaphoric thinking.
This point of view shares many points in common with the concept and the use of entropy in continuum thermomechanics, which evolved completely independently of statistical mechanics and maximum-entropy principles.
=== Entropy in non-equilibrium ===
To describe deviation of the thermodynamic system from equilibrium, in addition to the constitutive variables $x_{1},x_{2},\ldots ,x_{n}$ that are used to fix the equilibrium state, as was described above, a set of variables $\xi _{1},\xi _{2},\ldots$ called internal variables has been introduced. The equilibrium state is considered to be stable, and the main property of the internal variables, as measures of non-equilibrium of the system, is their tendency to disappear; the local law of disappearance can be written as a relaxation equation for each internal variable,

$$\frac{d\xi_{i}}{dt}=-\frac{1}{\tau_{i}}\left(\xi_{i}-\xi_{i}^{(0)}\right),$$

where $\tau _{i}=\tau _{i}(T,x_{1},x_{2},\ldots ,x_{n})$ is a relaxation time of the corresponding variable. It is convenient to consider the initial values $\xi _{i}^{(0)}$ to be equal to zero. The above equation is valid for small deviations from equilibrium; the dynamics of internal variables in the general case is considered by Pokrovskii.
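The 'law of disappearing' described above is, in its simplest form, an exponential relaxation dξ/dt = −ξ/τ (assumed here with equilibrium value zero). A quick numerical sketch with arbitrary illustrative values:

```python
# Minimal sketch of the relaxation of an internal variable: an explicit-Euler
# integration of d(xi)/dt = -xi/tau is compared against the exact exponential
# decay.  The relaxation time and initial value are arbitrary illustrative numbers.
import math

TAU = 2.0      # relaxation time (arbitrary units)
XI_INIT = 1.0  # initial deviation from equilibrium

def relax(t_end: float, dt: float = 1e-4) -> float:
    xi = XI_INIT
    for _ in range(int(t_end / dt)):
        xi += -xi / TAU * dt  # d(xi)/dt = -xi / tau
    return xi

t = 5.0
print(relax(t))                      # numerical solution
print(XI_INIT * math.exp(-t / TAU))  # exact solution: xi(0) * exp(-t/tau)
```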
Entropy of the system in non-equilibrium is a function of the total set of variables:

$$S=S(T,x_{1},x_{2},\ldots ,x_{n};\xi _{1},\xi _{2},\ldots )$$
The essential contribution to the thermodynamics of non-equilibrium systems was brought by the Nobel Prize winner Ilya Prigogine, when he and his collaborators investigated systems of chemically reacting substances. The stationary states of such systems exist due to exchange of both particles and energy with the environment. In section 8 of the third chapter of his book, Prigogine has specified three contributions to the variation of entropy of the considered system at the given volume and constant temperature $T$. The increment of entropy $S$ can be calculated according to the formula

$$\Delta S=\frac{\Delta Q}{T}+\frac{1}{T}\sum _{j}\Xi _{j}\,\Delta \xi _{j}+\frac{1}{T}\sum _{\alpha }\eta _{\alpha }\,\Delta N_{\alpha }.\qquad (1)$$
The first term on the right hand side of the equation presents a stream of thermal energy into the system; the last term, a part of the stream of energy $h_{\alpha }$ coming into the system with the stream of particles of substances $\Delta N_{\alpha }$, which can be positive or negative, with $\eta _{\alpha }=h_{\alpha }-\mu _{\alpha }$, where $\mu _{\alpha }$ is the chemical potential of substance $\alpha$. The middle term in (1) depicts energy dissipation (entropy production) due to the relaxation of internal variables $\xi _{j}$. In the case of chemically reacting substances, which was investigated by Prigogine, the internal variables appear to be measures of incompleteness of chemical reactions, that is, measures of how far the considered system with chemical reactions is out of equilibrium. The theory can be generalised to consider any deviation from the equilibrium state as an internal variable, so that the set of internal variables $\xi _{j}$ in equation (1) is considered to consist of the quantities defining not only degrees of completeness of all chemical reactions occurring in the system, but also the structure of the system, gradients of temperature, differences of concentrations of substances, and so on.
== Flows and forces ==
The fundamental relation of classical equilibrium thermodynamics,

$$dS={\frac {1}{T}}dU+{\frac {p}{T}}dV-\sum _{i=1}^{s}{\frac {\mu _{i}}{T}}dN_{i},$$

expresses the change in entropy $dS$ of a system as a function of the intensive quantities temperature $T$, pressure $p$ and $i$th chemical potential $\mu _{i}$, and of the differentials of the extensive quantities energy $U$, volume $V$ and $i$th particle number $N_{i}$.
Following Onsager (1931, I), let us extend our considerations to thermodynamically non-equilibrium systems. As a basis, we need locally defined versions of the extensive macroscopic quantities $U$, $V$ and $N_{i}$ and of the intensive macroscopic quantities $T$, $p$ and $\mu _{i}$.
For classical non-equilibrium studies, we will consider some new locally defined intensive macroscopic variables. We can, under suitable conditions, derive these new variables by locally defining the gradients and flux densities of the basic locally defined macroscopic quantities.
Such locally defined gradients of intensive macroscopic variables are called 'thermodynamic forces'. They 'drive' flux densities, perhaps misleadingly often called 'fluxes', which are dual to the forces. These quantities are defined in the article on Onsager reciprocal relations.
Establishing the relation between such forces and flux densities is a problem in statistical mechanics. Flux densities ($J_{i}$) may be coupled. The article on Onsager reciprocal relations considers the stable near-steady thermodynamically non-equilibrium regime, which has dynamics linear in the forces and flux densities.
In stationary conditions, such forces and associated flux densities are by definition time invariant, as also are the system's locally defined entropy and rate of entropy production. Notably, according to Ilya Prigogine and others, when an open system is in conditions that allow it to reach a stable stationary thermodynamically non-equilibrium state, it organizes itself so as to minimize total entropy production defined locally. This is considered further below.
One wants to take the analysis to the further stage of describing the behaviour of surface and volume integrals of non-stationary local quantities; these integrals are macroscopic fluxes and production rates. In general the dynamics of these integrals are not adequately described by linear equations, though in special cases they can be so described.
=== Onsager reciprocal relations ===
Following Section III of Rayleigh (1873), Onsager (1931, I) showed that in the regime where both the flows (J_i) are small and the thermodynamic forces (F_i) vary slowly, the rate of creation of entropy (σ) is linearly related to the flows:

\sigma = \sum_i J_i \frac{\partial F_i}{\partial x_i}
and the flows are related to the gradient of the forces, parametrized by a matrix of coefficients conventionally denoted L:

J_i = \sum_j L_{ij} \frac{\partial F_j}{\partial x_j}
from which it follows that:

\sigma = \sum_{i,j} L_{ij} \frac{\partial F_i}{\partial x_i} \frac{\partial F_j}{\partial x_j}
The second law of thermodynamics requires that the matrix L be positive definite. Statistical mechanics considerations involving microscopic reversibility of dynamics imply that the matrix L is symmetric. This symmetry is known as the Onsager reciprocal relations.
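As a minimal numeric sketch of this linear regime (the 2×2 matrix L and the force gradients below are illustrative values, not data for any particular physical system), one can check the reciprocity of L and the sign of the entropy production:

```python
# Minimal numeric sketch of the linear (Onsager) regime.
# L is a hypothetical 2x2 phenomenological matrix coupling two
# transport processes (e.g. heat and particle flow); its values
# are illustrative only.

def fluxes(L, grad_F):
    """J_i = sum_j L_ij * (dF_j/dx_j): the linear force-flux relation."""
    return [sum(L[i][j] * grad_F[j] for j in range(len(grad_F)))
            for i in range(len(L))]

def entropy_production(L, grad_F):
    """sigma = sum_ij L_ij (dF_i/dx_i)(dF_j/dx_j), a quadratic form."""
    return sum(L[i][j] * grad_F[i] * grad_F[j]
               for i in range(len(L)) for j in range(len(L)))

L = [[2.0, 0.5],
     [0.5, 1.0]]          # symmetric (reciprocity) and positive definite

grad_F = [0.3, -0.7]      # slowly varying thermodynamic force gradients

J = fluxes(L, grad_F)
sigma = entropy_production(L, grad_F)

assert L[0][1] == L[1][0]   # Onsager reciprocal relation L_ij = L_ji
assert sigma > 0            # second law: sigma is a positive quadratic form
```

The coupling term L_01 = L_10 is what lets one force drive the other flux (as in thermoelectric effects); symmetry of L is exactly the reciprocity statement.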
The generalization of the above equations for the rate of creation of entropy was given by Pokrovskii.
== Speculated extremal principles for non-equilibrium processes ==
Until recently, prospects for useful extremal principles in this area have seemed clouded. Nicolis (1999) concludes that one model of atmospheric dynamics has an attractor which is not a regime of maximum or minimum dissipation; she says this seems to rule out the existence of a global organizing principle, and comments that this is to some extent disappointing; she also points to the difficulty of finding a thermodynamically consistent form of entropy production. Chapter 12 of Grandy (2008) gives an extensive but very cautious discussion of the possibilities for principles of extrema of entropy production and of dissipation of energy. He finds difficulty in defining the 'rate of internal entropy production' in many cases, and finds that sometimes, for predicting the course of a process, an extremum of the rate of dissipation of energy may be more useful than an extremum of the rate of entropy production; this quantity appeared in Onsager's 1931 origination of the subject. Other writers have also felt that prospects for general global extremal principles are clouded; they include Glansdorff and Prigogine (1971), Lebon, Jou and Casas-Vázquez (2008), and Šilhavý (1997).
There is good experimental evidence that heat convection does not obey extremal principles for time rate of entropy production. Theoretical analysis shows that chemical reactions do not obey extremal principles for the second differential of time rate of entropy production. The development of a general extremal principle seems infeasible in the current state of knowledge.
== Applications ==
Non-equilibrium thermodynamics has been successfully applied to describe biological processes such as protein folding/unfolding and transport through membranes.
It is also used to describe the dynamics of nanoparticles, which can be out of equilibrium in systems where catalysis and electrochemical conversion are involved.
Ideas from non-equilibrium thermodynamics and the information-theoretic concept of entropy have also been adapted to describe general economic systems.
== See also ==
== References ==
=== Sources ===
== Further reading ==
== External links ==
Stephan Herminghaus' Dynamics of Complex Fluids Department at the Max Planck Institute for Dynamics and Self Organization
Non-equilibrium Statistical Thermodynamics applied to Fluid Dynamics and Laser Physics - 1992 book by Xavier de Hemptinne.
Nonequilibrium Thermodynamics of Small Systems - PhysicsToday.org
Into the Cool - 2005 book by Dorion Sagan and Eric D. Schneider, on nonequilibrium thermodynamics and evolutionary theory.
"Thermodynamics "beyond" local equilibrium"
Branches of physics include classical mechanics; thermodynamics and statistical mechanics; electromagnetism and photonics; relativity; quantum mechanics, atomic physics, and molecular physics; optics and acoustics; condensed matter physics; high-energy particle physics and nuclear physics; chaos theory and cosmology; and interdisciplinary fields.
== Classical mechanics ==
Classical mechanics is a model of the physics of forces acting upon bodies; it includes sub-fields that describe the behaviors of solids, gases, and fluids. It is often referred to as "Newtonian mechanics" after Isaac Newton and his laws of motion. It also includes the classical approach given by Hamiltonian and Lagrangian methods. It deals with the motion of particles and with general systems of particles.
There are many branches of classical mechanics, such as: statics, dynamics, kinematics, continuum mechanics (which includes fluid mechanics), statistical mechanics, etc.
Mechanics is the branch of physics that studies objects and their properties in terms of motion under the action of forces.
== Thermodynamics and statistical mechanics ==
The first chapter of The Feynman Lectures on Physics is about the existence of atoms, which Feynman considered to be the most compact statement of physics, from which science could easily result even if all other knowledge were lost. By modeling matter as collections of hard spheres, it is possible to describe the kinetic theory of gases, upon which classical thermodynamics is based.
Thermodynamics studies the effects of changes in temperature, pressure, and volume on physical systems on the macroscopic scale, and the transfer of energy as heat. Historically, thermodynamics developed out of the desire to increase the efficiency of early steam engines.
The starting point for most thermodynamic considerations is the laws of thermodynamics, which postulate that energy can be exchanged between physical systems as heat or work. They also postulate the existence of a quantity named entropy, which can be defined for any system. In thermodynamics, interactions between large ensembles of objects are studied and categorized. Central to this are the concepts of system and surroundings. A system is composed of particles, whose average motions define its properties, which in turn are related to one another through equations of state. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.
== Electromagnetism and photonics ==
The study of the behaviours of electrons, electric media, magnets, magnetic fields, and general interactions of light.
== Relativity ==
The special theory of relativity is closely related to electromagnetism and mechanics; that is, the principle of relativity and the principle of stationary action in mechanics can be used to derive Maxwell's equations, and vice versa.
The theory of special relativity was proposed in 1905 by Albert Einstein in his article "On the Electrodynamics of Moving Bodies". The title of the article refers to the fact that special relativity resolves an inconsistency between Maxwell's equations and classical mechanics. The theory is based on two postulates: (1) that the mathematical forms of the laws of physics are invariant in all inertial systems; and (2) that the speed of light in vacuum is constant and independent of the source or observer. Reconciling the two postulates requires a unification of space and time into the frame-dependent concept of spacetime.
General relativity is the geometrical theory of gravitation published by Albert Einstein in 1915/16. It unifies special relativity, Newton's law of universal gravitation, and the insight that gravitation can be described by the curvature of space and time. In general relativity, the curvature of spacetime is produced by the energy of matter and radiation.
== Quantum mechanics, atomic physics, and molecular physics ==
Quantum mechanics is the branch of physics treating atomic and subatomic systems and their interaction based on the observation that all forms of energy are released in discrete units or bundles called "quanta". Remarkably, quantum theory typically permits only probable or statistical calculation of the observed features of subatomic particles, understood in terms of wave functions. The Schrödinger equation plays the role in quantum mechanics that Newton's laws and conservation of energy serve in classical mechanics—i.e., it predicts the future behavior of a dynamic system—and is a wave equation that is used to solve for wavefunctions.
For example, the light, or electromagnetic radiation emitted or absorbed by an atom has only certain frequencies (or wavelengths), as can be seen from the line spectrum associated with the chemical element represented by that atom. The quantum theory shows that those frequencies correspond to definite energies of the light quanta, or photons, and result from the fact that the electrons of the atom can have only certain allowed energy values, or levels; when an electron changes from one allowed level to another, a quantum of energy is emitted or absorbed whose frequency is directly proportional to the energy difference between the two levels. The photoelectric effect further confirmed the quantization of light.
In 1924, Louis de Broglie proposed that not only do light waves sometimes exhibit particle-like properties, but particles may also exhibit wave-like properties. Two different formulations of quantum mechanics were presented following de Broglie's suggestion. The wave mechanics of Erwin Schrödinger (1926) involves the use of a mathematical entity, the wave function, which is related to the probability of finding a particle at a given point in space. The matrix mechanics of Werner Heisenberg (1925) makes no mention of wave functions or similar concepts but was shown to be mathematically equivalent to Schrödinger's theory. A particularly important discovery of the quantum theory is the uncertainty principle, enunciated by Heisenberg in 1927, which places an absolute theoretical limit on the accuracy of certain measurements; as a result, the assumption by earlier scientists that the physical state of a system could be measured exactly and used to predict future states had to be abandoned. Quantum mechanics was combined with the theory of relativity in the formulation of Paul Dirac. Other developments include quantum statistics, quantum electrodynamics, concerned with interactions between charged particles and electromagnetic fields; and its generalization, quantum field theory.
String theory
A possible candidate for the theory of everything, string theory combines general relativity and quantum mechanics into a single framework. It aims to describe the properties of both very small and very large objects, and is still under development.
== Optics and acoustics ==
Optics is the study of the behavior of light, including reflection, refraction, diffraction, and interference.
Acoustics is the branch of physics involving the study of mechanical waves in different mediums.
== Condensed matter physics ==
The study of the physical properties of matter in a condensed phase.
== High-energy particle physics and nuclear physics ==
Particle physics studies the nature of particles, while nuclear physics studies atomic nuclei.
== Chaos theory ==
Chaos theory represents a multidisciplinary area of study that encompasses both scientific inquiry and mathematics. It examines the essential models and deterministic principles governing dynamical systems that exhibit extreme sensitivity to initial conditions. Previously, these systems were believed to exist in a state of complete randomness and disorder. However, chaos theory posits that beneath the surface of apparent randomness in chaotic complex systems lie underlying patterns, interconnections, continuous feedback mechanisms, repetitions, self-similarity, fractals, and self-organization. The butterfly effect, a fundamental concept in chaos theory, illustrates how a minor alteration in one aspect of a nonlinear system can result in significant variations later on, highlighting a fragile dependence on initial circumstances. This phenomenon is often illustrated by the metaphor that a butterfly flapping its wings in Brazil could potentially influence or avert a tornado in Texas by altering the conditions in its environment.
== Cosmology ==
Cosmology studies how the universe came to be, and its eventual fate. It is studied by physicists and astrophysicists.
== Interdisciplinary fields ==
The interdisciplinary fields, which partially define sciences of their own, include for example:
agrophysics, a branch of science bordering on agronomy and physics
astrophysics, the physics of the universe, including the properties and interactions of celestial bodies in astronomy
atmospheric physics, the application of physics to the study of the atmosphere
space physics, the study of plasmas as they occur naturally in the Earth's upper atmosphere (aeronomy) and within the Solar System
biophysics, the study of the physical interactions of biological processes
chemical physics, the science of physical relations in chemistry
computational physics, the application of computers and numerical methods to physical systems
econophysics, dealing with physical processes and their relations in the science of economy
environmental physics, the branch of physics concerned with the measurement and analysis of interactions between organisms and their environment
engineering physics, the combined discipline of physics and engineering
geophysics, the sciences of physical relations on our planet
mathematical physics, mathematics pertaining to physical problems
medical physics, the application of physics in medicine to prevention, diagnosis, and treatment
physical chemistry, dealing with physical processes and their relations in chemistry
physics education, the set of methods used to teach physics
physical oceanography, the study of physical conditions and physical processes within the ocean, especially the motions and physical properties of ocean waters
psychophysics, the science of physical relations in psychology
quantum computing, the study of quantum-mechanical computation systems
sociophysics or social physics, a field of science which uses mathematical tools inspired by physics to understand the behaviour of human crowds
== Summary ==
The table below lists the core theories along with many of the concepts they employ.
== See also ==
Classical physics
Modern physics
== References ==
Biophysics is an interdisciplinary science that applies approaches and methods traditionally used in physics to study biological phenomena. Biophysics covers all scales of biological organization, from molecular to organismic and populations. Biophysical research shares significant overlap with biochemistry, molecular biology, physical chemistry, physiology, nanotechnology, bioengineering, computational biology, biomechanics, developmental biology and systems biology.
The term biophysics was originally introduced by Karl Pearson in 1892. The term biophysics is also regularly used in academia to indicate the study of the physical quantities (e.g. electric current, temperature, stress, entropy) in biological systems. Other biological sciences also perform research on the biophysical properties of living organisms including molecular biology, cell biology, chemical biology, and biochemistry.
== Overview ==
Molecular biophysics typically addresses biological questions similar to those in biochemistry and molecular biology, seeking to find the physical underpinnings of biomolecular phenomena. Scientists in this field conduct research concerned with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis, as well as how these interactions are regulated. A great variety of techniques are used to answer these questions.
Fluorescent imaging techniques, as well as electron microscopy, x-ray crystallography, NMR spectroscopy, atomic force microscopy (AFM) and small-angle scattering (SAS) both with X-rays and neutrons (SAXS/SANS), are often used to visualize structures of biological significance. Protein dynamics can be observed by neutron spin echo spectroscopy. Conformational change in structure can be measured using techniques such as dual polarisation interferometry, circular dichroism, SAXS and SANS. Direct manipulation of molecules using optical tweezers or AFM can also be used to monitor biological events where forces and distances are at the nanoscale. Molecular biophysicists often consider complex biological events as systems of interacting entities which can be understood e.g. through statistical mechanics, thermodynamics and chemical kinetics. By drawing knowledge and experimental techniques from a wide variety of disciplines, biophysicists are often able to directly observe, model or even manipulate the structures and interactions of individual molecules or complexes of molecules.
In addition to traditional (i.e. molecular and cellular) biophysical topics like structural biology or enzyme kinetics, modern biophysics encompasses an extraordinarily broad range of research, from bioelectronics to quantum biology involving both experimental and theoretical tools. It is becoming increasingly common for biophysicists to apply the models and experimental techniques derived from physics, as well as mathematics and statistics, to larger systems such as tissues, organs, populations and ecosystems. Biophysical models are used extensively in the study of electrical conduction in single neurons, as well as neural circuit analysis in both tissue and whole brain.
Medical physics, a branch of biophysics, is any application of physics to medicine or healthcare, ranging from radiology to microscopy and nanomedicine. For example, physicist Richard Feynman theorized about the future of nanomedicine. He wrote about the idea of a medical use for biological machines (see nanomachines). Feynman and Albert Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would be possible to (as Feynman put it) "swallow the doctor". The idea was discussed in Feynman's 1959 essay There's Plenty of Room at the Bottom.
== History ==
The studies of Luigi Galvani (1737–1798) laid groundwork for the later field of biophysics. Some of the earlier studies in biophysics were conducted in the 1840s by a group known as the Berlin school of physiologists. Among its members were pioneers such as Hermann von Helmholtz, Ernst Heinrich Weber, Carl F. W. Ludwig, and Johannes Peter Müller.
William T. Bovie (1882–1958) is credited as a leader of the field's further development in the mid-20th century. He was a leader in developing electrosurgery.
The popularity of the field rose when the book What Is Life? by Erwin Schrödinger was published. Since 1957, biophysicists have organized themselves into the Biophysical Society, which now has about 9,000 members around the world.
Some authors such as Robert Rosen criticize biophysics on the ground that the biophysical method does not take into account the specificity of biological phenomena.
== Focus as a subfield ==
While some colleges and universities have dedicated departments of biophysics, usually at the graduate level, many do not have university-level biophysics departments, instead having groups in related departments such as biochemistry, cell biology, chemistry, computer science, engineering, mathematics, medicine, molecular biology, neuroscience, pharmacology, physics, and physiology. Depending on the strengths of a department at a university, differing emphasis will be given to fields of biophysics. What follows is a list of examples of how each department applies its efforts toward the study of biophysics. This list is hardly all-inclusive. Nor does each subject of study belong exclusively to any particular department. Each academic institution makes its own rules and there is much overlap between departments.
Biology and molecular biology – Gene regulation, single protein dynamics, bioenergetics, patch clamping, biomechanics, virophysics.
Structural biology – Ångstrom-resolution structures of proteins, nucleic acids, lipids, carbohydrates, and complexes thereof.
Biochemistry and chemistry – biomolecular structure, siRNA, nucleic acid structure, structure-activity relationships.
Computer science – Neural networks, biomolecular and drug databases.
Computational chemistry – molecular dynamics simulation, molecular docking, quantum chemistry
Bioinformatics – sequence alignment, structural alignment, protein structure prediction
Mathematics – graph/network theory, population modeling, dynamical systems, phylogenetics.
Medicine – biophysical research that emphasizes medicine. Medical biophysics is a field closely related to physiology. It explains various aspects and systems of the body from a physical and mathematical perspective. Examples are fluid dynamics of blood flow, gas physics of respiration, radiation in diagnostics/treatment and much more. Biophysics is taught as a preclinical subject in many medical schools, mainly in Europe.
Neuroscience – studying neural networks experimentally (brain slicing) as well as theoretically (computer models), membrane permittivity.
Pharmacology and physiology – channelomics, electrophysiology, biomolecular interactions, cellular membranes, polyketides.
Physics – negentropy, stochastic processes, and the development of new physical techniques and instrumentation as well as their application.
Quantum biology – The field of quantum biology applies quantum mechanics to biological objects and problems, for example the study of decohered isomers that yield time-dependent base substitutions. These studies imply applications in quantum computing.
Agronomy and agriculture
Many biophysical techniques are unique to this field. Research efforts in biophysics are often initiated by scientists who were biologists, chemists or physicists by training.
== See also ==
== References ==
=== Sources ===
== External links ==
Biophysical Society
Journal of Physiology: 2012 virtual issue Biophysics and Beyond
bio-physics-wiki
Link archive of learning resources for students: biophysika.de (60% English, 40% German)
In physics, a Galilean transformation is used to transform between the coordinates of two reference frames which differ only by constant relative motion within the constructs of Newtonian physics. These transformations together with spatial rotations and translations in space and time form the inhomogeneous Galilean group (assumed throughout below). Without the translations in space and time the group is the homogeneous Galilean group. The Galilean group is the group of motions of Galilean relativity acting on the four dimensions of space and time, forming the Galilean geometry. This is the passive transformation point of view. In special relativity the homogeneous and inhomogeneous Galilean transformations are, respectively, replaced by the Lorentz transformations and Poincaré transformations; conversely, the group contraction in the classical limit c → ∞ of Poincaré transformations yields Galilean transformations.
The equations below are only physically valid in a Newtonian framework, and not applicable to coordinate systems moving relative to each other at speeds approaching the speed of light.
Galileo formulated these concepts in his description of uniform motion.
The topic was motivated by his description of the motion of a ball rolling down a ramp, by which he measured the numerical value for the acceleration of gravity near the surface of the Earth.
== Translation ==
Although the transformations are named for Galileo, it is the absolute time and space as conceived by Isaac Newton that provides their domain of definition. In essence, the Galilean transformations embody the intuitive notion of addition and subtraction of velocities as vectors.
The notation below describes the relationship under the Galilean transformation between the coordinates (x, y, z, t) and (x′, y′, z′, t′) of a single arbitrary event, as measured in two coordinate systems S and S′, in uniform relative motion (velocity v) in their common x and x′ directions, with their spatial origins coinciding at time t = t′ = 0:
x' = x − vt
y' = y
z' = z
t' = t.
Note that the last equation holds for all Galilean transformations up to addition of a constant, and expresses the assumption of a universal time independent of the relative motion of different observers.
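The coordinate equations above can be sketched directly as code; the function name and the plain-number interface are illustrative choices, not part of any standard library:

```python
# A Galilean boost along x, as in the equations above; coordinates and
# the velocity are plain numbers (units are whatever the caller uses).

def galilean_boost(x, y, z, t, v):
    """Map event (x, y, z, t) in frame S to (x', y', z', t') in frame S',
    where S' moves with velocity v along the common x-axis."""
    return (x - v * t, y, z, t)

# Velocities subtract as expected: a point moving at speed u in S
# moves at u - v in S'.
event1 = galilean_boost(0.0, 0.0, 0.0, 0.0, v=3.0)
event2 = galilean_boost(5.0, 0.0, 0.0, 1.0, v=3.0)  # x = u*t with u = 5
assert event1[0] == 0.0
assert (event2[0] - event1[0]) / 1.0 == 2.0   # u' = u - v = 5 - 3
assert event2[3] == 1.0                       # universal time: t' = t
```

The unchanged fourth component is exactly the assumption of universal time stated above.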
In the language of linear algebra, this transformation is considered a shear mapping, and is described with a matrix acting on a vector. With motion parallel to the x-axis, the transformation acts on only two components:
\begin{pmatrix} x' \\ t' \end{pmatrix} = \begin{pmatrix} 1 & -v \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ t \end{pmatrix}
Though matrix representations are not strictly necessary for Galilean transformation, they provide the means for direct comparison to transformation methods in special relativity.
== Galilean transformations ==
The Galilean symmetries can be uniquely written as the composition of a rotation, a translation and a uniform motion of spacetime. Let x represent a point in three-dimensional space, and t a point in one-dimensional time. A general point in spacetime is given by an ordered pair (x, t).
A uniform motion, with velocity v, is given by
(x, t) ↦ (x + tv, t),
where v ∈ R3. A translation is given by
(x, t) ↦ (x + a, t + s),
where a ∈ R3 and s ∈ R. A rotation is given by
(x, t) ↦ (Rx, t),
where R : R3 → R3 is an orthogonal transformation.
As a Lie group, the group of Galilean transformations has dimension 10.
== Galilean group ==
Two Galilean transformations G(R, v, a, s) and G(R' , v′, a′, s′) compose to form a third Galilean transformation,
G(R′, v′, a′, s′) ⋅ G(R, v, a, s) = G(R′ R, R′ v + v′, R′ a + a′ + v′ s, s′ + s).
The set of all Galilean transformations Gal(3) forms a group with composition as the group operation.
The group is sometimes represented as a matrix group with spacetime events (x, t, 1) as vectors where t is real and x ∈ R3 is a position in space.
The action is given by
\begin{pmatrix} R & v & a \\ 0 & 1 & s \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ t \\ 1 \end{pmatrix} = \begin{pmatrix} Rx + vt + a \\ t + s \\ 1 \end{pmatrix},
where s is real and v, x, a ∈ R3 and R is a rotation matrix.
The composition of transformations is then accomplished through matrix multiplication. In the discussion, care must be taken as to whether one restricts oneself to the connected component of the orthogonal transformations.
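The composition law can be checked against the matrix representation directly. The sketch below (plain Python; the rotation angles and parameter values are arbitrary test data) multiplies two 5×5 Galilean matrices and compares the result with the closed-form rule G(R′, v′, a′, s′) ⋅ G(R, v, a, s) = G(R′R, R′v + v′, R′a + a′ + v′s, s′ + s):

```python
import math

# Verify the Galilean composition law numerically: the product of two
# 5x5 matrices from the text equals the matrix of the closed-form rule.
# All numeric parameters below are arbitrary test values.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def G(R, v, a, s):
    """The 5x5 matrix acting on column vectors (x, t, 1)."""
    return [R[0] + [v[0], a[0]],
            R[1] + [v[1], a[1]],
            R[2] + [v[2], a[2]],
            [0, 0, 0, 1, s],
            [0, 0, 0, 0, 1]]

def apply(R, x):
    return [sum(R[i][j] * x[j] for j in range(3)) for i in range(3)]

R1, v1, a1, s1 = rot_z(0.3), [1.0, 2.0, 0.0], [0.5, 0.0, 1.0], 2.0
R2, v2, a2, s2 = rot_z(-0.7), [0.0, 1.0, 3.0], [1.0, 1.0, 0.0], -1.0

# Left side: the matrix product G(R2,v2,a2,s2) . G(R1,v1,a1,s1).
left = matmul(G(R2, v2, a2, s2), G(R1, v1, a1, s1))

# Right side: the closed-form composition rule from the text.
R = matmul(R2, R1)
v = [apply(R2, v1)[i] + v2[i] for i in range(3)]
a = [apply(R2, a1)[i] + a2[i] + v2[i] * s1 for i in range(3)]
right = G(R, v, a, s2 + s1)

assert all(abs(left[i][j] - right[i][j]) < 1e-12
           for i in range(5) for j in range(5))
```

The a-component R′a + a′ + v′s shows why the product is a semidirect rather than direct product: translations get mixed by the rotation and boost of the other factor.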
Gal(3) has named subgroups. The identity component is denoted SGal(3).
Let m represent the transformation matrix with parameters v, R, s, a:
{m : R = I3}, anisotropic transformations.
{m : s = 0}, isochronous transformations.
{m : s = 0, v = 0}, spatial Euclidean transformations.
G1 = {m : s = 0, a = 0}, uniformly special transformations / homogeneous transformations, isomorphic to Euclidean transformations.
G2 = {m : v = 0, R = I3} ≅ (R4, +), shifts of origin / translation in Newtonian spacetime.
G3 = {m : s = 0, a = 0, v = 0} ≅ SO(3), rotations (of reference frame) (see SO(3)), a compact group.
G4 = {m : s = 0, a = 0, R = I3} ≅ (R3, +), uniform frame motions / boosts.
The parameters s, v, R, a span ten dimensions. Since the transformations depend continuously on s, v, R, a, Gal(3) is a continuous group, also called a topological group.
The structure of Gal(3) can be understood by reconstruction from subgroups. The semidirect product combination (A ⋊ B) of groups is required.
G2 ◁ SGal(3) (G2 is a normal subgroup)
SGal(3) ≅ G2 ⋊ G1
G4 ⊴ G1
G1 ≅ G4 ⋊ G3
SGal(3) ≅ R4 ⋊ (R3 ⋊ SO(3)).
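The claim that the spacetime translations G2 form a normal subgroup can be verified numerically: conjugating a pure translation by an arbitrary Galilean transformation yields another pure translation. A sketch in plain Python, with arbitrary test parameters:

```python
import math

# Numeric check that G2 (pure spacetime translations) is normal:
# g . T . g^{-1} is again a pure translation for any Galilean g.
# All parameter values below are arbitrary test data.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(5)) for j in range(5)]
            for i in range(5)]

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def G(R, v, a, s):
    return [R[0] + [v[0], a[0]],
            R[1] + [v[1], a[1]],
            R[2] + [v[2], a[2]],
            [0, 0, 0, 1, s],
            [0, 0, 0, 0, 1]]

def inverse(R, v, a, s):
    """Closed-form inverse: x = R^T x' - (R^T v) t' + R^T(v s - a), t = t' - s."""
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    Rtv = [sum(Rt[i][j] * v[j] for j in range(3)) for i in range(3)]
    Rta = [sum(Rt[i][j] * (v[j] * s - a[j]) for j in range(3)) for i in range(3)]
    return G(Rt, [-x for x in Rtv], Rta, -s)

R, v, a, s = rot_z(0.4), [2.0, -1.0, 0.5], [1.0, 0.0, 3.0], 1.5
g = G(R, v, a, s)
g_inv = inverse(R, v, a, s)
trans = G(rot_z(0.0), [0.0, 0.0, 0.0], [4.0, -2.0, 1.0], 0.7)  # element of G2

conj = matmul(matmul(g, trans), g_inv)

# The rotation block is the identity and the velocity column vanishes,
# so the conjugate is again a pure spacetime translation.
for i in range(3):
    for j in range(3):
        assert abs(conj[i][j] - (1.0 if i == j else 0.0)) < 1e-9
    assert abs(conj[i][3]) < 1e-9   # no boost component survives
```

Working the conjugation out symbolically gives g T(b, τ) g⁻¹ = T(Rb + vτ, τ): the translation part is rotated and sheared, but stays inside G2, which is what normality means.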
== Origin in group contraction ==
The Lie algebra of the Galilean group is spanned by H, Pi, Ci and Lij (an antisymmetric tensor), subject to commutation relations, where
[H, P_i] = 0
[P_i, P_j] = 0
[L_{ij}, H] = 0
[C_i, C_j] = 0
[L_{ij}, L_{kl}] = i(\delta_{ik} L_{jl} - \delta_{il} L_{jk} - \delta_{jk} L_{il} + \delta_{jl} L_{ik})
[L_{ij}, P_k] = i(\delta_{ik} P_j - \delta_{jk} P_i)
[L_{ij}, C_k] = i(\delta_{ik} C_j - \delta_{jk} C_i)
[C_i, H] = iP_i
[C_i, P_j] = 0.
H is the generator of time translations (Hamiltonian), P_i is the generator of translations (momentum operator), C_i is the generator of rotationless Galilean transformations (Galilean boosts), and L_ij stands for a generator of rotations (angular momentum operator).
This Lie algebra is seen to be a special classical limit of the algebra of the Poincaré group, in the limit c → ∞. Technically, the Galilean group is a celebrated group contraction of the Poincaré group (which, in turn, is a group contraction of the de Sitter group SO(1,4)).
Formally, renaming the generators of momentum and boost of the latter as in
P0 ↦ H / c
Ki ↦ c ⋅ Ci,
where c is the speed of light (or any unbounded function thereof), the commutation relations (structure constants) in the limit c → ∞ take on the relations of the former.
Generators of time translations and rotations are identified. Also note the group invariants Lmn Lmn and Pi Pi.
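To make the contraction concrete, here is a short worked check on two representative commutators. It assumes the common Poincaré conventions [K_i, P_0] = iP_i and [K_i, P_j] = iδ_ij P_0; these conventions are an assumption for illustration, and signs vary across references.

```latex
% Substituting P_0 = H/c and K_i = c\,C_i into two Poincare relations:
[K_i, P_0] = i P_i
  \;\Longrightarrow\; [c\,C_i,\, H/c] = i P_i
  \;\Longrightarrow\; [C_i, H] = i P_i
  \quad\text{(survives the limit unchanged),}
\\[4pt]
[K_i, P_j] = i\,\delta_{ij} P_0
  \;\Longrightarrow\; [c\,C_i,\, P_j] = i\,\delta_{ij}\, H/c
  \;\Longrightarrow\; [C_i, P_j] = i\,\delta_{ij}\,\frac{H}{c^{2}}
  \;\xrightarrow{\;c\to\infty\;}\; 0 .
```

The first relation reproduces [C_i, H] = iP_i above, while the second degenerates in the limit to the Galilean [C_i, P_j] = 0.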
In matrix form, for d = 3, one may consider the regular representation (embedded in GL(5; R), from which it could be derived by a single group contraction, bypassing the Poincaré group),
i
H
=
(
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
)
,
{\displaystyle iH=\left({\begin{array}{ccccc}0&0&0&0&0\\0&0&0&0&0\\0&0&0&0&0\\0&0&0&0&1\\0&0&0&0&0\\\end{array}}\right),\qquad }
i
a
→
⋅
P
→
=
(
0
0
0
0
a
1
0
0
0
0
a
2
0
0
0
0
a
3
0
0
0
0
0
0
0
0
0
0
)
,
{\displaystyle i{\vec {a}}\cdot {\vec {P}}=\left({\begin{array}{ccccc}0&0&0&0&a_{1}\\0&0&0&0&a_{2}\\0&0&0&0&a_{3}\\0&0&0&0&0\\0&0&0&0&0\\\end{array}}\right),\qquad }
i
v
→
⋅
C
→
=
(
0
0
0
v
1
0
0
0
0
v
2
0
0
0
0
v
3
0
0
0
0
0
0
0
0
0
0
0
)
,
{\displaystyle i{\vec {v}}\cdot {\vec {C}}=\left({\begin{array}{ccccc}0&0&0&v_{1}&0\\0&0&0&v_{2}&0\\0&0&0&v_{3}&0\\0&0&0&0&0\\0&0&0&0&0\\\end{array}}\right),\qquad }
{\displaystyle i\theta _{i}\epsilon ^{ijk}L_{jk}=\left({\begin{array}{ccccc}0&\theta _{3}&-\theta _{2}&0&0\\-\theta _{3}&0&\theta _{1}&0&0\\\theta _{2}&-\theta _{1}&0&0&0\\0&0&0&0&0\\0&0&0&0&0\\\end{array}}\right)~.}
The infinitesimal group element is then
{\displaystyle G(R,{\vec {v}},{\vec {a}},s)=1\!\!1_{5}+\left({\begin{array}{ccccc}0&\theta _{3}&-\theta _{2}&v_{1}&a_{1}\\-\theta _{3}&0&\theta _{1}&v_{2}&a_{2}\\\theta _{2}&-\theta _{1}&0&v_{3}&a_{3}\\0&0&0&0&s\\0&0&0&0&0\\\end{array}}\right)+\cdots ~.}
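The matrix representation above can be checked numerically. The sketch below (an illustration of the text, with helper names of my own choosing) builds the four generator matrices with NumPy and verifies that they reproduce the algebra, for instance that the commutator of a boost matrix with the time-translation matrix is the corresponding translation matrix, and that a finite boost acts on (x, t, 1) as x → x + vt:

```python
import numpy as np

def H():
    """Time-translation generator: couples the t slot to the affine 1."""
    m = np.zeros((5, 5))
    m[3, 4] = 1.0
    return m

def P(a):
    """Spatial translation by the 3-vector a."""
    m = np.zeros((5, 5))
    m[0:3, 4] = a
    return m

def C(v):
    """Galilean boost with velocity v."""
    m = np.zeros((5, 5))
    m[0:3, 3] = v
    return m

def L(th):
    """Rotation generator with parameters th = (theta1, theta2, theta3)."""
    t1, t2, t3 = th
    m = np.zeros((5, 5))
    m[0:3, 0:3] = [[0.0, t3, -t2],
                   [-t3, 0.0, t1],
                   [t2, -t1, 0.0]]
    return m

def comm(A, B):
    return A @ B - B @ A

v = np.array([1.0, 2.0, 3.0])
a = np.array([0.5, -1.0, 4.0])
th = np.array([0.3, -0.2, 0.5])

# [C_i, H] = i P_i: at the matrix level, [C(v), H] = P(v)
assert np.allclose(comm(C(v), H()), P(v))
# [C_i, P_j] = 0: no central term in the unextended Galilean algebra
assert np.allclose(comm(C(v), P(a)), np.zeros((5, 5)))
# Rotations act on the translation parameters as a vector
assert np.allclose(comm(L(th), P(a)), P(np.cross(a, th)))
# A finite boost: exp(C(v)) = 1 + C(v) since C(v)^2 = 0, so x -> x + v t
g = np.eye(5) + C(v)
xt1 = np.array([1.0, 1.0, 1.0, 2.0, 1.0])     # (x, t, 1) with t = 2
assert np.allclose(g @ xt1, np.concatenate([xt1[:3] + v * 2.0, [2.0, 1.0]]))
```

The factors of i in the abstract commutation relations are absorbed into the real matrices here, which is why the brackets close without explicit imaginary units.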
== Central extension of the Galilean group ==
One may consider a central extension of the Lie algebra of the Galilean group, spanned by H′, P′i, C′i, L′ij and an operator M:
The so-called Bargmann algebra is obtained by imposing
{\displaystyle [C'_{i},P'_{j}]=iM\delta _{ij}}
such that M lies in the center, i.e. commutes with all other operators.
In full, this algebra is given as
{\displaystyle [H',P'_{i}]=0}
{\displaystyle [P'_{i},P'_{j}]=0}
{\displaystyle [L'_{ij},H']=0}
{\displaystyle [C'_{i},C'_{j}]=0}
{\displaystyle [L'_{ij},L'_{kl}]=i[\delta _{ik}L'_{jl}-\delta _{il}L'_{jk}-\delta _{jk}L'_{il}+\delta _{jl}L'_{ik}]}
{\displaystyle [L'_{ij},P'_{k}]=i[\delta _{ik}P'_{j}-\delta _{jk}P'_{i}]}
{\displaystyle [L'_{ij},C'_{k}]=i[\delta _{ik}C'_{j}-\delta _{jk}C'_{i}]}
{\displaystyle [C'_{i},H']=iP'_{i}}
and finally
{\displaystyle [C'_{i},P'_{j}]=iM\delta _{ij}~.}
where the new parameter M shows up.
This extension, and the projective representations it enables, are determined by the group's cohomology.
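One way to see where the central charge M comes from is the standard one-particle quantum realization, with P_i = −i ∂_i and C_i = m x_i − t P_i (taking ℏ = 1); this realization is an illustration I am adding, not part of the text above. A SymPy check, applying the operators to a test wavefunction, confirms that [C_x, P_x] = i m, so the central charge M is the particle's mass:

```python
import sympy as sp

x, y, t, m = sp.symbols('x y t m', real=True)
f = sp.Function('f')(x, y)   # test wavefunction

# Momentum and boost as differential operators (hbar = 1), a hypothetical
# one-particle realization: P_i = -i d/dx_i,  C_i = m x_i - t P_i
def Px(g): return -sp.I * sp.diff(g, x)
def Py(g): return -sp.I * sp.diff(g, y)
def Cx(g): return m * x * g - t * Px(g)

# [C_x, P_x] f = i m f: the Bargmann central charge M equals the mass m
lhs = sp.expand(Cx(Px(f)) - Px(Cx(f)))
assert sp.simplify(lhs - sp.I * m * f) == 0
# [C_x, P_y] f = 0 for mismatched indices
assert sp.simplify(sp.expand(Cx(Py(f)) - Py(Cx(f)))) == 0
```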
== See also ==
Galilean invariance
Representation theory of the Galilean group
Galilei-covariant tensor formulation
Poincaré group
Lorentz group
Lagrangian and Eulerian coordinates
== Notes ==
== References ==
Arnold, V. I. (1989). Mathematical Methods of Classical Mechanics (2 ed.). Springer-Verlag. p. 6. ISBN 0-387-96890-3.
Bargmann, V. (1954). "On Unitary Ray Representations of Continuous Groups". Annals of Mathematics. 2. 59 (1): 1–46. doi:10.2307/1969831. JSTOR 1969831.
Copernicus, Nicolaus; Kepler, Johannes; Galilei, Galileo; Newton, Isaac; Einstein, Albert (2002). Hawking, Stephen (ed.). On the Shoulders of Giants: The Great Works of Physics and Astronomy. Philadelphia, London: Running Press. pp. 515–520. ISBN 0-7624-1348-4.
Galilei, Galileo (1638i). Discorsi e Dimostrazioni Matematiche, intorno á due nuoue scienze (in Italian). Leiden: Elsevier. pp. 191–196.
Galilei, Galileo (1638e). Discourses and Mathematical Demonstrations Relating to Two New Sciences [Discorsi e Dimostrazioni Matematiche Intorno a Due Nuove Scienze]. Translated to English 1914 by Henry Crew and Alfonso de Salvio.
Gilmore, Robert (2006). Lie Groups, Lie Algebras, and Some of Their Applications. Dover Books on Mathematics. Dover Publications. ISBN 0486445291.
Hoffmann, Banesh (1983), Relativity and Its Roots, Scientific American Books, ISBN 0-486-40676-8, Chapter 5, p. 83
Lerner, Lawrence S. (1996), Physics for Scientists and Engineers, vol. 2, Jones and Bartlett Publishers, Inc, ISBN 0-7637-0460-1, Chapter 38 §38.2, p. 1046,1047
Mould, Richard A. (2002), Basic relativity, Springer-Verlag, ISBN 0-387-95210-1, Chapter 2 §2.6, p. 42
Nadjafikhah, Mehdi; Forough, Ahmad-Reza (2009). "Galilean Geometry of Motions" (PDF). Applied Sciences. 11: 91–105.
Serway, Raymond A.; Jewett, John W. (2006), Principles of Physics: A Calculus-based Text (4th ed.), Brooks/Cole - Thomson Learning, Bibcode:2006ppcb.book.....J, ISBN 0-534-49143-X, Chapter 9 §9.1, p. 261
In physics, physical chemistry and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids – liquids and gases. It has several subdisciplines, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of water and other liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and moments on aircraft, determining the mass flow rate of petroleum through pipelines, predicting weather patterns, understanding nebulae in interstellar space, understanding large scale geophysical flows involving oceans/atmosphere and modelling fission weapon detonation.
Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves the calculation of various properties of the fluid, such as flow velocity, pressure, density, and temperature, as functions of space and time.
Before the twentieth century, "hydrodynamics" was synonymous with fluid dynamics. This is still reflected in names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability, both of which can also be applied to gases.
== Equations ==
The foundational axioms of fluid dynamics are the conservation laws, specifically, conservation of mass, conservation of linear momentum, and conservation of energy (also known as the first law of thermodynamics). These are based on classical mechanics and are modified in quantum mechanics and general relativity. They are expressed using the Reynolds transport theorem.
In addition to the above, fluids are assumed to obey the continuum assumption. At small scale, all fluids are composed of molecules that collide with one another and solid objects. However, the continuum assumption assumes that fluids are continuous, rather than discrete. Consequently, it is assumed that properties such as density, pressure, temperature, and flow velocity are well-defined at infinitesimally small points in space and vary continuously from one point to another. The fact that the fluid is made up of discrete molecules is ignored.
For fluids that are sufficiently dense to be a continuum, do not contain ionized species, and have flow velocities that are small in relation to the speed of light, the momentum equations for Newtonian fluids are the Navier–Stokes equations—a non-linear set of differential equations that describe the flow of a fluid whose stress depends linearly on flow velocity gradients and pressure. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in several ways, all of which make them easier to solve. Some of the simplifications allow some simple fluid dynamics problems to be solved in closed form.
In addition to the mass, momentum, and energy conservation equations, a thermodynamic equation of state that gives the pressure as a function of other thermodynamic variables is required to completely describe the problem. An example of this would be the perfect gas equation of state:
{\displaystyle p={\frac {\rho R_{u}T}{M}}}
where p is pressure, ρ is density, and T is the absolute temperature, while Ru is the gas constant and M is molar mass for a particular gas. A constitutive relation may also be useful.
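As a quick numerical illustration of this equation of state (my own example, not from the text): solving for density gives ρ = pM/(R_u T), which for air at sea-level standard conditions recovers the familiar value of about 1.225 kg/m³.

```python
R_u = 8.314          # universal gas constant, J/(mol K)

def density(p, T, M):
    """Perfect-gas density from p = rho R_u T / M, i.e. rho = p M / (R_u T)."""
    return p * M / (R_u * T)

# Sea-level standard atmosphere for air (molar mass M ~ 0.02896 kg/mol)
rho_air = density(p=101325.0, T=288.15, M=0.02896)
assert abs(rho_air - 1.225) < 0.005   # ~1.225 kg/m^3
```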
=== Conservation laws ===
Three conservation laws are used to solve fluid dynamics problems, and may be written in integral or differential form. The conservation laws may be applied to a region of the flow called a control volume. A control volume is a discrete volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum, or energy within the control volume. Differential formulations of the conservation laws apply Stokes' theorem to yield an expression that may be interpreted as the integral form of the law applied to an infinitesimally small volume (at a point) within the flow.
Mass continuity (conservation of mass) The rate of change of fluid mass inside a control volume must be equal to the net rate of fluid flow into the volume. Physically, this statement requires that mass is neither created nor destroyed in the control volume, and can be translated into the integral form of the continuity equation:
{\displaystyle {\frac {\partial }{\partial t}}\iiint _{V}\rho \,dV=-\oiint _{S}\rho \mathbf {u} \cdot d\mathbf {S} }
Above, ρ is the fluid density, u is the flow velocity vector, and t is time. The left-hand side of the above expression is the rate of increase of mass within the volume and contains a triple integral over the control volume, whereas the right-hand side contains an integration over the surface of the control volume of mass convected into the system. Mass flow into the system is accounted as positive, and since the normal vector to the surface is opposite to the sense of flow into the system the term is negated. The differential form of the continuity equation is, by the divergence theorem:
{\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot (\rho \mathbf {u} )=0}
Conservation of momentum
Newton's second law of motion applied to a control volume, is a statement that any change in momentum of the fluid within that control volume will be due to the net flow of momentum into the volume and the action of external forces acting on the fluid within the volume.
{\displaystyle {\frac {\partial }{\partial t}}\iiint _{V}\rho \mathbf {u} \,dV=-\oiint _{S}(\rho \mathbf {u} \cdot d\mathbf {S} )\mathbf {u} -\oiint _{S}p\,d\mathbf {S} +\iiint _{V}\rho \mathbf {f} _{\text{body}}\,dV+\mathbf {F} _{\text{surf}}}
In the above integral formulation of this equation, the term on the left is the net change of momentum within the volume. The first term on the right is the net rate at which momentum is convected into the volume. The second term on the right is the force due to pressure on the volume's surfaces. The first two terms on the right are negated since momentum entering the system is accounted as positive, and the normal is opposite the direction of the velocity u and pressure forces. The third term on the right is the net acceleration of the mass within the volume due to any body forces (here represented by fbody). Surface forces, such as viscous forces, are represented by Fsurf, the net force due to shear forces acting on the volume surface. The momentum balance can also be written for a moving control volume.
The following is the differential form of the momentum conservation equation. Here, the volume is reduced to an infinitesimally small point, and both surface and body forces are accounted for in one total force, F. For example, F may be expanded into an expression for the frictional and gravitational forces acting at a point in a flow.
{\displaystyle {\frac {D\mathbf {u} }{Dt}}=\mathbf {F} -{\frac {\nabla p}{\rho }}}
In aerodynamics, air is assumed to be a Newtonian fluid, which posits a linear relationship between the shear stress (due to internal friction forces) and the rate of strain of the fluid. The equation above is a vector equation in a three-dimensional flow, but it can be expressed as three scalar equations in three coordinate directions. The conservation of momentum equations for the compressible, viscous flow case is called the Navier–Stokes equations.
Conservation of energy
Although energy can be converted from one form to another, the total energy in a closed system remains constant.
{\displaystyle \rho {\frac {Dh}{Dt}}={\frac {Dp}{Dt}}+\nabla \cdot \left(k\nabla T\right)+\Phi }
Above, h is the specific enthalpy, k is the thermal conductivity of the fluid, T is temperature, and Φ is the viscous dissipation function. The viscous dissipation function governs the rate at which the mechanical energy of the flow is converted to heat. The second law of thermodynamics requires that the dissipation term is always positive: viscosity cannot create energy within the control volume. The expression on the left side is a material derivative.
== Classifications ==
=== Compressible versus incompressible flow ===
All fluids are compressible to an extent; that is, changes in pressure or temperature cause changes in density. However, in many situations the changes in pressure and temperature are sufficiently small that the changes in density are negligible. In this case the flow can be modelled as an incompressible flow. Otherwise the more general compressible flow equations must be used.
Mathematically, incompressibility is expressed by saying that the density ρ of a fluid parcel does not change as it moves in the flow field, that is,
{\displaystyle {\frac {\mathrm {D} \rho }{\mathrm {D} t}}=0\,,}
where D/Dt is the material derivative, which is the sum of local and convective derivatives. This additional constraint simplifies the governing equations, especially in the case when the fluid has a uniform density.
For flow of gases, to determine whether to use compressible or incompressible fluid dynamics, the Mach number of the flow is evaluated. As a rough guide, compressible effects can be ignored at Mach numbers below approximately 0.3. For liquids, whether the incompressible assumption is valid depends on the fluid properties (specifically the critical pressure and temperature of the fluid) and the flow conditions (how close to the critical pressure the actual flow pressure becomes). Acoustic problems always require allowing compressibility, since sound waves are compression waves involving changes in pressure and density of the medium through which they propagate.
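The Mach-number test described above is easy to apply in code. The helper below (my own sketch; it assumes a perfect gas, for which the speed of sound is a = √(γRT), with γ = 1.4 and R = 287 J/(kg K) for air):

```python
import math

def mach_number(speed, T, gamma=1.4, R=287.0):
    """Mach number M = v / a, with sound speed a = sqrt(gamma R T) for a perfect gas."""
    a = math.sqrt(gamma * R * T)
    return speed / a

# In sea-level air (T = 288.15 K) the sound speed is about 340 m/s:
M = mach_number(100.0, 288.15)
assert M < 0.3                            # compressibility negligible here
assert mach_number(200.0, 288.15) > 0.3   # here the compressible equations are needed
```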
=== Newtonian versus non-Newtonian fluids ===
All fluids, except superfluids, are viscous, meaning that they exert some resistance to deformation: neighbouring parcels of fluid moving at different velocities exert viscous forces on each other. The velocity gradient is referred to as a strain rate; it has dimensions T−1. Isaac Newton showed that for many familiar fluids such as water and air, the stress due to these viscous forces is linearly related to the strain rate. Such fluids are called Newtonian fluids. The coefficient of proportionality is called the fluid's viscosity; for Newtonian fluids, it is a fluid property that is independent of the strain rate.
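Newton's linear law from the paragraph above, τ = μ (du/dy), is a one-liner; the numbers below (water's viscosity of roughly 10⁻³ Pa s and an arbitrary strain rate I picked for illustration) show the stress scale involved:

```python
def shear_stress(mu, strain_rate):
    """Newtonian fluid: stress is linear in the strain rate, tau = mu * du/dy."""
    return mu * strain_rate

# Water (mu ~ 1.0e-3 Pa s) sheared at du/dy = 100 s^-1:
tau = shear_stress(1.0e-3, 100.0)
assert abs(tau - 0.1) < 1e-12   # 0.1 Pa; mu is independent of the strain rate
```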
Non-Newtonian fluids have a more complicated, non-linear stress-strain behaviour. The sub-discipline of rheology describes the stress-strain behaviours of such fluids, which include emulsions and slurries, some viscoelastic materials such as blood and some polymers, and sticky liquids such as latex, honey and lubricants.
=== Inviscid versus viscous versus Stokes flow ===
The dynamic of fluid parcels is described with the help of Newton's second law. An accelerating parcel of fluid is subject to inertial effects.
The Reynolds number is a dimensionless quantity which characterises the magnitude of inertial effects compared to the magnitude of viscous effects. A low Reynolds number (Re ≪ 1) indicates that viscous forces are very strong compared to inertial forces. In such cases, inertial forces are sometimes neglected; this flow regime is called Stokes or creeping flow.
In contrast, high Reynolds numbers (Re ≫ 1) indicate that the inertial effects have more effect on the velocity field than the viscous (friction) effects. In high Reynolds number flows, the flow is often modeled as an inviscid flow, an approximation in which viscosity is completely neglected. Eliminating viscosity allows the Navier–Stokes equations to be simplified into the Euler equations. The integration of the Euler equations along a streamline in an inviscid flow yields Bernoulli's equation. When, in addition to being inviscid, the flow is irrotational everywhere, Bernoulli's equation can completely describe the flow everywhere. Such flows are called potential flows, because the velocity field may be expressed as the gradient of a potential energy expression.
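The two regimes can be distinguished with a direct Reynolds-number estimate. The examples below are my own illustrative numbers (a micron-sized particle settling in water, and a roughly 7 m wing chord in sea-level air, using μ ≈ 1.8 × 10⁻⁵ Pa s for air):

```python
def reynolds(rho, v, L, mu):
    """Reynolds number: ratio of inertial to viscous force scales, rho v L / mu."""
    return rho * v * L / mu

# A 1-micron particle settling at 1 um/s in water: deep in the Stokes regime
assert reynolds(1000.0, 1e-6, 1e-6, 1e-3) < 1e-3
# A transport-aircraft wing chord (~7 m) at ~70 m/s in sea-level air:
Re_wing = reynolds(1.225, 70.0, 7.0, 1.8e-5)
assert 1e7 < Re_wing < 1e8   # tens of millions: inertia-dominated, turbulent
```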
This idea can work fairly well when the Reynolds number is high. However, problems such as those involving solid boundaries may require that the viscosity be included. Viscosity cannot be neglected near solid boundaries because the no-slip condition generates a thin region of large strain rate, the boundary layer, in which viscosity effects dominate and which thus generates vorticity. Therefore, to calculate net forces on bodies (such as wings), viscous flow equations must be used: inviscid flow theory fails to predict drag forces, a limitation known as d'Alembert's paradox.
A commonly used model, especially in computational fluid dynamics, is to use two flow models: the Euler equations away from the body, and boundary layer equations in a region close to the body. The two solutions can then be matched with each other, using the method of matched asymptotic expansions.
=== Steady versus unsteady flow ===
A flow that is not a function of time is called steady flow. Steady-state flow refers to the condition where the fluid properties at a point in the system do not change over time. Time dependent flow is known as unsteady (also called transient). Whether a particular flow is steady or unsteady can depend on the chosen frame of reference. For instance, laminar flow over a sphere is steady in the frame of reference that is stationary with respect to the sphere. In a frame of reference that is stationary with respect to a background flow, the flow is unsteady.
Turbulent flows are unsteady by definition. A turbulent flow can, however, be statistically stationary. The random velocity field U(x, t) is statistically stationary if all statistics are invariant under a shift in time. This roughly means that all statistical properties are constant in time. Often, the mean field is the object of interest, and this is constant too in a statistically stationary flow.
Steady flows are often more tractable than otherwise similar unsteady flows. The governing equations of a steady problem have one dimension fewer (time) than the governing equations of the same problem without taking advantage of the steadiness of the flow field.
=== Laminar versus turbulent flow ===
Turbulence is flow characterized by recirculation, eddies, and apparent randomness. Flow in which turbulence is not exhibited is called laminar. The presence of eddies or recirculation alone does not necessarily indicate turbulent flow—these phenomena may be present in laminar flow as well. Mathematically, turbulent flow is often represented via a Reynolds decomposition, in which the flow is broken down into the sum of an average component and a perturbation component.
It is believed that turbulent flows can be described well through the use of the Navier–Stokes equations. Direct numerical simulation (DNS), based on the Navier–Stokes equations, makes it possible to simulate turbulent flows at moderate Reynolds numbers. Restrictions depend on the power of the computer used and the efficiency of the solution algorithm. The results of DNS have been found to agree well with experimental data for some flows.
Most flows of interest have Reynolds numbers much too high for DNS to be a viable option, given the state of computational power for the next few decades. Any flight vehicle large enough to carry a human (L > 3 m), moving faster than 20 m/s (72 km/h; 45 mph) is well beyond the limit of DNS simulation (Re = 4 million). Transport aircraft wings (such as on an Airbus A300 or Boeing 747) have Reynolds numbers of 40 million (based on the wing chord dimension). Solving these real-life flow problems requires turbulence models for the foreseeable future. Reynolds-averaged Navier–Stokes equations (RANS) combined with turbulence modelling provides a model of the effects of the turbulent flow. Such a modelling mainly provides the additional momentum transfer by the Reynolds stresses, although the turbulence also enhances the heat and mass transfer. Another promising methodology is large eddy simulation (LES), especially in the form of detached eddy simulation (DES) — a combination of LES and RANS turbulence modelling.
=== Other approximations ===
There are a large number of other possible approximations to fluid dynamic problems. Some of the more commonly used are listed below.
The Boussinesq approximation neglects variations in density except to calculate buoyancy forces. It is often used in free convection problems where density changes are small.
Lubrication theory and Hele–Shaw flow exploits the large aspect ratio of the domain to show that certain terms in the equations are small and so can be neglected.
Slender-body theory is a methodology used in Stokes flow problems to estimate the force on, or flow field around, a long slender object in a viscous fluid.
The shallow-water equations can be used to describe a layer of relatively inviscid fluid with a free surface, in which surface gradients are small.
Darcy's law is used for flow in porous media, and works with variables averaged over several pore-widths.
In rotating systems, the quasi-geostrophic equations assume an almost perfect balance between pressure gradients and the Coriolis force. It is useful in the study of atmospheric dynamics.
== Multidisciplinary types ==
=== Flows according to Mach regimes ===
While many flows (such as flow of water through a pipe) occur at low Mach numbers (subsonic flows), many flows of practical interest in aerodynamics or in turbomachines occur at high fractions of M = 1 (transonic flows) or in excess of it (supersonic or even hypersonic flows). New phenomena occur at these regimes such as instabilities in transonic flow, shock waves for supersonic flow, or non-equilibrium chemical behaviour due to ionization in hypersonic flows. In practice, each of those flow regimes is treated separately.
=== Reactive versus non-reactive flows ===
Reactive flows are flows that are chemically reactive, which finds its applications in many areas, including combustion (IC engine), propulsion devices (rockets, jet engines, and so on), detonations, fire and safety hazards, and astrophysics. In addition to conservation of mass, momentum and energy, conservation of individual species (for example, mass fraction of methane in methane combustion) need to be derived, where the production/depletion rate of any species are obtained by simultaneously solving the equations of chemical kinetics.
=== Magnetohydrodynamics ===
Magnetohydrodynamics is the multidisciplinary study of the flow of electrically conducting fluids in electromagnetic fields. Examples of such fluids include plasmas, liquid metals, and salt water. The fluid flow equations are solved simultaneously with Maxwell's equations of electromagnetism.
=== Relativistic fluid dynamics ===
Relativistic fluid dynamics studies the macroscopic and microscopic fluid motion at large velocities comparable to the velocity of light. This branch of fluid dynamics accounts for the relativistic effects both from the special theory of relativity and the general theory of relativity. The governing equations are derived in Riemannian geometry for Minkowski spacetime.
=== Fluctuating hydrodynamics ===
This branch of fluid dynamics augments the standard hydrodynamic equations with stochastic fluxes that model thermal fluctuations. As formulated by Landau and Lifshitz, a white noise contribution obtained from the fluctuation-dissipation theorem of statistical mechanics is added to the viscous stress tensor and heat flux.
== Terminology ==
The concept of pressure is central to the study of both fluid statics and fluid dynamics. A pressure can be identified for every point in a body of fluid, regardless of whether the fluid is in motion or not. Pressure can be measured using an aneroid, Bourdon tube, mercury column, or various other methods.
Some of the terminology that is necessary in the study of fluid dynamics is not found in other similar areas of study. In particular, some of the terminology used in fluid dynamics is not used in fluid statics.
=== Characteristic numbers ===
=== Terminology in incompressible fluid dynamics ===
The concepts of total pressure and dynamic pressure arise from Bernoulli's equation and are significant in the study of all fluid flows. (These two pressures are not pressures in the usual sense—they cannot be measured using an aneroid, Bourdon tube or mercury column.) To avoid potential ambiguity when referring to pressure in fluid dynamics, many authors use the term static pressure to distinguish it from total pressure and dynamic pressure. Static pressure is identical to pressure and can be identified for every point in a fluid flow field.
A point in a fluid flow where the flow has come to rest (that is to say, speed is equal to zero adjacent to some solid body immersed in the fluid flow) is of special significance. It is of such importance that it is given a special name—a stagnation point. The static pressure at the stagnation point is of special significance and is given its own name—stagnation pressure. In incompressible flows, the stagnation pressure at a stagnation point is equal to the total pressure throughout the flow field.
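For incompressible flow, the relationship between these pressures follows from Bernoulli's equation: total (stagnation) pressure equals static pressure plus the dynamic pressure ½ρv². A small sketch (my own illustrative numbers, sea-level air at 50 m/s):

```python
def stagnation_pressure(p_static, rho, v):
    """Incompressible Bernoulli: total (stagnation) pressure p0 = p + rho v^2 / 2."""
    return p_static + 0.5 * rho * v * v

# Sea-level air (rho = 1.225 kg/m^3) brought to rest from 50 m/s:
p0 = stagnation_pressure(101325.0, 1.225, 50.0)
assert abs(p0 - 101325.0 - 1531.25) < 1e-9   # dynamic pressure q = 1531.25 Pa
```

This difference between total and static pressure is exactly what a Pitot-static probe measures to infer airspeed.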
=== Terminology in compressible fluid dynamics ===
In a compressible fluid, it is convenient to define the total conditions (also called stagnation conditions) for all thermodynamic state properties (such as total temperature, total enthalpy, total speed of sound). These total flow conditions are a function of the fluid velocity and have different values in frames of reference with different motion.
To avoid potential ambiguity when referring to the properties of the fluid associated with the state of the fluid rather than its motion, the prefix "static" is commonly used (such as static temperature and static enthalpy). Where there is no prefix, the fluid property is the static condition (so "density" and "static density" mean the same thing). The static conditions are independent of the frame of reference.
Because the total flow conditions are defined by isentropically bringing the fluid to rest, there is no need to distinguish between total entropy and static entropy as they are always equal by definition. As such, entropy is most commonly referred to as simply "entropy".
== Applications ==
== See also ==
List of publications in fluid dynamics
List of fluid dynamicists
== References ==
== Further reading ==
Acheson, D. J. (1990). Elementary Fluid Dynamics. Clarendon Press. ISBN 0-19-859679-0.
Batchelor, G. K. (1967). An Introduction to Fluid Dynamics. Cambridge University Press. ISBN 0-521-66396-2.
Chanson, H. (2009). Applied Hydrodynamics: An Introduction to Ideal and Real Fluid Flows. CRC Press, Taylor & Francis Group, Leiden, The Netherlands, 478 pages. ISBN 978-0-415-49271-3.
Clancy, L. J. (1975). Aerodynamics. London: Pitman Publishing Limited. ISBN 0-273-01120-0.
Lamb, Horace (1994). Hydrodynamics (6th ed.). Cambridge University Press. ISBN 0-521-45868-4. Originally published in 1879, the 6th extended edition appeared first in 1932.
Milne-Thompson, L. M. (1968). Theoretical Hydrodynamics (5th ed.). Macmillan. Originally published in 1938.
Shinbrot, M. (1973). Lectures on Fluid Mechanics. Gordon and Breach. ISBN 0-677-01710-3.
Nazarenko, Sergey (2014), Fluid Dynamics via Examples and Solutions, CRC Press (Taylor & Francis group), ISBN 978-1-43-988882-7
Encyclopedia: Fluid dynamics Scholarpedia
== External links ==
National Committee for Fluid Mechanics Films (NCFMF), containing films on several subjects in fluid dynamics (in RealMedia format)
Gallery of fluid motion, "a visual record of the aesthetic and science of contemporary fluid mechanics," from the American Physical Society
List of Fluid Dynamics books
Aristotelian physics is the form of natural philosophy described in the works of the Greek philosopher Aristotle (384–322 BC). In his work Physics, Aristotle intended to establish general principles of change that govern all natural bodies, both living and inanimate, celestial and terrestrial – including all motion (change with respect to place), quantitative change (change with respect to size or number), qualitative change, and substantial change ("coming to be" [coming into existence, 'generation'] or "passing away" [no longer existing, 'corruption']). To Aristotle, 'physics' was a broad field including subjects which would now be called the philosophy of mind, sensory experience, memory, anatomy and biology. It constitutes the foundation of the thought underlying many of his works.
Key concepts of Aristotelian physics include the structuring of the cosmos into concentric spheres, with the Earth at the centre and celestial spheres around it. The terrestrial sphere was made of four elements, namely earth, air, fire, and water, subject to change and decay. The celestial spheres were made of a fifth element, an unchangeable aether. Objects made of these elements have natural motions: those of earth and water tend to fall; those of air and fire, to rise. The speed of such motion depends on their weights and the density of the medium. Aristotle argued that a vacuum could not exist as speeds would become infinite.
Aristotle described four causes or explanations of change as seen on earth: the material, formal, efficient, and final causes of things. As regards living things, Aristotle's biology relied on observation of what he considered to be ‘natural kinds’, both those he considered basic and the groups to which he considered these belonged. He did not conduct experiments in the modern sense, but relied on amassing data, observational procedures such as dissection, and making hypotheses about relationships between measurable quantities such as body size and lifespan.
== Methods ==
As Aristotle put it, "nature is everywhere the cause of order."
While consistent with common human experience, Aristotle's principles were not based on controlled, quantitative experiments, so they do not describe our universe in the precise, quantitative way now expected of science. Contemporaries of Aristotle like Aristarchus rejected these principles in favor of heliocentrism, but their ideas were not widely accepted. Aristotle's principles were difficult to disprove merely through casual everyday observation, but later development of the scientific method challenged his views with experiments and careful measurement, using increasingly advanced technology such as the telescope and vacuum pump.
In claiming novelty for their doctrines, those natural philosophers who developed the "new science" of the seventeenth century frequently contrasted "Aristotelian" physics with their own. Physics of the former sort, so they claimed, emphasized the qualitative at the expense of the quantitative, neglected mathematics and its proper role in physics (particularly in the analysis of local motion), and relied on such suspect explanatory principles as final causes and "occult" essences. Yet in his Physics Aristotle characterizes physics or the "science of nature" as pertaining to magnitudes (megethê), motion (or "process" or "gradual change" – kinêsis), and time (chronon) (Phys III.4 202b30–1). Indeed, the Physics is largely concerned with an analysis of motion, particularly local motion, and the other concepts that Aristotle believes are requisite to that analysis.
There are clear differences between modern and Aristotelian physics, the main being the use of mathematics, largely absent in Aristotle. Some recent studies, however, have re-evaluated Aristotle's physics, stressing both its empirical validity and its continuity with modern physics.
== Concepts ==
=== Elements and spheres ===
Aristotle divided his universe into "terrestrial spheres" which were "corruptible" and where humans lived, and moving but otherwise unchanging celestial spheres.
Aristotle believed that four classical elements make up everything in the terrestrial spheres: earth, air, fire and water.[a] He also held that the heavens are made of a special weightless and incorruptible (i.e. unchangeable) fifth element called "aether". Aether also has the name "quintessence", meaning, literally, "fifth being".
Aristotle considered heavy matter such as iron and other metals to consist primarily of the element earth, with a smaller amount of the other three terrestrial elements. Other, lighter objects, he believed, have less earth, relative to the other three elements in their composition.
The four classical elements were not invented by Aristotle; they originated with Empedocles. During the Scientific Revolution, the ancient theory of classical elements was found to be incorrect and was replaced by the empirically tested concept of chemical elements.
==== Celestial spheres ====
According to Aristotle, the Sun, Moon, planets and stars are embedded in perfectly concentric "crystal spheres" that rotate eternally at fixed rates. Because the celestial spheres are incapable of any change except rotation, the terrestrial sphere of fire must account for the heat, starlight and occasional meteorites. Like Homer's æthere (αἰθήρ) – the "pure air" of Mount Olympus – Aristotle's aether was the divine counterpart of the air breathed by mortal beings (ἀήρ, aēr). The celestial spheres are composed of the special element aether, eternal and unchanging, the sole capability of which is a uniform circular motion at a given rate (relative to the diurnal motion of the outermost sphere of fixed stars). The lowest, lunar sphere is the only celestial sphere that actually comes in contact with the sublunary orb's changeable, terrestrial matter, dragging the rarefied fire and air along underneath as it rotates.
The concentric, aetherial, cheek-by-jowl "crystal spheres" that carry the Sun, Moon and stars move eternally with unchanging circular motion. Spheres are embedded within spheres to account for the "wandering stars" (i.e. the planets, which, in comparison with the Sun, Moon and stars, appear to move erratically). Mercury, Venus, Mars, Jupiter and Saturn were the only planets visible before the invention of the telescope, which is why Uranus, Neptune and the asteroids are not included in the scheme. Later, the belief that all spheres are concentric was forsaken in favor of Ptolemy's deferent and epicycle model. Aristotle submits to the calculations of astronomers regarding the total number of spheres, and various accounts give a number in the neighborhood of fifty spheres. An unmoved mover is assumed for each sphere, including a "prime mover" for the sphere of fixed stars. The unmoved movers do not push the spheres (nor could they, being immaterial and dimensionless) but are the final cause of the spheres' motion, i.e. they explain it in a way similar to the explanation "the soul is moved by beauty".
=== Terrestrial change ===
Unlike the eternal and unchanging celestial aether, each of the four terrestrial elements is capable of changing into either of the two elements with which it shares a property: e.g. the cold and wet (water) can transform into the hot and wet (air) or the cold and dry (earth). Any apparent change from cold and wet into the hot and dry (fire) is actually a two-step process, as first one of the properties changes and then the other. These properties are predicated of an actual substance relative to the work it is able to do: that of heating or chilling and of desiccating or moistening. The four elements exist only with regard to this capacity and relative to some potential work. The celestial element is eternal and unchanging, so only the four terrestrial elements account for "coming to be" and "passing away" – or, in the terms of Aristotle's On Generation and Corruption (Περὶ γενέσεως καὶ φθορᾶς), "generation" and "corruption".
=== Natural place ===
The Aristotelian explanation of gravity is that all bodies move toward their natural place. For the elements earth and water, that place is the center of the (geocentric) universe; the natural place of water is a concentric shell around the Earth because earth is heavier; it sinks in water. The natural place of air is likewise a concentric shell surrounding that of water; bubbles rise in water. Finally, the natural place of fire is higher than that of air but below the innermost celestial sphere (carrying the Moon).
In Book Delta of his Physics (IV.5), Aristotle defines topos (place) in terms of two bodies, one of which contains the other: a "place" is where the inner surface of the former (the containing body) touches the contained body. This definition remained dominant until the beginning of the 17th century, even though it had been questioned and debated by philosophers since antiquity. The most significant early critique was made in terms of geometry by the 11th-century Arab polymath al-Hasan Ibn al-Haytham (Alhazen) in his Discourse on Place.
=== Natural motion ===
Terrestrial objects rise or fall, to a greater or lesser extent, according to the ratio of the four elements of which they are composed. For example, earth, the heaviest element, and water fall toward the center of the cosmos; hence the Earth and, for the most part, its oceans have already come to rest there. At the opposite extreme, the lightest elements, air and especially fire, rise up and away from the center.
The elements are not proper substances in Aristotelian theory (or the modern sense of the word). Instead, they are abstractions used to explain the varying natures and behaviors of actual materials in terms of ratios between them.
Motion and change are closely related in Aristotelian physics. Motion, according to Aristotle, involved a change from potentiality to actuality. He gave examples of four types of change, namely change in substance, in quality, in quantity and in place.
Aristotle proposed that the speed at which two identically shaped objects sink or fall is directly proportional to their weights and inversely proportional to the density of the medium through which they move. In describing their terminal velocity, Aristotle had to stipulate that there would be no limit at which to compare the speed of atoms falling through a vacuum (they could move indefinitely fast, because there would be no particular place for them to come to rest in the void). It is now understood, however, that at any time prior to achieving terminal velocity in a relatively resistance-free medium like air, two such objects are expected to have nearly identical speeds, because both experience a force of gravity proportional to their masses and have thus been accelerating at nearly the same rate. This became especially apparent from the eighteenth century, when partial vacuum experiments began to be made, but some two hundred years earlier Galileo had already demonstrated that objects of different weights reach the ground in similar times.
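The contrast between the two models of falling described above can be sketched numerically. The proportionality constant, sample weights and medium density below are illustrative assumptions, not values from Aristotle or Galileo.

```python
def aristotle_fall_speed(weight, medium_density, k=1.0):
    """Aristotle's model: speed of fall proportional to weight,
    inversely proportional to the density of the medium."""
    return k * weight / medium_density

def modern_fall_speed(time, g=9.81):
    """Modern mechanics: before resistance matters, speed after `time`
    seconds depends only on g, not on the body's weight."""
    return g * time

light, heavy = 1.0, 10.0
air = 1.2  # illustrative medium density

# Aristotle predicts the ten-times-heavier body falls ten times faster:
print(aristotle_fall_speed(heavy, air) / aristotle_fall_speed(light, air))  # ≈ 10

# Modern prediction: at any given moment both bodies have the same speed,
# whatever their weights.
print(modern_fall_speed(1.0))  # same value for light and heavy alike
```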
=== Unnatural motion ===
Apart from the natural tendency of terrestrial exhalations to rise and objects to fall, unnatural or forced motion from side to side results from the turbulent collision and sliding of the objects as well as transmutation between the elements (On Generation and Corruption). Aristotle phrased this principle as: "Everything that moves is moved by something else. (Omne quod movetur ab alio movetur.)" When the cause ceases, so does the effect. The cause, according to Aristotle, must be a power (i.e., force) that drives the body as long as the external agent remains in direct contact. Aristotle went on to say that the velocity of the body is directly proportional to the force imparted and inversely proportional to the resistance of the medium in which the motion takes place. This gives the law in today's notation
velocity ∝ imparted power / resistance
This law presented three difficulties that Aristotle was aware of. The first is that if the imparted power is less than the resistance, then in reality it will not move the body, but Aristotle's relation says otherwise. Second, what is the source of the increase in imparted power required to increase the velocity of a freely falling body? Third, what is the imparted power that keeps a projectile in motion after it leaves the agent of projection? Aristotle, in his book Physics, Book 8, Chapter 10, 267a 4, proposed the following solution to the third problem in the case of a shot arrow. The bowstring or hand imparts a certain 'power of being a movent' to the air in contact with it, so that this imparted force is transmitted to the next layer of air, and so on, thus keeping the arrow in motion until the power gradually dissipates.
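Aristotle's law of forced motion, and the first of the three difficulties noted above, can be illustrated in a few lines. The proportionality constant and the numeric values are illustrative assumptions.

```python
def aristotle_velocity(power, resistance, k=1.0):
    """Aristotle's relation: velocity proportional to imparted power,
    inversely proportional to the resistance of the medium."""
    return k * power / resistance

# Power exceeds resistance: the law matches everyday intuition -- push
# harder, or reduce the resistance, and the body moves faster.
print(aristotle_velocity(10.0, 2.0))  # 5.0

# Power below resistance: the relation still predicts (slow) motion,
# although in reality the body stays at rest -- the first difficulty
# Aristotle was aware of.
print(aristotle_velocity(1.0, 2.0))   # 0.5
```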
==== Chance ====
In his Physics Aristotle examines accidents (συμβεβηκός, symbebekòs) that have no cause but chance. "Nor is there any definite cause for an accident, but only chance (τύχη, týche), namely an indefinite (ἀόριστον, aóriston) cause" (Metaphysics V, 1025a25).
It is obvious that there are principles and causes which are generable and destructible apart from the actual processes of generation and destruction; for if this is not true, everything will be of necessity: that is, if there must necessarily be some cause, other than accidental, of that which is generated and destroyed. Will this be, or not? Yes, if this happens; otherwise not (Metaphysics VI, 1027a29).
=== Continuum and vacuum ===
Aristotle argues against the indivisibles of Democritus (which differ considerably from the historical and the modern use of the term "atom"). He also argued against the possibility of a vacuum or void – a place without anything existing at or within it. Because he believed that the speed of an object's motion is proportional to the force being applied (or, in the case of natural motion, the object's weight) and inversely proportional to the density of the medium, he reasoned that objects moving in a void would move indefinitely fast – and thus any and all objects surrounding the void would immediately fill it. The void, therefore, could never form.
The "voids" of modern-day astronomy (such as the Local Void adjacent to our own galaxy) have the opposite effect: ultimately, bodies off-center are ejected from the void due to the gravity of the material outside.
=== Four causes ===
According to Aristotle, there are four ways to explain the aitia or causes of change. He writes that "we do not have knowledge of a thing until we have grasped its why, that is to say, its cause."
Aristotle held that there were four kinds of causes.
==== Material ====
The material cause of a thing is that of which it is made. For a table, that might be wood; for a statue, that might be bronze or marble.
"In one way we say that the aition is that out of which. as existing, something comes to be, like the bronze for the statue, the silver for the phial, and their genera" (194b2 3—6). By "genera", Aristotle means more general ways of classifying the matter (e.g. "metal"; "material"); and that will become important. A little later on, he broadens the range of the material cause to include letters (of syllables), fire and the other elements (of physical bodies), parts (of wholes), and even premises (of conclusions: Aristotle re-iterates this claim, in slightly different terms, in An. Post II. 11).
==== Formal ====
The formal cause of a thing is the essential property that makes it the kind of thing it is. In Metaphysics Book Α Aristotle emphasizes that form is closely related to essence and definition. He says for example that the ratio 2:1, and number in general, is the cause of the octave.
"Another [cause] is the form and the exemplar: this is the formula (logos) of the essence (to ti en einai), and its genera, for instance the ratio 2:1 of the octave" (Phys 11.3 194b26—8)... Form is not just shape... We are asking (and this is the connection with essence, particularly in its canonical Aristotelian formulation) what it is to be some thing. And it is a feature of musical harmonics (first noted and wondered at by the Pythagoreans) that intervals of this type do indeed exhibit this ratio in some form in the instruments used to create them (the length of pipes, of strings, etc.). In some sense, the ratio explains what all the intervals have in common, why they turn out the same.
==== Efficient ====
The efficient cause of a thing is the primary agency by which its matter took its form. For example, the efficient cause of a baby is a parent of the same species and that of a table is a carpenter, who knows the form of the table. In his Physics II, 194b29—32, Aristotle writes: "there is that which is the primary originator of the change and of its cessation, such as the deliberator who is responsible [sc. for the action] and the father of the child, and in general the producer of the thing produced and the changer of the thing changed".
Aristotle’s examples here are instructive: one case of mental and one of physical causation, followed by a perfectly general characterization. But they conceal (or at any rate fail to make patent) a crucial feature of Aristotle’s concept of efficient causation, and one which serves to distinguish it from most modern homonyms. For Aristotle, any process requires a constantly operative efficient cause as long as it continues. This commitment appears most starkly to modern eyes in Aristotle’s discussion of projectile motion: what keeps the projectile moving after it leaves the hand? "Impetus", "momentum", much less "inertia", are not possible answers. There must be a mover, distinct (at least in some sense) from the thing moved, which is exercising its motive capacity at every moment of the projectile’s flight (see Phys VIII.10 266b29–267a11). Similarly, in every case of animal generation, there is always some thing responsible for the continuity of that generation, although it may do so by way of some intervening instrument (Phys II.3 194b35–195a3).
==== Final ====
The final cause is that for the sake of which something takes place, its aim or teleological purpose: for a germinating seed, it is the adult plant, for a ball at the top of a ramp, it is coming to rest at the bottom, for an eye, it is seeing, for a knife, it is cutting.
Goals have an explanatory function: that is a commonplace, at least in the context of action-ascriptions. Less of a commonplace is the view espoused by Aristotle, that finality and purpose are to be found throughout nature, which is for him the realm of those things which contain within themselves principles of movement and rest (i.e. efficient causes); thus it makes sense to attribute purposes not only to natural things themselves, but also to their parts: the parts of a natural whole exist for the sake of the whole. As Aristotle himself notes, "for the sake of" locutions are ambiguous: "A is for the sake of B" may mean that A exists or is undertaken in order to bring B about; or it may mean that A is for B’s benefit (An II.4 415b2—3, 20—1); but both types of finality have, he thinks, a crucial role to play in natural, as well as deliberative, contexts. Thus a man may exercise for the sake of his health: and so "health", and not just the hope of achieving it, is the cause of his action (this distinction is not trivial). But the eyelids are for the sake of the eye (to protect it: PA II.1 3) and the eye for the sake of the animal as a whole (to help it function properly: cf. An II.7).
=== Biology ===
According to Aristotle, the science of living things proceeds by gathering observations about each natural kind of animal, organizing them into genera and species (the differentiae in History of Animals) and then going on to study the causes (in Parts of Animals and Generation of Animals, his three main biological works).
The four causes of animal generation can be summarized as follows. The mother and father represent the material and efficient causes, respectively. The mother provides the matter out of which the embryo is formed, while the father provides the agency that informs that material and triggers its development. The formal cause is the definition of the animal’s substantial being (GA I.1 715a4: ho logos tês ousias). The final cause is the adult form, which is the end for the sake of which development takes place.
==== Organism and mechanism ====
The four elements make up the uniform materials such as blood, flesh and bone, which are themselves the matter out of which are created the non-uniform organs of the body (e.g. the heart, liver and hands) "which in turn, as parts, are matter for the functioning body as a whole (PA II. 1 646a 13—24)".
[There] is a certain obvious conceptual economy about the view that in natural processes naturally constituted things simply seek to realize in full actuality the potentials contained within them (indeed, this is what it is for them to be natural); on the other hand, as the detractors of Aristotelianism from the seventeenth century on were not slow to point out, this economy is won at the expense of any serious empirical content. Mechanism, at least as practiced by Aristotle’s contemporaries and predecessors, may have been explanatorily inadequate – but at least it was an attempt at a general account given in reductive terms of the lawlike connections between things. Simply introducing what later reductionists were to scoff at as "occult qualities" does not explain – it merely, in the manner of Molière’s famous satirical joke, serves to re-describe the effect. Formal talk, or so it is said, is vacuous. Things are not, however, quite as bleak as this. For one thing, there’s no point in trying to engage in reductionist science if you don’t have the wherewithal, empirical and conceptual, to do so successfully: science shouldn't be simply unsubstantiated speculative metaphysics. But more than that, there is a point to describing the world in such teleologically loaded terms: it makes sense of things in a way that atomist speculations do not. And further, Aristotle’s talk of species-forms is not as empty as his opponents would insinuate. He doesn't simply say that things do what they do because that's the sort of thing they do: the whole point of his classificatory biology, most clearly exemplified in PA, is to show what sorts of function go with what, which presuppose which and which are subservient to which. And in this sense, formal or functional biology is susceptible of a type of reductionism. We start, he tells us, with the basic animal kinds which we all pre-theoretically (although not indefeasibly) recognize (cf. PA I.4): but we then go on to show how their parts relate to one another: why it is, for instance, that only blooded creatures have lungs, and how certain structures in one species are analogous or homologous to those in another (such as scales in fish, feathers in birds, hair in mammals). And the answers, for Aristotle, are to be found in the economy of functions, and how they all contribute to the overall well-being (the final cause in this sense) of the animal.
See also Organic form.
==== Psychology ====
According to Aristotle, perception and thought are similar, though not exactly alike in that perception is concerned only with the external objects that are acting on our sense organs at any given time, whereas we can think about anything we choose. Thought is about universal forms, in so far as they have been successfully understood, based on our memory of having encountered instances of those forms directly.
Aristotle’s theory of cognition rests on two central pillars: his account of perception and his account of thought. Together, they make up a significant portion of his psychological writings, and his discussion of other mental states depends critically on them. These two activities, moreover, are conceived of in an analogous manner, at least with regard to their most basic forms. Each activity is triggered by its object – each, that is, is about the very thing that brings it about. This simple causal account explains the reliability of cognition: perception and thought are, in effect, transducers, bringing information about the world into our cognitive systems, because, at least in their most basic forms, they are infallibly about the causes that bring them about (An III.4 429a13–18). Other, more complex mental states are far from infallible. But they are still tethered to the world, in so far as they rest on the unambiguous and direct contact perception and thought enjoy with their objects.
== Medieval commentary ==
The Aristotelian theory of motion came under criticism and modification during the Middle Ages. Modifications began with John Philoponus in the 6th century, who partly accepted Aristotle's theory that "continuation of motion depends on continued action of a force" but modified it to include his idea that a hurled body also acquires an inclination (or "motive power") for movement away from whatever caused it to move, an inclination that secures its continued motion. This impressed virtue would be temporary and self-expending, meaning that all motion would tend toward the form of Aristotle's natural motion.
In The Book of Healing (1027), the 11th-century Persian polymath Avicenna developed Philoponean theory into the first coherent alternative to Aristotelian theory. Inclinations in the Avicennan theory of motion were not self-consuming but permanent forces whose effects were dissipated only as a result of external agents such as air resistance, making him "the first to conceive such a permanent type of impressed virtue for non-natural motion". Such a self-motion (mayl) is "almost the opposite of the Aristotelian conception of violent motion of the projectile type, and it is rather reminiscent of the principle of inertia, i.e. Newton's first law of motion."
The eldest Banū Mūsā brother, Ja'far Muhammad ibn Mūsā ibn Shākir (800–873), wrote the Astral Motion and The Force of Attraction. The Persian physicist Ibn al-Haytham (965–1039) discussed the theory of attraction between bodies. It seems that he was aware of the magnitude of acceleration due to gravity, and he discovered that the heavenly bodies "were accountable to the laws of physics". During his debate with Avicenna, al-Biruni also criticized the Aristotelian theory of gravity, firstly for denying the existence of levity or gravity in the celestial spheres, and secondly for its notion of circular motion being an innate property of the heavenly bodies.
Hibat Allah Abu'l-Barakat al-Baghdaadi (1080–1165) wrote al-Mu'tabar, a critique of Aristotelian physics where he negated Aristotle's idea that a constant force produces uniform motion, as he realized that a force applied continuously produces acceleration, a fundamental law of classical mechanics and an early foreshadowing of Newton's second law of motion. Like Newton, he described acceleration as the rate of change of speed.
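The distinction al-Baghdaadi drew can be restated in modern terms: a constant force does not fix a body's speed (as in Aristotle's relation) but fixes its acceleration, so the speed keeps growing with time. The sketch below uses Newton's later formalization F = ma with illustrative values, not al-Baghdaadi's own notation.

```python
def speed_under_constant_force(force, mass, time):
    """Speed reached from rest under a continuously applied constant
    force, per Newton's second law: a = F/m, v = a*t."""
    acceleration = force / mass
    return acceleration * time

# A constant force produces an ever-increasing speed, not a fixed one:
for t in (1.0, 2.0, 3.0):
    print(speed_under_constant_force(10.0, 2.0, t))  # 5.0, 10.0, 15.0
```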
In the 14th century, Jean Buridan developed the theory of impetus as an alternative to the Aristotelian theory of motion. The theory of impetus was a precursor to the concepts of inertia and momentum in classical mechanics. Buridan and Albert of Saxony also refer to Abu'l-Barakat in explaining that the acceleration of a falling body is a result of its increasing impetus. In the 16th century, Al-Birjandi discussed the possibility of the Earth's rotation and, in his analysis of what might occur if the Earth were rotating, developed a hypothesis similar to Galileo's notion of "circular inertia". He described it in terms of the following observational test:
"The small or large rock will fall to the Earth along the path of a line that is perpendicular to the plane (sath) of the horizon; this is witnessed by experience (tajriba). And this perpendicular is away from the tangent point of the Earth’s sphere and the plane of the perceived (hissi) horizon. This point moves with the motion of the Earth and thus there will be no difference in place of fall of the two rocks."
== Life and death of Aristotelian physics ==
The reign of Aristotelian physics, the earliest known speculative theory of physics, lasted almost two millennia. After the work of many pioneers such as Copernicus, Tycho Brahe, Galileo, Kepler, Descartes and Newton, it became generally accepted that Aristotelian physics was neither correct nor viable. Despite this, it survived as a scholastic pursuit well into the seventeenth century, until universities amended their curricula.
In Europe, Aristotle's theory was first convincingly discredited by Galileo's studies. Using a telescope, Galileo observed that the Moon was not entirely smooth, but had craters and mountains, contradicting the Aristotelian idea of the incorruptibly perfect smooth Moon. Galileo also criticized this notion theoretically; a perfectly smooth Moon would reflect light unevenly like a shiny billiard ball, so that the edges of the moon's disk would have a different brightness than the point where a tangent plane reflects sunlight directly to the eye. A rough moon reflects in all directions equally, leading to a disk of approximately equal brightness which is what is observed. Galileo also observed that Jupiter has moons – i.e. objects revolving around a body other than the Earth – and noted the phases of Venus, which demonstrated that Venus (and, by implication, Mercury) traveled around the Sun, not the Earth.
According to legend, Galileo dropped balls of various densities from the Tower of Pisa and found that lighter and heavier ones fell at almost the same speed. His experiments actually took place using balls rolling down inclined planes, a form of falling sufficiently slow to be measured without advanced instruments.
In a relatively dense medium such as water, a heavier body falls faster than a lighter one. This led Aristotle to speculate that the rate of falling is proportional to the weight and inversely proportional to the density of the medium. From his experience with objects falling in water, he concluded that water is approximately ten times denser than air. By weighing a volume of compressed air, Galileo showed that this overestimates the density of air by a factor of forty. From his experiments with inclined planes, he concluded that, if friction is neglected, all bodies fall at the same rate (which is also not strictly true, since not only friction but also the density of the medium relative to the density of the bodies has to be negligible: Aristotle correctly noticed that medium density is a factor, but focused on body weight instead of density; Galileo neglected medium density, which led him to the correct conclusion for a vacuum).
Galileo also advanced a theoretical argument to support his conclusion. He asked if two bodies of different weights and different rates of fall are tied by a string, does the combined system fall faster because it is now more massive, or does the lighter body in its slower fall hold back the heavier body? The only convincing answer is neither: all the systems fall at the same rate.
Followers of Aristotle were aware that the motion of falling bodies was not uniform, but picked up speed with time. Since time is an abstract quantity, the peripatetics postulated that the speed was proportional to the distance. Galileo established experimentally that the speed is proportional to the time, but he also gave a theoretical argument that the speed could not possibly be proportional to the distance. In modern terms, if the rate of fall is proportional to the distance, the differential expression for the distance y travelled after time t is:
dy/dt ∝ y

with the condition that y(0) = 0. Galileo demonstrated that this system would stay at y = 0 for all time. If a perturbation set the system into motion somehow, the object would pick up speed exponentially in time, not linearly.
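Galileo's argument can be checked numerically: integrating speed proportional to distance, dy/dt = k·y, from y(0) = 0 never produces motion, while any perturbation grows exponentially. The constant k, the step size and the perturbation below are illustrative assumptions.

```python
import math

def fall_distance(y0, k=1.0, dt=1e-4, t_end=5.0):
    """Integrate dy/dt = k*y with a simple Euler step, starting at y0."""
    y, t = y0, 0.0
    while t < t_end:
        y += k * y * dt
        t += dt
    return y

print(fall_distance(0.0))    # stays exactly 0: the body never starts to fall
print(fall_distance(1e-6))   # a tiny perturbation grows roughly like e^(k*t)
print(1e-6 * math.exp(5.0))  # analytic comparison for k = 1, t = 5
```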
Standing on the surface of the Moon in 1971, David Scott famously repeated Galileo's experiment by dropping a feather and a hammer from each hand at the same time. In the absence of a substantial atmosphere, the two objects fell and hit the Moon's surface at the same time.
The first convincing mathematical theory of gravity – in which two masses are attracted toward each other by a force whose effect decreases according to the inverse square of the distance between them – was Newton's law of universal gravitation. This, in turn, was superseded by Albert Einstein's general theory of relativity.
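The inverse-square behavior described above can be shown in a few lines; the masses and separations are arbitrary illustrative values.

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity(m1, m2, r):
    """Newton's law of universal gravitation: F = G * m1 * m2 / r^2."""
    return G * m1 * m2 / r**2

# Doubling the separation between two bodies quarters the force:
f_near = gravity(5.0, 5.0, 1.0)
f_far = gravity(5.0, 5.0, 2.0)
print(f_near / f_far)  # 4.0
```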
== Modern evaluations of Aristotle's physics ==
Modern scholars differ in their opinions of whether Aristotle's physics were sufficiently based on empirical observations to qualify as science, or else whether they were derived primarily from philosophical speculation and thus fail to satisfy the scientific method.
Carlo Rovelli has argued that Aristotle's physics is an accurate and non-intuitive representation of a particular domain (motion in fluids), and thus is just as scientific as Newton's laws of motion, which are likewise accurate in some domains while failing in others (those requiring special and general relativity).
== As listed in the Corpus Aristotelicum ==
== See also ==
Minima naturalia, a hylomorphic concept suggested by Aristotle broadly analogous in Peripatetic and Scholastic physical speculation to the atoms of Epicureanism
== Notes ==
== References ==
== Sources ==
H. Carteron (1965) "Does Aristotle Have a Mechanics?" in Articles on Aristotle 1. Science, eds. Jonathan Barnes, Malcolm Schofield, Richard Sorabji (London: Gerald Duckworth and Company Limited), 161–174.
Ragep, F. Jamil (2001). "Tusi and Copernicus: The Earth's Motion in Context". Science in Context. 14 (1–2). Cambridge University Press: 145–163. doi:10.1017/s0269889701000060. S2CID 145372613.
Ragep, F. Jamil; Al-Qushji, Ali (2001). "Freeing Astronomy from Philosophy: An Aspect of Islamic Influence on Science". Osiris. 2nd Series. 16 (Science in Theistic Contexts: Cognitive Dimensions): 49–64 and 66–71. Bibcode:2001Osir...16...49R. doi:10.1086/649338. S2CID 142586786.
== Further reading ==
Katalin Martinás, "Aristotelian Thermodynamics" in Thermodynamics: history and philosophy: facts, trends, debates (Veszprém, Hungary 23–28 July 1990), pp. 285–303.
Solid-state physics is the study of rigid matter, or solids, through methods such as solid-state chemistry, quantum mechanics, crystallography, electromagnetism, and metallurgy. It is the largest branch of condensed matter physics. Solid-state physics studies how the large-scale properties of solid materials result from their atomic-scale properties. Thus, solid-state physics forms a theoretical basis of materials science. Along with solid-state chemistry, it also has direct applications in the technology of transistors and semiconductors.
== Background ==
Solid materials are formed from densely packed atoms, which interact intensely. These interactions produce the mechanical (e.g. hardness and elasticity), thermal, electrical, magnetic and optical properties of solids. Depending on the material involved and the conditions in which it was formed, the atoms may be arranged in a regular, geometric pattern (crystalline solids, which include metals and ordinary water ice) or irregularly (an amorphous solid such as common window glass).
The bulk of solid-state physics, as a general theory, is focused on crystals. Primarily, this is because the periodicity of atoms in a crystal — its defining characteristic — facilitates mathematical modeling. Likewise, crystalline materials often have electrical, magnetic, optical, or mechanical properties that can be exploited for engineering purposes.
The forces between the atoms in a crystal can take a variety of forms. For example, a crystal of sodium chloride (common salt) is made up of sodium and chloride ions held together by ionic bonds. In others, the atoms share electrons and form covalent bonds. In metals, electrons are shared amongst the whole crystal in metallic bonding. Finally, the noble gases do not undergo any of these types of bonding. In solid form, the noble gases are held together by van der Waals forces resulting from the polarisation of the electronic charge cloud on each atom. The differences between the types of solid result from the differences between their bonding.
== History ==
The physical properties of solids have been common subjects of scientific inquiry for centuries, but a separate field going by the name of solid-state physics did not emerge until the 1940s, in particular with the establishment of the Division of Solid State Physics (DSSP) within the American Physical Society. The DSSP catered to industrial physicists, and solid-state physics became associated with the technological applications made possible by research on solids. By the early 1960s, the DSSP was the largest division of the American Physical Society.
Large communities of solid state physicists also emerged in Europe after World War II, in particular in England, Germany, and the Soviet Union. In the United States and Europe, solid state became a prominent field through its investigations into semiconductors, superconductivity, nuclear magnetic resonance, and diverse other phenomena. During the early Cold War, research in solid state physics was often not restricted to solids, which led some physicists in the 1970s and 1980s to found the field of condensed matter physics, which organized around common techniques used to investigate solids, liquids, plasmas, and other complex matter. Today, solid-state physics is broadly considered to be the subfield of condensed matter physics, often referred to as hard condensed matter, that focuses on the properties of solids with regular crystal lattices.
== Crystal structure and properties ==
Many properties of materials are affected by their crystal structure. This structure can be investigated using a range of crystallographic techniques, including X-ray crystallography, neutron diffraction and electron diffraction.
The sizes of the individual crystals in a crystalline solid material vary depending on the material involved and the conditions when it was formed. Most crystalline materials encountered in everyday life are polycrystalline, with the individual crystals being microscopic in scale, but macroscopic single crystals can be produced either naturally (e.g. diamonds) or artificially.
Real crystals feature defects or irregularities in the ideal arrangements, and it is these defects that critically determine many of the electrical and mechanical properties of real materials.
== Electronic properties ==
Properties of materials such as electrical conduction and heat capacity are investigated by solid state physics. An early model of electrical conduction was the Drude model, which applied kinetic theory to the electrons in a solid. By assuming that the material contains immobile positive ions and an "electron gas" of classical, non-interacting electrons, the Drude model was able to explain electrical and thermal conductivity and the Hall effect in metals, although it greatly overestimated the electronic heat capacity.
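As a rough illustration (not from the article), the Drude model's DC conductivity σ = ne²τ/m can be evaluated numerically; the copper-like electron density and relaxation time below are illustrative assumptions:

```python
def drude_conductivity(n, tau, m=9.109e-31, e=1.602e-19):
    """DC conductivity sigma = n e^2 tau / m from the Drude model."""
    return n * e**2 * tau / m

# Copper-like parameters (illustrative): n ~ 8.5e28 electrons/m^3, tau ~ 2.5e-14 s
sigma = drude_conductivity(n=8.5e28, tau=2.5e-14)
print(f"sigma ~ {sigma:.2e} S/m")  # on the order of 1e7 S/m, comparable to real metals
```

Despite its classical assumptions, this order-of-magnitude agreement with measured metallic conductivities was an early success of the model.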
Arnold Sommerfeld combined the classical Drude model with quantum mechanics in the free electron model (or Drude–Sommerfeld model). Here, the electrons are modelled as a Fermi gas, a gas of particles which obey the quantum mechanical Fermi–Dirac statistics. The free electron model gave improved predictions for the heat capacity of metals; however, it was unable to explain the existence of insulators.
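A minimal sketch of the Fermi gas picture (with an assumed, copper-like electron density) is to evaluate the free-electron Fermi energy E_F = ħ²(3π²n)^(2/3)/(2m):

```python
import math

HBAR = 1.0546e-34  # reduced Planck constant, J*s
M_E = 9.109e-31    # electron mass, kg
EV = 1.602e-19     # J per eV

def fermi_energy(n):
    """Free-electron-gas Fermi energy: E_F = hbar^2 (3 pi^2 n)^(2/3) / (2 m)."""
    k_f = (3 * math.pi**2 * n) ** (1 / 3)   # Fermi wavevector
    return HBAR**2 * k_f**2 / (2 * M_E)

# Assumed conduction-electron density for a copper-like metal: n ~ 8.5e28 m^-3
print(f"E_F ~ {fermi_energy(8.5e28) / EV:.1f} eV")  # a few eV, far above k_B T at room temperature
```

Because E_F is tens of thousands of kelvin in temperature units, only electrons near the Fermi surface can be thermally excited, which is why the Fermi gas predicts a much smaller electronic heat capacity than the classical Drude estimate.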
The nearly free electron model is a modification of the free electron model which includes a weak periodic perturbation meant to model the interaction between the conduction electrons and the ions in a crystalline solid. By introducing the idea of electronic bands, the theory explains the existence of conductors, semiconductors and insulators.
The nearly free electron model rewrites the Schrödinger equation for the case of a periodic potential. The solutions in this case are known as Bloch states. Since Bloch's theorem applies only to periodic potentials, and since unceasing random movements of atoms in a crystal disrupt periodicity, this use of Bloch's theorem is only an approximation, but it has proven to be a tremendously valuable approximation, without which most solid-state physics analysis would be intractable. Deviations from periodicity are treated by quantum mechanical perturbation theory.
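The gap-opening mechanism behind electronic bands can be sketched with the simplest nearly-free-electron calculation: truncating the Bloch problem to two plane waves and diagonalizing the resulting 2×2 Hamiltonian. The dimensionless units (ħ = m = 1) and potential strength V below are assumptions for illustration:

```python
import numpy as np

def two_band_energies(k, G=1.0, V=0.1):
    """Nearly-free-electron model restricted to two plane waves |k> and |k-G>.
    The off-diagonal Fourier component V of the periodic potential couples
    them; diagonalization opens a gap of 2|V| at the zone boundary k = G/2.
    Units: hbar = m = 1."""
    H = np.array([[0.5 * k**2,        V],
                  [V,        0.5 * (k - G)**2]])
    return np.linalg.eigvalsh(H)  # lower and upper band energies, ascending

# At k = G/2 the two free-electron branches are degenerate, and the
# periodic potential splits them by exactly 2V:
lower, upper = two_band_energies(k=0.5)
print(f"band gap at k=G/2: {upper - lower:.3f}")  # -> 0.200
```

Away from the zone boundary the eigenvalues smoothly approach the free-electron parabolas, which is the qualitative content of the nearly-free-electron band picture.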
== Modern research ==
Modern research topics in solid-state physics include:
High-temperature superconductivity
Quasicrystals
Spin glass
Strongly correlated materials
Two-dimensional materials
Nanomaterials
== See also ==
Condensed matter physics
Crystallography
Nuclear spectroscopy
Solid mechanics
== References ==
== Further reading ==
Neil W. Ashcroft and N. David Mermin, Solid State Physics (Harcourt: Orlando, 1976).
Charles Kittel, Introduction to Solid State Physics (Wiley: New York, 2004).
H. M. Rosenberg, The Solid State (Oxford University Press: Oxford, 1995).
Steven H. Simon, The Oxford Solid State Basics (Oxford University Press: Oxford, 2013).
Out of the Crystal Maze. Chapters from the History of Solid State Physics, ed. Lillian Hoddeson, Ernest Braun, Jürgen Teichmann, Spencer Weart (Oxford: Oxford University Press, 1992).
M. A. Omar, Elementary Solid State Physics (Revised Printing, Addison-Wesley, 1993).
Hofmann, Philip (2015-05-26). Solid State Physics (2 ed.). Wiley-VCH. ISBN 978-3527412822. | Wikipedia/Solid-state_physics |
Aerodynamics (Ancient Greek: ἀήρ aero (air) + Ancient Greek: δυναμική (dynamics)) is the study of the motion of air, particularly when affected by a solid object, such as an airplane wing. It involves topics covered in the field of fluid dynamics and its subfield of gas dynamics, and is an important domain of study in aeronautics. The term aerodynamics is often used synonymously with gas dynamics, the difference being that "gas dynamics" applies to the study of the motion of all gases, and is not limited to air. The formal study of aerodynamics began in the modern sense in the eighteenth century, although observations of fundamental concepts such as aerodynamic drag were recorded much earlier. Most of the early efforts in aerodynamics were directed toward achieving heavier-than-air flight, which was first demonstrated by Otto Lilienthal in 1891. Since then, the use of aerodynamics through mathematical analysis, empirical approximations, wind tunnel experimentation, and computer simulations has formed a rational basis for the development of heavier-than-air flight and a number of other technologies. Recent work in aerodynamics has focused on issues related to compressible flow, turbulence, and boundary layers and has become increasingly computational in nature.
== History ==
Modern aerodynamics only dates back to the seventeenth century, but aerodynamic forces have been harnessed by humans for thousands of years in sailboats and windmills, and images and stories of flight appear throughout recorded history, such as the Ancient Greek legend of Icarus and Daedalus. Fundamental concepts of continuum, drag, and pressure gradients appear in the work of Aristotle and Archimedes.
In 1726, Sir Isaac Newton became the first person to develop a theory of air resistance, making him one of the first aerodynamicists. Dutch-Swiss mathematician Daniel Bernoulli followed in 1738 with Hydrodynamica in which he described a fundamental relationship between pressure, density, and flow velocity for incompressible flow known today as Bernoulli's principle, which provides one method for calculating aerodynamic lift. In 1757, Leonhard Euler published the more general Euler equations which could be applied to both compressible and incompressible flows. The Euler equations were extended to incorporate the effects of viscosity in the first half of the 1800s, resulting in the Navier–Stokes equations. The Navier–Stokes equations are the most general governing equations of fluid flow but are difficult to solve for the flow around all but the simplest of shapes.
In 1799, Sir George Cayley became the first person to identify the four aerodynamic forces of flight (weight, lift, drag, and thrust), as well as the relationships between them, and in doing so outlined the path toward achieving heavier-than-air flight for the next century. In 1871, Francis Herbert Wenham constructed the first wind tunnel, allowing precise measurements of aerodynamic forces. Drag theories were developed by Jean le Rond d'Alembert, Gustav Kirchhoff, and Lord Rayleigh. In 1889, Charles Renard, a French aeronautical engineer, became the first person to reasonably predict the power needed for sustained flight. Otto Lilienthal, the first person to become highly successful with glider flights, was also the first to propose thin, curved airfoils that would produce high lift and low drag. Building on these developments as well as research carried out in their own wind tunnel, the Wright brothers flew the first powered airplane on December 17, 1903.
During the time of the first flights, Frederick W. Lanchester, Martin Kutta, and Nikolai Zhukovsky independently created theories that connected circulation of a fluid flow to lift. Kutta and Zhukovsky went on to develop a two-dimensional wing theory. Expanding upon the work of Lanchester, Ludwig Prandtl is credited with developing the mathematics behind thin-airfoil and lifting-line theories as well as work with boundary layers.
As aircraft speed increased, designers began to encounter challenges associated with air compressibility at speeds near the speed of sound. The differences in airflow under such conditions led to problems in aircraft control, increased drag due to shock waves, and the threat of structural failure due to aeroelastic flutter. The ratio of the flow speed to the speed of sound was named the Mach number after Ernst Mach, who was one of the first to investigate the properties of supersonic flow. Macquorn Rankine and Pierre Henri Hugoniot independently developed the theory for flow properties before and after a shock wave, while Jakob Ackeret led the initial work of calculating the lift and drag of supersonic airfoils. Theodore von Kármán and Hugh Latimer Dryden introduced the term transonic to describe flow speeds between the critical Mach number and Mach 1, where drag increases rapidly. This rapid increase in drag led aerodynamicists and aviators to disagree on whether supersonic flight was achievable until the sound barrier was broken in 1947 using the Bell X-1 aircraft.
By the time the sound barrier was broken, aerodynamicists' understanding of the subsonic and low supersonic flow had matured. The Cold War prompted the design of an ever-evolving line of high-performance aircraft. Computational fluid dynamics began as an effort to solve for flow properties around complex objects and has rapidly grown to the point where entire aircraft can be designed using computer software, with wind-tunnel tests followed by flight tests to confirm the computer predictions. Understanding of supersonic and hypersonic aerodynamics has matured since the 1960s, and the goals of aerodynamicists have shifted from the behaviour of fluid flow to the engineering of a vehicle such that it interacts predictably with the fluid flow. Designing aircraft for supersonic and hypersonic conditions, as well as the desire to improve the aerodynamic efficiency of current aircraft and propulsion systems, continues to motivate new research in aerodynamics, while work continues to be done on important problems in basic aerodynamic theory related to flow turbulence and the existence and uniqueness of analytical solutions to the Navier–Stokes equations.
== Fundamental concepts ==
Understanding the motion of air around an object (often called a flow field) enables the calculation of forces and moments acting on the object. In many aerodynamics problems, the forces of interest are the fundamental forces of flight: lift, drag, thrust, and weight. Of these, lift and drag are aerodynamic forces, i.e. forces due to air flow over a solid body. Calculation of these quantities is often founded upon the assumption that the flow field behaves as a continuum. Continuum flow fields are characterized by properties such as flow velocity, pressure, density, and temperature, which may be functions of position and time. These properties may be directly or indirectly measured in aerodynamics experiments or calculated starting with the equations for conservation of mass, momentum, and energy in air flows. Density, flow velocity, and an additional property, viscosity, are used to classify flow fields.
=== Flow classification ===
Flow velocity is used to classify flows according to speed regime. Subsonic flows are flow fields in which the air speed field is always below the local speed of sound. Transonic flows include both regions of subsonic flow and regions in which the local flow speed is greater than the local speed of sound. Supersonic flows are defined to be flows in which the flow speed is greater than the speed of sound everywhere. A fourth classification, hypersonic flow, refers to flows where the flow speed is much greater than the speed of sound. Aerodynamicists disagree on the precise definition of hypersonic flow.
Compressible flow accounts for varying density within the flow. Subsonic flows are often idealized as incompressible, i.e. the density is assumed to be constant. Transonic and supersonic flows are compressible, and calculations that neglect the changes of density in these flow fields will yield inaccurate results.
Viscosity is associated with the frictional forces in a flow. In some flow fields, viscous effects are very small, and approximate solutions may safely neglect viscous effects. These approximations are called inviscid flows. Flows for which viscosity is not neglected are called viscous flows. Finally, aerodynamic problems may also be classified by the flow environment. External aerodynamics is the study of flow around solid objects of various shapes (e.g. around an airplane wing), while internal aerodynamics is the study of flow through passages inside solid objects (e.g. through a jet engine).
==== Continuum assumption ====
Unlike liquids and solids, gases are composed of discrete molecules which occupy only a small fraction of the volume filled by the gas. On a molecular level, flow fields are made up of many individual gas molecules colliding with one another and with solid surfaces. However, in most aerodynamics applications, the discrete molecular nature of gases is ignored, and the flow field is assumed to behave as a continuum. This assumption allows fluid properties such as density and flow velocity to be defined everywhere within the flow.
The validity of the continuum assumption is dependent on the density of the gas and the application in question. For the continuum assumption to be valid, the mean free path length must be much smaller than the length scale of the application in question. For example, many aerodynamics applications deal with aircraft flying in atmospheric conditions, where the mean free path length is on the order of micrometers and where the body is orders of magnitude larger. In these cases, the length scale of the aircraft ranges from a few meters to a few tens of meters, which is much larger than the mean free path length. For such applications, the continuum assumption is reasonable. The continuum assumption is less valid for extremely low-density flows, such as those encountered by vehicles at very high altitudes (e.g. 300,000 ft/90 km) or satellites in Low Earth orbit. In those cases, statistical mechanics is a more accurate method of solving the problem than is continuum aerodynamics. The Knudsen number can be used to guide the choice between statistical mechanics and the continuous formulation of aerodynamics.
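As a sketch of this criterion, the Knudsen number Kn = λ/L can be computed for the two situations mentioned above; the mean free paths and length scales below are order-of-magnitude assumptions:

```python
def knudsen_number(mean_free_path, length_scale):
    """Kn = lambda / L; the continuum assumption holds when Kn << 1."""
    return mean_free_path / length_scale

# Aircraft at sea level: mean free path ~ 68 nm, wing chord ~ 5 m (illustrative)
kn_aircraft = knudsen_number(68e-9, 5.0)    # far below 1: continuum is excellent
# Satellite at very high altitude: mean free path ~ 200 m, body ~ 2 m (illustrative)
kn_satellite = knudsen_number(200.0, 2.0)   # far above 1: free-molecular regime
print(f"aircraft Kn ~ {kn_aircraft:.1e}, satellite Kn ~ {kn_satellite:.1e}")
```

When Kn approaches or exceeds unity, as in the satellite case, the statistical-mechanics treatment described above becomes the appropriate tool.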
=== Conservation laws ===
The assumption of a fluid continuum allows problems in aerodynamics to be solved using fluid dynamics conservation laws. Three conservation principles are used:
Conservation of mass
Conservation of mass requires that mass is neither created nor destroyed within a flow; the mathematical formulation of this principle is known as the mass continuity equation.
Conservation of momentum
The mathematical formulation of this principle can be considered an application of Newton's second law. Momentum within a flow is only changed by external forces, which may include both surface forces, such as viscous (frictional) forces, and body forces, such as weight. The momentum conservation principle may be expressed as either a vector equation or separated into a set of three scalar equations (x,y,z components).
Conservation of energy
The energy conservation equation states that energy is neither created nor destroyed within a flow, and that any addition or subtraction of energy to a volume in the flow is caused by heat transfer, or by work into and out of the region of interest.
Together, these equations are known as the Navier–Stokes equations, although some authors define the term to include only the momentum equation(s). The Navier–Stokes equations have no known general analytical solution and are solved in modern aerodynamics using computational techniques. Because high-speed computers were historically unavailable, and because solving these complex equations remains computationally costly even now that they are, simplifications of the Navier–Stokes equations have been and continue to be employed. The Euler equations are a set of similar conservation equations which neglect viscosity and may be used in cases where the effect of viscosity is expected to be small. Further simplifications lead to Laplace's equation and potential flow theory. Additionally, Bernoulli's equation is a one-dimensional solution to both the momentum and energy conservation equations.
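As a small worked example of the simplest of these relations, Bernoulli's equation for incompressible flow along a streamline can be rearranged to give the pressure at a second station; the numbers below are illustrative, not from the article:

```python
def bernoulli_pressure(p1, v1, v2, rho=1.225):
    """Incompressible, inviscid Bernoulli relation along a streamline:
    p + 0.5*rho*v^2 = const  ->  p2 = p1 + 0.5*rho*(v1^2 - v2^2).
    rho defaults to sea-level air density in kg/m^3."""
    return p1 + 0.5 * rho * (v1**2 - v2**2)

# Air accelerating from 50 m/s to 70 m/s at sea-level density (illustrative):
p2 = bernoulli_pressure(p1=101325.0, v1=50.0, v2=70.0)
print(f"p2 = {p2:.0f} Pa")  # pressure falls where the flow speeds up
```

The drop in pressure where the flow accelerates is the same mechanism used in the lift calculation mentioned in the history section above.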
The ideal gas law or another such equation of state is often used in conjunction with these equations to form a determined system that allows the solution for the unknown variables.
== Branches of aerodynamics ==
Aerodynamic problems are classified by the flow environment or properties of the flow, including flow speed, compressibility, and viscosity. External aerodynamics is the study of flow around solid objects of various shapes. Evaluating the lift and drag on an airplane or the shock waves that form in front of the nose of a rocket are examples of external aerodynamics. Internal aerodynamics is the study of flow through passages in solid objects. For instance, internal aerodynamics encompasses the study of the airflow through a jet engine or through an air conditioning pipe.
Aerodynamic problems can also be classified according to whether the flow speed is below, near or above the speed of sound. A problem is called subsonic if all the speeds in the problem are less than the speed of sound, transonic if speeds both below and above the speed of sound are present (normally when the characteristic speed is approximately the speed of sound), supersonic when the characteristic flow speed is greater than the speed of sound, and hypersonic when the flow speed is much greater than the speed of sound. Aerodynamicists disagree over the precise definition of hypersonic flow; a rough definition considers flows with Mach numbers above 5 to be hypersonic.
The influence of viscosity on the flow dictates a third classification. Some problems may encounter only very small viscous effects, in which case viscosity can be considered to be negligible. The approximations to these problems are called inviscid flows. Flows for which viscosity cannot be neglected are called viscous flows.
=== Incompressible aerodynamics ===
An incompressible flow is a flow in which density is constant in both time and space. Although all real fluids are compressible, a flow is often approximated as incompressible if density changes cause only small changes to the calculated results. This is more likely to be true when the flow speeds are significantly lower than the speed of sound. Effects of compressibility are more significant at speeds close to or above the speed of sound. The Mach number is used to evaluate whether incompressibility can be assumed; otherwise, the effects of compressibility must be included.
==== Subsonic flow ====
Subsonic (or low-speed) aerodynamics describes fluid motion in flows whose speed is much lower than the speed of sound everywhere in the flow. There are several branches of subsonic flow, but one special case arises when the flow is inviscid, incompressible and irrotational. This case is called potential flow and allows the differential equations that describe the flow to be a simplified version of the equations of fluid dynamics, thus making available to the aerodynamicist a range of quick and easy solutions.
In solving a subsonic problem, one decision to be made by the aerodynamicist is whether to incorporate the effects of compressibility. Compressibility is a description of the amount of change of density in the flow. When the effects of compressibility on the solution are small, the assumption that density is constant may be made. The problem is then an incompressible low-speed aerodynamics problem. When the density is allowed to vary, the flow is called compressible. In air, compressibility effects are usually ignored when the Mach number in the flow does not exceed 0.3 (about 335 feet (102 m) per second or 228 miles (366 km) per hour at 60 °F (16 °C)). Above Mach 0.3, the problem flow should be described using compressible aerodynamics.
=== Compressible aerodynamics ===
According to the theory of aerodynamics, a flow is considered to be compressible if the density changes along a streamline. This means that – unlike incompressible flow – changes in density are considered. In general, this is the case where the Mach number in part or all of the flow exceeds 0.3. The Mach 0.3 value is rather arbitrary, but it is used because gas flows with a Mach number below that value demonstrate changes in density of less than 5%. Furthermore, that maximum 5% density change occurs at the stagnation point (the point on the object where flow speed is zero), while the density changes around the rest of the object will be significantly lower. Transonic, supersonic, and hypersonic flows are all compressible flows.
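The Mach 0.3 threshold can be checked with the isentropic stagnation-density relation; the sketch below (assuming a calorically perfect gas with γ = 1.4) reproduces the roughly 5% maximum density change quoted above:

```python
def stagnation_density_ratio(mach, gamma=1.4):
    """Isentropic stagnation-to-freestream density ratio:
    rho0/rho = (1 + (gamma-1)/2 * M^2)^(1/(gamma-1)).
    This is the largest density change anywhere on the body."""
    return (1 + 0.5 * (gamma - 1) * mach**2) ** (1 / (gamma - 1))

# Maximum density change at the stagnation point for a few Mach numbers:
for m in (0.1, 0.3, 0.8):
    change = stagnation_density_ratio(m) - 1
    print(f"M = {m}: density change ~ {change:.1%}")
# At M = 0.3 the change is roughly 4.6%, which is why Mach 0.3 is the usual
# threshold for treating a flow as incompressible.
```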
==== Transonic flow ====
The term transonic refers to a range of flow velocities just below and above the local speed of sound (generally taken as Mach 0.8–1.2). It is defined as the range of speeds between the critical Mach number, when some parts of the airflow over an aircraft become supersonic, and a higher speed, typically near Mach 1.2, when all of the airflow is supersonic. Between these speeds, some of the airflow is supersonic while some is not.
==== Supersonic flow ====
Supersonic aerodynamic problems are those involving flow speeds greater than the speed of sound. Calculating the lift on the Concorde during cruise can be an example of a supersonic aerodynamic problem.
Supersonic flow behaves very differently from subsonic flow. Fluids react to differences in pressure; pressure changes are how a fluid is "told" to respond to its environment. Therefore, since sound is, in fact, an infinitesimal pressure difference propagating through a fluid, the speed of sound in that fluid can be considered the fastest speed that "information" can travel in the flow. This difference most obviously manifests itself in the case of a fluid striking an object. In front of that object, the fluid builds up a stagnation pressure as impact with the object brings the moving fluid to rest. In fluid traveling at subsonic speed, this pressure disturbance can propagate upstream, changing the flow pattern ahead of the object and giving the impression that the fluid "knows" the object is there by seemingly adjusting its movement and is flowing around it. In a supersonic flow, however, the pressure disturbance cannot propagate upstream. Thus, when the fluid finally reaches the object it strikes it and the fluid is forced to change its properties – temperature, density, pressure, and Mach number—in an extremely violent and irreversible fashion called a shock wave. The presence of shock waves, along with the compressibility effects of high-flow velocity (see Reynolds number) fluids, is the central difference between the supersonic and subsonic aerodynamics regimes.
==== Hypersonic flow ====
In aerodynamics, hypersonic speeds are speeds that are highly supersonic. In the 1970s, the term generally came to refer to speeds of Mach 5 (5 times the speed of sound) and above. The hypersonic regime is a subset of the supersonic regime. Hypersonic flow is characterized by high temperature flow behind a shock wave, viscous interaction, and chemical dissociation of gas.
== Associated terminology ==
The incompressible and compressible flow regimes produce many associated phenomena, such as boundary layers and turbulence.
=== Boundary layers ===
The concept of a boundary layer is important in many problems in aerodynamics. The viscosity and fluid friction in the air is approximated as being significant only in this thin layer. This assumption makes the description of such aerodynamics much more tractable mathematically.
=== Turbulence ===
In aerodynamics, turbulence is characterized by chaotic property changes in the flow. These include low momentum diffusion, high momentum convection, and rapid variation of pressure and flow velocity in space and time. Flow that is not turbulent is called laminar flow.
== Aerodynamics in other fields ==
=== Engineering design ===
Aerodynamics is a significant element of vehicle design, including road cars and trucks where the main goal is to reduce the vehicle drag coefficient, and racing cars, where in addition to reducing drag the goal is also to increase the overall level of downforce. Aerodynamics is also important in the prediction of forces and moments acting on sailing vessels. It is used in the design of mechanical components such as hard drive heads. Structural engineers resort to aerodynamics, and particularly aeroelasticity, when calculating wind loads in the design of large buildings, bridges, and wind turbines.
The aerodynamics of internal passages is important in heating/ventilation, gas piping, and in automotive engines where detailed flow patterns strongly affect the performance of the engine.
=== Environmental design ===
Urban aerodynamics are studied by town planners and designers seeking to improve amenity in outdoor spaces, or in creating urban microclimates to reduce the effects of urban pollution. The field of environmental aerodynamics describes ways in which atmospheric circulation and flight mechanics affect ecosystems.
Aerodynamic equations are used in numerical weather prediction.
=== Ball-control in sports ===
Sports in which aerodynamics are of crucial importance include soccer, table tennis, cricket, baseball, and golf, in which most players can control the trajectory of the ball using the "Magnus effect".
== See also ==
Aeronautics
Aerostatics
Aviation
Computational fluid dynamics
Fluid dynamics
Insect flight – how bugs fly
List of aerospace engineering topics
List of engineering topics
Nose cone design
== References ==
== Further reading ==
== External links ==
NASA's Guide to Aerodynamics. Archived 2012-07-15 at the Wayback Machine.
Aerodynamics for Students
Aerodynamics for Pilots (archived)
Aerodynamics and Race Car Tuning (archived)
Aerodynamic Related Projects. Archived 2018-12-13 at the Wayback Machine.
eFluids Bicycle Aerodynamics. Archived 2009-12-15 at the Wayback Machine.
Application of Aerodynamics in Formula One (F1) (archived)
Aerodynamics in Car Racing. Archived 2009-12-06 at the Wayback Machine.
Aerodynamics of Birds. Archived 2010-03-24 at the Wayback Machine.
NASA Aerodynamics Index | Wikipedia/Aerodynamics |
Astrophysics is a science that employs the methods and principles of physics and chemistry in the study of astronomical objects and phenomena. As one of the founders of the discipline, James Keeler, said, astrophysics "seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space—what they are, rather than where they are", which is studied in celestial mechanics.
Among the subjects studied are the Sun (solar physics), other stars, galaxies, extrasolar planets, the interstellar medium, and the cosmic microwave background. Emissions from these objects are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists apply concepts and methods from many disciplines of physics, including classical mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.
In practice, modern astronomical research often involves substantial work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include the properties of dark matter, dark energy, black holes, and other celestial bodies; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity, special relativity, and quantum and physical cosmology (the physical study of the largest-scale structures of the universe), including string cosmology and astroparticle physics.
== History ==
Astronomy is an ancient science, long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthly world was the realm which underwent growth and decay and in which natural motion was in a straight line and ended when the moving object reached its goal. Consequently, it was held that the celestial region was made of a fundamentally different kind of matter from that found in the terrestrial sphere; either Fire as maintained by Plato, or Aether as maintained by Aristotle.
During the 17th century, natural philosophers such as Galileo, Descartes, and Newton began to maintain that the celestial and terrestrial regions were made of similar kinds of material and were subject to the same natural laws. Their challenge was that the tools had not yet been invented with which to prove these assertions.
For much of the nineteenth century, astronomical research was focused on the routine work of measuring the positions and computing the motions of astronomical objects. A new astronomy, soon to be called astrophysics, began to emerge when William Hyde Wollaston and Joseph von Fraunhofer independently discovered that, when decomposing the light from the Sun, a multitude of dark lines (regions where there was less or no light) were observed in the spectrum. By 1860 the physicist, Gustav Kirchhoff, and the chemist, Robert Bunsen, had demonstrated that the dark lines in the solar spectrum corresponded to bright lines in the spectra of known gases, specific lines corresponding to unique chemical elements. Kirchhoff deduced that the dark lines in the solar spectrum are caused by absorption by chemical elements in the Solar atmosphere. In this way it was proved that the chemical elements found in the Sun and stars were also found on Earth.
Among those who extended the study of solar and stellar spectra was Norman Lockyer, who in 1868 detected radiant, as well as dark lines in solar spectra. Working with chemist Edward Frankland to investigate the spectra of elements at various temperatures and pressures, he could not associate a yellow line in the solar spectrum with any known elements. He thus claimed the line represented a new element, which was called helium, after the Greek Helios, the Sun personified.
In 1885, Edward C. Pickering undertook an ambitious program of stellar spectral classification at Harvard College Observatory, in which a team of woman computers, notably Williamina Fleming, Antonia Maury, and Annie Jump Cannon, classified the spectra recorded on photographic plates. By 1890, a catalog of over 10,000 stars had been prepared that grouped them into thirteen spectral types. Following Pickering's vision, by 1924 Cannon expanded the catalog to nine volumes and over a quarter of a million stars, developing the Harvard Classification Scheme which was accepted for worldwide use in 1922.
In 1895, George Ellery Hale and James E. Keeler, along with a group of ten associate editors from Europe and the United States, established The Astrophysical Journal: An International Review of Spectroscopy and Astronomical Physics. It was intended that the journal would fill the gap between journals in astronomy and physics, providing a venue for publication of articles on astronomical applications of the spectroscope; on laboratory research closely allied to astronomical physics, including wavelength determinations of metallic and gaseous spectra and experiments on radiation and absorption; on theories of the Sun, Moon, planets, comets, meteors, and nebulae; and on instrumentation for telescopes and laboratories.
Around 1920, following the discovery of the Hertzsprung–Russell diagram still used as the basis for classifying stars and their evolution, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper The Internal Constitution of the Stars. At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc2. This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even that stars are largely composed of hydrogen (see metallicity), had not yet been discovered.
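Eddington's speculation can be checked with simple arithmetic: fusing four hydrogen nuclei into one helium nucleus destroys about 0.7% of the mass, and E = mc2 converts that deficit into energy. An illustrative computation (standard atomic masses; the figures are not from this article):

```python
# Illustrative arithmetic: mass deficit of 4 H-1 -> He-4, in atomic mass units.
M_H = 1.007825      # atomic mass of hydrogen-1, u
M_HE = 4.002602     # atomic mass of helium-4, u
U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, MeV

delta_m = 4 * M_H - M_HE         # mass converted to energy, u
fraction = delta_m / (4 * M_H)   # fraction of the initial fuel mass
energy_mev = delta_m * U_TO_MEV  # E = mc^2 in convenient units

print(f"mass deficit: {delta_m:.6f} u ({fraction:.2%} of the fuel)")
print(f"energy released per helium nucleus: {energy_mev:.1f} MeV")
```

About 0.7% of the hydrogen's rest mass (roughly 26.7 MeV per helium nucleus) is liberated, which is what makes fusion an adequate stellar energy source.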
In 1925 Cecilia Helena Payne (later Cecilia Payne-Gaposchkin) wrote an influential doctoral dissertation at Radcliffe College, in which she applied Saha's ionization theory to stellar atmospheres to relate the spectral classes to the temperature of stars. Most significantly, she discovered that hydrogen and helium were the principal components of stars, rather than a composition resembling Earth's. Despite Eddington's suggestion, the discovery was so unexpected that her dissertation readers (including Russell) convinced her to modify the conclusion before publication. However, later research confirmed her discovery.
By the end of the 20th century, studies of astronomical spectra had expanded to cover wavelengths extending from radio waves through optical, x-ray, and gamma wavelengths. In the 21st century, it further expanded to include observations based on gravitational waves.
== Observational astrophysics ==
Observational astronomy is a division of astronomy concerned with recording and interpreting data, in contrast with theoretical astrophysics, which is mainly concerned with determining the measurable implications of physical models. It is the practice of observing celestial objects using telescopes and other astronomical apparatus.
Most astrophysical observations are made using the electromagnetic spectrum.
Radio astronomy studies radiation with a wavelength greater than a few millimeters. Example areas of study are radio waves, usually emitted by cold objects such as interstellar gas and dust clouds; the cosmic microwave background radiation which is the redshifted light from the Big Bang; pulsars, which were first detected at microwave frequencies. The study of these waves requires very large radio telescopes.
Infrared astronomy studies radiation with a wavelength that is too long to be visible to the naked eye but is shorter than radio waves. Infrared observations are usually made with telescopes similar to the familiar optical telescopes. Objects colder than stars (such as planets) are normally studied at infrared frequencies.
Optical astronomy was the earliest kind of astronomy. Telescopes paired with a charge-coupled device or spectroscopes are the most common instruments used. The Earth's atmosphere interferes somewhat with optical observations, so adaptive optics and space telescopes are used to obtain the highest possible image quality. In this wavelength range, stars are highly visible, and many chemical spectra can be observed to study the chemical composition of stars, galaxies, and nebulae.
Ultraviolet, X-ray and gamma ray astronomy study very energetic processes such as binary pulsars, black holes, magnetars, and many others. These kinds of radiation do not penetrate the Earth's atmosphere well. There are two methods in use to observe this part of the electromagnetic spectrum—space-based telescopes and ground-based imaging air Cherenkov telescopes (IACT). Examples of observatories of the first type are RXTE, the Chandra X-ray Observatory and the Compton Gamma Ray Observatory. Examples of IACTs are the High Energy Stereoscopic System (H.E.S.S.) and the MAGIC telescope.
Other than electromagnetic radiation, few things may be observed from the Earth that originate from great distances. A few gravitational wave observatories have been constructed, but gravitational waves are extremely difficult to detect. Neutrino observatories have also been built, primarily to study the Sun. Cosmic rays consisting of very high-energy particles can be observed hitting the Earth's atmosphere.
Observations can also vary in their time scale. Most optical observations take minutes to hours, so phenomena that change faster than this cannot readily be observed. However, historical data on some objects is available, spanning centuries or millennia. On the other hand, radio observations may look at events on a millisecond timescale (millisecond pulsars) or combine years of data (pulsar deceleration studies). The information obtained from these different timescales is very different.
The study of the Sun has a special place in observational astrophysics. Due to the tremendous distance of all other stars, the Sun can be observed in a kind of detail unparalleled by any other star. Understanding the Sun serves as a guide to understanding of other stars.
The topic of how stars change, or stellar evolution, is often modeled by placing the varieties of star types in their respective positions on the Hertzsprung–Russell diagram, which can be viewed as representing the state of a stellar object, from birth to destruction.
== Theoretical astrophysics ==
Theoretical astrophysicists use a wide variety of tools which include analytical models (for example, polytropes to approximate the behaviors of a star) and computational numerical simulations. Each has some advantages. Analytical models of a process are generally better for giving insight into the heart of what is going on. Numerical models can reveal the existence of phenomena and effects that would otherwise not be seen.
Theorists in astrophysics endeavor to create theoretical models and figure out the observational consequences of those models. This helps allow observers to look for data that can refute a model or help in choosing between several alternate or conflicting models.
Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.
Topics studied by theoretical astrophysicists include stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics. Relativistic astrophysics serves as a tool to gauge the properties of large-scale structures for which gravitation plays a significant role in physical phenomena investigated and as the basis for black hole (astro)physics and the study of gravitational waves.
Some widely accepted and studied theories and models in astrophysics, now included in the Lambda-CDM model, are the Big Bang, cosmic inflation, dark matter, dark energy and fundamental theories of physics.
== Popularization ==
The roots of astrophysics can be found in the seventeenth century emergence of a unified physics, in which the same laws applied to the celestial and terrestrial realms. There were scientists who were qualified in both physics and astronomy who laid the firm foundation for the current science of astrophysics. In modern times, students continue to be drawn to astrophysics due to its popularization by the Royal Astronomical Society and notable educators such as prominent professors Lawrence Krauss, Subrahmanyan Chandrasekhar, Stephen Hawking, Hubert Reeves, Carl Sagan and Patrick Moore. The efforts of the early, late, and present scientists continue to attract young people to study the history and science of astrophysics.
The television sitcom The Big Bang Theory popularized the field of astrophysics with the general public, and featured some well-known scientists like Stephen Hawking and Neil deGrasse Tyson.
== See also ==
Astrochemistry – Study of molecules in the Universe and their reactions
Astronomical observatories
Astronomical spectroscopy – Measurement of electromagnetic radiation for astronomy
Astroparticle physics – Branch of particle physics
Gravitational-wave astronomy – Branch of astronomy using gravitational waves
Hertzsprung–Russell diagram – Scatter plot of stars showing the relationship of luminosity to stellar classification
High-energy astronomy – Study of astronomical objects that release highly energetic electromagnetic radiation
Important publications in astrophysics
List of astronomers – (includes astrophysicists)
Neutrino astronomy – Observing low-mass stellar particles
Timeline of gravitational physics and relativity
Timeline of knowledge about galaxies, clusters of galaxies, and large-scale structure
Timeline of white dwarfs, neutron stars, and supernovae – Chronological list of developments in knowledge and records
== References ==
== Further reading ==
Longair, Malcolm S. (2006), The Cosmic Century: A History of Astrophysics and Cosmology, Cambridge: Cambridge University Press, ISBN 978-0-521-47436-8
Astrophysics, Scholarpedia Expert articles
== External links ==
Astronomy and Astrophysics, a European Journal
Astrophysical Journal
Cosmic Journey: A History of Scientific Cosmology Archived 2008-10-21 at the Wayback Machine from the American Institute of Physics
International Journal of Modern Physics D from World Scientific
List and directory of peer-reviewed Astronomy / Astrophysics Journals
Ned Wright's Cosmology Tutorial, UCLA
Engineering physics (EP), sometimes engineering science, is the field of study combining pure science disciplines (such as physics, mathematics, chemistry or biology) and engineering disciplines (computer, nuclear, electrical, aerospace, medical, materials, mechanical, etc.).
In many languages, the term technical physics is also used; it has appeared in print since 1861, in the publications of the German physics teacher J. Frick.
== Terminology ==
In some countries, both what would be translated as "engineering physics" and what would be translated as "technical physics" are disciplines leading to academic degrees. In China, for example, the former specializes in nuclear power research (i.e. nuclear engineering), while the latter is closer to engineering physics proper.
In some universities and their institutions, an engineering physics (or applied physics) major is a discipline or specialization within the scope of engineering science, or applied science.
Several related names have existed since the inception of the interdisciplinary field. For example, some university courses are called or contain the phrase "physical technologies" or "physical engineering sciences" or "physical technics". In some cases, a program formerly called "physical engineering" has been renamed "applied physics" or has evolved into specialized fields such as "photonics engineering".
== Expertise ==
Unlike traditional engineering disciplines, engineering science or engineering physics is not necessarily confined to a particular branch of science, engineering or physics. Instead, engineering science or engineering physics is meant to provide a more thorough grounding in applied physics for a selected specialty such as optics, quantum physics, materials science, applied mechanics, electronics, nanotechnology, microfabrication, microelectronics, computing, photonics, mechanical engineering, electrical engineering, nuclear engineering, biophysics, control theory, aerodynamics, energy, solid-state physics, etc. It is the discipline devoted to creating and optimizing engineering solutions through enhanced understanding and integrated application of mathematical, scientific, statistical, and engineering principles. The discipline is also meant for cross-functionality and bridges the gap between theoretical science and practical engineering with emphasis in research and development, design, and analysis.
== Degrees ==
In many universities, engineering science programs may be offered at the levels of B.Tech., B.Sc., M.Sc. and Ph.D. Usually, a core of basic and advanced courses in mathematics, physics, chemistry, and biology forms the foundation of the curriculum, while typical elective areas may include fluid dynamics, quantum physics, economics, plasma physics, relativity, solid mechanics, operations research, quantitative finance, information technology and engineering, dynamical systems, bioengineering, environmental engineering, computational engineering, engineering mathematics and statistics, solid-state devices, materials science, electromagnetism, nanoscience, nanotechnology, energy, and optics.
== Awards ==
There are awards for excellence in engineering physics. For example, Princeton University's Jeffrey O. Kephart '80 Prize is awarded annually to the graduating senior with the best record. Since 2002, the German Physical Society has awarded the Georg-Simon-Ohm-Preis for outstanding research in this field.
== See also ==
Applied physics
Engineering
Engineering science and mechanics
Environmental engineering science
Index of engineering science and mechanics articles
Industrial engineering
== Notes and references ==
== External links ==
"Engineering Physics at Xavier"
"The Engineering Physicist Profession"
"Engineering Physicist Professional Profile"
Society of Engineering Science Inc. Archived 2017-08-07 at the Wayback Machine
Celestial mechanics is the branch of astronomy that deals with the motions of objects in outer space. Historically, celestial mechanics applies principles of physics (classical mechanics) to astronomical objects, such as stars and planets, to produce ephemeris data.
== History ==
Modern analytic celestial mechanics started with Isaac Newton's Principia (1687). The name celestial mechanics is more recent than that. Newton wrote that the field should be called "rational mechanics". The term "dynamics" came in a little later with Gottfried Leibniz, and over a century after Newton, Pierre-Simon Laplace introduced the term celestial mechanics. Prior to Kepler, there was little connection between exact, quantitative prediction of planetary positions, using geometrical or numerical techniques, and contemporary discussions of the physical causes of the planets' motion.
=== Laws of planetary motion ===
Johannes Kepler was the first to closely integrate the predictive geometrical astronomy, which had been dominant from Ptolemy in the 2nd century to Copernicus, with physical concepts to produce a New Astronomy, Based upon Causes, or Celestial Physics in 1609. His work led to the laws of planetary orbits, which he developed using his physical principles and the planetary observations made by Tycho Brahe. Kepler's elliptical model greatly improved the accuracy of predictions of planetary motion, years before Newton developed his law of gravitation in 1686.
=== Newtonian mechanics and universal gravitation ===
Isaac Newton is credited with introducing the idea that the motion of objects in the heavens, such as planets, the Sun, and the Moon, and the motion of objects on the ground, like cannon balls and falling apples, could be described by the same set of physical laws. In this sense he unified celestial and terrestrial dynamics. Using his law of gravity, Newton confirmed Kepler's laws for elliptical orbits by deriving them from the gravitational two-body problem, which Newton included in his epochal Philosophiæ Naturalis Principia Mathematica in 1687.
=== Three-body problem ===
After Newton, Joseph-Louis Lagrange attempted to solve the three-body problem in 1772, analyzed the stability of planetary orbits, and discovered the existence of the Lagrange points. Lagrange also reformulated the principles of classical mechanics, emphasizing energy more than force, and developing a method to use a single polar coordinate equation to describe any orbit, even those that are parabolic and hyperbolic. This is useful for calculating the behaviour of planets and comets and such (parabolic and hyperbolic orbits are conic section extensions of Kepler's elliptical orbits). More recently, it has also become useful to calculate spacecraft trajectories.
Henri Poincaré published two now-classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotics, and so on). Poincaré showed that the three-body problem is not integrable. In other words, the general solution of the three-body problem cannot be expressed in terms of algebraic and transcendental functions through unambiguous coordinates and velocities of the bodies. His work in this area was the first major achievement in celestial mechanics since Isaac Newton.
These monographs include an idea of Poincaré, which later became the basis for mathematical "chaos theory" (see, in particular, the Poincaré recurrence theorem) and the general theory of dynamical systems. He introduced the important concept of bifurcation points and proved the existence of equilibrium figures such as the non-ellipsoids, including ring-shaped and pear-shaped figures, and their stability. For this discovery, Poincaré received the Gold Medal of the Royal Astronomical Society (1900).
=== Standardisation of astronomical tables ===
Simon Newcomb was a Canadian-American astronomer who revised Peter Andreas Hansen's table of lunar positions. In 1877, assisted by George William Hill, he recalculated all the major astronomical constants. After 1884 he conceived, with A.M.W. Downing, a plan to resolve much international confusion on the subject. By the time he attended a standardisation conference in Paris, France, in May 1886, the international consensus was that all ephemerides should be based on Newcomb's calculations. A further conference as late as 1950 confirmed Newcomb's constants as the international standard.
=== Anomalous precession of Mercury ===
Albert Einstein explained the anomalous precession of Mercury's perihelion in his 1916 paper The Foundation of the General Theory of Relativity. General relativity led astronomers to recognize that Newtonian mechanics did not provide the highest accuracy.
== Examples of problems ==
Celestial motion, without additional forces such as drag forces or the thrust of a rocket, is governed by the reciprocal gravitational acceleration between masses. A generalization is the n-body problem, where a number n of masses are mutually interacting via the gravitational force. Although analytically not integrable in the general case, the integration can be well approximated numerically.
Examples:
4-body problem: spaceflight to Mars (for parts of the flight the influence of one or two bodies is very small, so that there we have a 2- or 3-body problem; see also the patched conic approximation)
3-body problem:
Quasi-satellite
Spaceflight to, and stay at a Lagrangian point
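The numerical integration mentioned above can be sketched with a toy leapfrog (kick-drift-kick) integrator. This is an illustrative sketch only, in units chosen so that G = 1, with no force softening and O(n²) pairwise forces:

```python
# Toy n-body integration with the leapfrog (kick-drift-kick) scheme.
# Illustrative only: units chosen so G = 1; bodies live in a 2-D plane.
def accelerations(masses, pos):
    n = len(masses)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += masses[j] * dx / r3
            acc[i][1] += masses[j] * dy / r3
    return acc

def step(masses, pos, vel, dt):
    # half-kick, full drift, half-kick
    acc = accelerations(masses, pos)
    vel = [[vx + 0.5 * dt * ax, vy + 0.5 * dt * ay]
           for (vx, vy), (ax, ay) in zip(vel, acc)]
    pos = [[px + dt * vx, py + dt * vy]
           for (px, py), (vx, vy) in zip(pos, vel)]
    acc = accelerations(masses, pos)
    vel = [[vx + 0.5 * dt * ax, vy + 0.5 * dt * ay]
           for (vx, vy), (ax, ay) in zip(vel, acc)]
    return pos, vel

# Sanity check: two equal masses on a circular mutual orbit.  A speed of
# sqrt(1/2) at radius 0.5 balances the unit gravitational pull exactly.
masses = [1.0, 1.0]
pos = [[-0.5, 0.0], [0.5, 0.0]]
v = 0.5 ** 0.5
vel = [[0.0, -v], [0.0, v]]
for _ in range(1000):
    pos, vel = step(masses, pos, vel, 0.01)
```

Because leapfrog is symplectic, the separation of the pair stays very close to 1 over many orbits, which is why schemes of this family are popular for long celestial-mechanics integrations.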
In the n = 2 case (the two-body problem) the configuration is much simpler than for n > 2. In this case, the system is fully integrable and exact solutions can be found.
Examples:
A binary star, e.g., Alpha Centauri (approx. the same mass)
A binary asteroid, e.g., 90 Antiope (approx. the same mass)
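Because the two-body problem is integrable, quantities such as the orbital period follow in closed form from Kepler's third law, T = 2π·sqrt(a³/GM). A small sketch with illustrative values (not figures from this article):

```python
import math

# Closed-form result of the integrable two-body problem: Kepler's third law.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
a = 1.496e11       # Earth's semi-major axis, m (1 au)

T = 2 * math.pi * math.sqrt(a**3 / (G * M_SUN))  # orbital period, s
print(f"orbital period: {T / 86400:.1f} days")   # close to one year
```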
A further simplification is based on the "standard assumptions in astrodynamics", which include that one body, the orbiting body, is much smaller than the other, the central body. This is also often approximately valid.
Examples:
The Solar System orbiting the center of the Milky Way
A planet orbiting the Sun
A moon orbiting a planet
A spacecraft orbiting Earth, a moon, or a planet (in the latter cases the approximation only applies after arrival at that orbit)
== Perturbation theory ==
Perturbation theory comprises mathematical methods that are used to find an approximate solution to a problem which cannot be solved exactly. (It is closely related to methods used in numerical analysis, which are ancient.) The earliest use of modern perturbation theory was to deal with the otherwise unsolvable mathematical problems of celestial mechanics: Newton's solution for the orbit of the Moon, which moves noticeably differently from a simple Keplerian ellipse because of the competing gravitation of the Earth and the Sun.
Perturbation methods start with a simplified form of the original problem, which is carefully chosen to be exactly solvable. In celestial mechanics, this is usually a Keplerian ellipse, which is correct when there are only two gravitating bodies (say, the Earth and the Moon), or a circular orbit, which is only correct in special cases of two-body motion, but is often close enough for practical use.
The solved, but simplified problem is then "perturbed" to make its time-rate-of-change equations for the object's position closer to the values from the real problem, such as including the gravitational attraction of a third, more distant body (the Sun). The slight changes that result from the terms in the equations – which themselves may have been simplified yet again – are used as corrections to the original solution. Because simplifications are made at every step, the corrections are never perfect, but even one cycle of corrections often provides a remarkably better approximate solution to the real problem.
There is no requirement to stop at only one cycle of corrections. A partially corrected solution can be re-used as the new starting point for yet another cycle of perturbations and corrections. In principle, for most problems the recycling and refining of prior solutions to obtain a new generation of better solutions could continue indefinitely, to any desired finite degree of accuracy.
The common difficulty with the method is that the corrections usually progressively make the new solutions very much more complicated, so each cycle is much more difficult to manage than the previous cycle of corrections. Newton is reported to have said, regarding the problem of the Moon's orbit "It causeth my head to ache."
This general procedure – starting with a simplified problem and gradually adding corrections that make the starting point of the corrected problem closer to the real situation – is a widely used mathematical tool in advanced sciences and engineering. It is the natural extension of the "guess, check, and fix" method used anciently with numbers.
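The correction cycle described above can be illustrated with Kepler's equation, E = M + e·sin E, which relates time to position on an elliptical orbit. The zeroth guess E ≈ M plays the role of the simplified problem; each cycle feeds the previous answer back in. An illustrative sketch:

```python
import math

# Successive-approximation solution of Kepler's equation E = M + e*sin(E).
# The starting guess E = M is the "simplified problem"; each loop iteration
# is one cycle of correction, as described in the text above.
def solve_kepler(M, e, cycles=10):
    E = M                        # unperturbed starting point
    for _ in range(cycles):
        E = M + e * math.sin(E)  # one cycle of correction
    return E

M, e = 1.0, 0.1                  # mean anomaly (rad), small eccentricity
E = solve_kepler(M, e)
print(f"eccentric anomaly: {E:.6f} rad")
```

For small eccentricity each cycle shrinks the error by roughly a factor of e, so a handful of cycles already satisfies the equation to high precision, mirroring how one or two perturbation cycles often suffice in practice.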
== Reference frame ==
Problems in celestial mechanics are often posed in simplifying reference frames, such as the synodic reference frame applied to the three-body problem, where the origin coincides with the barycenter of the two larger celestial bodies. Other reference frames for n-body simulations include those that place the origin to follow the center of mass of a body, such as the heliocentric and the geocentric reference frames. The choice of reference frame gives rise to many phenomena, including the retrograde motion of superior planets while on a geocentric reference frame.
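As an illustration of how the choice of frame produces apparent retrograde motion, the sketch below tracks the geocentric longitude of a superior planet. The orbits are hypothetical idealizations (circular, coplanar, with radii and periods only roughly those of Earth and Mars):

```python
import math

# Apparent retrograde motion from a geocentric frame (illustrative sketch).
# Circular, coplanar heliocentric orbits; distances in au, time in years.
def helio(a, period, t):
    ang = 2 * math.pi * t / period
    return a * math.cos(ang), a * math.sin(ang)

def geocentric_longitude(t):
    ex, ey = helio(1.0, 1.0, t)        # "Earth"
    mx, my = helio(1.524, 1.881, t)    # "Mars"
    return math.atan2(my - ey, mx - ex)

# Sample the apparent longitude; near opposition it runs backwards even
# though both heliocentric motions are steadily prograde.
lons = [geocentric_longitude(t / 100) for t in range(120)]
retrograde = any(
    ((b - a + math.pi) % (2 * math.pi)) - math.pi < 0
    for a, b in zip(lons, lons[1:])
)
print("retrograde intervals found:", retrograde)
```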
== Orbital mechanics ==
== See also ==
Astrometry is a part of astronomy that deals with measuring the positions of stars and other celestial bodies, their distances and movements.
Astrophysics
Celestial navigation is a position fixing technique that was the first system devised to help sailors locate themselves on a featureless ocean.
Developmental Ephemeris or the Jet Propulsion Laboratory Developmental Ephemeris (JPL DE) is a widely used model of the solar system, which combines celestial mechanics with numerical analysis and astronomical and spacecraft data.
Dynamics of the celestial spheres concerns pre-Newtonian explanations of the causes of the motions of the stars and planets.
Dynamical time scale
Ephemeris is a compilation of positions of naturally occurring astronomical objects as well as artificial satellites in the sky at a given time or times.
Gravitation
Lunar theory attempts to account for the motions of the Moon.
Numerical analysis is a branch of mathematics, pioneered by celestial mechanicians, for calculating approximate numerical answers (such as the position of a planet in the sky) which are too difficult to solve down to a general, exact formula.
Creating a numerical model of the solar system was the original goal of celestial mechanics, and has only been imperfectly achieved. It continues to motivate research.
An orbit is the path that an object makes, around another object, whilst under the influence of a source of centripetal force, such as gravity.
Orbital elements are the parameters needed to specify a Newtonian two-body orbit uniquely.
Osculating orbit is the temporary Keplerian orbit about a central body that an object would continue on, if other perturbations were not present.
Retrograde motion is orbital motion in a system, such as a planet and its satellites, that is contrary to the direction of rotation of the central body, or more generally contrary in direction to the net angular momentum of the entire system.
Apparent retrograde motion is the periodic, apparently backwards motion of planetary bodies when viewed from the Earth (an accelerated reference frame).
Satellite is an object that orbits another object (known as its primary). The term is often used to describe an artificial satellite (as opposed to natural satellites, or moons). The common noun ‘moon’ (not capitalized) is used to mean any natural satellite of the other planets.
Tidal force is the combination of out-of-balance forces and accelerations of (mostly) solid bodies that raises tides in bodies of liquid (oceans), atmospheres, and strains planets' and satellites' crusts.
Two solutions, called VSOP82 and VSOP87, are versions of one mathematical theory for the orbits and positions of the major planets, which seeks to provide accurate positions over an extended period of time.
== Notes ==
== References ==
== Further reading ==
Encyclopedia:Celestial mechanics Scholarpedia Expert articles
Poincaré, H. (1967). New Methods of Celestial Mechanics (3 vol. English translated ed.). American Institute of Physics. ISBN 978-1-56396-117-5.
== External links ==
Calvert, James B. (2003-03-28), Celestial Mechanics, University of Denver, archived from the original on 2006-09-07, retrieved 2006-08-21
Astronomy of the Earth's Motion in Space, high-school level educational web site by David P. Stern
Newtonian Dynamics Undergraduate level course by Richard Fitzpatrick. This includes Lagrangian and Hamiltonian Dynamics and applications to celestial mechanics, gravitational potential theory, the 3-body problem and Lunar motion (an example of the 3-body problem with the Sun, Moon, and the Earth).
Research
Marshall Hampton's research page: Central configurations in the n-body problem Archived 2002-10-01 at the Wayback Machine
Artwork
Celestial Mechanics is a Planetarium Artwork created by D. S. Hessels and G. Dunne
Course notes
Professor Tatum's course notes at the University of Victoria
Associations
Italian Celestial Mechanics and Astrodynamics Association
Simulations
Combinatorial physics or physical combinatorics is the area of interaction between physics and combinatorics.
== Overview ==
"Combinatorial Physics is an emerging area which unites combinatorial and discrete mathematical techniques applied to theoretical physics, especially Quantum Theory."
"Physical combinatorics might be defined naively as combinatorics guided by ideas or insights from physics."
Combinatorics has always played an important role in quantum field theory and statistical physics. However, combinatorial physics only emerged as a specific field after a seminal work by Alain Connes and Dirk Kreimer, showing that the renormalization of Feynman diagrams can be described by a Hopf algebra.
Combinatorial physics can be characterized by the use of algebraic concepts to interpret and solve physical problems involving combinatorics. It gives rise to a particularly harmonious collaboration between mathematicians and physicists.
Among the significant physical results of combinatorial physics, we may mention the reinterpretation of renormalization as a Riemann–Hilbert problem, the fact that the Slavnov–Taylor identities of gauge theories generate a Hopf ideal, the quantization of fields and strings, and a completely algebraic description of the combinatorics of quantum field theory. An important example of applying combinatorics to physics is the enumeration of alternating sign matrices in the solution of ice-type models. The corresponding ice-type model is the six vertex model with domain wall boundary conditions.
== See also ==
Mathematical physics
Statistical physics
Ising model
Percolation theory
Tutte polynomial
Partition function
Hopf algebra
Combinatorics and dynamical systems
Quantum mechanics
== References ==
== Further reading ==
Some Open Problems in Combinatorial Physics, G. Duchamp, H. Cheballah
One-parameter groups and combinatorial physics, G. Duchamp, K.A. Penson, A.I. Solomon, A.Horzela, P.Blasiak
Combinatorial Physics, Normal Order and Model Feynman Graphs, A.I. Solomon, P. Blasiak, G. Duchamp, A. Horzela, K.A. Penson
Hopf Algebras in General and in Combinatorial Physics: a practical introduction, G. Duchamp, P. Blasiak, A. Horzela, K.A. Penson, A.I. Solomon
Discrete and Combinatorial Physics
Bit-String Physics: a Novel "Theory of Everything", H. Pierre Noyes
Combinatorial Physics, Ted Bastin, Clive W. Kilmister, World Scientific, 1995, ISBN 981-02-2212-2
Physical Combinatorics and Quasiparticles, Giovanni Feverati, Paul A. Pearce, Nicholas S. Witte
Fitzgerald, Hannah. "Physical Combinatorics of Non-Unitary Minimal Models" (PDF). CiteSeerX 10.1.1.46.4129. Archived from the original (PDF) on 4 March 2016. Retrieved 17 August 2014.
Paths, Crystals and Fermionic Formulae, G.Hatayama, A.Kuniba, M.Okado, T.Takagi, Z.Tsuboi
On powers of Stirling matrices, István Mező
"On cluster expansions in graph theory and physics", N BIGGS — The Quarterly Journal of Mathematics, 1978 - Oxford Univ Press
Enumeration Of Rational Curves Via Torus Actions, Maxim Kontsevich, 1995
Non-commutative Calculus and Discrete Physics, Louis H. Kauffman, February 1, 2008
Sequential cavity method for computing free energy and surface pressure, David Gamarnik, Dmitriy Katz, July 9, 2008
=== Combinatorics and statistical physics ===
"Graph Theory and Statistical Physics", J.W. Essam, Discrete Mathematics, 1, 83-112 (1971).
Combinatorics In Statistical Physics
Hard Constraints and the Bethe Lattice: Adventures at the Interface of Combinatorics and Statistical Physics, Graham Brightwell, Peter Winkler
Graphs, Morphisms, and Statistical Physics: DIMACS Workshop Graphs, Morphisms and Statistical Physics, March 19-21, 2001, DIMACS Center, Jaroslav Nešetřil, Peter Winkler, AMS Bookstore, 2001, ISBN 0-8218-3551-3
=== Conference proceedings ===
Proc. of Combinatorics and Physics, Los Alamos, August 1998
Physics and Combinatorics 1999: Proceedings of the Nagoya 1999 International Workshop, Anatol N. Kirillov, Akihiro Tsuchiya, Hiroshi Umemura, World Scientific, 2001, ISBN 981-02-4578-5
Physics and combinatorics 2000: proceedings of the Nagoya 2000 International Workshop, Anatol N. Kirillov, Nadejda Liskova, World Scientific, 2001, ISBN 981-02-4642-0
Asymptotic combinatorics with applications to mathematical physics: a European mathematical summer school held at the Euler Institute, St. Petersburg, Russia, July 9-20, 2001, Anatoliĭ, Moiseevich Vershik, Springer, 2002, ISBN 3-540-40312-4
Counting Complexity: An International Workshop On Statistical Mechanics And Combinatorics, 10–15 July 2005, Dunk Island, Queensland, Australia
Proceedings of the Conference on Combinatorics and Physics, MPIM Bonn, March 19–23, 2007
General relativity, also known as the general theory of relativity, and as Einstein's theory of gravity, is the geometric theory of gravitation published by Albert Einstein in 1915 and is the current description of gravitation in modern physics. General relativity generalizes special relativity and refines Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or four-dimensional spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever is
present, including matter and radiation. The relation is specified by the Einstein field equations, a system of second-order partial differential equations.
Newton's law of universal gravitation, which describes gravity in classical mechanics, can be seen as a prediction of general relativity for the almost flat spacetime geometry around stationary mass distributions. Some predictions of general relativity, however, are beyond Newton's law of universal gravitation in classical physics. These predictions concern the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light, and include gravitational time dilation, gravitational lensing, the gravitational redshift of light, the Shapiro time delay and singularities/black holes. So far, all tests of general relativity have been shown to be in agreement with the theory. The time-dependent solutions of general relativity enable us to talk about the history of the universe and have provided the modern framework for cosmology, thus leading to the discovery of the Big Bang and cosmic microwave background radiation. Despite the introduction of a number of alternative theories, general relativity continues to be the simplest theory consistent with experimental data.
Reconciliation of general relativity with the laws of quantum physics remains a problem, however, as there is a lack of a self-consistent theory of quantum gravity. It is not yet known how gravity can be unified with the three non-gravitational forces: strong, weak and electromagnetic.
Einstein's theory has astrophysical implications, including the prediction of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape from them. Black holes are the end-state for massive stars. Microquasars and active galactic nuclei are believed to contain stellar black holes and supermassive black holes, respectively. The theory also predicts gravitational lensing, where the bending of light results in multiple images of the same distant astronomical phenomenon. Other predictions include the existence of gravitational waves, which have been observed directly by the physics collaboration LIGO and other observatories. In addition, general relativity has provided the basis for cosmological models of an expanding universe.
Widely acknowledged as a theory of extraordinary beauty, general relativity has often been described as the most beautiful of all existing physical theories.
== History ==
Henri Poincaré's 1905 theory of the dynamics of the electron was a relativistic theory which he applied to all forces, including gravity. While others thought that gravity was instantaneous or of electromagnetic origin, he suggested that relativity was "something due to our methods of measurement". In his theory, he showed that gravitational waves propagate at the speed of light. Soon afterwards, Einstein started thinking about how to incorporate gravity into his relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall (FFO), he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations, which form the core of Einstein's general theory of relativity. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. A version of non-Euclidean geometry, called Riemannian geometry, enabled Einstein to develop general relativity by providing the key mathematical framework on which he fit his physical ideas of gravity. This idea was pointed out by mathematician Marcel Grossmann and published by Grossmann and Einstein in 1913.
The Einstein field equations are nonlinear and considered difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. But in 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse, and the objects known today as black holes. In the same year, the first steps towards generalizing Schwarzschild's solution to electrically charged objects were taken, eventually resulting in the Reissner–Nordström solution, which is now associated with electrically charged black holes. In 1917, Einstein applied his theory to the universe as a whole, initiating the field of relativistic cosmology. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to match that observational presumption. By 1929, however, the work of Hubble and others had shown that the universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which the universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life.
During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein showed in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors"), and in 1919 an expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of 29 May 1919, instantly making Einstein famous. Yet the theory remained outside the mainstream of theoretical physics and astrophysics until developments between approximately 1960 and 1975, now known as the golden age of general relativity. Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations. Ever more precise solar system tests confirmed the theory's predictive power, and relativistic cosmology also became amenable to direct observational tests.
General relativity has acquired a reputation as a theory of extraordinary beauty. Subrahmanyan Chandrasekhar has noted that at multiple levels, general relativity exhibits what Francis Bacon has termed a "strangeness in the proportion" (i.e. elements that excite wonderment and surprise). It juxtaposes fundamental concepts (space and time versus matter and motion) which had previously been considered as entirely independent. Chandrasekhar also noted that Einstein's only guides in his search for an exact theory were the principle of equivalence and his sense that a proper description of gravity should be geometrical at its basis, so that there was an "element of revelation" in the manner in which Einstein arrived at his theory. Other elements of beauty associated with the general theory of relativity are its simplicity and symmetry, the manner in which it incorporates invariance and unification, and its perfect logical consistency.
In the preface to Relativity: The Special and the General Theory, Einstein said "The present book is intended, as far as possible, to give an exact insight into the theory of Relativity to those readers who, from a general scientific and philosophical point of view, are interested in the theory, but who are not conversant with the mathematical apparatus of theoretical physics. The work presumes a standard of education corresponding to that of a university matriculation examination, and, despite the shortness of the book, a fair amount of patience and force of will on the part of the reader. The author has spared himself no pains in his endeavour to present the main ideas in the simplest and most intelligible form, and on the whole, in the sequence and connection in which they actually originated."
== From classical mechanics to general relativity ==
General relativity can be understood by examining its similarities with and departures from classical physics. The first step is the realization that classical mechanics and Newton's law of gravity admit a geometric description. The combination of this description with the laws of special relativity results in a heuristic derivation of general relativity.
=== Geometry of Newtonian gravity ===
At the base of classical mechanics is the notion that a body's motion can be described as a combination of free (or inertial) motion, and deviations from this free motion. Such deviations are caused by external forces acting on a body in accordance with Newton's second law of motion, which states that the net force acting on a body is equal to that body's (inertial) mass multiplied by its acceleration. The preferred inertial motions are related to the geometry of space and time: in the standard reference frames of classical mechanics, objects in free motion move along straight lines at constant speed. In modern parlance, their paths are geodesics, straight world lines in curved spacetime.
Conversely, one might expect that inertial motions, once identified by observing the actual motions of bodies and making allowances for the external forces (such as electromagnetism or friction), can be used to define the geometry of space, as well as a time coordinate. However, there is an ambiguity once gravity comes into play. According to Newton's law of gravity, and independently verified by experiments such as that of Eötvös and its successors (see Eötvös experiment), there is a universality of free fall (also known as the weak equivalence principle, or the universal equality of inertial and passive-gravitational mass): the trajectory of a test body in free fall depends only on its position and initial speed, but not on any of its material properties. A simplified version of this is embodied in Einstein's elevator experiment, illustrated in the figure on the right: for an observer in an enclosed room, it is impossible to decide, by mapping the trajectory of bodies such as a dropped ball, whether the room is stationary in a gravitational field and the ball accelerating, or in free space aboard a rocket that is accelerating at a rate equal to that of the gravitational field versus the ball which upon release has nil acceleration.
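The equivalence at the heart of the elevator experiment can be checked numerically. The following sketch (our own illustration, not part of the historical argument) integrates the dropped ball's motion with identical time steps in both scenarios:

```python
# Einstein's elevator, numerically: the ball's trajectory relative to the
# room is the same whether the room sits in a uniform gravitational field g
# or accelerates at g in gravity-free space.
g, dt, steps = 9.81, 1e-3, 1000

# Case 1: room at rest in a gravitational field; the ball accelerates at -g.
y_ball, v_ball = 0.0, 0.0
for _ in range(steps):
    v_ball -= g * dt
    y_ball += v_ball * dt
rel_gravity = y_ball  # ball's position relative to the stationary room

# Case 2: rocket accelerating upward at g; the ball moves inertially.
y_floor, v_floor = 0.0, 0.0
for _ in range(steps):
    v_floor += g * dt
    y_floor += v_floor * dt
rel_rocket = 0.0 - y_floor  # inertial ball relative to the accelerating cabin

print(abs(rel_gravity - rel_rocket) < 1e-9)  # True: the two are indistinguishable
```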
Given the universality of free fall, there is no observable distinction between inertial motion and motion under the influence of the gravitational force. This suggests the definition of a new class of inertial motion, namely that of objects in free fall under the influence of gravity. This new class of preferred motions, too, defines a geometry of space and time—in mathematical terms, it is the geodesic motion associated with a specific connection which depends on the gradient of the gravitational potential. Space, in this construction, still has the ordinary Euclidean geometry. However, spacetime as a whole is more complicated. As can be shown using simple thought experiments following the free-fall trajectories of different test particles, the result of transporting spacetime vectors that can denote a particle's velocity (time-like vectors) will vary with the particle's trajectory; mathematically speaking, the Newtonian connection is not integrable. From this, one can deduce that spacetime is curved. The resulting Newton–Cartan theory is a geometric formulation of Newtonian gravity using only covariant concepts, i.e. a description which is valid in any desired coordinate system. In this geometric description, tidal effects—the relative acceleration of bodies in free fall—are related to the derivative of the connection, showing how the modified geometry is caused by the presence of mass.
=== Relativistic generalization ===
As intriguing as geometric Newtonian gravity may be, its basis, classical mechanics, is merely a limiting case of (special) relativistic mechanics. In the language of symmetry: where gravity can be neglected, physics is Lorentz invariant as in special relativity rather than Galilei invariant as in classical mechanics. (The defining symmetry of special relativity is the Poincaré group, which includes translations, rotations, boosts and reflections.) The differences between the two become significant when dealing with speeds approaching the speed of light, and with high-energy phenomena.
With Lorentz symmetry, additional structures come into play. They are defined by the set of light cones (see image). The light-cones define a causal structure: for each event A, there is a set of events that can, in principle, either influence or be influenced by A via signals or interactions that do not need to travel faster than light (such as event B in the image), and a set of events for which such an influence is impossible (such as event C in the image). These sets are observer-independent. In conjunction with the world-lines of freely falling particles, the light-cones can be used to reconstruct the spacetime's semi-Riemannian metric, at least up to a positive scalar factor. In mathematical terms, this defines a conformal structure or conformal geometry.
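The causal classification that light cones encode can be made concrete with the Minkowski interval, here with signature (−, +, +, +) and units in which c = 1; the function names below are our own:

```python
# Classify pairs of events by the Minkowski interval
# s^2 = -dt^2 + dx^2 + dy^2 + dz^2 (signature -+++, c = 1).
def interval2(e1, e2):
    dt, dx, dy, dz = (b - a for a, b in zip(e1, e2))
    return -dt**2 + dx**2 + dy**2 + dz**2

def causal_relation(e1, e2):
    s2 = interval2(e1, e2)
    if s2 < 0:
        return "timelike"   # one event can influence the other (inside the cone)
    if s2 > 0:
        return "spacelike"  # no influence possible (outside the cone)
    return "lightlike"      # connected only by a light signal (on the cone)

A = (0, 0, 0, 0)
print(causal_relation(A, (2, 1, 0, 0)))  # timelike: like event B in the image
print(causal_relation(A, (1, 2, 0, 0)))  # spacelike: like event C in the image
```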
Special relativity is defined in the absence of gravity. For practical applications, it is a suitable model whenever gravity can be neglected. Bringing gravity into play, and assuming the universality of free fall motion, an analogous reasoning as in the previous section applies: there are no global inertial frames. Instead there are approximate inertial frames moving alongside freely falling particles. Translated into the language of spacetime: the straight time-like lines that define a gravity-free inertial frame are deformed to lines that are curved relative to each other, suggesting that the inclusion of gravity necessitates a change in spacetime geometry.
A priori, it is not clear whether the new local frames in free fall coincide with the reference frames in which the laws of special relativity hold—that theory is based on the propagation of light, and thus on electromagnetism, which could have a different set of preferred frames. But using different assumptions about the special-relativistic frames (such as their being earth-fixed, or in free fall), one can derive different predictions for the gravitational redshift, that is, the way in which the frequency of light shifts as the light propagates through a gravitational field (cf. below). The actual measurements show that free-falling frames are the ones in which light propagates as it does in special relativity. The generalization of this statement, namely that the laws of special relativity hold to good approximation in freely falling (and non-rotating) reference frames, is known as the Einstein equivalence principle, a crucial guiding principle for generalizing special-relativistic physics to include gravity.
The same experimental data shows that time as measured by clocks in a gravitational field—proper time, to give the technical term—does not follow the rules of special relativity. In the language of spacetime geometry, it is not measured by the Minkowski metric. As in the Newtonian case, this is suggestive of a more general geometry. At small scales, all reference frames that are in free fall are equivalent, and approximately Minkowskian. Consequently, we are now dealing with a curved generalization of Minkowski space. The metric tensor that defines the geometry—in particular, how lengths and angles are measured—is not the Minkowski metric of special relativity, it is a generalization known as a semi- or pseudo-Riemannian metric. Furthermore, each Riemannian metric is naturally associated with one particular kind of connection, the Levi-Civita connection, and this is, in fact, the connection that satisfies the equivalence principle and makes space locally Minkowskian (that is, in suitable locally inertial coordinates, the metric is Minkowskian, and its first partial derivatives and the connection coefficients vanish).
=== Einstein's equations ===
Having formulated the relativistic, geometric version of the effects of gravity, the question of gravity's source remains. In Newtonian gravity, the source is mass. In special relativity, mass turns out to be part of a more general quantity called the energy–momentum tensor, which includes both energy and momentum densities as well as stress: pressure and shear. Using the equivalence principle, this tensor is readily generalized to curved spacetime. Drawing further upon the analogy with geometric Newtonian gravity, it is natural to assume that the field equation for gravity relates this tensor and the Ricci tensor, which describes a particular class of tidal effects: the change in volume for a small cloud of test particles that are initially at rest, and then fall freely. In special relativity, conservation of energy–momentum corresponds to the statement that the energy–momentum tensor is divergence-free. This formula, too, is readily generalized to curved spacetime by replacing partial derivatives with their curved-manifold counterparts, covariant derivatives studied in differential geometry. With this additional condition—the covariant divergence of the energy–momentum tensor, and hence of whatever is on the other side of the equation, is zero—the simplest nontrivial set of equations are what are called Einstein's (field) equations:
$$G_{\mu\nu}=\kappa T_{\mu\nu}$$

On the left-hand side is the Einstein tensor, $G_{\mu\nu}$, which is symmetric and a specific divergence-free combination of the Ricci tensor $R_{\mu\nu}$ and the metric. In particular, $R=g^{\mu\nu}R_{\mu\nu}$ is the curvature scalar. The Ricci tensor itself is related to the more general Riemann curvature tensor as $R_{\mu\nu}={R^{\alpha}}_{\mu\alpha\nu}$. On the right-hand side, $\kappa$ is a constant and $T_{\mu\nu}$ is the energy–momentum tensor. All tensors are written in abstract index notation. Matching the theory's prediction to observational results for planetary orbits or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics, the proportionality constant is found to be $\kappa=\frac{8\pi G}{c^{4}}$, where $G$ is the Newtonian constant of gravitation and $c$ the speed of light in vacuum. When there is no matter present, so that the energy–momentum tensor vanishes, the results are the vacuum Einstein equations,

$$R_{\mu\nu}=0.$$
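For orientation, the numerical size of the coupling constant follows directly from the CODATA values of G and c (a quick sanity check, not part of the article):

```python
import math

# kappa = 8*pi*G / c^4, the coupling in the Einstein field equations.
G = 6.67430e-11      # Newtonian constant of gravitation, m^3 kg^-1 s^-2
c = 2.99792458e8     # speed of light in vacuum, m/s

kappa = 8 * math.pi * G / c**4
print(f"{kappa:.3e}")  # 2.077e-43 (units s^2 m^-1 kg^-1)
```

The smallness of κ expresses how much energy–momentum is needed to curve spacetime appreciably.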
In general relativity, the world line of a particle free from all external, non-gravitational force is a particular type of geodesic in curved spacetime. In other words, a freely moving or falling particle always moves along a geodesic.
The geodesic equation is:
$$\frac{d^{2}x^{\mu}}{ds^{2}}+\Gamma^{\mu}{}_{\alpha\beta}\,\frac{dx^{\alpha}}{ds}\frac{dx^{\beta}}{ds}=0,$$

where $s$ is a scalar parameter of motion (e.g. the proper time), and $\Gamma^{\mu}{}_{\alpha\beta}$ are the Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients), which are symmetric in the two lower indices. Greek indices may take the values 0, 1, 2, 3, and the Einstein summation convention is used for the repeated indices $\alpha$ and $\beta$, meaning they are summed from zero to three. The quantity on the left-hand side of this equation is the acceleration of a particle, so this equation is analogous to Newton's laws of motion, which likewise provide formulae for the acceleration of a particle. The Christoffel symbols are functions of the four spacetime coordinates and so are independent of the velocity, acceleration, or other characteristics of a test particle whose motion is described by the geodesic equation.
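The Christoffel symbols themselves are built from the metric via $\Gamma^{\mu}{}_{\alpha\beta}=\tfrac{1}{2}g^{\mu\nu}(\partial_{\alpha}g_{\nu\beta}+\partial_{\beta}g_{\nu\alpha}-\partial_{\nu}g_{\alpha\beta})$, which can be evaluated numerically by finite differences. The sketch below (our own illustration) uses the flat plane in polar coordinates, whose only nonzero symbols are known in closed form:

```python
# Finite-difference Christoffel symbols for a 2D metric.  Test case:
# ds^2 = dr^2 + r^2 dtheta^2, with Gamma^r_{theta theta} = -r and
# Gamma^theta_{r theta} = 1/r.

def metric(x):
    r, theta = x
    return [[1.0, 0.0], [0.0, r * r]]  # g_{mu nu} in coordinates (r, theta)

def inverse_diag(g):
    return [[1.0 / g[0][0], 0.0], [0.0, 1.0 / g[1][1]]]  # diagonal metrics only

def dmetric(x, k, h=1e-6):
    """Central-difference partial derivative of g_{mu nu} along coordinate k."""
    xp, xm = list(x), list(x)
    xp[k] += h
    xm[k] -= h
    gp, gm = metric(xp), metric(xm)
    return [[(gp[i][j] - gm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

def christoffel(x):
    """Gamma^mu_{ab} = 1/2 g^{mu nu} (d_a g_{nu b} + d_b g_{nu a} - d_nu g_{ab})."""
    ginv = inverse_diag(metric(x))
    d = [dmetric(x, k) for k in range(2)]
    return [[[0.5 * sum(ginv[mu][nu] * (d[a][nu][b] + d[b][nu][a] - d[nu][a][b])
                        for nu in range(2))
              for b in range(2)]
             for a in range(2)]
            for mu in range(2)]

G = christoffel([2.0, 0.3])
print(round(G[0][1][1], 6), round(G[1][0][1], 6))  # -2.0 0.5, i.e. -r and 1/r at r=2
```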
=== Total force in general relativity ===
In general relativity, the effective gravitational potential energy of an object of mass m revolving around a massive central body M is given by
$$U_{f}(r)=-\frac{GMm}{r}+\frac{L^{2}}{2mr^{2}}-\frac{GML^{2}}{mc^{2}r^{3}}$$
A conservative total force can then be obtained as its negative gradient
$$F_{f}(r)=-\frac{GMm}{r^{2}}+\frac{L^{2}}{mr^{3}}-\frac{3GML^{2}}{mc^{2}r^{4}}$$
where L is the angular momentum. The first term represents the Newtonian gravitational force, described by the inverse-square law. The second term represents the centrifugal force of circular motion. The third term is the relativistic correction.
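Setting the total force to zero locates the circular orbits. In geometric units (G = c = M = m = 1) this reduces to a quadratic in r, and the sketch below (our own, with the hypothetical helper `circular_orbit_radii`) recovers the innermost stable circular orbit:

```python
import math

# F_f(r) = 0 with G = c = M = m = 1 gives  r^2 - L^2 r + 3 L^2 = 0.
def circular_orbit_radii(L):
    disc = L**4 - 12 * L**2
    if disc < 0:
        return None  # angular momentum too small: no circular orbit exists
    s = math.sqrt(disc)
    return ((L**2 - s) / 2, (L**2 + s) / 2)  # (unstable inner, stable outer)

print(circular_orbit_radii(4.0))  # (4.0, 12.0)
```

The two roots merge when L² = 12, at r = 6, i.e. the innermost stable circular orbit at r = 6GM/c². This is a purely relativistic feature: the Newtonian potential admits a stable circular orbit for every nonzero L.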
=== Alternatives to general relativity ===
There are alternatives to general relativity built upon the same premises, which include additional rules and/or constraints, leading to different field equations. Examples are Whitehead's theory, Brans–Dicke theory, teleparallelism, f(R) gravity and Einstein–Cartan theory.
== Definition and basic applications ==
The derivation outlined in the previous section contains all the information needed to define general relativity, describe its key properties, and address a question of crucial importance in physics, namely how the theory can be used for model-building.
=== Definition and basic properties ===
General relativity is a metric theory of gravitation. At its core are Einstein's equations, which describe the relation between the geometry of a four-dimensional pseudo-Riemannian manifold representing spacetime, and the energy–momentum contained in that spacetime. Phenomena that in classical mechanics are ascribed to the action of the force of gravity (such as free-fall, orbital motion, and spacecraft trajectories), correspond to inertial motion within a curved geometry of spacetime in general relativity; there is no gravitational force deflecting objects from their natural, straight paths. Instead, gravity corresponds to changes in the properties of space and time, which in turn changes the straightest-possible paths that objects will naturally follow. The curvature is, in turn, caused by the energy–momentum of matter. Paraphrasing the relativist John Archibald Wheeler, spacetime tells matter how to move; matter tells spacetime how to curve.
While general relativity replaces the scalar gravitational potential of classical physics by a symmetric rank-two tensor, the latter reduces to the former in certain limiting cases. For weak gravitational fields and slow speed relative to the speed of light, the theory's predictions converge on those of Newton's law of universal gravitation.
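How small the relativistic corrections are in practice can be gauged from the dimensionless compactness 2GM/(c²r), which measures the departure of the metric from its flat-spacetime value. The sketch below uses illustrative round numbers:

```python
# Compactness 2GM/(c^2 r) for a few bodies (round illustrative values).
G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s

def compactness(M, r):
    return 2 * G * M / (c**2 * r)

print(f"Earth surface: {compactness(5.972e24, 6.371e6):.1e}")  # 1.4e-09
print(f"Sun surface:   {compactness(1.989e30, 6.963e8):.1e}")  # 4.2e-06
print(f"neutron star:  {compactness(2.8e30, 1.2e4):.2f}")      # 0.35
```

For the Earth and Sun the ratio is tiny, which is why Newtonian gravity works so well there; only for compact objects such as neutron stars and black holes does it approach unity.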
As it is constructed using tensors, general relativity exhibits general covariance: its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems. Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background independent. It thus satisfies a more stringent general principle of relativity, namely that the laws of physics are the same for all observers. Locally, as expressed in the equivalence principle, spacetime is Minkowskian, and the laws of physics exhibit local Lorentz invariance.
=== Model-building ===
The core concept of general-relativistic model-building is that of a solution of Einstein's equations. Given both Einstein's equations and suitable equations for the properties of matter, such a solution consists of a specific semi-Riemannian manifold (usually defined by giving the metric in specific coordinates), and specific matter fields defined on that manifold. Matter and geometry must satisfy Einstein's equations, so in particular, the matter's energy–momentum tensor must be divergence-free. The matter must, of course, also satisfy whatever additional equations were imposed on its properties. In short, such a solution is a model universe that satisfies the laws of general relativity, and possibly additional laws governing whatever matter might be present.
Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly. Nevertheless, a number of exact solutions are known, although only a few have direct physical applications. The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution, the Reissner–Nordström solution and the Kerr metric, each corresponding to a certain type of black hole in an otherwise empty universe, and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes, each describing an expanding cosmos. Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub–NUT solution (a model universe that is homogeneous, but anisotropic), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture).
Given the difficulty of finding exact solutions, Einstein's field equations are also solved frequently by numerical integration on a computer, or by considering small perturbations of exact solutions. In the field of numerical relativity, powerful computers are employed to simulate the geometry of spacetime and to solve Einstein's equations for interesting situations such as two colliding black holes. In principle, such methods may be applied to any system, given sufficient computer resources, and may address fundamental questions such as naked singularities. Approximate solutions may also be found by perturbation theories such as linearized gravity and its generalization, the post-Newtonian expansion, both of which were developed by Einstein. The latter provides a systematic approach to solving for the geometry of a spacetime that contains a distribution of matter that moves slowly compared with the speed of light. The expansion involves a series of terms; the first terms represent Newtonian gravity, whereas the later terms represent ever smaller corrections to Newton's theory due to general relativity. An extension of this expansion is the parametrized post-Newtonian (PPN) formalism, which allows quantitative comparisons between the predictions of general relativity and alternative theories.
== Consequences of Einstein's theory ==
General relativity has a number of physical consequences. Some follow directly from the theory's axioms, whereas others have become clear only in the course of many years of research that followed Einstein's initial publication.
=== Gravitational time dilation and frequency shift ===
Assuming that the equivalence principle holds, gravity influences the passage of time. Light sent down into a gravity well is blueshifted, whereas light sent in the opposite direction (i.e., climbing out of the gravity well) is redshifted; collectively, these two effects are known as the gravitational frequency shift. More generally, processes close to a massive body run more slowly when compared with processes taking place farther away; this effect is known as gravitational time dilation.
Gravitational redshift has been measured in the laboratory and using astronomical observations. Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks, while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS). Tests in stronger gravitational fields are provided by the observation of binary pulsars. All results are in agreement with general relativity. However, at the current level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid.
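The GPS side effect mentioned above can be estimated from the weak-field formulas. The sketch below (round illustrative numbers, circular-orbit approximation) reproduces the well-known net offset of roughly 38 microseconds per day:

```python
# GPS clock-rate budget: gravitational time dilation makes the satellite
# clock run fast relative to the ground; its orbital speed (special
# relativity) makes it run slow.  Both are tiny fractional rates.
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
c = 2.998e8          # speed of light, m/s
R_earth = 6.371e6    # mean Earth radius, m
r_gps = 2.6561e7     # GPS orbital radius, m
day = 86400.0        # seconds

grav = GM / c**2 * (1 / R_earth - 1 / r_gps)  # weak-field gravitational shift
kin = GM / (2 * c**2 * r_gps)                 # v^2/(2c^2), with v^2 = GM/r
net_us = (grav - kin) * day * 1e6             # microseconds gained per day
print(round(net_us, 1))                        # ~38.5
```

Without this relativistic correction, GPS position fixes would drift by kilometres per day.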
=== Light deflection and gravitational time delay ===
General relativity predicts that the path of light will follow the curvature of spacetime as it passes near a massive object. This effect was initially confirmed by observing the light of stars or distant quasars being deflected as it passes the Sun.
This and related predictions follow from the fact that light follows what is called a light-like or null geodesic—a generalization of the straight lines along which light travels in classical physics. Such geodesics are the generalization of the invariance of lightspeed in special relativity. As one examines suitable model spacetimes (either the exterior Schwarzschild solution or, for more than a single mass, the post-Newtonian expansion), several effects of gravity on light propagation emerge. Although the bending of light can also be derived by extending the universality of free fall to light, the angle of deflection resulting from such calculations is only half the value given by general relativity.
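The factor of two can be checked against the classic solar value: the general-relativistic deflection of a ray grazing the Sun is δ = 4GM/(c²b). The sketch below uses standard solar parameters:

```python
import math

# Light deflection by the Sun: delta = 4GM/(c^2 b), twice the value
# obtained by treating light as a Newtonian test particle.
GM_sun = 1.327e20    # solar gravitational parameter, m^3/s^2
c = 2.998e8          # m/s
b = 6.963e8          # impact parameter = solar radius, m

delta_rad = 4 * GM_sun / (c**2 * b)
delta_arcsec = delta_rad * 180 / math.pi * 3600
print(round(delta_arcsec, 2))  # 1.75, the value Eddington's expedition confirmed
```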
Closely related to light deflection is the Shapiro time delay, the phenomenon that light signals take longer to move through a gravitational field than they would in the absence of that field. There have been numerous successful tests of this prediction. In the parametrized post-Newtonian (PPN) formalism, measurements of both the deflection of light and the gravitational time delay determine a parameter called γ, which encodes the influence of gravity on the geometry of space.
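The magnitude of the effect can be estimated from the standard weak-field expression for the round-trip delay of a radar signal grazing the Sun, Δt ≈ (4GM/c³) ln(4r₁r₂/b²); the numbers below (Earth and Mars near superior conjunction) are illustrative:

```python
import math

# Round-trip Shapiro delay for an Earth-Mars radar signal grazing the Sun.
GM_sun = 1.327e20    # m^3/s^2
c = 2.998e8          # m/s
r_earth = 1.496e11   # Sun-Earth distance, m
r_mars = 2.279e11    # Sun-Mars distance, m
b = 6.963e8          # closest approach ~ one solar radius, m

dt = 4 * GM_sun / c**3 * math.log(4 * r_earth * r_mars / b**2)
print(round(dt * 1e6))  # ~247 microseconds
```

A delay of a couple hundred microseconds is exactly the scale measured in the Viking-era radar-ranging tests.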
=== Gravitational waves ===
In 1916, Albert Einstein predicted the existence of gravitational waves: ripples in the metric of spacetime that propagate at the speed of light. These are one of several analogies between weak-field gravity and electromagnetism, in that they are analogous to electromagnetic waves. On 11 February 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a pair of merging black holes.
The simplest type of such a wave can be visualized by its action on a ring of freely floating particles. A sine wave propagating through such a ring towards the reader distorts the ring in a characteristic, rhythmic fashion (animated image to the right). Since Einstein's equations are non-linear, arbitrarily strong gravitational waves do not obey linear superposition, making their description difficult. However, linear approximations of gravitational waves are sufficiently accurate to describe the exceedingly weak waves that are expected to arrive here on Earth from far-off cosmic events, which typically result in relative distances increasing and decreasing by
10−21 or less. Data analysis methods routinely make use of the fact that these linearized waves can be Fourier decomposed.
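The squeeze-and-stretch action on the particle ring, and the smallness of a 10−21 strain, can be illustrated in a few lines. The 4 km arm length and the exaggerated strain used for the ring are assumptions for illustration only.

```python
import math

# A plus-polarized, linearized gravitational wave maps a ring of free test
# particles as x -> x(1 + (h/2) cos wt), y -> y(1 - (h/2) cos wt).
def displaced(x, y, h, phase):
    factor = 0.5 * h * math.cos(phase)
    return x * (1 + factor), y * (1 - factor)

# With a greatly exaggerated strain the characteristic squeeze is visible:
x1, y1 = displaced(1.0, 1.0, 0.3, 0.0)   # h = 0.3 only for illustration

# At the realistic amplitude quoted above, h ~ 1e-21, a 4 km interferometer
# arm (LIGO scale) changes length by only h*L/2:
h, L = 1e-21, 4000.0
delta_L = 0.5 * h * L
print(f"peak arm-length change: {delta_L:.1e} m")  # far below a proton radius
```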
Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space or Gowdy universes, varieties of an expanding cosmos filled with gravitational waves. But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are presently the only way to construct appropriate models.
=== Orbital effects and the relativity of direction ===
General relativity differs from classical mechanics in a number of predictions concerning orbiting bodies. It predicts an overall rotation (precession) of planetary orbits, as well as orbital decay caused by the emission of gravitational waves and effects related to the relativity of direction.
==== Precession of apsides ====
In general relativity, the apsides of any orbit (the point of the orbiting body's closest approach to the system's center of mass) will precess; the orbit is not an ellipse, but akin to an ellipse that rotates on its focus, resulting in a rose curve-like shape (see image). Einstein first derived this result by using an approximate metric representing the Newtonian limit and treating the orbiting body as a test particle. For him, the fact that his theory gave a straightforward explanation of Mercury's anomalous perihelion shift, discovered earlier by Urbain Le Verrier in 1859, was important evidence that he had at last identified the correct form of the gravitational field equations.
The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass) or the much more general post-Newtonian formalism. It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations). Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth), as well as in binary pulsar systems, where it is larger by five orders of magnitude.
In general relativity the perihelion shift σ, expressed in radians per revolution, is approximately given by:
{\displaystyle \sigma ={\frac {24\pi ^{3}L^{2}}{T^{2}c^{2}(1-e^{2})}}\ ,}
where L is the semi-major axis, T is the orbital period, c is the speed of light in vacuum, and e is the orbital eccentricity.
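Plugging Mercury's orbital elements into this formula recovers the famous anomalous shift. The sketch below uses illustrative textbook values for the elements, not data from this article.

```python
import math

# Relativistic perihelion shift per revolution,
# sigma = 24 pi^3 L^2 / (T^2 c^2 (1 - e^2)), evaluated for Mercury.
c = 2.998e8            # speed of light, m/s
L = 5.79e10            # semi-major axis of Mercury's orbit, m
T = 87.969 * 86400     # orbital period, s
e = 0.2056             # orbital eccentricity

sigma = 24 * math.pi**3 * L**2 / (T**2 * c**2 * (1 - e**2))  # rad/revolution

revs_per_century = 100 * 365.25 * 86400 / T
arcsec = math.degrees(sigma * revs_per_century) * 3600
print(f"{arcsec:.1f} arcseconds per century")  # close to the observed ~43"
```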
==== Orbital decay ====
According to general relativity, a binary system will emit gravitational waves, thereby losing energy. Due to this loss, the distance between the two orbiting bodies decreases, and so does their orbital period. Within the Solar System or for ordinary double stars, the effect is too small to be observable. This is not the case for a close binary pulsar, a system of two orbiting neutron stars, one of which is a pulsar: from the pulsar, observers on Earth receive a regular series of radio pulses that can serve as a highly accurate clock, which allows precise measurements of the orbital period. Because neutron stars are immensely compact, significant amounts of energy are emitted in the form of gravitational radiation.
The first observation of a decrease in orbital period due to the emission of gravitational waves was made by Hulse and Taylor, using the binary pulsar PSR B1913+16 they had discovered in 1974. This was the first detection of gravitational waves, albeit indirect, for which they were awarded the 1993 Nobel Prize in Physics. Since then, several other binary pulsars have been found, in particular the double pulsar PSR J0737−3039, in which both stars are pulsars and which was last reported to be in agreement with general relativity in 2021, after 16 years of observations.
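The measured decay rate can be estimated from the standard Peters–Mathews quadrupole formula. The pulsar masses and orbital elements below are illustrative values of the published order of magnitude, not data from this article.

```python
import math

# Peters-Mathews orbital period decay of a binary due to gravitational-wave
# emission, evaluated with illustrative parameters for PSR B1913+16.
G = 6.674e-11
c = 2.998e8
M_sun = 1.989e30
m1, m2 = 1.441 * M_sun, 1.387 * M_sun   # neutron-star masses, kg
Pb = 27906.98                            # orbital period, s
e = 0.6171                               # orbital eccentricity

enhancement = 1 + (73 / 24) * e**2 + (37 / 96) * e**4
dPb_dt = (-(192 * math.pi / 5)
          * (2 * math.pi * G / Pb)**(5 / 3)
          * m1 * m2 * (m1 + m2)**(-1 / 3)
          / c**5
          * (1 - e**2)**(-7 / 2)
          * enhancement)
print(f"dPb/dt ~ {dPb_dt:.2e} s/s")  # observed value is about -2.4e-12
```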
==== Geodetic precession and frame-dragging ====
Several relativistic effects are directly related to the relativity of direction. One is geodetic precession: the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible ("parallel transport"). For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging. More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%.
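As a rough check of the quoted precision, the geodetic precession for a circular orbit, about 3πGM/(c²r) radians per orbit, can be evaluated for a Gravity Probe B-like orbit; the ~650 km altitude is an assumption for illustration.

```python
import math

# Geodetic (de Sitter) precession of a gyroscope on a circular orbit:
# roughly 3*pi*GM/(c^2 * r) radians per orbit.
G = 6.674e-11
c = 2.998e8
M_earth = 5.972e24     # Earth mass, kg
r = 7.02e6             # orbital radius, m (Earth radius + ~650 km altitude)

per_orbit = 3 * math.pi * G * M_earth / (c**2 * r)      # radians per orbit
period = 2 * math.pi * math.sqrt(r**3 / (G * M_earth))  # Kepler's third law, s
per_year = per_orbit * (365.25 * 86400 / period)
mas = math.degrees(per_year) * 3600 * 1000              # milliarcseconds/year
print(f"geodetic precession ~ {mas:.0f} milliarcseconds per year")
```

This is a few thousand milliarcseconds per year, the scale Gravity Probe B was built to resolve.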
Near a rotating mass, there are gravitomagnetic or frame-dragging effects. A distant observer will determine that objects close to the mass get "dragged around". This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere, rotation is inevitable. Such effects can again be tested through their influence on the orientation of gyroscopes in free fall. Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction. The Mars Global Surveyor probe orbiting Mars has also been used for such tests.
== Astrophysical applications ==
=== Gravitational lensing ===
The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing. Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring, or partial rings called arcs.
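The angular scale of such lensing is set by the Einstein radius. The sketch below evaluates it for an assumed galaxy-scale lens, using the flat-space shortcut D_ls = D_s − D_l rather than proper cosmological distances; mass and distances are illustrative.

```python
import math

# Einstein radius theta_E = sqrt(4GM/c^2 * D_ls / (D_l * D_s))
# for a galaxy-scale gravitational lens.
G = 6.674e-11
c = 2.998e8
M_sun = 1.989e30
Gpc = 3.086e25         # one gigaparsec in meters

M = 1e12 * M_sun       # lens mass (a large galaxy)
D_l, D_s = 1 * Gpc, 2 * Gpc   # distances to lens and source
D_ls = D_s - D_l       # flat-space simplification of the lens-source distance

theta_E = math.sqrt(4 * G * M / c**2 * D_ls / (D_l * D_s))  # radians
arcsec = math.degrees(theta_E) * 3600
print(f"Einstein radius ~ {arcsec:.1f} arcseconds")
```

The result, around an arcsecond or two, matches the typical image separations seen in galaxy-scale lenses.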
The earliest example was discovered in 1979; since then, more than a hundred gravitational lenses have been observed. Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such "microlensing events" have been observed.
Gravitational lensing has developed into a tool of observational astronomy. It is used to detect the presence and distribution of dark matter, provide a "natural telescope" for observing distant galaxies, and to obtain an independent estimate of the Hubble constant. Statistical evaluations of lensing data provide valuable insight into the structural evolution of galaxies.
=== Gravitational-wave astronomy ===
Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). Detection of these waves is a major goal of current relativity-related research. Several land-based gravitational wave detectors are currently in operation, most notably the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO. Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the 10−9 to 10−6 hertz frequency range, which originate from binary supermassive black holes. A European space-based detector, eLISA / NGO, is currently under development, with a precursor mission (LISA Pathfinder) having launched in December 2015.
Observations of gravitational waves promise to complement observations in the electromagnetic spectrum. They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic string. In February 2016, the Advanced LIGO team announced that they had detected gravitational waves from a black hole merger.
=== Black holes and other compact objects ===
Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape. In the currently accepted models of stellar evolution, neutron stars of around 1.4 solar masses, and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars. Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center, and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures.
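The mass-to-radius threshold in question is the Schwarzschild radius, r_s = 2GM/c². A minimal sketch (the masses are illustrative):

```python
# Schwarzschild radius r_s = 2GM/c^2: the compactness threshold at which
# general relativity predicts black hole formation.
G = 6.674e-11
c = 2.998e8
M_sun = 1.989e30

def schwarzschild_radius(mass_kg):
    """Radius below which a given mass forms a black hole (classical GR)."""
    return 2 * G * mass_kg / c**2

print(f"Sun: {schwarzschild_radius(M_sun) / 1e3:.2f} km")   # about 3 km
print(f"4e6 solar masses (galactic-center scale): "
      f"{schwarzschild_radius(4e6 * M_sun) / 1e9:.1f} million km")
```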
Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation. Accretion, the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, notably diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars. In particular, accretion can lead to relativistic jets, focused beams of highly energetic particles that are being flung into space at almost light speed.
General relativity plays a central role in modelling all these phenomena, and observations provide strong evidence for the existence of black holes with the properties predicted by the theory.
Black holes are also sought-after targets in the search for gravitational waves (cf. Gravitational waves, above). Merging black hole binaries should lead to some of the strongest gravitational wave signals reaching detectors here on Earth, and the phase directly before the merger ("chirp") could be used as a "standard candle" to deduce the distance to the merger events–and hence serve as a probe of cosmic expansion at large distances. The gravitational waves produced as a stellar black hole plunges into a supermassive one should provide direct information about the supermassive black hole's geometry.
=== Cosmology ===
The current models of cosmology are based on Einstein's field equations, which include the cosmological constant Λ, since it has an important influence on the large-scale dynamics of the cosmos:
{\displaystyle R_{\mu \nu }-{\textstyle 1 \over 2}R\,g_{\mu \nu }+\Lambda \ g_{\mu \nu }={\frac {8\pi G}{c^{4}}}\,T_{\mu \nu }}
where g_{μν} is the spacetime metric. Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions, allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase. Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation, further observational data can be used to put the models to the test. Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis, the large-scale structure of the universe, and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation.
Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part. About 90% of all matter appears to be dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly. There is no generally accepted description of this new kind of matter, within the framework of known particle physics or otherwise. Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of our universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state, known as dark energy, the nature of which remains unclear.
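The age of a flat FLRW universe with these ingredients follows from integrating the Friedmann equation, t0 = ∫ da / (a H(a)). The sketch below assumes illustrative round-number parameters, H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7, close to observed values.

```python
import math

# Age of a flat FLRW universe: t0 = integral_0^1 da / (a * H(a)),
# with H(a) = H0 * sqrt(Om / a^3 + OL).
H0 = 70 * 1000 / 3.086e22   # Hubble constant: 70 km/s/Mpc converted to 1/s
Om, OL = 0.3, 0.7           # matter and dark-energy density parameters

def hubble(a):
    return H0 * math.sqrt(Om / a**3 + OL)

# Midpoint-rule integration of da / (a * H(a)) from a ~ 0 to a = 1.
n = 100000
t0 = sum(1.0 / (((i + 0.5) / n) * hubble((i + 0.5) / n)) for i in range(n)) / n

gyr = t0 / (3.156e7 * 1e9)  # seconds -> billions of years
print(f"age ~ {gyr:.1f} billion years")
```

For these parameters the model gives roughly 13.5 billion years, consistent with the "past 14 billion years" quoted above.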
An inflationary phase, an additional phase of strongly accelerated expansion at cosmic times of around 10−33 seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation. Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario. However, there is a bewildering variety of possible inflationary scenarios, which cannot be restricted by current observations. An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the big bang singularity. An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed (cf. the section on quantum gravity, below).
=== Exotic solutions: time travel, warp drives ===
Kurt Gödel showed that solutions to Einstein's equations exist that contain closed timelike curves (CTCs), which allow for loops in time. The solutions require extreme physical conditions unlikely ever to occur in practice, and it remains an open question whether further laws of physics will eliminate them completely. Since then, other—similarly impractical—GR solutions containing CTCs have been found, such as the Tipler cylinder and traversable wormholes. Stephen Hawking introduced the chronology protection conjecture, an assumption beyond those of standard general relativity, to rule out time travel.
Some exact solutions in general relativity, such as the Alcubierre drive, provide examples of a warp drive, but these solutions require an exotic matter distribution and generally suffer from semiclassical instability.
== Advanced concepts ==
=== Asymptotic symmetries ===
The spacetime symmetry group for special relativity is the Poincaré group, which is a ten-dimensional group of three Lorentz boosts, three rotations, and four spacetime translations. It is logical to ask what symmetries, if any, might apply in general relativity. A tractable case might be to consider the symmetries of spacetime as seen by observers located far away from all sources of the gravitational field. The naive expectation for asymptotically flat spacetime symmetries might be simply to extend and reproduce the symmetries of flat spacetime of special relativity, viz., the Poincaré group.
In 1962 Hermann Bondi, M. G. van der Burg, A. W. Metzner and Rainer K. Sachs addressed this asymptotic symmetry problem in order to investigate the flow of energy at infinity due to propagating gravitational waves. Their first step was to decide on some physically sensible boundary conditions to place on the gravitational field at light-like infinity to characterize what it means to say a metric is asymptotically flat, making no a priori assumptions about the nature of the asymptotic symmetry group—not even the assumption that such a group exists. Then after designing what they considered to be the most sensible boundary conditions, they investigated the nature of the resulting asymptotic symmetry transformations that leave invariant the form of the boundary conditions appropriate for asymptotically flat gravitational fields. What they found was that the asymptotic symmetry transformations actually do form a group, and the structure of this group does not depend on the particular gravitational field that happens to be present. This means that, as expected, one can separate the kinematics of spacetime from the dynamics of the gravitational field at least at spatial infinity. The puzzling surprise in 1962 was their discovery of a rich infinite-dimensional group (the so-called BMS group) as the asymptotic symmetry group, instead of the finite-dimensional Poincaré group, which is a subgroup of the BMS group. Not only are the Lorentz transformations asymptotic symmetry transformations; there are also additional transformations that are not Lorentz transformations but are asymptotic symmetry transformations. In fact, they found an additional infinity of transformation generators known as supertranslations. This implies that general relativity does not reduce to special relativity in the case of weak fields at long distances.
It turns out that the BMS symmetry, suitably modified, could be seen as a restatement of the universal soft graviton theorem in quantum field theory (QFT), which relates universal infrared (soft) QFT with GR asymptotic spacetime symmetries.
=== Causal structure and global geometry ===
In general relativity, no material body can catch up with or overtake a light pulse. No influence from an event A can reach any other location X before light sent out at A to X. In consequence, an exploration of all light worldlines (null geodesics) yields key information about the spacetime's causal structure. This structure can be displayed using Penrose–Carter diagrams in which infinitely large regions of space and infinite time intervals are shrunk ("compactified") so as to fit onto a finite map, while light still travels along diagonals as in standard spacetime diagrams.
Aware of the importance of causal structure, Roger Penrose and others developed what is known as global geometry. In global geometry, the object of study is not one particular solution (or family of solutions) to Einstein's equations. Rather, relations that hold true for all geodesics, such as the Raychaudhuri equation, and additional non-specific assumptions about the nature of matter (usually in the form of energy conditions) are used to derive general results.
=== Horizons ===
Using global geometry, some spacetimes can be shown to contain boundaries called horizons, which demarcate one region from the rest of spacetime. The best-known examples are black holes: if mass is compressed into a sufficiently compact region of space (as specified in the hoop conjecture, the relevant length scale is the Schwarzschild radius), no light from inside can escape to the outside. Since no object can overtake a light pulse, all interior matter is imprisoned as well. Passage from the exterior to the interior is still possible, showing that the boundary, the black hole's horizon, is not a physical barrier.
Early studies of black holes relied on explicit solutions of Einstein's equations, notably the spherically symmetric Schwarzschild solution (used to describe a static black hole) and the axisymmetric Kerr solution (used to describe a rotating, stationary black hole, and introducing interesting features such as the ergosphere). Using global geometry, later studies have revealed more general properties of black holes. With time they become rather simple objects characterized by eleven parameters specifying: electric charge, mass–energy, linear momentum, angular momentum, and location at a specified time. This is stated by the black hole uniqueness theorem: "black holes have no hair", that is, no distinguishing marks like the hairstyles of humans. Irrespective of the complexity of a gravitating object collapsing to form a black hole, the object that results (having emitted gravitational waves) is very simple.
Even more remarkably, there is a general set of laws known as black hole mechanics, which is analogous to the laws of thermodynamics. For instance, by the second law of black hole mechanics, the area of the event horizon of a general black hole will never decrease with time, analogous to the entropy of a thermodynamic system. This limits the energy that can be extracted by classical means from a rotating black hole (e.g. by the Penrose process). There is strong evidence that the laws of black hole mechanics are, in fact, a subset of the laws of thermodynamics, and that the black hole area is proportional to its entropy. This leads to a modification of the original laws of black hole mechanics: for instance, as the second law of black hole mechanics becomes part of the second law of thermodynamics, it is possible for the black hole area to decrease as long as other processes ensure that entropy increases overall. As thermodynamical objects with nonzero temperature, black holes should emit thermal radiation. Semiclassical calculations indicate that indeed they do, with the surface gravity playing the role of temperature in Planck's law. This radiation is known as Hawking radiation (cf. the quantum theory section, below).
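The surface-gravity-as-temperature relation can be made concrete: for a Schwarzschild black hole the surface gravity is κ = c⁴/(4GM), and the Hawking temperature is T = ħκ/(2πck_B). A minimal sketch, with a solar-mass example:

```python
import math

# Hawking temperature T = hbar * kappa / (2 pi c k_B), with surface gravity
# kappa = c^4 / (4 G M) for a Schwarzschild black hole.
G = 6.674e-11
c = 2.998e8
hbar = 1.055e-34
k_B = 1.381e-23
M_sun = 1.989e30

def hawking_temperature(mass_kg):
    kappa = c**4 / (4 * G * mass_kg)               # surface gravity, m/s^2
    return hbar * kappa / (2 * math.pi * c * k_B)  # kelvin

print(f"T for a solar-mass black hole: {hawking_temperature(M_sun):.1e} K")
```

The result, below 10⁻⁷ K for a solar-mass hole, shows why Hawking radiation from astrophysical black holes is unobservably faint.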
There are many other types of horizons. In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon). Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with a semiclassical radiation known as Unruh radiation.
=== Singularities ===
Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel. But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values. Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole, or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole. The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well.
Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization. The famous singularity theorems, proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage and also at the beginning of a wide class of expanding universes. However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the BKL conjecture). The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity.
=== Evolution equations ===
Each solution of Einstein's equation encompasses the whole history of a universe—it is not just some snapshot of how things are, but a whole, possibly matter-filled, spacetime. It describes the state of matter and geometry everywhere and at every moment in that particular universe. Due to its general covariance, Einstein's theory is not sufficient by itself to determine the time evolution of the metric tensor. It must be combined with a coordinate condition, which is analogous to gauge fixing in other field theories.
To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. This is done in "3+1" formulations, where spacetime is split into three space dimensions and one time dimension. The best-known example is the ADM formalism. These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist, and are uniquely defined, once suitable initial conditions have been specified. Such formulations of Einstein's field equations are the basis of numerical relativity.
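As a toy analogue of such a well-posed initial-value evolution, the sketch below marches the 1+1-dimensional linear wave equation, which weak gravitational waves obey in a suitable gauge, forward from initial data with a leapfrog-style finite-difference scheme. The grid sizes and Gaussian initial pulse are arbitrary choices for illustration.

```python
import math

# Toy initial-value evolution: the 1+1-dimensional wave equation u_tt = u_xx.
# Given initial data (u and its time derivative), a finite-difference scheme
# marches the solution forward deterministically, mirroring the 3+1 picture.
n, dx = 200, 0.05
dt = 0.02                      # dt < dx: the CFL stability condition
x = [i * dx for i in range(n)]

u_prev = [math.exp(-((xi - 5.0) ** 2)) for xi in x]   # Gaussian pulse at t=0
u = list(u_prev)                                      # zero initial velocity

for _ in range(100):           # standard second-order leapfrog update
    u_next = [0.0] * n         # fixed (zero) boundary values
    for i in range(1, n - 1):
        u_next[i] = (2 * u[i] - u_prev[i]
                     + (dt / dx) ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
    u_prev, u = u, u_next

# The initial pulse splits into left- and right-moving halves of amplitude ~0.5.
print(f"max |u| after evolution: {max(abs(v) for v in u):.3f}")
```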
=== Global and quasi-local quantities ===
The notion of evolution equations is intimately tied in with another aspect of general relativistic physics. In Einstein's theory, it turns out to be impossible to find a general definition for a seemingly simple property such as a system's total mass (or energy). The main reason is that the gravitational field—like any physical field—must be ascribed a certain energy, but that it proves to be fundamentally impossible to localize that energy.
Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" (ADM mass) or suitable symmetries (Komar mass). If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the Bondi mass at null infinity. Just as in classical physics, it can be shown that these masses are positive. Corresponding global definitions exist for momentum and angular momentum. There have also been a number of attempts to define quasi-local quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems, such as a more precise formulation of the hoop conjecture.
== Relationship with quantum theory ==
If general relativity were considered to be one of the two pillars of modern physics, then quantum theory, the basis of understanding matter from elementary particles to solid-state physics, would be the other. However, how to reconcile quantum theory with general relativity is still an open question.
=== Quantum field theory in curved spacetime ===
Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth. In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime. Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation leading to the possibility that they evaporate over time. As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes.
=== Quantum gravity ===
The demand for consistency between a quantum description of matter and a geometric description of spacetime, as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics. Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of promising candidates exist.
Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems. Some have argued that at low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity. At very high energies, however, the perturbative results are badly divergent and lead to models devoid of predictive power ("perturbative non-renormalizability").
One attempt to overcome these limitations is string theory, a quantum theory not of point particles, but of minute one-dimensional extended objects. The theory promises to be a unified description of all particles and interactions, including gravity; the price to pay is unusual features such as six extra dimensions of space in addition to the usual three. In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity.
Another approach starts with the canonical quantization procedures of quantum theory. Using the initial-value formulation of general relativity (cf. evolution equations above), the result is the Wheeler–DeWitt equation (an analogue of the Schrödinger equation) which, regrettably, turns out to be ill-defined without a proper ultraviolet (lattice) cutoff. However, with the introduction of what are now known as Ashtekar variables, this leads to a promising model known as loop quantum gravity. Space is represented by a web-like structure called a spin network, evolving over time in discrete steps.
Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced, there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being the lattice theory of gravity based on the Feynman Path Integral approach and Regge calculus, dynamical triangulations, causal sets, twistor models or the path integral based models of quantum cosmology.
All candidate theories still have major formal and conceptual problems to overcome. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests (and thus to decide between the candidates where their predictions vary), although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available.
== Current status ==
General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far passed many unambiguous observational and experimental tests. However, there are strong indications that the theory is incomplete. The problem of quantum gravity and the question of the reality of spacetime singularities remain open. Observational data that is taken as evidence for dark energy and dark matter could indicate the need for new physics.
Even taken as is, general relativity is rich with possibilities for further exploration. Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations, while numerical relativists run increasingly powerful computer simulations (such as those describing merging black holes). In February 2016, it was announced that the existence of gravitational waves was directly detected by the Advanced LIGO team on 14 September 2015. A century after its introduction, general relativity remains a highly active area of research.
== See also ==
Alcubierre drive – Hypothetical FTL transportation by warping space (warp drive)
Alternatives to general relativity – Proposed theories of gravity
Contributors to general relativity
Derivations of the Lorentz transformations
Ehrenfest paradox – Paradox in special relativity
Einstein–Hilbert action – Concept in general relativity
Einstein's thought experiments – Albert Einstein's hypothetical situations to argue scientific points
General relativity priority dispute – Debate about credit for general relativity
Introduction to the mathematics of general relativity
Nordström's theory of gravitation – Predecessor to the theory of relativity
Ricci calculus – Tensor index notation for tensor-based calculations
Timeline of gravitational physics and relativity
== References ==
== Bibliography ==
== Further reading ==
=== Popular books ===
Einstein, A. (1916), Relativity: The Special and the General Theory, Berlin, ISBN 978-3-528-06059-6
Geroch, R. (1981), General Relativity from A to B, Chicago: University of Chicago Press, ISBN 978-0-226-28864-2
Lieber, Lillian (2008), The Einstein Theory of Relativity: A Trip to the Fourth Dimension, Philadelphia: Paul Dry Books, Inc., ISBN 978-1-58988-044-3
Schutz, Bernard F. (2001), "Gravitational radiation", in Murdin, Paul (ed.), Encyclopedia of Astronomy and Astrophysics, Institute of Physics Pub., ISBN 978-1-56159-268-5
Thorne, Kip; Hawking, Stephen (1994). Black Holes and Time Warps: Einstein's Outrageous Legacy. New York: W. W. Norton. ISBN 0-393-03505-0.
Wald, Robert M. (1992), Space, Time, and Gravity: the Theory of the Big Bang and Black Holes, Chicago: University of Chicago Press, ISBN 978-0-226-87029-8
Wheeler, John; Ford, Kenneth (1998), Geons, Black Holes, & Quantum Foam: a life in physics, New York: W. W. Norton, ISBN 978-0-393-31991-0
=== Beginning undergraduate textbooks ===
Yvonne Choquet-Bruhat (2014). Introduction to General Relativity, Black Holes, and Cosmology. Oxford University Press. ISBN 9780191936500.
Taylor, Edwin F.; Wheeler, John Archibald (2000), Exploring Black Holes: Introduction to General Relativity, Addison Wesley, ISBN 978-0-201-38423-9
=== Advanced undergraduate textbooks ===
Crowell, Ben (2020). General Relativity.
Dirac, Paul (1996), General Theory of Relativity, Princeton University Press, ISBN 978-0-691-01146-2
Gron, O.; Hervik, S. (2007), Einstein's General theory of Relativity, Springer, ISBN 978-0-387-69199-2
Hartle, James B. (2003), Gravity: an Introduction to Einstein's General Relativity, San Francisco: Addison-Wesley, ISBN 978-0-8053-8662-2
Hughston, L. P.; Tod, K. P. (1991), Introduction to General Relativity, Cambridge: Cambridge University Press, ISBN 978-0-521-33943-8
d'Inverno, Ray (1992), Introducing Einstein's Relativity, Oxford: Oxford University Press, ISBN 978-0-19-859686-8
Ludyk, Günter (2013). Einstein in Matrix Form (1st ed.). Berlin: Springer. ISBN 978-3-642-35797-8.
Møller, Christian (1955) [1952], The Theory of Relativity, Oxford University Press, OCLC 7644624
Moore, Thomas A (2012), A General Relativity Workbook, University Science Books, ISBN 978-1-891389-82-5
Schutz, B. F. (2009), A First Course in General Relativity (Second ed.), Cambridge University Press, Bibcode:2009fcgr.book.....S, ISBN 978-0-521-88705-2
=== Graduate textbooks ===
Carroll, Sean M. (2004), Spacetime and Geometry: An Introduction to General Relativity, San Francisco: Addison-Wesley, Bibcode:2004sgig.book.....C, ISBN 978-0-8053-8732-2
Grøn, Øyvind; Hervik, Sigbjørn (2007), Einstein's General Theory of Relativity, New York: Springer, ISBN 978-0-387-69199-2
Landau, Lev D.; Lifshitz, Evgeny F. (1980), The Classical Theory of Fields (4th ed.), London: Butterworth-Heinemann, ISBN 978-0-7506-2768-9
Landsman, Klaas (2021). Foundations of General Relativity: From Einstein to Black Holes. Radboud University Press. ISBN 9789083178929.
Stephani, Hans (1990), General Relativity: An Introduction to the Theory of the Gravitational Field, Cambridge: Cambridge University Press, Bibcode:1990grit.book.....S, ISBN 978-0-521-37941-0
Charles W. Misner; Kip S. Thorne; John Archibald Wheeler (1973), Gravitation, W. H. Freeman, Princeton University Press, ISBN 0-7167-0344-0
R.K. Sachs; H. Wu (1977), General Relativity for Mathematicians, Springer-Verlag, Bibcode:1977grm..book.....S, ISBN 1-4612-9905-5
Wald, Robert M. (1984). General Relativity. Chicago: University of Chicago Press. ISBN 0-226-87032-4. OCLC 10018614.
=== Specialists' books ===
Hawking, Stephen; Ellis, George (1975). The Large Scale Structure of Space-time. Cambridge University Press. ISBN 978-0-521-09906-6.
Poisson, Eric (2007). A Relativist's Toolkit: The Mathematics of Black-Hole Mechanics. Cambridge University Press. ISBN 978-0-521-53780-3.
=== Journal articles ===
Einstein, Albert (1916), "Die Grundlage der allgemeinen Relativitätstheorie", Annalen der Physik, 49 (7): 769–822, Bibcode:1916AnP...354..769E, doi:10.1002/andp.19163540702 See also English translation at Einstein Papers Project
Flanagan, Éanna É.; Hughes, Scott A. (2005), "The basics of gravitational wave theory", New J. Phys., 7 (1): 204, arXiv:gr-qc/0501041, Bibcode:2005NJPh....7..204F, doi:10.1088/1367-2630/7/1/204
Landgraf, M.; Hechler, M.; Kemble, S. (2005), "Mission design for LISA Pathfinder", Class. Quantum Grav., 22 (10): S487 – S492, arXiv:gr-qc/0411071, Bibcode:2005CQGra..22S.487L, doi:10.1088/0264-9381/22/10/048, S2CID 119476595
Nieto, Michael Martin (2006), "The quest to understand the Pioneer anomaly" (PDF), Europhysics News, 37 (6): 30–34, arXiv:gr-qc/0702017, Bibcode:2006ENews..37f..30N, doi:10.1051/epn:2006604, archived (PDF) from the original on 24 September 2015
Shapiro, I. I.; Pettengill, Gordon; Ash, Michael; Stone, Melvin; Smith, William; Ingalls, Richard; Brockelman, Richard (1968), "Fourth test of general relativity: preliminary results", Phys. Rev. Lett., 20 (22): 1265–1269, Bibcode:1968PhRvL..20.1265S, doi:10.1103/PhysRevLett.20.1265
Valtonen, M. J.; Lehto, H. J.; Nilsson, K.; Heidt, J.; Takalo, L. O.; Sillanpää, A.; Villforth, C.; Kidger, M.; et al. (2008), "A massive binary black-hole system in OJ 287 and a test of general relativity", Nature, 452 (7189): 851–853, arXiv:0809.1280, Bibcode:2008Natur.452..851V, doi:10.1038/nature06896, PMID 18421348, S2CID 4412396
== External links ==
Einstein Online Archived 1 June 2014 at the Wayback Machine – Articles on a variety of aspects of relativistic physics for a general audience; hosted by the Max Planck Institute for Gravitational Physics
GEO600 home page, the official website of the GEO600 project.
LIGO Laboratory
NCSA Spacetime Wrinkles – produced by the numerical relativity group at the NCSA, with an elementary introduction to general relativity
Einstein's General Theory of Relativity on YouTube (lecture by Leonard Susskind recorded 22 September 2008 at Stanford University).
Series of lectures on General Relativity given in 2006 at the Institut Henri Poincaré (introductory/advanced).
General Relativity Tutorials by John Baez.
Brown, Kevin. "Reflections on relativity". Mathpages.com. Archived from the original on 18 December 2015. Retrieved 29 May 2005.
Carroll, Sean M. (1997). "Lecture Notes on General Relativity". arXiv:gr-qc/9712019.
Moor, Rafi. "Understanding General Relativity". Retrieved 11 July 2006.
Waner, Stefan. "Introduction to Differential Geometry and General Relativity". Retrieved 5 April 2015.
The Feynman Lectures on Physics Vol. II Ch. 42: Curved Space
Nuclear physics is the field of physics that studies atomic nuclei and their constituents and interactions, in addition to the study of other forms of nuclear matter.
Nuclear physics should not be confused with atomic physics, which studies the atom as a whole, including its electrons.
Discoveries in nuclear physics have led to applications in many fields such as nuclear power, nuclear weapons, nuclear medicine and magnetic resonance imaging, industrial and agricultural isotopes, ion implantation in materials engineering, and radiocarbon dating in geology and archaeology. Such applications are studied in the field of nuclear engineering.
Particle physics evolved out of nuclear physics and the two fields are typically taught in close association. Nuclear astrophysics, the application of nuclear physics to astrophysics, is crucial in explaining the inner workings of stars and the origin of the chemical elements.
== History ==
The history of nuclear physics as a discipline distinct from atomic physics, starts with the discovery of radioactivity by Henri Becquerel in 1896, made while investigating phosphorescence in uranium salts. The discovery of the electron by J. J. Thomson a year later was an indication that the atom had internal structure. At the beginning of the 20th century the accepted model of the atom was J. J. Thomson's "plum pudding" model in which the atom was a positively charged ball with smaller negatively charged electrons embedded inside it.
In the years that followed, radioactivity was extensively investigated, notably by Marie Curie, a Polish physicist whose maiden name was Skłodowska, Pierre Curie, Ernest Rutherford and others. By the turn of the century, physicists had also discovered three types of radiation emanating from atoms, which they named alpha, beta, and gamma radiation. Experiments by Otto Hahn in 1911 and by James Chadwick in 1914 discovered that the beta decay spectrum was continuous rather than discrete. That is, electrons were ejected from the atom with a continuous range of energies, rather than the discrete amounts of energy that were observed in gamma and alpha decays. This was a problem for nuclear physics at the time, because it seemed to indicate that energy was not conserved in these decays.
The 1903 Nobel Prize in Physics was awarded jointly to Becquerel, for his discovery, and to Marie and Pierre Curie for their subsequent research into radioactivity. Rutherford was awarded the Nobel Prize in Chemistry in 1908 for his "investigations into the disintegration of the elements and the chemistry of radioactive substances".
In 1905, Albert Einstein formulated the idea of mass–energy equivalence. While the work on radioactivity by Becquerel and Marie Curie predates this, an explanation of the source of the energy of radioactivity would have to wait for the discovery that the nucleus itself was composed of smaller constituents, the nucleons.
=== Rutherford discovers the nucleus ===
In 1906, Ernest Rutherford published "Retardation of the α Particle from Radium in passing through matter". Hans Geiger expanded on this work in a communication to the Royal Society with experiments he and Rutherford had done, passing alpha particles through air, aluminum foil and gold leaf. More work was published in 1909 by Geiger and Ernest Marsden, and further greatly expanded work was published in 1910 by Geiger. In 1911–1912 Rutherford went before the Royal Society to explain the experiments and propound the new theory of the atomic nucleus as we now understand it.
The key experiment was performed in 1909 at the University of Manchester and published that same year, with Rutherford's classical analysis following in May 1911. Rutherford's assistant Hans Geiger and an undergraduate, Ernest Marsden, working under Rutherford's supervision, fired alpha particles (helium-4 nuclei) at a thin film of gold foil. The plum pudding model predicted that the alpha particles should emerge from the foil with their trajectories at most slightly bent. What the team observed shocked Rutherford: a few particles were scattered through large angles, in some cases even completely backwards. He likened it to firing a bullet at tissue paper and having it bounce off. The discovery, with Rutherford's analysis of the data in 1911, led to the Rutherford model of the atom, in which the atom had a very small, very dense nucleus containing most of its mass, consisting of heavy positively charged particles with embedded electrons to balance out the charge (since the neutron was unknown). As an example, in this model (which is not the modern one) nitrogen-14 consisted of a nucleus with 14 protons and 7 electrons (21 particles in total), and the nucleus was surrounded by 7 more orbiting electrons.
=== Eddington and stellar nuclear fusion ===
Around 1920, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper The Internal Constitution of the Stars. At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc2. This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even that stars are largely composed of hydrogen (see metallicity), had not yet been discovered.
=== Studies of nuclear spin ===
The Rutherford model worked quite well until studies of nuclear spin were carried out by Franco Rasetti at the California Institute of Technology in 1929. By 1925 it was known that protons and electrons each had a spin of ±1⁄2. In the Rutherford model of nitrogen-14, 20 of the total 21 nuclear particles should have paired up to cancel each other's spin, and the final odd particle should have left the nucleus with a net spin of 1⁄2. Rasetti discovered, however, that nitrogen-14 had a spin of 1.
=== James Chadwick discovers the neutron ===
In 1932 Chadwick realized that radiation that had been observed by Walther Bothe, Herbert Becker, and Irène and Frédéric Joliot-Curie was actually due to a neutral particle of about the same mass as the proton, which he called the neutron (following a suggestion from Rutherford about the need for such a particle). In the same year Dmitri Ivanenko suggested that there were no electrons in the nucleus, only protons and neutrons, and that neutrons were spin-1⁄2 particles, which explained the part of the nuclear mass not accounted for by protons. The neutron spin immediately solved the problem of the spin of nitrogen-14, as the one unpaired proton and one unpaired neutron in this model each contribute a spin of 1⁄2 in the same direction, giving a final total spin of 1.
With the discovery of the neutron, scientists could at last calculate what fraction of binding energy each nucleus had, by comparing the nuclear mass with that of the protons and neutrons which composed it. Differences between nuclear masses were calculated in this way. When nuclear reactions were measured, these were found to agree with Einstein's calculation of the equivalence of mass and energy to within 1% as of 1934.
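The mass-defect calculation described above can be sketched numerically. This is an illustrative example, not from the article; the masses are standard tabulated values quoted to six decimal places.

```python
# Binding energy of helium-4 from its mass defect, via E = mc^2.
# Masses in unified atomic mass units (u); 1 u = 931.494 MeV/c^2.
U_TO_MEV = 931.494
m_proton = 1.007276   # u
m_neutron = 1.008665  # u
m_he4 = 4.001506      # u (helium-4 nuclear mass)

# The assembled nucleus weighs less than its parts; the missing mass
# is the binding energy.
mass_defect = 2 * m_proton + 2 * m_neutron - m_he4
binding_energy = mass_defect * U_TO_MEV

print(f"mass defect: {mass_defect:.6f} u")           # ~0.030376 u
print(f"binding energy: {binding_energy:.1f} MeV")   # ~28.3 MeV
print(f"per nucleon: {binding_energy / 4:.2f} MeV")  # ~7.07 MeV
```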
=== Proca's equations of the massive vector boson field ===
Alexandru Proca was the first to develop and report the massive vector boson field equations and a theory of the mesonic field of nuclear forces. Proca's equations were known to Wolfgang Pauli who mentioned the equations in his Nobel address, and they were also known to Yukawa, Wentzel, Taketani, Sakata, Kemmer, Heitler, and Fröhlich who appreciated the content of Proca's equations for developing a theory of the atomic nuclei in Nuclear Physics.
=== Yukawa's meson postulated to bind nuclei ===
In 1935 Hideki Yukawa proposed the first significant theory of the strong force to explain how the nucleus holds together. In the Yukawa interaction a virtual particle, later called a meson, mediated a force between all nucleons, including protons and neutrons. This force explained why nuclei did not disintegrate under the influence of proton repulsion, and it also gave an explanation of why the attractive strong force had a more limited range than the electromagnetic repulsion between protons. Later, the discovery of the pi meson showed it to have the properties of Yukawa's particle.
With Yukawa's papers, the modern model of the atom was complete. The center of the atom contains a tight ball of neutrons and protons, which is held together by the strong nuclear force, unless it is too large. Unstable nuclei may undergo alpha decay, in which they emit an energetic helium nucleus, or beta decay, in which they eject an electron (or positron). After one of these decays the resultant nucleus may be left in an excited state, and in this case it decays to its ground state by emitting high-energy photons (gamma decay).
The study of the strong and weak nuclear forces (the latter explained by Enrico Fermi via Fermi's interaction in 1934) led physicists to collide nuclei and electrons at ever higher energies. This research became the science of particle physics, the crown jewel of which is the standard model of particle physics, which describes the strong, weak, and electromagnetic forces.
== Modern nuclear physics ==
A heavy nucleus can contain hundreds of nucleons. This means that with some approximation it can be treated as a classical system, rather than a quantum-mechanical one. In the resulting liquid-drop model, the nucleus has an energy that arises partly from surface tension and partly from electrical repulsion of the protons. The liquid-drop model is able to reproduce many features of nuclei, including the general trend of binding energy with respect to mass number, as well as the phenomenon of nuclear fission.
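The liquid-drop picture just described can be made quantitative with the semi-empirical (Bethe–Weizsäcker) mass formula. The sketch below uses typical textbook coefficients (the specific values are an assumption, not quoted in this article); it reproduces the general trend of binding energy with mass number, peaking near iron and nickel.

```python
def semf_binding_energy(Z, A):
    """Liquid-drop binding energy (MeV) for Z protons and A nucleons."""
    N = A - Z
    a_v, a_s, a_c, a_a, a_p = 15.75, 17.8, 0.711, 23.7, 11.18  # MeV
    B = (a_v * A                              # volume term
         - a_s * A ** (2 / 3)                 # surface tension
         - a_c * Z * (Z - 1) / A ** (1 / 3)   # Coulomb repulsion of protons
         - a_a * (N - Z) ** 2 / A)            # neutron-proton asymmetry
    # Pairing: even-even nuclei are extra stable, odd-odd less so.
    if Z % 2 == 0 and N % 2 == 0:
        B += a_p / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:
        B -= a_p / A ** 0.5
    return B

# Binding energy per nucleon rises to a peak near Fe/Ni and then falls,
# which is why fusion releases energy for light nuclei and fission for
# heavy ones.
for Z, A in [(8, 16), (26, 56), (28, 62), (92, 238)]:
    print(f"Z={Z:2d} A={A:3d}  B/A = {semf_binding_energy(Z, A) / A:.2f} MeV")
```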
Superimposed on this classical picture, however, are quantum-mechanical effects, which can be described using the nuclear shell model, developed in large part by Maria Goeppert Mayer and J. Hans D. Jensen. Nuclei with certain "magic" numbers of neutrons and protons are particularly stable, because their shells are filled.
Other more complicated models for the nucleus have also been proposed, such as the interacting boson model, in which pairs of neutrons and protons interact as bosons.
Ab initio methods try to solve the nuclear many-body problem from the ground up, starting from the nucleons and their interactions.
Much of current research in nuclear physics relates to the study of nuclei under extreme conditions such as high spin and excitation energy. Nuclei may also have extreme shapes (similar to that of Rugby balls or even pears) or extreme neutron-to-proton ratios. Experimenters can create such nuclei using artificially induced fusion or nucleon transfer reactions, employing ion beams from an accelerator. Beams with even higher energies can be used to create nuclei at very high temperatures, and there are signs that these experiments have produced a phase transition from normal nuclear matter to a new state, the quark–gluon plasma, in which the quarks mingle with one another, rather than being segregated in triplets as they are in neutrons and protons.
=== Nuclear decay ===
Eighty elements have at least one stable isotope which is never observed to decay, amounting to a total of about 251 stable nuclides. However, thousands of isotopes have been characterized as unstable. These "radioisotopes" decay over time scales ranging from fractions of a second to trillions of years. Plotted on a chart as a function of atomic and neutron numbers, the binding energy of the nuclides forms what is known as the valley of stability. Stable nuclides lie along the bottom of this energy valley, while increasingly unstable nuclides lie up the valley walls, that is, have weaker binding energy.
The most stable nuclei fall within certain ranges or balances of composition of neutrons and protons: too few or too many neutrons (in relation to the number of protons) will cause it to decay. For example, in beta decay, a nitrogen-16 atom (7 protons, 9 neutrons) is converted to an oxygen-16 atom (8 protons, 8 neutrons) within a few seconds of being created. In this decay a neutron in the nitrogen nucleus is converted by the weak interaction into a proton, an electron and an antineutrino. The element is transmuted to another element, with a different number of protons.
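The energy released in the nitrogen-16 decay mentioned above follows from the atomic mass difference; the mass values below are standard tabulated atomic masses (an assumption, not quoted in the article).

```python
# Q-value of the beta decay 16N -> 16O + e- + antineutrino.
# Atomic masses in u already include the atomic electrons, so the
# emitted electron's mass cancels in the difference.
U_TO_MEV = 931.494
m_n16 = 16.006102  # u, nitrogen-16 atomic mass
m_o16 = 15.994915  # u, oxygen-16 atomic mass

q = (m_n16 - m_o16) * U_TO_MEV  # MeV shared among the decay products
print(f"Q-value: {q:.2f} MeV")  # ~10.42 MeV
```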
In alpha decay, which typically occurs in the heaviest nuclei, the radioactive element decays by emitting a helium nucleus (2 protons and 2 neutrons), giving another element, plus helium-4. In many cases this process continues through several steps of this kind, including other types of decays (usually beta decay) until a stable element is formed.
In gamma decay, a nucleus decays from an excited state into a lower energy state, by emitting a gamma ray. The element is not changed to another element in the process (no nuclear transmutation is involved).
Other more exotic decays are possible (see the first main article). For example, in internal conversion decay, the energy from an excited nucleus may eject one of the inner orbital electrons from the atom, in a process which produces high speed electrons but is not beta decay and (unlike beta decay) does not transmute one element to another.
=== Nuclear fusion ===
In nuclear fusion, two low-mass nuclei come into very close contact with each other so that the strong force fuses them. A large amount of energy is required to overcome the electrical repulsion between the nuclei before the strong force can act; therefore nuclear fusion can only take place at very high temperatures or high pressures. When nuclei fuse, a very large amount of energy is released and the combined nucleus assumes a lower energy level. The binding energy per nucleon increases with mass number up to nickel-62. Stars like the Sun are powered by the fusion of four protons into a helium nucleus, two positrons, and two neutrinos. The uncontrolled fusion of hydrogen into helium is known as thermonuclear runaway. A frontier in current research at various institutions, for example the Joint European Torus (JET) and ITER, is the development of an economically viable method of using energy from a controlled fusion reaction. Nuclear fusion is the origin of the energy (including in the form of light and other electromagnetic radiation) produced by the core of all stars, including our own Sun.
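The solar fusion step mentioned above can be checked with a mass-difference calculation; this is an illustrative sketch using standard tabulated masses (values assumed, not quoted in the article).

```python
# Energy released by fusing four protons into helium-4:
#   4 p -> He-4 + 2 e+ + 2 neutrinos
# Masses in u; 1 u = 931.494 MeV/c^2.
U_TO_MEV = 931.494
m_p, m_e, m_he4 = 1.007276, 0.000549, 4.001506  # proton, electron, He-4 nucleus

q = (4 * m_p - m_he4 - 2 * m_e) * U_TO_MEV  # energy shared by the products
print(f"Q-value: {q:.1f} MeV")              # ~24.7 MeV

# The two positrons annihilate with ambient electrons, adding 4 m_e c^2:
total = q + 4 * m_e * U_TO_MEV
print(f"including annihilation: {total:.1f} MeV per helium nucleus")  # ~26.7 MeV
```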
=== Nuclear fission ===
Nuclear fission is the reverse process to fusion. For nuclei heavier than nickel-62 the binding energy per nucleon decreases with the mass number. It is therefore possible for energy to be released if a heavy nucleus breaks apart into two lighter ones.
The process of alpha decay is in essence a special type of spontaneous nuclear fission. It is a highly asymmetrical fission because the four particles which make up the alpha particle are especially tightly bound to each other, making production of this nucleus in fission particularly likely.
From several of the heaviest nuclei whose fission produces free neutrons, and which also easily absorb neutrons to initiate fission, a self-igniting type of neutron-initiated fission can be obtained, in a chain reaction. Chain reactions were known in chemistry before physics, and in fact many familiar processes like fires and chemical explosions are chemical chain reactions. The fission or "nuclear" chain-reaction, using fission-produced neutrons, is the source of energy for nuclear power plants and fission-type nuclear bombs, such as those detonated in Hiroshima and Nagasaki, Japan, at the end of World War II. Heavy nuclei such as uranium and thorium may also undergo spontaneous fission, but they are much more likely to undergo decay by alpha decay.
For a neutron-initiated chain reaction to occur, there must be a critical mass of the relevant isotope present in a certain space under certain conditions. The conditions for the smallest critical mass require the conservation of the emitted neutrons and also their slowing or moderation so that there is a greater cross-section or probability of them initiating another fission. In two regions of Oklo, Gabon, Africa, natural nuclear fission reactors were active over 1.5 billion years ago. Measurements of natural neutrino emission have demonstrated that around half of the heat emanating from the Earth's core results from radioactive decay. However, it is not known if any of this results from fission chain reactions.
=== Production of "heavy" elements ===
According to the theory, as the Universe cooled after the Big Bang it eventually became possible for common subatomic particles as we know them (neutrons, protons and electrons) to exist. The most common particles created in the Big Bang which are still easily observable to us today were protons and electrons (in equal numbers). The protons would eventually form hydrogen atoms. Almost all the neutrons created in the Big Bang were absorbed into helium-4 in the first three minutes after the Big Bang, and this helium accounts for most of the helium in the universe today (see Big Bang nucleosynthesis).
Some relatively small quantities of elements beyond helium (lithium, beryllium, and perhaps some boron) were created in the Big Bang, as the protons and neutrons collided with each other, but all of the "heavier elements" (carbon, element number 6, and elements of greater atomic number) that we see today, were created inside stars during a series of fusion stages, such as the proton–proton chain, the CNO cycle and the triple-alpha process. Progressively heavier elements are created during the evolution of a star.
Energy is only released in fusion processes involving smaller atoms than iron because the binding energy per nucleon peaks around iron (56 nucleons). Since the creation of heavier nuclei by fusion requires energy, nature resorts to the process of neutron capture. Neutrons (due to their lack of charge) are readily absorbed by a nucleus. The heavy elements are created by either a slow neutron capture process (the so-called s-process) or the rapid, or r-process. The s process occurs in thermally pulsing stars (called AGB, or asymptotic giant branch stars) and takes hundreds to thousands of years to reach the heaviest elements of lead and bismuth. The r-process is thought to occur in supernova explosions, which provide the necessary conditions of high temperature, high neutron flux and ejected matter. These stellar conditions make the successive neutron captures very fast, involving very neutron-rich species which then beta-decay to heavier elements, especially at the so-called waiting points that correspond to more stable nuclides with closed neutron shells (magic numbers).
== See also ==
== References ==
== Bibliography ==
=== Introductory ===
Semat, H. and Albright, John R. (1972). Introduction to Atomic and Nuclear Physics. Springer. ISBN 978-0-412-15670-0.
Littlefield, T.A. and Thorley, N. (1979) Atomic and Nuclear Physics: An Introduction. Springer US. ISBN 978-0-442-30190-3.
Belyaev, Alexander; Ross, Douglas (2021). The Basics of Nuclear and Particle Physics. Undergraduate Texts in Physics. Cham: Springer International Publishing. Bibcode:2021bnpp.book.....B. doi:10.1007/978-3-030-80116-8. ISBN 978-3-030-80115-1. Retrieved 2023-02-19.
Povh, Bogdan; Rith, Klaus; Scholz, Christoph; Zetsche, Frank; Rodejohann, Werner (2015). Particles and Nuclei: An Introduction to the Physical Concepts. Graduate Texts in Physics. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-662-46321-5. ISBN 978-3-662-46320-8. Retrieved 2024-05-27.
=== Reference works ===
Handbook of Nuclear Physics. Isao Tanihata, Hiroshi Toki, Toshitaka Kajino (eds.). Singapore: Springer Nature Singapore. 2020. doi:10.1007/978-981-15-8818-1. ISBN 9789811588181. Retrieved 2023-05-31.
=== Advanced ===
Cohen, Bernard L. (1971). Concepts of Nuclear Physics. McGraw-Hill.
Bohr, Aage; Mottelson, Ben R (1998) [1969]. Nuclear Structure: (In 2 Volumes). World Scientific. doi:10.1142/3530. ISBN 978-981-02-3197-2. Archived from the original on 2023-01-20. Retrieved 2023-04-19.
Greiner, Walter; Maruhn, Joachim A.; Bromley, D. A. (1996). Nuclear Models. Springer. ISBN 9783540591801.
Paetz gen. Schieck, Hans (2014). Nuclear Reactions: An Introduction. Lecture Notes in Physics. Vol. 882. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-53986-2. ISBN 978-3-642-53985-5. Retrieved 2023-04-04.
=== Classics or Historic ===
Fermi, E. (1950). Nuclear Physics. Univ. Chicago Press
Mott, N. F.; Massey, H. S. W. (1949). The Theory Of Atomic Collisions. The International Series of Monographs on Physics (2. ed.). Oxford: Clarendon Press (OUP).
Blatt, John M.; Weisskopf, Victor F. (1979) [1952]. Theoretical Nuclear Physics. New York, NY: Springer New York. doi:10.1007/978-1-4612-9959-2. ISBN 978-1-4612-9961-5. Retrieved 2023-02-22.
Bethe, Hans A.; Morrison, Philip (2006) [1956]. Elementary Nuclear Theory. Mineola, NY: Dover Publications. ISBN 978-0-486-45048-3.
== External links ==
Ernest Rutherford's biography at the American Institute of Physics Archived 2016-07-30 at the Wayback Machine
American Physical Society Division of Nuclear Physics Archived 2017-09-20 at the Wayback Machine
American Nuclear Society Archived 2008-12-02 at the Wayback Machine
Annotated bibliography on nuclear physics from the Alsos Digital Library for Nuclear Issues
Nuclear science wiki Archived 2013-10-21 at the Wayback Machine
Nuclear Data Services – IAEA Archived 2021-03-18 at the Wayback Machine
Nuclear Physics Archived 2017-12-23 at the Wayback Machine, BBC Radio 4 discussion with Jim Al-Khalili, John Gribbin and Catherine Sutton (In Our Time, Jan. 10, 2002)
The relationship between mathematics and physics has been a subject of study of philosophers, mathematicians and physicists since antiquity, and more recently also by historians and educators. Generally considered a relationship of great intimacy, mathematics has been described as "an essential tool for physics" and physics has been described as "a rich source of inspiration and insight in mathematics". Some of the oldest and most discussed themes are about the main differences between the two subjects, their mutual influence, the role of mathematical rigor in physics, and the problem of explaining the effectiveness of mathematics in physics.
In his work Physics, one of the topics treated by Aristotle is how the study carried out by mathematicians differs from that carried out by physicists. Considerations that mathematics is the language of nature can be found in the ideas of the Pythagoreans, in the convictions that "Numbers rule the world" and "All is number"; two millennia later they were also expressed by Galileo Galilei: "The book of nature is written in the language of mathematics".
== Historical interplay ==
Before giving a mathematical proof for the formula for the volume of a sphere, Archimedes used physical reasoning to discover the solution (imagining the balancing of bodies on a scale). Aristotle classified physics and mathematics as theoretical sciences, in contrast to practical sciences (like ethics or politics) and to productive sciences (like medicine or botany).
From the seventeenth century, many of the most important advances in mathematics appeared motivated by the study of physics, and this continued in the following centuries (although in the nineteenth century mathematics started to become increasingly independent from physics). The creation and development of calculus were strongly linked to the needs of physics: there was a need for a new mathematical language to deal with the new dynamics that had arisen from the work of scholars such as Galileo Galilei and Isaac Newton. The concept of the derivative was needed; Newton did not have the modern concept of limits and instead employed infinitesimals, which lacked a rigorous foundation at that time. During this period there was little distinction between physics and mathematics; as an example, Newton regarded geometry as a branch of mechanics.
Non-Euclidean geometry, as formulated by Carl Friedrich Gauss, János Bolyai, Nikolai Lobachevsky, and Bernhard Riemann, freed physics from the limitation of a single Euclidean geometry. A version of non-Euclidean geometry, called Riemannian geometry, enabled Albert Einstein to develop general relativity by providing the key mathematical framework on which he fit his physical ideas of gravity.
In the 19th century, Auguste Comte, in his hierarchy of the sciences, placed physics and astronomy as less general and more complex than mathematics, as both depend on it. In 1900, David Hilbert, in his 23 problems for the advancement of mathematical science, posed the axiomatization of physics as his sixth problem. The problem remains open.
In 1930, Paul Dirac introduced the Dirac delta function, which picks out a single value of a function when used in an integral. The mathematical rigor of this object was in doubt until the mathematician Laurent Schwartz developed the theory of distributions.
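The "single value" behavior can be checked numerically by integrating a test function against a narrow Gaussian, a standard approximation to the delta. This is only a sketch; the width, grid size, and test function below are arbitrary illustrative choices:

```python
import math

def narrow_gaussian(x, eps):
    """Unit-area Gaussian of width eps; as eps -> 0 it approximates
    the Dirac delta."""
    return math.exp(-x * x / (2.0 * eps * eps)) / (eps * math.sqrt(2.0 * math.pi))

def midpoint_integral(f, a, b, n=200001):
    """Simple midpoint-rule quadrature."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Integrating f(x) * delta(x - 1) should return the single value f(1);
# here f(x) = x**2, so the result is approximately 1.
approx = midpoint_integral(lambda x: x * x * narrow_gaussian(x - 1.0, 1e-3),
                           0.0, 2.0)
```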
Connections between the two fields sometimes require only identifying similar concepts that go by different names, as shown in the 1975 Wu–Yang dictionary, which related concepts of gauge theory to those of differential geometry.
== Physics is not mathematics ==
Despite the close relationship between math and physics, they are not synonyms. In mathematics objects can be defined exactly and logically related, but the object need have no relationship to experimental measurements. In physics, definitions are abstractions or idealizations, approximations adequate when compared to the natural world. In 1960, Georg Rasch noted that no models are ever true, not even Newton's laws, emphasizing that models should not be evaluated based on truth but on their applicability for a given purpose. For example, Newton built a physical model around definitions like his second law of motion
F = ma
based on observations, leading to the development of calculus and highly accurate planetary mechanics, but later this definition was superseded by improved models of mechanics. Mathematics deals with entities whose properties can be known with certainty. According to David Hume, only statements that deal solely with ideas themselves—such as those encountered in mathematics—can be demonstrated to be true with certainty, while any conclusions pertaining to experiences of the real world can only be achieved via "probable reasoning". This leads to a situation that was put by Albert Einstein as "No number of experiments can prove me right; a single experiment can prove me wrong." The ultimate goal of research in pure mathematics is rigorous proof, while in physics heuristic arguments may sometimes suffice in leading-edge research. In short, the methods and goals of physicists and mathematicians are different. Nonetheless, according to Roland Omnès, the axioms of mathematics are not mere conventions, but have physical origins.
== Role of rigor in physics ==
Rigor is indispensable in pure mathematics. But many definitions and arguments found in the physics literature involve concepts and ideas that are not up to the standards of rigor in mathematics.
For example, Freeman Dyson characterized quantum field theory as having two "faces". The outward face looks at nature, and there the predictions of quantum field theory are exceptionally successful. The inward face looks at mathematical foundations and finds inconsistency and mystery. The success of the physical theory comes despite its lack of rigorous mathematical backing.
== Philosophical problems ==
Some of the problems considered in the philosophy of mathematics are the following:
Explain the effectiveness of mathematics in the study of the physical world: "At this point an enigma presents itself which in all ages has agitated inquiring minds. How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality?" —Albert Einstein, in Geometry and Experience (1921).
Clearly delineate mathematics and physics: for some results or discoveries, it is difficult to say whether they belong to mathematics or to physics.
What is the geometry of physical space?
What is the origin of the axioms of mathematics?
How does already existing mathematics influence the creation and development of physical theories?
Is arithmetic analytic or synthetic? (from Kant, see Analytic–synthetic distinction)
What is essentially different between doing a physical experiment to see the result and making a mathematical calculation to see the result? (from the Turing–Wittgenstein debate)
Do Gödel's incompleteness theorems imply that physical theories will always be incomplete? (from Stephen Hawking)
Is mathematics invented or discovered? (millennia-old question, raised among others by Mario Livio)
== Education ==
Despite all the interrelations between physics and mathematics, in recent times the two disciplines have most often been taught separately. This led some professional mathematicians who were also interested in mathematics education, such as Felix Klein, Richard Courant, Vladimir Arnold and Morris Kline, to strongly advocate teaching mathematics in a way more closely connected to the physical sciences. The initial mathematics courses for college students of physics are often taught by mathematicians, despite the differences between physicists' and mathematicians' "ways of thinking" about those traditional courses and how they are used in the physics classes thereafter.
== See also ==
== References ==
== Further reading ==
Arnold, V. I. (1999). "Mathematics and physics: mother and daughter or sisters?". Physics-Uspekhi. 42 (12): 1205–1217. Bibcode:1999PhyU...42.1205A. doi:10.1070/pu1999v042n12abeh000673. S2CID 250835608.
Arnold, V. I. (1998). "On teaching mathematics". Russian Mathematical Surveys. 53 (1). Translated by A. V. Goryunov: 229–236. Bibcode:1998RuMaS..53..229A. doi:10.1070/RM1998v053n01ABEH000005. S2CID 250833432. Archived from the original on 28 April 2017. Retrieved 29 May 2014.
Atiyah, M.; Dijkgraaf, R.; Hitchin, N. (1 February 2010). "Geometry and physics". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 368 (1914): 913–926. Bibcode:2010RSPTA.368..913A. doi:10.1098/rsta.2009.0227. PMC 3263806. PMID 20123740.
Boniolo, Giovanni; Budinich, Paolo; Trobok, Majda, eds. (2005). The Role of Mathematics in Physical Sciences: Interdisciplinary and Philosophical Aspects. Dordrecht: Springer. ISBN 9781402031069.
Colyvan, Mark (2001). "The Miracle of Applied Mathematics" (PDF). Synthese. 127 (3): 265–277. doi:10.1023/A:1010309227321. S2CID 40819230. Retrieved 30 May 2014.
Dirac, Paul (1938–1939). "The Relation between Mathematics and Physics". Proceedings of the Royal Society of Edinburgh. 59 Part II: 122–129. Retrieved 30 March 2014.
Feynman, Richard P. (1992). "The Relation of Mathematics to Physics". The Character of Physical Law (Reprint ed.). London: Penguin Books. pp. 35–58. ISBN 978-0140175059.
Hardy, G. H. (2005). A Mathematician's Apology (PDF) (First electronic ed.). University of Alberta Mathematical Sciences Society. Archived from the original (PDF) on 9 October 2021. Retrieved 30 May 2014.
Hitchin, Nigel (2007). "Interaction between mathematics and physics". ARBOR Ciencia, Pensamiento y Cultura. 725. Retrieved 31 May 2014.
Harvey, Alex (2012). "The Reasonable Effectiveness of Mathematics in the Physical Sciences". General Relativity and Gravitation. 43 (2011): 3057–3064. arXiv:1212.5854. Bibcode:2011GReGr..43.3657H. doi:10.1007/s10714-011-1248-9. S2CID 121985996.
Neumann, John von (1947). "The Mathematician". Works of the Mind. 1 (1): 180–196. (part 1) (part 2).
Poincaré, Henri (1907). The Value of Science (PDF). Translated by George Bruce Halsted. New York: The Science Press.
Schlager, Neil; Lauer, Josh, eds. (2000). "The Intimate Relation between Mathematics and Physics". Science and Its Times: Understanding the Social Significance of Scientific Discovery. Vol. 7: 1950 to Present. Gale Group. pp. 226–229. ISBN 978-0-7876-3939-6.
Vafa, Cumrun (2000). "On the Future of Mathematics/Physics Interaction". Mathematics: Frontiers and Perspectives. USA: AMS. pp. 321–328. ISBN 978-0-8218-2070-4.
Witten, Edward (1986). Physics and Geometry (PDF). Proceedings of the International Conference of Mathematicians. Berkeley, California. pp. 267–303. Archived from the original (PDF) on 2013-12-28. Retrieved 2014-05-27.
Eugene Wigner (1960). "The Unreasonable Effectiveness of Mathematics in the Natural Sciences". Communications on Pure and Applied Mathematics. 13 (1): 1–14. Bibcode:1960CPAM...13....1W. doi:10.1002/cpa.3160130102. S2CID 6112252. Archived from the original on 2011-02-28. Retrieved 2014-05-27.
== External links ==
Gregory W. Moore – Physical Mathematics and the Future (July 4, 2014)
IOP Institute of Physics – Mathematical Physics: What is it and why do we need it? (September 2014)
Feynman explaining the differences between mathematics and physics in a video available on YouTube
Physical oceanography is the study of physical conditions and physical processes within the ocean, especially the motions and physical properties of ocean waters.
Physical oceanography is one of several sub-domains into which oceanography is divided. Others include biological, chemical and geological oceanography.
Physical oceanography may be subdivided into descriptive and dynamical physical oceanography.
Descriptive physical oceanography seeks to research the ocean through observations and complex numerical models, which describe the fluid motions as precisely as possible.
Dynamical physical oceanography focuses primarily upon the processes that govern the motion of fluids, with emphasis upon theoretical research and numerical models. These are part of the large field of geophysical fluid dynamics (GFD), which is shared with meteorology. GFD is a subfield of fluid dynamics describing flows that occur on spatial and temporal scales greatly influenced by the Coriolis force.
== Physical setting ==
Roughly 97% of the planet's water is in its oceans, and the oceans are the source of the vast majority of water vapor that condenses in the atmosphere and falls as rain or snow on the continents. The tremendous heat capacity of the oceans moderates the planet's climate, and its absorption of various gases affects the composition of the atmosphere. The ocean's influence extends even to the composition of volcanic rocks through seafloor metamorphism, as well as to that of volcanic gases and magmas created at subduction zones.
From sea level, the oceans are far deeper than the continents are tall; examination of the Earth's hypsographic curve shows that the average elevation of Earth's landmasses is only 840 metres (2,760 ft), while the ocean's average depth is 3,800 metres (12,500 ft). Though this discrepancy is great, extremes such as mountains and trenches are rare on both land and sea.
== Temperature, salinity and density ==
Because the vast majority of the world ocean's volume is deep water, the mean temperature of seawater is low; roughly 75% of the ocean's volume has a temperature between 0 and 5 °C (Pinet 1996). The same percentage falls in a salinity range between 34 and 35 ppt (3.4–3.5%) (Pinet 1996). There is still quite a bit of variation, however. Surface temperatures can range from below freezing near the poles to 35 °C in restricted tropical seas, while salinity can vary from 10 to 41 ppt (1.0–4.1%).
The vertical structure of the temperature can be divided into three basic layers: a surface mixed layer, where gradients are low; a thermocline, where gradients are high; and a poorly stratified abyss.
In terms of temperature, the ocean's layers are highly latitude-dependent; the thermocline is pronounced in the tropics, but nonexistent in polar waters (Marshak 2001). The halocline usually lies near the surface, where evaporation raises salinity in the tropics, or meltwater dilutes it in polar regions. These variations of salinity and temperature with depth change the density of the seawater, creating the pycnocline.
=== Temperature ===
The temperature of ocean water varies significantly across different regions and depths. As mentioned, the vast majority of ocean water (around 75%) lies between 0° and 5°C, mostly in the deep ocean, where sunlight does not penetrate. The surface layers, however, experience far greater variability. In polar regions, surface temperatures can drop below freezing, while in tropical and subtropical regions, they may reach up to 35°C. This thermal stratification results in a vertical temperature gradient that divides the ocean into distinct layers.
Surface Mixed Layer: This uppermost layer is well-mixed due to wind and wave action, resulting in minimal temperature variation with depth. The thickness of this layer varies depending on location and season but can extend from 50 to 200 meters.
Thermocline: Below the mixed layer lies the thermocline, a zone where temperature decreases rapidly with increasing depth. The thermocline is especially pronounced in tropical and temperate regions but is absent in polar waters where surface temperatures are already near freezing. The depth and sharpness of the thermocline can shift with seasonal changes and ocean currents, playing a critical role in regulating heat exchange between the ocean and the atmosphere.
Abyssal Zone: Beneath the thermocline is the deep ocean or abyssal zone, where temperatures remain relatively uniform, hovering just above freezing (0°-3°C). This cold, dense water originates from polar regions, where surface water cools, sinks, and spreads towards the equator along the ocean floor, forming the deep ocean circulation system.
=== Salinity ===
Salinity, a measure of the concentration of dissolved salts in seawater, typically ranges between 34 and 35 parts per thousand (ppt) in most of the world’s oceans. However, localized factors such as evaporation, precipitation, river runoff, and ice formation or melting cause significant variations in salinity. These variations are often most evident in coastal areas and marginal seas.
Surface Salinity: In the open ocean, salinity is generally highest in subtropical regions where high evaporation rates dominate, and lowest in regions of high precipitation or freshwater influx from rivers, such as the mouths of the Amazon and Congo Rivers. Tropical and subtropical seas, such as the Red Sea and the Mediterranean, can experience salinities as high as 40-41 ppt due to intense evaporation and restricted water exchange with the open ocean.
Halocline: The halocline is a layer within the ocean where salinity changes rapidly with depth. This stratification can be influenced by surface processes like evaporation (which increases salinity) and freshwater input (which decreases it). The halocline often coincides with the thermocline, particularly in tropical and subtropical regions, contributing to overall water column stability.
Polar Regions: In polar areas, surface salinity is generally lower due to freshwater input from melting ice. During sea ice formation, however, brine rejection increases the salinity of surrounding waters, contributing to the sinking of dense water masses and the formation of deep ocean currents that drive global circulation patterns.
The salt in the oceans originates from runoff from terrestrial sources as well as hydrothermal vents. It has been estimated that the salinity of oceans was greater in the distant past than it is today.
=== Density and the Pycnocline ===
The combination of temperature and salinity variations leads to changes in seawater density. Seawater density is primarily influenced by both these factors—colder, saltier water is denser than warmer, fresher water. This variation in density creates stratification in the ocean and is key to understanding ocean circulation patterns.
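The "colder, saltier water is denser" rule can be sketched with a linearized equation of state, a common textbook approximation (the reference values and expansion coefficients below are assumed round numbers; real oceanographic work uses the full TEOS-10 standard):

```python
def density_linear(T, S, rho0=1027.0, T0=10.0, S0=35.0,
                   alpha=1.7e-4, beta=7.6e-4):
    """Linearized equation of state for seawater (kg/m^3):
        rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0))
    Colder (smaller T) and saltier (larger S) water comes out denser."""
    return rho0 * (1.0 - alpha * (T - T0) + beta * (S - S0))

cold_salty = density_linear(2.0, 35.0)    # deep-water-like values
warm_fresh = density_linear(25.0, 34.0)   # tropical-surface-like values
```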
Pycnocline: The pycnocline is a layer within the ocean where the density increases rapidly with depth. It typically coincides with the thermocline and halocline in tropical and subtropical waters, forming a sharp boundary between the less dense surface waters and the denser deep ocean waters. This density stratification acts as a barrier to vertical mixing, limiting the exchange of heat, gases, and nutrients between the surface and deep ocean layers.
Thermohaline Circulation: Density differences drive the thermohaline circulation, also known as the global "conveyor belt," which plays a crucial role in regulating Earth's climate. Cold, dense water formed in the polar regions sinks and moves along the ocean floor toward the equator, while warmer surface waters flow poleward to replace it. This global circulation helps redistribute heat and maintains the ocean's dynamic equilibrium.
Regional Variations: In areas of upwelling or downwelling, the density structure of the water column can be disrupted. Upwelling brings cooler, nutrient-rich water to the surface, while downwelling pushes surface waters to greater depths, impacting local ecosystems and global climate.
Understanding the complex interactions between temperature, salinity, and density is essential for predicting ocean circulation patterns, climate change effects, and the health of marine ecosystems. These factors also influence marine life, as many species are sensitive to the specific temperature and salinity ranges of their habitats.
== Circulation ==
Energy for the ocean circulation (and for the atmospheric circulation) comes from solar radiation and gravitational energy from the Sun and Moon. The amount of sunlight absorbed at the surface varies strongly with latitude, being greater at the equator than at the poles, and this engenders fluid motion in both the atmosphere and ocean that acts to redistribute heat from the equator towards the poles, thereby reducing the temperature gradients that would exist in the absence of fluid motion. Perhaps three quarters of this heat is carried in the atmosphere; the rest is carried in the ocean.
The atmosphere is heated from below, which leads to convection, the largest expression of which is the Hadley circulation. By contrast the ocean is heated from above, which tends to suppress convection. Instead ocean deep water is formed in polar regions where cold salty waters sink in fairly restricted areas. This is the beginning of the thermohaline circulation.
Oceanic currents are largely driven by the surface wind stress; hence the large-scale atmospheric circulation is important to understanding the ocean circulation. The Hadley circulation leads to Easterly winds in the tropics and Westerlies in mid-latitudes. This leads to slow equatorward flow throughout most of a subtropical ocean basin (the Sverdrup balance). The return flow occurs in an intense, narrow, poleward western boundary current. Like the atmosphere, the ocean is far wider than it is deep, and hence horizontal motion is in general much faster than vertical motion. In the southern hemisphere there is a continuous belt of ocean, and hence the mid-latitude westerlies force the strong Antarctic Circumpolar Current. In the northern hemisphere the land masses prevent this and the ocean circulation is broken into smaller gyres in the Atlantic and Pacific basins.
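The Sverdrup balance mentioned above relates the depth-integrated meridional transport to the curl of the wind stress, V = curl(τ)/(ρβ). A minimal sketch, where the curl value, β, and density are assumed typical magnitudes rather than measurements:

```python
def sverdrup_transport(curl_tau, rho=1025.0, beta=2e-11):
    """Depth-integrated meridional Sverdrup transport per unit width (m^2/s),
    V = curl(tau) / (rho * beta). Negative wind-stress curl, as under a
    subtropical high, gives equatorward (negative) flow in the Northern
    Hemisphere."""
    return curl_tau / (rho * beta)

V = sverdrup_transport(-1e-7)  # assumed typical subtropical curl, N/m^3
```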
=== Coriolis effect ===
The Coriolis effect results in a deflection of fluid flows (to the right in the Northern Hemisphere and left in the Southern Hemisphere). This has profound effects on the flow of the oceans. In particular it means the flow goes around high and low pressure systems, permitting them to persist for long periods of time. As a result, tiny variations in pressure can produce measurable currents. A slope of one part in one million in sea surface height, for example, will result in a current of 10 cm/s at mid-latitudes. The fact that the Coriolis effect is largest at the poles and weak at the equator results in sharp, relatively steady western boundary currents which are absent on eastern boundaries. Also see secondary circulation effects.
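The 10 cm/s figure follows directly from geostrophic balance, f·u = g·(surface slope), with the Coriolis parameter f = 2Ω·sin(latitude). A quick check, using the slope quoted in the text and an assumed mid-latitude of 45°:

```python
import math

def geostrophic_speed(slope, lat_deg, g=9.81, omega=7.2921e-5):
    """Surface geostrophic current speed (m/s) from a sea-surface slope,
    via the balance f * u = g * slope."""
    f = 2.0 * omega * math.sin(math.radians(lat_deg))
    return g * slope / f

# A sea-surface slope of one part in one million at 45 degrees N:
u = geostrophic_speed(1e-6, 45.0)  # roughly 0.1 m/s, i.e. ~10 cm/s
```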
=== Ekman transport ===
Ekman transport results in the net transport of surface water 90 degrees to the right of the wind in the Northern Hemisphere, and 90 degrees to the left of the wind in the Southern Hemisphere. As the wind blows across the surface of the ocean, it "grabs" onto a thin layer of the surface water. In turn, that thin sheet of water transfers momentum to the thin layer of water under it, and so on. Because of the Coriolis effect, however, the direction of travel of the layers of water shifts farther and farther to the right with increasing depth in the Northern Hemisphere, and to the left in the Southern Hemisphere. In most cases, the very bottom layer of water affected by the wind lies at a depth of 100–150 m and travels about 180 degrees from, i.e. opposite to, the direction the wind is blowing. Overall, the net transport of water is 90 degrees from the original direction of the wind.
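The depth-integrated effect can be sketched with the classical Ekman transport formula, M = τ/f per unit width; the wind stress and latitude below are arbitrary illustrative choices:

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate, s^-1

def ekman_mass_transport(tau, lat_deg):
    """Depth-integrated Ekman mass transport per unit width (kg m^-1 s^-1),
    M = tau / f, directed 90 degrees to the right of the wind in the
    Northern Hemisphere (to the left in the Southern Hemisphere)."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    return tau / f

# Assumed moderate wind stress of 0.1 N/m^2 at 45 degrees N:
M = ekman_mass_transport(0.1, 45.0)
```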
=== Langmuir circulation ===
Langmuir circulation results in thin, visible stripes, called windrows, on the surface of the ocean, parallel to the direction the wind is blowing. If the wind blows at more than 3 m s−1, it can create parallel windrows of alternating upwelling and downwelling about 5–300 m apart. These windrows are created by adjacent ovular water cells (extending to about 6 m (20 ft) deep) that alternately rotate clockwise and counterclockwise. In the convergence zones, debris, foam and seaweed accumulate, while at the divergence zones plankton are caught and carried to the surface. If there are many plankton in the divergence zone, fish are often attracted to feed on them.
=== Ocean–atmosphere interface ===
At the ocean-atmosphere interface, the ocean and atmosphere exchange fluxes of heat, moisture and momentum.
Heat
The important heat terms at the surface are the sensible heat flux, the latent heat flux, the incoming solar radiation and the balance of long-wave (infrared) radiation. In general, the tropical oceans will tend to show a net gain of heat, and the polar oceans a net loss, the result of a net transfer of energy polewards in the oceans.
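The surface heat balance named above can be written as a simple signed sum; the numbers below are order-of-magnitude placeholders for a tropical ocean surface, not measurements:

```python
def net_surface_heat_flux(solar, longwave, sensible, latent):
    """Net heat gain of the ocean surface in W/m^2 (positive warms the
    ocean): incoming solar radiation minus the net long-wave, sensible,
    and latent heat losses."""
    return solar - longwave - sensible - latent

# Illustrative placeholder values only:
Q_net = net_surface_heat_flux(solar=200.0, longwave=50.0,
                              sensible=10.0, latent=100.0)
```

A positive net flux at low latitudes and a negative one near the poles is what drives the poleward energy transfer described in the text.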
The oceans' large heat capacity moderates the climate of areas adjacent to the oceans, leading to a maritime climate at such locations. This can be a result of heat storage in summer and release in winter; or of transport of heat from warmer locations: a particularly notable example of this is Western Europe, which is heated at least in part by the north atlantic drift.
Momentum
Surface winds tend to be of order meters per second; ocean currents of order centimeters per second. Hence from the point of view of the atmosphere, the ocean can be considered effectively stationary; from the point of view of the ocean, the atmosphere imposes a significant wind stress on its surface, and this forces large-scale currents in the ocean.
Through the wind stress, the wind generates ocean surface waves; the longer waves have a phase velocity tending towards the wind speed. Momentum from the surface winds is transferred into the ocean by the surface waves. The increased roughness of the ocean surface caused by the presence of the waves changes the wind near the surface.
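The deep-water dispersion relation, c = √(gL/2π), makes "phase velocity tending towards the wind speed" quantitative: inverting it gives the wavelength whose waves keep pace with a given wind. A sketch, with an assumed 10 m/s wind:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def phase_speed_deep(wavelength):
    """Phase speed of a deep-water surface gravity wave, c = sqrt(g*L / (2*pi))."""
    return math.sqrt(G * wavelength / (2.0 * math.pi))

def matching_wavelength(wind_speed):
    """Wavelength whose phase speed equals a given wind speed (inverse of the above)."""
    return 2.0 * math.pi * wind_speed ** 2 / G

L = matching_wavelength(10.0)  # the swell that keeps pace with a 10 m/s wind
```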
Moisture
The ocean can gain moisture from rainfall, or lose it through evaporation. Evaporative loss leaves the ocean saltier; the Mediterranean and Persian Gulf for example have strong evaporative loss; the resulting plume of dense salty water may be traced through the Straits of Gibraltar into the Atlantic Ocean. At one time, it was believed that evaporation/precipitation was a major driver of ocean currents; it is now known to be only a very minor factor.
=== Planetary waves ===
Kelvin Waves
A Kelvin wave is any progressive wave that is channeled between two boundaries or opposing forces (usually between the Coriolis force and a coastline or the equator). There are two types, coastal and equatorial. Kelvin waves are gravity driven and non-dispersive. This means that Kelvin waves can retain their shape and direction over long periods of time. They are usually created by a sudden shift in the wind, such as the change of the trade winds at the beginning of the El Niño-Southern Oscillation.
Coastal Kelvin waves follow shorelines and will always propagate in a counterclockwise direction in the Northern hemisphere (with the shoreline to the right of the direction of travel) and clockwise in the Southern hemisphere.
Equatorial Kelvin waves propagate to the east in the Northern and Southern hemispheres, using the equator as a guide.
Kelvin waves are known to have very high speeds, typically around 2–3 meters per second. They have wavelengths of thousands of kilometers and amplitudes in the tens of meters.
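The 2–3 m/s figure is consistent with the shallow-water wave speed c = √(g′H) for a baroclinic mode; the reduced gravity and layer depth below are assumed textbook-scale values, not observations:

```python
import math

G = 9.81

def kelvin_wave_speed(depth, reduced_gravity=None):
    """Kelvin wave speed c = sqrt(g*H); pass a reduced gravity g' for a
    baroclinic mode riding on the thermocline instead of the full depth."""
    g = G if reduced_gravity is None else reduced_gravity
    return math.sqrt(g * depth)

# Assumed first-baroclinic values: g' ~ 0.02 m/s^2, upper-layer depth ~ 200 m
c = kelvin_wave_speed(200.0, reduced_gravity=0.02)  # ~2 m/s
```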
Rossby Waves
Rossby waves, or planetary waves, are huge, slow waves generated in the troposphere by temperature differences between the ocean and the continents. Their major restoring force is the change in Coriolis force with latitude. Their amplitudes are usually in the tens of meters and their wavelengths very large. They are usually found at low or mid latitudes.
There are two types of Rossby waves, barotropic and baroclinic. Barotropic Rossby waves have the highest speeds and do not vary vertically. Baroclinic Rossby waves are much slower.
The special identifying feature of Rossby waves is that the phase velocity of each individual wave always has a westward component, but the group velocity can be in any direction. Usually the shorter Rossby waves have an eastward group velocity and the longer ones have a westward group velocity.
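The westward-phase but either-direction-group property follows from the barotropic dispersion relation ω = −βk/(k² + l²). A sketch, where β and the wavenumber values are illustrative magnitudes only:

```python
def rossby_zonal_speeds(k, l, beta=2e-11):
    """Zonal phase and group speeds of a barotropic Rossby wave with
    dispersion relation omega = -beta * k / (k**2 + l**2)."""
    K2 = k * k + l * l
    c_x = -beta / K2                           # phase speed: always westward
    cg_x = beta * (k * k - l * l) / (K2 * K2)  # group speed: sign depends on scale
    return c_x, cg_x

# Illustrative wavenumbers (rad/m): a long wave and a short wave
c_long, cg_long = rossby_zonal_speeds(5e-7, 1e-6)
c_short, cg_short = rossby_zonal_speeds(5e-6, 1e-6)
```

The long wave has a westward group velocity and the short wave an eastward one, matching the identifying feature described above.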
=== Climate variability ===
The interaction of ocean circulation, which serves as a type of heat pump, and biological effects such as the concentration of carbon dioxide can result in global climate changes on a time scale of decades. Known climate oscillations resulting from these interactions, include the Pacific decadal oscillation, North Atlantic oscillation, and Arctic oscillation. The oceanic process of thermohaline circulation is a significant component of heat redistribution across the globe, and changes in this circulation can have major impacts upon the climate.
==== La Niña–El Niño ====
==== Antarctic circumpolar wave ====
This is a coupled ocean/atmosphere wave that circles the Southern Ocean about every eight years. Since it is a wave-2 phenomenon (there are two peaks and two troughs in a latitude circle) at each fixed point in space a signal with a period of four years is seen. The wave moves eastward in the direction of the Antarctic Circumpolar Current.
=== Ocean currents ===
Among the most important ocean currents are the:
Antarctic Circumpolar Current
Deep ocean (density-driven)
Western boundary currents
Gulf Stream
Kuroshio Current
Labrador Current
Oyashio Current
Agulhas Current
Brazil Current
East Australia Current
Eastern Boundary currents
California Current
Canary Current
Peru Current
Benguela Current
==== Antarctic circumpolar ====
The ocean body surrounding the Antarctic is currently the only continuous body of water with a wide latitude band of open water. It interconnects the Atlantic, Pacific and Indian Oceans and provides an uninterrupted stretch for the prevailing westerly winds to significantly increase wave amplitudes. It is generally accepted that these prevailing winds are primarily responsible for the circumpolar current transport. This current is now thought to vary with time, possibly in an oscillatory manner.
==== Deep ocean ====
In the Norwegian Sea evaporative cooling is predominant, and the sinking water mass, the North Atlantic Deep Water (NADW), fills the basin and spills southwards through crevasses in the submarine sills that connect Greenland, Iceland and Britain. It then flows along the western boundary of the Atlantic with some part of the flow moving eastward along the equator and then poleward into the ocean basins. The NADW is entrained into the Circumpolar Current, and can be traced into the Indian and Pacific basins. Flow from the Arctic Ocean Basin into the Pacific, however, is blocked by the narrow shallows of the Bering Strait.
See also marine geology, which explores the geology of the ocean floor, including the plate tectonics that create deep ocean trenches.
==== Western boundary ====
An idealised subtropical ocean basin forced by winds circling around a high-pressure (anticyclonic) system such as the Azores–Bermuda high develops a gyre circulation with slow steady flows towards the equator in the interior. As discussed by Henry Stommel, these flows are balanced in the region of the western boundary, where a thin fast poleward flow called a western boundary current develops. Flow in the real ocean is more complex, but the Gulf Stream, Agulhas and Kuroshio are examples of such currents. They are narrow (approximately 100 km across) and fast (approximately 1.5 m/s).
Equatorward western boundary currents occur in tropical and polar locations, e.g. the East Greenland and Labrador currents in the Atlantic, and the Oyashio. They are forced by winds circulating around low-pressure (cyclonic) systems.
Gulf Stream
The Gulf Stream, together with its northern extension, North Atlantic Current, is a powerful, warm, and swift Atlantic Ocean current that originates in the Gulf of Mexico, exits through the Strait of Florida, and follows the eastern coastlines of the United States and Newfoundland to the northeast before crossing the Atlantic Ocean.
Kuroshio
The Kuroshio Current is an ocean current found in the western Pacific Ocean off the east coast of Taiwan and flowing northeastward past Japan, where it merges with the easterly drift of the North Pacific Current. It is analogous to the Gulf Stream in the Atlantic Ocean, transporting warm, tropical water northward towards the polar region.
== Heat flux ==
=== Heat storage ===
Ocean heat flux is a turbulent and complex system, studied with atmospheric measurement techniques such as eddy covariance; the rate of heat transfer is expressed in watts per square metre or, integrated over ocean basins, in petawatts. Heat flux is the flow of energy per unit of area per unit of time. Most of the Earth's heat storage is within its seas, with smaller fractions of the heat transfer occurring in processes such as evaporation, radiation, diffusion, or absorption into the sea floor. The majority of the ocean heat flux is through advection, the movement of the ocean's currents. For example, the majority of the warm water movement in the south Atlantic is thought to have originated in the Indian Ocean. Another example of advection is the non-equatorial Pacific heating, which results from subsurface processes related to atmospheric anticyclones. Recent observations of warming of Antarctic bottom water in the Southern Ocean are of concern to ocean scientists because bottom water changes will affect currents, nutrients, and biota elsewhere. International awareness of global warming has focused scientific research on this topic since the 1988 creation of the Intergovernmental Panel on Climate Change. Improved ocean observation, instrumentation, theory, and funding have increased scientific reporting on regional and global issues related to heat.
=== Sea level change ===
Tide gauges and satellite altimetry suggest an increase in sea level of 1.5–3 mm/yr over the past 100 years.
The IPCC predicts that by 2081–2100, global warming will lead to a sea level rise of 260 to 820 mm.
== Rapid variations ==
=== Tides ===
The rise and fall of the oceans due to tidal effects is a key influence upon the coastal areas. Ocean tides on the planet Earth are created by the gravitational effects of the Sun and Moon. The tides produced by these two bodies are roughly comparable in magnitude, but the orbital motion of the Moon results in tidal patterns that vary over the course of a month.
The ebb and flow of the tides produce a cyclical current along the coast, and the strength of this current can be quite dramatic along narrow estuaries. Incoming tides can also produce a tidal bore along a river or narrow bay as the water flow against the current results in a wave on the surface.
Tide and Current (Wyban 1992) clearly illustrates the impact of these natural cycles on the lifestyle and livelihood of Native Hawaiians tending coastal fishponds, captured in the saying Aia ke ola ka hana: "Life is in labor."
Tidal resonance occurs in the Bay of Fundy since the time it takes for a large wave to travel from the mouth of the bay to the opposite end, then reflect and travel back to the mouth of the bay coincides with the tidal rhythm producing the world's highest tides.
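The resonance can be sketched with the quarter-wavelength formula for an open-mouthed basin, T = 4L/√(gH), where a long wave travels at c = √(gH). The basin length and mean depth below are rough, assumed figures for the Bay of Fundy:

```python
import math

# Quarter-wavelength resonance of an open-mouthed basin: a long wave travels
# at c = sqrt(g * H) in water of depth H, and the basin resonates when its
# length is one quarter of the wavelength, giving period T = 4 L / c.
# The Bay of Fundy dimensions below are rough, assumed figures.
g = 9.81          # m/s^2
L = 270e3         # basin length, m (assumed)
H = 75.0          # mean depth, m (assumed)

c = math.sqrt(g * H)              # long-wave speed, roughly 27 m/s
T_hours = 4 * L / c / 3600        # resonant period in hours

print(round(T_hours, 1))          # ~11 h, near the 12.42 h semidiurnal tide
```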
As the surface tide oscillates over topography, such as submerged seamounts or ridges, it generates internal waves at the tidal frequency, which are known as internal tides.
=== Tsunamis ===
A series of surface waves can be generated due to large-scale displacement of the ocean water. These can be caused by submarine landslides, seafloor deformations due to earthquakes, or the impact of a large meteorite.
The waves can travel with a velocity of up to several hundred km/hour across the ocean surface, but in mid-ocean they are barely detectable with wavelengths spanning hundreds of kilometers.
Tsunamis, originally called tidal waves, were renamed because they are not related to the tides. They are regarded as shallow-water waves, or waves in water with a depth less than 1/20 their wavelength. Tsunamis have very large periods, high speeds, and great wave heights.
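Because a tsunami's wavelength (hundreds of kilometres) far exceeds the ocean depth, even the open ocean is "shallow" for it, and its speed follows the shallow-water relation c = √(gd). A minimal sketch, assuming a typical 4 km open-ocean depth:

```python
import math

# Shallow-water wave speed c = sqrt(g * d).  A tsunami's wavelength is far
# greater than the ocean depth, so the shallow-water formula applies even
# in mid-ocean.  The 4 km depth is a typical open-ocean value (assumption).
g = 9.81
depth_m = 4000.0

speed_ms = math.sqrt(g * depth_m)
speed_kmh = speed_ms * 3.6

print(round(speed_kmh))   # roughly 700 km/h, matching "several hundred km/hour"
```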
The primary impact of these waves is along the coastal shoreline, as large amounts of ocean water are cyclically propelled inland and then drawn out to sea. This can result in significant modifications to the coastline regions where the waves strike with sufficient energy.
The tsunami that occurred in Lituya Bay, Alaska on July 9, 1958 was 520 m (1,710 ft) high and is the biggest tsunami ever measured, almost 90 m (300 ft) taller than the Sears Tower in Chicago and about 110 m (360 ft) taller than the former World Trade Center in New York.
=== Surface waves ===
The wind generates ocean surface waves, which have a large impact on offshore structures, ships, coastal erosion and sedimentation, as well as harbours. After their generation by the wind, ocean surface waves can travel (as swell) over long distances.
== See also ==
== References ==
== Further reading ==
Gill, Adrian E. (1982). Atmosphere-Ocean Dynamics. San Diego: Academic Press. ISBN 0-12-283520-4.
Samelson, R. M. (2011) The Theory of Large-Scale Ocean Circulation. Cambridge: Cambridge University Press. doi: 10.1017/CBO9780511736605.
Maury, Matthew F. (1855). The Physical Geography of the Seas and Its Meteorology.
Stewart, Robert H. (2007). Introduction to Physical Oceanography (PDF). College Station: Texas A&M University. OCLC 169907785. Archived from the original (PDF) on 2016-03-29.
Wyban, Carol Araki (1992). Tide and Current: Fishponds of Hawaiʻi. Honolulu: University of Hawaiʻi Press. ISBN 0-8248-1396-0.
== External links ==
Way, John H. "Hypsographic curve". Archived from the original on 2007-03-30. Retrieved 2006-01-10.
NASA Oceanography
Ocean Motion and Surface Currents
Ocean World (digital book)
National Oceanographic and Atmospheric Administration
University-National Oceanographic Laboratory System
Pacific Disaster Center
Pacific Tsunami Museum Hilo, Hawaii
Science of Tsunami Hazards Archived 2006-02-07 at the Wayback Machine (journal)
NEMO academic software for oceanography
History of Salinity Determination
In mathematics, an integral transform is a type of transform that maps a function from its original function space into another function space via integration, where some of the properties of the original function might be more easily characterized and manipulated than in the original function space. The transformed function can generally be mapped back to the original function space using the inverse transform.
== General form ==
An integral transform is any transform T of the following form:

(Tf)(u) = \int_{t_1}^{t_2} f(t)\,K(t,u)\,dt
The input of this transform is a function f, and the output is another function Tf. An integral transform is a particular kind of mathematical operator.
There are numerous useful integral transforms. Each is specified by a choice of the function K of two variables, which is called the kernel or nucleus of the transform.
Some kernels have an associated inverse kernel K^{-1}(u, t) which (roughly speaking) yields an inverse transform:

f(t) = \int_{u_1}^{u_2} (Tf)(u)\,K^{-1}(u,t)\,du
A symmetric kernel is one that is unchanged when the two variables are permuted; it is a kernel function K such that K(t, u) = K(u, t). In the theory of integral equations, symmetric kernels correspond to self-adjoint operators.
== Motivation ==
There are many classes of problems that are difficult to solve—or at least quite unwieldy algebraically—in their original representations. An integral transform "maps" an equation from its original "domain" into another domain, in which manipulating and solving the equation may be much easier than in the original domain. The solution can then be mapped back to the original domain with the inverse of the integral transform.
There are many applications of probability that rely on integral transforms, such as "pricing kernel" or stochastic discount factor, or the smoothing of data recovered from robust statistics; see kernel (statistics).
== History ==
The precursor of the transforms was the Fourier series, used to express functions on finite intervals. Later the Fourier transform was developed to remove the requirement of finite intervals.
Using the Fourier series, just about any practical function of time (the voltage across the terminals of an electronic device for example) can be represented as a sum of sines and cosines, each suitably scaled (multiplied by a constant factor), shifted (advanced or retarded in time) and "squeezed" or "stretched" (increasing or decreasing the frequency). The sines and cosines in the Fourier series are an example of an orthonormal basis.
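This build-up from scaled sinusoids can be seen numerically: the partial Fourier series of a square wave improves as harmonics are added. A small sketch (the grid and number of terms are illustrative choices; points very close to the jumps are excluded because of the Gibbs overshoot there):

```python
import numpy as np

# Partial Fourier series of a square wave: sq(t) = (4/pi) * sum over odd k
# of sin(k t) / k on (0, pi).  Adding harmonics sharpens the approximation,
# illustrating how a practical function is built from scaled sinusoids.
t = np.linspace(0.01, np.pi - 0.01, 1000)   # stay away from the jumps
square = np.ones_like(t)                    # target value on (0, pi)

def partial_sum(n_terms):
    k = np.arange(1, 2 * n_terms, 2)        # odd harmonics 1, 3, 5, ...
    return (4 / np.pi) * np.sum(np.sin(np.outer(k, t)) / k[:, None], axis=0)

err_3 = np.mean(np.abs(partial_sum(3) - square))
err_50 = np.mean(np.abs(partial_sum(50) - square))
print(err_50 < err_3)   # more harmonics, better mean fit
```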
== Usage example ==
As an example of an application of integral transforms, consider the Laplace transform. This is a technique that maps differential or integro-differential equations in the "time" domain into polynomial equations in what is termed the "complex frequency" domain. (Complex frequency is similar to actual, physical frequency but rather more general. Specifically, the imaginary component ω of the complex frequency s = −σ + iω corresponds to the usual concept of frequency, viz., the rate at which a sinusoid cycles, whereas the real component σ of the complex frequency corresponds to the degree of "damping", i.e. an exponential decrease of the amplitude.) The equation cast in terms of complex frequency is readily solved in the complex frequency domain (roots of the polynomial equations in the complex frequency domain correspond to eigenvalues in the time domain), leading to a "solution" formulated in the frequency domain. Employing the inverse transform, i.e., the inverse procedure of the original Laplace transform, one obtains a time-domain solution. In this example, polynomials in the complex frequency domain (typically occurring in the denominator) correspond to power series in the time domain, while axial shifts in the complex frequency domain correspond to damping by decaying exponentials in the time domain.
The Laplace transform finds wide application in physics and particularly in electrical engineering, where the characteristic equations that describe the behavior of an electric circuit in the complex frequency domain correspond to linear combinations of exponentially scaled and time-shifted damped sinusoids in the time domain. Other integral transforms find special applicability within other scientific and mathematical disciplines.
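The key property behind this mapping is that differentiation in the time domain becomes multiplication by s: L{f′}(s) = s·L{f}(s) − f(0). A numerical sketch checking this rule for f(t) = e^(−t) (the truncation at t = 40 and the grid are numerical choices):

```python
import numpy as np

# The Laplace transform maps d/dt to multiplication by s:
#   L{f'}(s) = s * L{f}(s) - f(0)
# which is what turns differential equations into polynomial ones.
# Checked numerically for f(t) = exp(-t) at s = 2, where L{f}(s) = 1/(s+1).
t, dt = np.linspace(0.0, 40.0, 400_001, retstep=True)

def laplace(y, s):
    integrand = y * np.exp(-s * t)
    return dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

f = np.exp(-t)          # f(t) = exp(-t),  f(0) = 1
df = -np.exp(-t)        # f'(t)

s = 2.0
lhs = laplace(df, s)            # L{f'}(s)      ≈ -1/3
rhs = s * laplace(f, s) - 1.0   # s L{f}(s) - f(0) ≈ 2/3 - 1 = -1/3
print(abs(lhs - rhs) < 1e-5)    # the derivative rule holds numerically
```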
Another usage example is the kernel in the path integral:

\psi(x,t) = \int_{-\infty}^{\infty} \psi(x',t')\,K(x,t;x',t')\,dx'
This states that the total amplitude \psi(x,t) to arrive at (x,t) is the sum (the integral) over all possible values x' of the total amplitude \psi(x',t') to arrive at the point (x',t'), multiplied by the amplitude to go from x' to x, i.e. K(x,t;x',t'). It is often referred to as the propagator for a given system. This (physics) kernel is the kernel of the integral transform. However, for each quantum system, there is a different kernel.
== Table of transforms ==
In the limits of integration for the inverse transform, c is a constant which depends on the nature of the transform function. For example, for the one- and two-sided Laplace transform, c must be greater than the largest real part of the singularities of the transform function.
Note that there are alternative notations and conventions for the Fourier transform.
== Different domains ==
Here integral transforms are defined for functions on the real numbers, but they can be defined more generally for functions on a group.
If instead one uses functions on the circle (periodic functions), integration kernels are then biperiodic functions; convolution by functions on the circle yields circular convolution.
If one uses functions on the cyclic group of order n (Cn or Z/nZ), one obtains n × n matrices as integration kernels; convolution corresponds to circulant matrices.
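This can be made concrete: on Z/nZ the kernel k(i − j mod n) becomes an n × n circulant matrix, and the discrete Fourier transform diagonalizes it, which is the convolution theorem. A small numerical check (the random kernel and signal are illustrative):

```python
import numpy as np

# On the cyclic group Z/nZ an integration kernel k(i - j mod n) becomes an
# n x n circulant matrix, and the "integral transform" reduces to a
# matrix-vector product.  The DFT diagonalizes every circulant matrix, so
# circular convolution can equivalently be done pointwise in Fourier space.
n = 8
rng = np.random.default_rng(1)
k = rng.standard_normal(n)            # kernel on Z/nZ (illustrative)
x = rng.standard_normal(n)            # signal on Z/nZ (illustrative)

C = np.array([[k[(i - j) % n] for j in range(n)] for i in range(n)])
direct = C @ x                                              # circulant action
via_dft = np.fft.ifft(np.fft.fft(k) * np.fft.fft(x)).real   # convolution theorem

print(np.allclose(direct, via_dft))   # True
```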
== General theory ==
Although the properties of integral transforms vary widely, they have some properties in common. For example, every integral transform is a linear operator, since the integral is a linear operator, and in fact if the kernel is allowed to be a generalized function then all linear operators are integral transforms (a properly formulated version of this statement is the Schwartz kernel theorem).
The general theory of such integral equations is known as Fredholm theory. In this theory, the kernel is understood to be a compact operator acting on a Banach space of functions. Depending on the situation, the kernel is then variously referred to as the Fredholm operator, the nuclear operator or the Fredholm kernel.
== See also ==
== References ==
== Further reading ==
A. D. Polyanin and A. V. Manzhirov, Handbook of Integral Equations, CRC Press, Boca Raton, 1998. ISBN 0-8493-2876-4
R. K. M. Thambynayagam, The Diffusion Handbook: Applied Solutions for Engineers, McGraw-Hill, New York, 2011. ISBN 978-0-07-175184-1
"Integral transform", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Tables of Integral Transforms at EqWorld: The World of Mathematical Equations.
Modern physics is a branch of physics that developed in the early 20th century and onward or branches greatly influenced by early 20th century physics. Notable branches of modern physics include quantum mechanics, special relativity, and general relativity.
Classical physics is typically concerned with everyday conditions: speeds are much lower than the speed of light, sizes are much greater than that of atoms, and energies are relatively small. Modern physics, however, is concerned with more extreme conditions, such as high velocities that are comparable to the speed of light (special relativity), small distances comparable to the atomic radius (quantum mechanics), and very high energies (relativity). In general, quantum and relativistic effects are believed to exist across all scales, although these effects may be very small at human scale. While quantum mechanics is compatible with special relativity (See: Relativistic quantum mechanics), one of the unsolved problems in physics is the unification of quantum mechanics and general relativity, which the Standard Model of particle physics currently cannot account for.
Modern physics is an effort to understand the underlying processes of the interactions of matter using the tools of science and engineering. In a literal sense, the term modern physics means up-to-date physics. In this sense, a significant portion of so-called classical physics is modern. However, since roughly 1890, new discoveries have caused significant paradigm shifts: especially the advent of quantum mechanics (QM) and relativity (ER). Physics that incorporates elements of either QM or ER (or both) is said to be modern physics. It is in this latter sense that the term is generally used.
Modern physics is often encountered when dealing with extreme conditions. Quantum mechanical effects tend to appear when dealing with "lows" (low temperatures, small distances), while relativistic effects tend to appear when dealing with "highs" (high velocities, large distances), the "middles" being classical behavior. For example, when analyzing the behavior of a gas at room temperature, most phenomena will involve the (classical) Maxwell–Boltzmann distribution. However, near absolute zero, the Maxwell–Boltzmann distribution fails to account for the observed behavior of the gas, and the (modern) Fermi–Dirac or Bose–Einstein distributions have to be used instead.
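The gas example can be made quantitative with the standard mean-occupancy formulas, written in terms of x = (E − μ)/k_BT. At large x (hot, dilute) all three distributions coincide; at small x (cold, dense) they split, which is exactly where the classical description fails:

```python
import numpy as np

# Mean occupation number per state as a function of x = (E - mu) / (k_B T):
#   Maxwell-Boltzmann: exp(-x)        (classical)
#   Fermi-Dirac:       1/(exp(x) + 1) (quantum, fermions)
#   Bose-Einstein:     1/(exp(x) - 1) (quantum, bosons)
def mb(x): return np.exp(-x)
def fd(x): return 1.0 / (np.exp(x) + 1.0)
def be(x): return 1.0 / (np.exp(x) - 1.0)

hot, cold = 10.0, 0.1
# Classical limit: at large x the quantum distributions collapse onto MB.
print(abs(fd(hot) - mb(hot)) < 1e-6, abs(be(hot) - mb(hot)) < 1e-6)
# Near degeneracy (small x) they differ dramatically.
print(round(float(fd(cold)), 3), round(float(mb(cold)), 3), round(float(be(cold)), 3))
```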
Very often, it is possible to find – or "retrieve" – the classical behavior from the modern description by analyzing the modern description at low speeds and large distances (by taking a limit, or by making an approximation). When doing so, the result is called the classical limit.
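A worked instance of such a classical limit: the relativistic kinetic energy (γ − 1)mc² reduces to the Newtonian ½mv² as v/c → 0, with a leading relative correction of (3/4)(v/c)². A sketch (the 1 kg mass and 1% of light speed are illustrative choices):

```python
import math

# Classical limit by example: relativistic kinetic energy (gamma - 1) m c^2
# approaches Newtonian (1/2) m v^2 as v/c -> 0; the leading relative
# difference is (3/4) (v/c)^2.
c = 299_792_458.0        # speed of light, m/s
m = 1.0                  # kg (illustrative)

def kinetic_rel(v):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma - 1.0) * m * c ** 2

v = 0.01 * c             # 1% of light speed
rel_err = kinetic_rel(v) / (0.5 * m * v ** 2) - 1.0
print(f"{rel_err:.2e}")  # ~7.5e-05, i.e. (3/4)(v/c)^2
```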
== Hallmark experiments ==
These are generally regarded as the experiments leading to the foundation of modern physics:
== See also ==
== References ==
A. Beiser (2003). Concepts of Modern Physics (6th ed.). McGraw-Hill. ISBN 978-0-07-123460-3.
P. Tipler, R. Llewellyn (2002). Modern Physics (4th ed.). W. H. Freeman. ISBN 978-0-7167-4345-3.
== Notes ==
== External links == | Wikipedia/Modern_physics |
The theory of relativity usually encompasses two interrelated physics theories by Albert Einstein: special relativity and general relativity, proposed and published in 1905 and 1915, respectively. Special relativity applies to all physical phenomena in the absence of gravity. General relativity explains the law of gravitation and its relation to the forces of nature. It applies to the cosmological and astrophysical realm, including astronomy.
The theory transformed theoretical physics and astronomy during the 20th century, superseding a 200-year-old theory of mechanics created primarily by Isaac Newton. It introduced concepts including 4-dimensional spacetime as a unified entity of space and time, relativity of simultaneity, kinematic and gravitational time dilation, and length contraction. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions, along with ushering in the nuclear age. With relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves.
== Development and acceptance ==
Albert Einstein published the theory of special relativity in 1905, building on many theoretical results and empirical findings obtained by Albert A. Michelson, Hendrik Lorentz, Henri Poincaré and others. Max Planck, Hermann Minkowski and others did subsequent work.
Einstein developed general relativity between 1907 and 1915, with contributions by many others after 1915. The final form of general relativity was published in 1916.
The term "theory of relativity" was based on the expression "relative theory" (German: Relativtheorie) used in 1906 by Planck, who emphasized how the theory uses the principle of relativity. In the discussion section of the same paper, Alfred Bucherer used for the first time the expression "theory of relativity" (German: Relativitätstheorie).
By the 1920s, the physics community understood and accepted special relativity. It rapidly became a significant and necessary tool for theorists and experimentalists in the new fields of atomic physics, nuclear physics, and quantum mechanics.
By comparison, general relativity did not appear to be as useful, beyond making minor corrections to predictions of Newtonian gravitation theory. It seemed to offer little potential for experimental test, as most of its assertions were on an astronomical scale. Its mathematics seemed difficult and fully understandable only by a small number of people. Around 1960, general relativity became central to physics and astronomy. New mathematical techniques to apply to general relativity streamlined calculations and made its concepts more easily visualized. As astronomical phenomena were discovered, such as quasars (1963), the 3-kelvin microwave background radiation (1965), pulsars (1967), and the first black hole candidates (1981), the theory explained their attributes, and measurement of them further confirmed the theory.
== Special relativity ==
Special relativity is a theory of the structure of spacetime. It was introduced in Einstein's 1905 paper "On the Electrodynamics of Moving Bodies" (for the contributions of many other physicists and mathematicians, see History of special relativity). Special relativity is based on two postulates which are contradictory in classical mechanics:
The laws of physics are the same for all observers in any inertial frame of reference relative to one another (principle of relativity).
The speed of light in vacuum is the same for all observers, regardless of their relative motion or of the motion of the light source.
The resultant theory copes with experiment better than classical mechanics. For instance, postulate 2 explains the results of the Michelson–Morley experiment. Moreover, the theory has many surprising and counterintuitive consequences. Some of these are:
Relativity of simultaneity: Two events, simultaneous for one observer, may not be simultaneous for another observer if the observers are in relative motion.
Time dilation: Moving clocks are measured to tick more slowly than an observer's "stationary" clock.
Length contraction: Objects are measured to be shortened in the direction that they are moving with respect to the observer.
Maximum speed is finite: No physical object, message or field line can travel faster than the speed of light in vacuum.
The effect of gravity can only travel through space at the speed of light, not faster or instantaneously.
Mass–energy equivalence: E = mc2, energy and mass are equivalent and transmutable.
Relativistic mass, an idea used by some researchers.
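The numbers behind several of these consequences come from the single Lorentz factor γ = 1/√(1 − (v/c)²). A minimal sketch (β = v/c = 0.8 is an illustrative choice):

```python
import math

# Lorentz factor gamma = 1 / sqrt(1 - (v/c)^2): a moving clock ticks slower
# by 1/gamma (time dilation) and a moving rod is shortened by the same
# factor (length contraction).
def gamma(beta):
    return 1.0 / math.sqrt(1.0 - beta ** 2)

beta = 0.8               # v/c (illustrative)
g = gamma(beta)
print(round(g, 4))       # 1.6667: moving clocks tick at 1/gamma = 0.6 rate
print(round(1.0 / g, 4)) # 0.6: a 1 m rod is measured as 0.6 m
```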
The defining feature of special relativity is the replacement of the Galilean transformations of classical mechanics by the Lorentz transformations. (See Maxwell's equations of electromagnetism.)
== General relativity ==
General relativity is a theory of gravitation developed by Einstein in the years 1907–1915. The development of general relativity began with the equivalence principle, under which the states of accelerated motion and being at rest in a gravitational field (for example, when standing on the surface of the Earth) are physically identical. The upshot of this is that free fall is inertial motion: an object in free fall is falling because that is how objects move when there is no force being exerted on them, instead of this being due to the force of gravity as is the case in classical mechanics. This is incompatible with classical mechanics and special relativity because in those theories inertially moving objects cannot accelerate with respect to each other, but objects in free fall do so. To resolve this difficulty Einstein first proposed that spacetime is curved. Einstein discussed his idea with mathematician Marcel Grossmann and they concluded that general relativity could be formulated in the context of Riemannian geometry which had been developed in the 1800s.
In 1915, he devised the Einstein field equations which relate the curvature of spacetime with the mass, energy, and any momentum within it.
Some of the consequences of general relativity are:
Gravitational time dilation: Clocks run slower in deeper gravitational wells.
Precession: Orbits precess in a way unexpected in Newton's theory of gravity. (This has been observed in the orbit of Mercury and in binary pulsars).
Light deflection: Rays of light bend in the presence of a gravitational field.
Frame-dragging: Rotating masses "drag along" the spacetime around them.
Expansion of the universe: The universe is expanding, and certain components within the universe can accelerate the expansion.
Technically, general relativity is a theory of gravitation whose defining feature is its use of the Einstein field equations. The solutions of the field equations are metric tensors which define the geometry of the spacetime and how objects move inertially.
== Experimental evidence ==
Einstein stated that the theory of relativity belongs to a class of "principle-theories". As such, it employs an analytic method, which means that the elements of this theory are not based on hypothesis but on empirical discovery. By observing natural processes, we understand their general characteristics, devise mathematical models to describe what we observed, and by analytical means we deduce the necessary conditions that have to be satisfied. Measurement of separate events must satisfy these conditions and match the theory's conclusions.
=== Tests of special relativity ===
Relativity is a falsifiable theory: It makes predictions that can be tested by experiment. In the case of special relativity, these include the principle of relativity, the constancy of the speed of light, and time dilation. The predictions of special relativity have been confirmed in numerous tests since Einstein published his paper in 1905, but three experiments conducted between 1881 and 1938 were critical to its validation. These are the Michelson–Morley experiment, the Kennedy–Thorndike experiment, and the Ives–Stilwell experiment. Einstein derived the Lorentz transformations from first principles in 1905, but these three experiments allow the transformations to be induced from experimental evidence.
Maxwell's equations—the foundation of classical electromagnetism—describe light as a wave that moves with a characteristic velocity. The modern view is that light needs no medium of transmission, but Maxwell and his contemporaries were convinced that light waves were propagated in a medium, analogous to sound propagating in air, and ripples propagating on the surface of a pond. This hypothetical medium was called the luminiferous aether, at rest relative to the "fixed stars" and through which the Earth moves. Fresnel's partial ether dragging hypothesis ruled out the measurement of first-order (v/c) effects, and although observations of second-order effects (v2/c2) were possible in principle, Maxwell thought they were too small to be detected with then-current technology.
The Michelson–Morley experiment was designed to detect second-order effects of the "aether wind"—the motion of the aether relative to the Earth. Michelson designed an instrument called the Michelson interferometer to accomplish this. The apparatus was sufficiently accurate to detect the expected effects, but he obtained a null result when the first experiment was conducted in 1881, and again in 1887. Although the failure to detect an aether wind was a disappointment, the results were accepted by the scientific community. In an attempt to salvage the aether paradigm, FitzGerald and Lorentz independently created an ad hoc hypothesis in which the length of material bodies changes according to their motion through the aether. This was the origin of FitzGerald–Lorentz contraction, and their hypothesis had no theoretical basis. The interpretation of the null result of the Michelson–Morley experiment is that the round-trip travel time for light is isotropic (independent of direction), but the result alone is not enough to discount the theory of the aether or validate the predictions of special relativity.
While the Michelson–Morley experiment showed that the velocity of light is isotropic, it said nothing about how the magnitude of the velocity changed (if at all) in different inertial frames. The Kennedy–Thorndike experiment was designed to do that, and was first performed in 1932 by Roy Kennedy and Edward Thorndike. They obtained a null result, and concluded that "there is no effect ... unless the velocity of the solar system in space is no more than about half that of the earth in its orbit". That possibility was thought to be too coincidental to provide an acceptable explanation, so from the null result of their experiment it was concluded that the round-trip time for light is the same in all inertial reference frames.
The Ives–Stilwell experiment was carried out by Herbert Ives and G.R. Stilwell first in 1938 and with better accuracy in 1941. It was designed to test the transverse Doppler effect – the redshift of light from a moving source in a direction perpendicular to its velocity—which had been predicted by Einstein in 1905. The strategy was to compare observed Doppler shifts with what was predicted by classical theory, and look for a Lorentz factor correction. Such a correction was observed, from which was concluded that the frequency of a moving atomic clock is altered according to special relativity.
Those classic experiments have been repeated many times with increased precision. Other experiments include, for instance, relativistic energy and momentum increase at high velocities, experimental testing of time dilation, and modern searches for Lorentz violations.
=== Tests of general relativity ===
General relativity has also been confirmed many times, the classic experiments being the perihelion precession of Mercury's orbit, the deflection of light by the Sun, and the gravitational redshift of light. Other tests confirmed the equivalence principle and frame dragging.
== Modern applications ==
Far from being simply of theoretical interest, relativistic effects are important practical engineering concerns. Satellite-based measurement needs to take into account relativistic effects, as each satellite is in motion relative to an Earth-bound user, and is thus in a different frame of reference under the theory of relativity. Global positioning systems such as GPS, GLONASS, and Galileo, must account for all of the relativistic effects in order to work with precision, such as the consequences of the Earth's gravitational field. This is also the case in the high-precision measurement of time. Instruments ranging from electron microscopes to particle accelerators would not work if relativistic considerations were omitted.
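An order-of-magnitude check of the corrections a GPS satellite clock needs, combining special-relativistic slowing (orbital speed) with gravitational speed-up (weaker gravity aloft). The orbital radius and Earth parameters are rounded textbook values, used here as assumptions:

```python
import math

# Rough relativistic clock corrections for a GPS satellite.
# Parameters are rounded textbook values (assumptions).
GM = 3.986004e14          # Earth's gravitational parameter, m^3/s^2
c = 299_792_458.0         # speed of light, m/s
R_earth = 6.371e6         # Earth radius, m
r_sat = 2.6571e7          # GPS orbital radius, m (~20,200 km altitude)
day = 86_400.0            # s

v = math.sqrt(GM / r_sat)                        # circular orbital speed ~3.9 km/s
slow = 0.5 * (v / c) ** 2 * day                  # special-relativistic lag, s/day
fast = GM / c**2 * (1 / R_earth - 1 / r_sat) * day   # gravitational speed-up, s/day

net_us = (fast - slow) * 1e6
print(round(net_us, 1))   # ~38 microseconds/day gained by the satellite clock
```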
== See also ==
Doubly special relativity
Galilean invariance
List of textbooks on relativity
== References ==
== Further reading ==
== External links ==
The dictionary definition of theory of relativity at Wiktionary
Media related to Theory of relativity at Wikimedia Commons
Geophysics () is a subject of natural science concerned with the physical processes and properties of Earth and its surrounding space environment, and the use of quantitative methods for their analysis. Geophysicists conduct investigations across a wide range of scientific disciplines. The term geophysics classically refers to solid earth applications: Earth's shape; its gravitational, magnetic fields, and electromagnetic fields; its internal structure and composition; its dynamics and their surface expression in plate tectonics, the generation of magmas, volcanism and rock formation. However, modern geophysics organizations and pure scientists use a broader definition that includes the water cycle including snow and ice; fluid dynamics of the oceans and the atmosphere; electricity and magnetism in the ionosphere and magnetosphere and solar-terrestrial physics; and analogous problems associated with the Moon and other planets.
Although geophysics was only recognized as a separate discipline in the 19th century, its origins date back to ancient times. The first magnetic compasses were made from lodestones, while more modern magnetic compasses played an important role in the history of navigation. The first seismic instrument was built in 132 AD. Isaac Newton applied his theory of mechanics to the tides and the precession of the equinox; and instruments were developed to measure the Earth's shape, density and gravity field, as well as the components of the water cycle. In the 20th century, geophysical methods were developed for remote exploration of the solid Earth and the ocean, and geophysics played an essential role in the development of the theory of plate tectonics.
Geophysics is pursued for fundamental understanding of the Earth and its space environment. Geophysics often addresses societal needs, such as mineral resources, assessment and mitigation of natural hazards and environmental impact assessment. In exploration geophysics, geophysical survey data are used to analyze potential petroleum reservoirs and mineral deposits, locate groundwater, find archaeological relics, determine the thickness of glaciers and soils, and assess sites for environmental remediation.
== Physical phenomena ==
Geophysics is a highly interdisciplinary subject, and geophysicists contribute to every area of the Earth sciences, while some geophysicists conduct research in the planetary sciences. To provide a clearer idea of what constitutes geophysics, this section describes phenomena that are studied in physics and how they relate to the Earth and its surroundings. Geophysicists also investigate the physical processes and properties of the Earth, its fluid layers, and magnetic field along with the near-Earth environment in the Solar System, which includes other planetary bodies.
=== Gravity ===
The gravitational pull of the Moon and Sun gives rise to two high tides and two low tides every lunar day, or every 24 hours and 50 minutes. Therefore, there is a gap of 12 hours and 25 minutes between every high tide and between every low tide.
Gravitational forces make rocks press down on deeper rocks, increasing their density as the depth increases. Measurements of gravitational acceleration and gravitational potential at the Earth's surface and above it can be used to look for mineral deposits (see gravity anomaly and gravimetry). The surface gravitational field provides information on the dynamics of tectonic plates. The geopotential surface called the geoid is one definition of the shape of the Earth. The geoid would be the global mean sea level if the oceans were in equilibrium and could be extended through the continents (such as with very narrow canals).
=== Vibrations ===
Seismic waves are vibrations that travel through the Earth's interior or along its surface. The entire Earth can also oscillate in forms that are called normal modes or free oscillations of the Earth. Ground motions from waves or normal modes are measured using seismographs. If the waves come from a localized source such as an earthquake or explosion, measurements at more than one location can be used to locate the source. The locations of earthquakes provide information on plate tectonics and mantle convection.
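A classic single-station version of this location idea uses the S-minus-P arrival delay: P waves travel faster than S waves, so the delay grows with distance as Δt = d/v_s − d/v_p. A sketch with assumed average crustal speeds:

```python
# Distance to an earthquake from the S-minus-P arrival delay at one station:
#   dt = d / v_s - d / v_p   =>   d = dt / (1/v_s - 1/v_p)
# The wave speeds and delay below are assumed, typical crustal values.
v_p, v_s = 6.0, 3.5        # P- and S-wave speeds, km/s (assumed)
dt = 10.0                  # seconds between P and S arrivals (assumed)

d = dt / (1.0 / v_s - 1.0 / v_p)
print(round(d))            # 84 km; three such stations triangulate the epicentre
```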
Recording of seismic waves from controlled sources provides information on the region that the waves travel through. If the density or composition of the rock changes, waves are reflected. Reflections recorded using reflection seismology can provide a wealth of information on the structure of the earth up to several kilometers deep and are used to increase our understanding of the geology as well as to explore for oil and gas. Changes in the travel direction, called refraction, can be used to infer the deep structure of the Earth.
Earthquakes pose a risk to humans. Understanding their mechanisms, which depend on the type of earthquake (e.g., intraplate or deep focus), can lead to better estimates of earthquake risk and improvements in earthquake engineering.
=== Electricity ===
Although we mainly notice electricity during thunderstorms, there is always a downward electric field near the surface that averages 120 volts per meter. The atmosphere is ionized by penetrating galactic cosmic rays, leaving it with a net positive charge relative to the solid Earth. A current of about 1800 amperes flows in the global circuit. It flows downward from the ionosphere over most of the Earth and back upward through thunderstorms. The flow is manifested by lightning below the clouds and sprites above.
A variety of electric methods are used in geophysical survey. Some measure spontaneous potential, a potential that arises in the ground because of human-made or natural disturbances. Telluric currents flow in Earth and the oceans. They have two causes: electromagnetic induction by the time-varying, external-origin geomagnetic field and motion of conducting bodies (such as seawater) across the Earth's permanent magnetic field. The distribution of telluric current density can be used to detect variations in electrical resistivity of underground structures. Geophysicists can also provide the electric current themselves (see induced polarization and electrical resistivity tomography).
=== Electromagnetic waves ===
Electromagnetic waves occur in the ionosphere and magnetosphere as well as in Earth's outer core. Dawn chorus is believed to be caused by high-energy electrons that get caught in the Van Allen radiation belt. Whistlers are produced by lightning strikes. Hiss may be generated by both. Electromagnetic waves may also be generated by earthquakes (see seismo-electromagnetics).
In the highly conductive liquid iron of the outer core, magnetic fields are generated by electric currents through electromagnetic induction. Alfvén waves are magnetohydrodynamic waves in the magnetosphere or the Earth's core. In the core, they probably have little observable effect on the Earth's magnetic field, but slower waves such as magnetic Rossby waves may be one source of geomagnetic secular variation.
Electromagnetic methods that are used for geophysical survey include transient electromagnetics, magnetotellurics, surface nuclear magnetic resonance and electromagnetic seabed logging.
=== Magnetism ===
The Earth's magnetic field protects the Earth from the deadly solar wind and has long been used for navigation. It originates in the fluid motions of the outer core. The magnetic field in the upper atmosphere gives rise to the auroras.
The Earth's field is roughly like a tilted dipole, but it changes over time (a phenomenon called geomagnetic secular variation). Mostly the geomagnetic pole stays near the geographic pole, but at random intervals averaging 440,000 to a million years or so, the polarity of the Earth's field reverses. The geomagnetic polarity time scale records 184 polarity intervals in the last 83 million years, with the frequency of reversals changing over time; the most recent brief complete reversal, the Laschamp event, occurred 41,000 years ago during the last glacial period. Geologists observe geomagnetic reversals recorded in volcanic rocks through magnetostratigraphic correlation (see natural remanent magnetization), and their signature can be seen as parallel linear magnetic anomaly stripes on the seafloor. These stripes provide quantitative information on seafloor spreading, a part of plate tectonics. They are the basis of magnetostratigraphy, which correlates magnetic reversals with other stratigraphies to construct geologic time scales. In addition, the magnetization in rocks can be used to measure the motion of continents.
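The figures above imply an average polarity-interval length, which a line of arithmetic confirms sits near the lower end of the quoted 440,000-to-a-million-year range:

```python
# Average polarity-interval length implied by the figures in the text:
# 184 polarity intervals over the last 83 million years.
intervals = 184
span_years = 83_000_000

avg_interval = span_years / intervals
print(f"{avg_interval:,.0f} years")  # → 451,087 years
```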
=== Radioactivity ===
Radioactive decay accounts for about 80% of the Earth's internal heat, powering the geodynamo and plate tectonics. The main heat-producing isotopes are potassium-40, uranium-238, uranium-235, and thorium-232.
Radioactive elements are used for radiometric dating, the primary method for establishing an absolute time scale in geochronology.
Unstable isotopes decay at predictable rates, and the decay rates of different isotopes cover several orders of magnitude, so radioactive decay can be used to accurately date both recent events and events in past geologic eras. Radiometric mapping using ground and airborne gamma spectrometry can be used to map the concentration and distribution of radioisotopes near the Earth's surface, which is useful for mapping lithology and alteration.
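The exponential decay law behind radiometric dating can be sketched in a few lines. The numbers below are illustrative, not from the text: the age follows from the ratio of accumulated daughter atoms to surviving parent atoms, assuming no initial daughter atoms and counting every decay as one daughter atom (real potassium–argon dating must also account for the branching of potassium-40 decay):

```python
import math

# Age from the daughter/parent ratio D/P, assuming a closed system with
# no daughter atoms present initially: N_P(t) = N_P(0) * exp(-lambda*t),
# so D/P = exp(lambda*t) - 1 and t = ln(1 + D/P) / lambda.
def radiometric_age(daughter_parent_ratio, half_life_years):
    decay_const = math.log(2) / half_life_years
    return math.log(1 + daughter_parent_ratio) / decay_const

# A sample in which a parent isotope with half-life 1.25 billion years
# (roughly that of potassium-40) has produced one daughter atom for every
# surviving parent atom (D/P = 1) is one half-life old.
print(round(radiometric_age(1.0, 1.25e9) / 1e9, 3))  # → 1.25 (billion years)
```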
=== Fluid dynamics ===
Fluid motions occur in the magnetosphere, atmosphere, ocean, mantle and core. Even the mantle, though it has an enormous viscosity, flows like a fluid over long time intervals. This flow is reflected in phenomena such as isostasy, post-glacial rebound and mantle plumes. The mantle flow drives plate tectonics and the flow in the Earth's core drives the geodynamo.
Geophysical fluid dynamics is a primary tool in physical oceanography and meteorology. The rotation of the Earth has profound effects on the Earth's fluid dynamics, often due to the Coriolis effect. In the atmosphere, it gives rise to large-scale patterns like Rossby waves and determines the basic circulation patterns of storms. In the ocean, they drive large-scale circulation patterns as well as Kelvin waves and Ekman spirals at the ocean surface. In the Earth's core, the circulation of the molten iron is structured by Taylor columns.
Waves and other phenomena in the magnetosphere can be modeled using magnetohydrodynamics.
=== Heat flow ===
The Earth is cooling, and the resulting heat flow generates the Earth's magnetic field through the geodynamo and plate tectonics through mantle convection. The main sources of heat are primordial heat due to the Earth's cooling and radioactivity in the planet's upper crust. There are also some contributions from phase transitions. Heat is mostly carried to the surface by thermal convection, although there are two thermal boundary layers – the core–mantle boundary and the lithosphere – in which heat is transported by conduction. Some heat is carried up from the bottom of the mantle by mantle plumes. The heat flow at the Earth's surface is about 4.2 × 10¹³ W, and it is a potential source of geothermal energy.
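Dividing the quoted global heat flow by the Earth's surface area (mean radius about 6,371 km, an assumed figure not stated above) gives the mean surface heat flux:

```python
import math

# Mean surface heat flux implied by a global heat flow of 4.2e13 W
# spread over the Earth's surface (mean radius ~6.371e6 m, assumed).
global_heat_flow_w = 4.2e13
earth_radius_m = 6.371e6

surface_area_m2 = 4 * math.pi * earth_radius_m**2
mean_flux_mw_m2 = global_heat_flow_w / surface_area_m2 * 1000
print(round(mean_flux_mw_m2))  # → 82 (mW per square meter)
```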
=== Mineral physics ===
The physical properties of minerals must be understood to infer the composition of the Earth's interior from seismology, the geothermal gradient and other sources of information. Mineral physicists study the elastic properties of minerals; their high-pressure phase diagrams, melting points and equations of state at high pressure; and the rheological properties of rocks, or their ability to flow. Deformation of rocks by creep makes flow possible, although over short times the rocks are brittle. The viscosity of rocks is affected by temperature and pressure, and in turn determines the rates at which tectonic plates move.
Water is a very complex substance and its unique properties are essential for life. Its physical properties shape the hydrosphere and are an essential part of the water cycle and climate. Its thermodynamic properties determine evaporation and the thermal gradient in the atmosphere. The many types of precipitation involve a complex mixture of processes such as coalescence, supercooling and supersaturation. Some precipitated water becomes groundwater, and groundwater flow includes phenomena such as percolation, while the conductivity of water makes electrical and electromagnetic methods useful for tracking groundwater flow. Physical properties of water such as salinity have a large effect on its motion in the oceans.
The many phases of ice form the cryosphere and come in forms like ice sheets, glaciers, sea ice, freshwater ice, snow, and frozen ground (or permafrost).
== Regions of the Earth ==
=== Size and form of the Earth ===
Contrary to popular belief, the Earth is not entirely spherical; instead it generally exhibits an ellipsoidal shape, a result of the centrifugal force generated by the planet's constant rotation. This force causes the planet's diameter to bulge towards the Equator, producing the ellipsoid shape. Earth's shape is constantly changing, and different factors, including glacial isostatic rebound (large ice sheets melting, causing the Earth's crust to rebound as the pressure is released), geological features such as mountains or ocean trenches, tectonic plate dynamics, and natural disasters, can further distort the planet's shape.
=== Structure of the interior ===
Evidence from seismology, heat flow at the surface, and mineral physics is combined with the Earth's mass and moment of inertia to infer models of the Earth's interior – its composition, density, temperature, and pressure. For example, the Earth's mean specific gravity (5.515) is far higher than the typical specific gravity of rocks at the surface (2.7–3.3), implying that the deeper material is denser. This is also implied by its low moment of inertia (0.33 MR², compared to 0.4 MR² for a sphere of constant density). However, some of the density increase is compression under the enormous pressures inside the Earth. The effect of pressure can be calculated using the Adams–Williamson equation. The conclusion is that pressure alone cannot account for the increase in density. Instead, we know that the Earth's core is composed of a dense alloy of iron and other elements.
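A rough two-layer sketch shows how a dense core reconciles these observations. The layer radii and densities below are illustrative round numbers, not a fitted model:

```python
# Rough two-layer Earth (illustrative values, not a fitted model): a dense
# core under a lighter mantle reproduces both a mean density far above
# surface-rock values and a moment-of-inertia factor below the
# uniform-sphere value of 0.4.
R  = 6.371e6   # Earth radius, m
rc = 3.48e6    # core radius, m (assumed)
rho_mantle = 4500.0   # kg/m^3 (assumed)
rho_core   = 11000.0  # kg/m^3 (assumed)

x3 = (rc / R) ** 3
x5 = (rc / R) ** 5
d  = rho_core - rho_mantle

# Mean density M / (4/3 pi R^3) and moment-of-inertia factor I / (M R^2)
# for two uniform concentric layers.
mean_density = rho_mantle + d * x3
moi_factor   = 0.4 * (rho_mantle + d * x5) / mean_density

print(round(mean_density), round(moi_factor, 3))
```

The result, a mean density near 5,560 kg/m³ and a factor near 0.35, is already close to the observed 5,515 kg/m³ and 0.33; matching the observations exactly requires the pressure-dependent density profile discussed above.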
Reconstructions of seismic waves in the deep interior of the Earth show that there are no S-waves in the outer core. This indicates that the outer core is liquid, because liquids cannot support shear. The motion of this highly conductive liquid generates the Earth's magnetic field. Earth's inner core, however, is solid because of the enormous pressure.
Reconstruction of seismic reflections in the deep interior indicates some major discontinuities in seismic velocities that demarcate the major zones of the Earth: inner core, outer core, mantle, lithosphere and crust. The mantle itself is divided into the upper mantle, transition zone, lower mantle and D′′ layer. Between the crust and the mantle is the Mohorovičić discontinuity.
The seismic model of the Earth does not by itself determine the composition of the layers. For a complete model of the Earth, mineral physics is needed to interpret seismic velocities in terms of composition. The mineral properties are temperature-dependent, so the geotherm must also be determined. This requires physical theory for thermal conduction and convection and the heat contribution of radioactive elements. The main model for the radial structure of the interior of the Earth is the preliminary reference Earth model (PREM). Some parts of this model have been updated by recent findings in mineral physics (see post-perovskite) and supplemented by seismic tomography. The mantle is mainly composed of silicates, and the boundaries between layers of the mantle are consistent with phase transitions.
The mantle acts as a solid for seismic waves, but under high pressures and temperatures, it deforms so that over millions of years it acts like a liquid. This makes plate tectonics possible.
=== Magnetosphere ===
If a planet's magnetic field is strong enough, its interaction with the solar wind forms a magnetosphere. Early space probes mapped out the gross dimensions of the Earth's magnetic field, which extends about 10 Earth radii towards the Sun. The solar wind, a stream of charged particles, streams out and around the terrestrial magnetic field, and continues behind the magnetic tail, hundreds of Earth radii downstream. Inside the magnetosphere, there are relatively dense regions of solar wind particles called the Van Allen radiation belts.
== Methods ==
=== Geodesy ===
Geophysical measurements are generally at a particular time and place. Accurate measurements of position, along with earth deformation and gravity, are the province of geodesy. While geodesy and geophysics are separate fields, the two are so closely connected that many scientific organizations such as the American Geophysical Union, the Canadian Geophysical Union and the International Union of Geodesy and Geophysics encompass both.
Absolute positions are most frequently determined using the global positioning system (GPS). A three-dimensional position is calculated using messages from four or more visible satellites and referred to the 1980 Geodetic Reference System. An alternative, optical astronomy, combines astronomical coordinates and the local gravity vector to get geodetic coordinates. This method only provides the position in two coordinates and is more difficult to use than GPS. However, it is useful for measuring motions of the Earth such as nutation and Chandler wobble. Relative positions of two or more points can be determined using very-long-baseline interferometry.
Gravity measurements became part of geodesy because they were needed to relate measurements at the surface of the Earth to the reference coordinate system. Gravity measurements on land can be made using gravimeters deployed either on the surface or in helicopter flyovers. Since the 1960s, the Earth's gravity field has been measured by analyzing the motion of satellites. Sea level can also be measured by satellites using radar altimetry, contributing to a more accurate geoid. In 2002, NASA launched the Gravity Recovery and Climate Experiment (GRACE), in which twin satellites map variations in Earth's gravity field by making measurements of the distance between the two satellites using GPS and a microwave ranging system. Gravity variations detected by GRACE include those caused by changes in ocean currents; runoff and ground water depletion; and melting ice sheets and glaciers.
=== Satellites and space probes ===
Satellites in space have made it possible to collect data from not only the visible light region, but in other areas of the electromagnetic spectrum. The planets can be characterized by their force fields: gravity and their magnetic fields, which are studied through geophysics and space physics.
Measuring the changes in acceleration experienced by spacecraft as they orbit has allowed fine details of the gravity fields of the planets to be mapped. For example, in the 1970s, the gravity field disturbances above lunar maria were measured through lunar orbiters, which led to the discovery of concentrations of mass, mascons, beneath the Imbrium, Serenitatis, Crisium, Nectaris and Humorum basins.
=== Global positioning systems (GPS) and geographical information systems (GIS) ===
Since geophysics is concerned with the shape of the Earth, and by extension the mapping of features around and in the planet, geophysical measurements include high-accuracy GPS measurements. These measurements are processed to increase their accuracy through differential GPS processing. Once the geophysical measurements have been processed and inverted, the interpreted results are plotted using GIS. Programs such as ArcGIS and Geosoft were built to meet these needs and include many built-in geophysical functions, such as upward continuation and the calculation of measurement derivatives such as the first vertical derivative. Many geophysics companies have designed in-house geophysics programs that pre-date ArcGIS and Geosoft in order to meet the visualization requirements of a geophysical dataset.
=== Remote sensing ===
Exploration geophysics is a branch of applied geophysics that involves the development and use of different seismic or electromagnetic methods with the aim of investigating energy, mineral and water resources. This is done using various remote sensing platforms such as satellites, aircraft, boats, drones, borehole sensing equipment and seismic receivers. This equipment is often used in conjunction with different geophysical methods, such as magnetic, gravimetric, electromagnetic, radiometric and barometric methods, to gather the data. The remote sensing platforms used in exploration geophysics are not perfect and need adjustments to accurately account for the effects that the platform itself may have on the collected data. For example, when gathering aeromagnetic data (aircraft-gathered magnetic data) using a conventional fixed-wing aircraft, the platform has to be adjusted to account for the electromagnetic currents that it may generate as it passes through Earth's magnetic field. There are also corrections related to changes in measured potential field intensity as the Earth rotates, as the Earth orbits the Sun, and as the Moon orbits the Earth.
=== Signal processing ===
Geophysical measurements are often recorded as time series with GPS location. Signal processing involves the correction of time-series data for unwanted noise or errors introduced by the measurement platform, such as aircraft vibrations in gravity data. It also involves the reduction of sources of noise, such as diurnal corrections in magnetic data. In seismic data, electromagnetic data, and gravity data, processing continues after error corrections to include computational geophysics, which results in the final geological interpretation of the geophysical measurements.
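As a minimal sketch of one such correction, a diurnal correction can be applied by subtracting the drift recorded at a stationary base station from simultaneous survey readings; the station values below are hypothetical:

```python
# Minimal sketch of a diurnal correction for magnetic survey data
# (hypothetical readings in nanotesla): subtract the drift recorded at a
# fixed base station from the simultaneous rover measurements.
base_station = {0: 48000.0, 60: 48012.0, 120: 48005.0}   # time (min) -> nT
rover        = {0: 48110.0, 60: 48155.0, 120: 48092.0}   # time (min) -> nT

datum = base_station[0]  # reference level at the start of the survey
corrected = {t: rover[t] - (base_station[t] - datum) for t in rover}
print(corrected)  # → {0: 48110.0, 60: 48143.0, 120: 48087.0}
```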
== History ==
Geophysics emerged as a separate discipline only in the 19th century, from the intersection of physical geography, geology, astronomy, meteorology, and physics. The first known use of the word geophysics was in German ("Geophysik") by Julius Fröbel in 1834. However, many geophysical phenomena – such as the Earth's magnetic field and earthquakes – have been investigated since the ancient era.
=== Ancient and classical eras ===
The magnetic compass existed in China as far back as the fourth century BC. It was used as much for feng shui as for navigation on land. It was not until good steel needles could be forged that compasses were used for navigation at sea; before that, they could not retain their magnetism long enough to be useful. The first mention of a compass in Europe was in 1190 AD.
Around 240 BC, Eratosthenes of Cyrene deduced that the Earth was round and measured its circumference with great precision. He developed a system of latitude and longitude.
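Eratosthenes' measurement can be reproduced with the traditionally quoted figures, which are assumptions here since the text does not give them: a solar-elevation difference of about 7.2° between Syene and Alexandria, roughly 5,000 stadia apart:

```python
# Eratosthenes' method with the traditionally quoted figures (assumed for
# illustration): the Sun is ~7.2 degrees off vertical at Alexandria when
# it is directly overhead at Syene, about 5000 stadia away.
angle_deg = 7.2
distance_stadia = 5000

# The arc between the cities is angle/360 of the full circumference.
circumference_stadia = 360 / angle_deg * distance_stadia
print(circumference_stadia)  # → 250000.0 stadia
```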
Perhaps the earliest contribution to seismology was the invention of a seismoscope by the prolific inventor Zhang Heng in 132 AD. This instrument was designed to drop a bronze ball from the mouth of a dragon into the mouth of a toad. By looking at which of eight toads had the ball, one could determine the direction of the earthquake. It was 1571 years before the first design for a seismoscope was published in Europe, by Jean de la Hautefeuille. It was never built.
=== Beginnings of modern science ===
The 17th century saw major milestones that marked the beginning of modern science. In 1600, William Gilbert released a publication titled De Magnete, in which he described a series of experiments on both natural magnets (called 'loadstones') and artificially magnetized iron. His experiments led to observations with a small compass needle (versorium), which replicated magnetic behaviours when subjected to a spherical magnet and experienced 'magnetic dips' when pivoted on a horizontal axis. His findings led to the deduction that compasses point north because the Earth itself is a giant magnet.
In 1687, Isaac Newton published his Principia, which was pivotal in the development of modern scientific fields such as astronomy and physics. In it, Newton both laid the foundations for classical mechanics and gravitation and explained geophysical phenomena such as the precession of the equinoxes (the slow drift of the whole pattern of stars along the ecliptic). Newton's theory of gravity was so successful that it changed the main objective of physics in that era to the unravelling of nature's fundamental forces and their characterization in laws.
The first seismometer, an instrument capable of keeping a continuous record of seismic activity, was built by James Forbes in 1844.
== See also ==
International Union of Geodesy and Geophysics (IUGG)
Sociedade Brasileira de Geofísica
Earth system science – Scientific study of the Earth's spheres and their natural integrated systems
List of geophysicists – Famous geophysicists
Outline of geophysics – Topics in the physics of the Earth and its vicinity
Geodynamics – Study of dynamics of the Earth
Planetary science – Science of planets and planetary systems
Geological Engineering
Physics
Space physics
Geosciences
Geodesy
Aeromagnetic survey
== Notes ==
== References ==
== External links ==
A reference manual for near-surface geophysics techniques and applications Archived 18 February 2021 at the Wayback Machine
Commission on Geophysical Risk and Sustainability (GeoRisk), International Union of Geodesy and Geophysics (IUGG)
Study of the Earth's Deep Interior, a Committee of IUGG
Union Commissions (IUGG)
USGS Geomagnetism Program
Career crate: Seismic processor
Society of Exploration Geophysicists
In physics, Hamiltonian mechanics is a reformulation of Lagrangian mechanics that emerged in 1833. Introduced by Sir William Rowan Hamilton, Hamiltonian mechanics replaces the (generalized) velocities {\dot {q}}^{i} used in Lagrangian mechanics with (generalized) momenta. Both theories provide interpretations of classical mechanics and describe the same physical phenomena.
Hamiltonian mechanics has a close relationship with geometry (notably, symplectic geometry and Poisson structures) and serves as a link between classical and quantum mechanics.
== Overview ==
=== Phase space coordinates (p, q) and Hamiltonian H ===
Let (M,{\mathcal {L}}) be a mechanical system with configuration space M and smooth Lagrangian {\mathcal {L}}. Select a standard coordinate system ({\boldsymbol {q}},{\boldsymbol {\dot {q}}}) on M. The quantities

{\displaystyle \textstyle p_{i}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)~{\stackrel {\text{def}}{=}}~{\partial {\mathcal {L}}}/{\partial {\dot {q}}^{i}}}

are called momenta (also generalized momenta, conjugate momenta, or canonical momenta). For a time instant t, the Legendre transformation of {\mathcal {L}} is defined as the map

{\displaystyle ({\boldsymbol {q}},{\boldsymbol {\dot {q}}})\to \left({\boldsymbol {p}},{\boldsymbol {q}}\right)}

which is assumed to have a smooth inverse ({\boldsymbol {p}},{\boldsymbol {q}})\to ({\boldsymbol {q}},{\boldsymbol {\dot {q}}}). For a system with n degrees of freedom, Lagrangian mechanics defines the energy function

{\displaystyle E_{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)\,{\stackrel {\text{def}}{=}}\,\sum _{i=1}^{n}{\dot {q}}^{i}{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}-{\mathcal {L}}.}

The Legendre transform of {\mathcal {L}} turns E_{\mathcal {L}} into a function {\mathcal {H}}({\boldsymbol {p}},{\boldsymbol {q}},t) known as the Hamiltonian. The Hamiltonian satisfies

{\displaystyle {\mathcal {H}}\left({\frac {\partial {\mathcal {L}}}{\partial {\boldsymbol {\dot {q}}}}},{\boldsymbol {q}},t\right)=E_{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}

which implies that

{\displaystyle {\mathcal {H}}({\boldsymbol {p}},{\boldsymbol {q}},t)=\sum _{i=1}^{n}p_{i}{\dot {q}}^{i}-{\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t),}

where the velocities {\boldsymbol {\dot {q}}}=({\dot {q}}^{1},\ldots ,{\dot {q}}^{n}) are found from the (n-dimensional) equation {\boldsymbol {p}}={\partial {\mathcal {L}}}/{\partial {\boldsymbol {\dot {q}}}}, which, by assumption, is uniquely solvable for {\boldsymbol {\dot {q}}}. The (2n-dimensional) pair ({\boldsymbol {p}},{\boldsymbol {q}}) is called phase space coordinates (also canonical coordinates).
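For a single degree of freedom these definitions can be verified numerically. The sketch below, with an assumed quadratic potential, checks that the Hamiltonian obtained by the Legendre transform, H(p, q) = p²/(2m) + V(q), agrees with the energy function E_L:

```python
# Numeric sketch (1 degree of freedom): for L(q, qdot) = m*qdot^2/2 - V(q),
# the momentum is p = dL/dqdot = m*qdot, and the Legendre transform gives
# H(p, q) = p*qdot - L = p^2/(2m) + V(q), equal to the energy E_L.
m = 2.0
V = lambda q: 0.5 * q**2           # assumed potential

def lagrangian(q, qdot):
    return 0.5 * m * qdot**2 - V(q)

q, qdot = 0.7, 1.3
p = m * qdot                       # momentum, dL/dqdot

energy_EL = qdot * p - lagrangian(q, qdot)   # E_L = qdot * dL/dqdot - L
hamiltonian = p**2 / (2 * m) + V(q)          # H(p, q)

assert abs(energy_EL - hamiltonian) < 1e-12  # the two agree
print(round(hamiltonian, 6))
```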
=== From Euler–Lagrange equation to Hamilton's equations ===
In phase space coordinates ({\boldsymbol {p}},{\boldsymbol {q}}), the (n-dimensional) Euler–Lagrange equation

{\displaystyle {\frac {\partial {\mathcal {L}}}{\partial {\boldsymbol {q}}}}-{\frac {d}{dt}}{\frac {\partial {\mathcal {L}}}{\partial {\dot {\boldsymbol {q}}}}}=0}

becomes Hamilton's equations in 2n dimensions,

{\displaystyle {\frac {\mathrm {d} {\boldsymbol {q}}}{\mathrm {d} t}}={\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {p}}}},\qquad {\frac {\mathrm {d} {\boldsymbol {p}}}{\mathrm {d} t}}=-{\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {q}}}}.}
=== From stationary action principle to Hamilton's equations ===
Let {\mathcal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b}) be the set of smooth paths {\boldsymbol {q}}:[a,b]\to M for which {\boldsymbol {q}}(a)={\boldsymbol {x}}_{a} and {\boldsymbol {q}}(b)={\boldsymbol {x}}_{b}. The action functional {\mathcal {S}}:{\mathcal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b})\to \mathbb {R} is defined via

{\displaystyle {\mathcal {S}}[{\boldsymbol {q}}]=\int _{a}^{b}{\mathcal {L}}(t,{\boldsymbol {q}}(t),{\dot {\boldsymbol {q}}}(t))\,dt=\int _{a}^{b}\left(\sum _{i=1}^{n}p_{i}{\dot {q}}^{i}-{\mathcal {H}}({\boldsymbol {p}},{\boldsymbol {q}},t)\right)\,dt,}

where {\boldsymbol {q}}={\boldsymbol {q}}(t) and {\boldsymbol {p}}=\partial {\mathcal {L}}/\partial {\boldsymbol {\dot {q}}} (see above). A path {\boldsymbol {q}}\in {\mathcal {P}}(a,b,{\boldsymbol {x}}_{a},{\boldsymbol {x}}_{b}) is a stationary point of {\mathcal {S}} (and hence is an equation of motion) if and only if the path ({\boldsymbol {p}}(t),{\boldsymbol {q}}(t)) in phase space coordinates obeys the Hamilton equations.
=== Basic physical interpretation ===
A simple interpretation of Hamiltonian mechanics comes from its application on a one-dimensional system consisting of one nonrelativistic particle of mass m. The value H(p,q) of the Hamiltonian is the total energy of the system, in this case the sum of kinetic and potential energy, traditionally denoted T and V, respectively. Here p is the momentum mv and q is the space coordinate. Then

{\displaystyle {\mathcal {H}}=T+V,\qquad T={\frac {p^{2}}{2m}},\qquad V=V(q)}

T is a function of p alone, while V is a function of q alone (i.e., T and V are scleronomic).
In this example, the time derivative of q is the velocity, and so the first Hamilton equation means that the particle's velocity equals the derivative of its kinetic energy with respect to its momentum. The time derivative of the momentum p equals the Newtonian force, and so the second Hamilton equation means that the force equals the negative gradient of potential energy.
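This interpretation can be illustrated numerically. The sketch below integrates Hamilton's equations for an assumed harmonic potential with the symplectic Euler method and checks that the total energy H stays nearly constant:

```python
# Sketch: integrating Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq
# for H = p^2/(2m) + k q^2/2 (a harmonic oscillator) with the symplectic
# Euler scheme; the energy H stays nearly constant along the trajectory.
m, k = 1.0, 4.0          # assumed mass and spring constant
dt, steps = 1e-3, 10_000

q, p = 1.0, 0.0
H0 = p**2 / (2 * m) + 0.5 * k * q**2
for _ in range(steps):
    p -= dt * k * q      # dp/dt = -dH/dq = -k q  (the force)
    q += dt * p / m      # dq/dt =  dH/dp =  p/m  (the velocity)
H = p**2 / (2 * m) + 0.5 * k * q**2

assert abs(H - H0) / H0 < 1e-2   # energy drift stays small
print(round(H0, 3), round(H, 3))
```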
== Example ==
A spherical pendulum consists of a mass m moving without friction on the surface of a sphere. The only forces acting on the mass are the reaction from the sphere and gravity. Spherical coordinates are used to describe the position of the mass in terms of (r, θ, φ), where r is fixed, r = ℓ.
The Lagrangian for this system is

{\displaystyle L={\frac {1}{2}}m\ell ^{2}\left({\dot {\theta }}^{2}+\sin ^{2}\theta \ {\dot {\varphi }}^{2}\right)+mg\ell \cos \theta .}
Thus the Hamiltonian is

{\displaystyle H=P_{\theta }{\dot {\theta }}+P_{\varphi }{\dot {\varphi }}-L}

where

{\displaystyle P_{\theta }={\frac {\partial L}{\partial {\dot {\theta }}}}=m\ell ^{2}{\dot {\theta }}}

and

{\displaystyle P_{\varphi }={\frac {\partial L}{\partial {\dot {\varphi }}}}=m\ell ^{2}\sin ^{2}\!\theta \,{\dot {\varphi }}.}
In terms of coordinates and momenta, the Hamiltonian reads

{\displaystyle H=\underbrace {\left[{\frac {1}{2}}m\ell ^{2}{\dot {\theta }}^{2}+{\frac {1}{2}}m\ell ^{2}\sin ^{2}\!\theta \,{\dot {\varphi }}^{2}\right]} _{T}+\underbrace {{\Big [}-mg\ell \cos \theta {\Big ]}} _{V}={\frac {P_{\theta }^{2}}{2m\ell ^{2}}}+{\frac {P_{\varphi }^{2}}{2m\ell ^{2}\sin ^{2}\theta }}-mg\ell \cos \theta .}
Hamilton's equations give the time evolution of coordinates and conjugate momenta in four first-order differential equations,

{\displaystyle {\begin{aligned}{\dot {\theta }}&={P_{\theta } \over m\ell ^{2}}\\[6pt]{\dot {\varphi }}&={P_{\varphi } \over m\ell ^{2}\sin ^{2}\theta }\\[6pt]{\dot {P_{\theta }}}&={P_{\varphi }^{2} \over m\ell ^{2}\sin ^{3}\theta }\cos \theta -mg\ell \sin \theta \\[6pt]{\dot {P_{\varphi }}}&=0.\end{aligned}}}
Momentum P_{\varphi }, which corresponds to the vertical component of angular momentum L_{z}=\ell \sin \theta \times m\ell \sin \theta \,{\dot {\varphi }}, is a constant of motion. That is a consequence of the rotational symmetry of the system around the vertical axis. Being absent from the Hamiltonian, azimuth \varphi is a cyclic coordinate, which implies conservation of its conjugate momentum.
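The conservation of P_φ can be verified numerically by integrating the four equations above; the parameter values and the RK4 integrator below are illustrative choices, not part of the original derivation:

```python
import math

# Numeric check on the spherical-pendulum equations: integrating the four
# ODEs with a simple RK4 step keeps P_phi exactly constant and the
# Hamiltonian nearly constant. Parameter values are illustrative.
m, l, g = 1.0, 1.0, 9.81

def rhs(s):
    theta, phi, Pth, Pph = s
    return (
        Pth / (m * l**2),
        Pph / (m * l**2 * math.sin(theta)**2),
        Pph**2 * math.cos(theta) / (m * l**2 * math.sin(theta)**3)
            - m * g * l * math.sin(theta),
        0.0,
    )

def rk4(s, dt):
    def shift(a, b, c): return tuple(x + c * y for x, y in zip(a, b))
    k1 = rhs(s)
    k2 = rhs(shift(s, k1, dt / 2))
    k3 = rhs(shift(s, k2, dt / 2))
    k4 = rhs(shift(s, k3, dt))
    return tuple(s[i] + dt / 6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i])
                 for i in range(4))

def hamiltonian(s):
    theta, phi, Pth, Pph = s
    return (Pth**2 / (2 * m * l**2)
            + Pph**2 / (2 * m * l**2 * math.sin(theta)**2)
            - m * g * l * math.cos(theta))

s = (1.0, 0.0, 0.0, 0.5)       # initial (theta, phi, P_theta, P_phi)
H0 = hamiltonian(s)
for _ in range(2000):
    s = rk4(s, 1e-3)

assert s[3] == 0.5                          # P_phi is a constant of motion
assert abs(hamiltonian(s) - H0) < 1e-3      # energy nearly constant
print(round(hamiltonian(s), 4))
```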
== Deriving Hamilton's equations ==
Hamilton's equations can be derived by a calculation with the Lagrangian {\mathcal {L}}, generalized positions q^{i}, and generalized velocities {\dot {q}}^{i}, where i=1,\ldots ,n. Here we work off-shell, meaning q^{i}, {\dot {q}}^{i}, t are independent coordinates in phase space, not constrained to follow any equations of motion (in particular, {\dot {q}}^{i} is not a derivative of q^{i}). The total differential of the Lagrangian is:

{\displaystyle \mathrm {d} {\mathcal {L}}=\sum _{i}\left({\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\mathrm {d} q^{i}+{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}\,\mathrm {d} {\dot {q}}^{i}\right)+{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\ .}
The generalized momentum coordinates were defined as p_{i}=\partial {\mathcal {L}}/\partial {\dot {q}}^{i}, so we may rewrite the equation as:

{\displaystyle {\begin{aligned}\mathrm {d} {\mathcal {L}}=&\sum _{i}\left({\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\,\mathrm {d} q^{i}+p_{i}\mathrm {d} {\dot {q}}^{i}\right)+{\frac {\partial {\mathcal {L}}}{\partial t}}\mathrm {d} t\\=&\sum _{i}\left({\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\,\mathrm {d} q^{i}+\mathrm {d} (p_{i}{\dot {q}}^{i})-{\dot {q}}^{i}\,\mathrm {d} p_{i}\right)+{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\,.\end{aligned}}}
After rearranging, one obtains:
{\displaystyle \mathrm {d} \!\left(\sum _{i}p_{i}{\dot {q}}^{i}-{\mathcal {L}}\right)=\sum _{i}\left(-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\,\mathrm {d} q^{i}+{\dot {q}}^{i}\mathrm {d} p_{i}\right)-{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\ .}
The term in parentheses on the left-hand side is just the Hamiltonian {\textstyle {\mathcal {H}}=\sum p_{i}{\dot {q}}^{i}-{\mathcal {L}}} defined previously, therefore:
{\displaystyle \mathrm {d} {\mathcal {H}}=\sum _{i}\left(-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\,\mathrm {d} q^{i}+{\dot {q}}^{i}\,\mathrm {d} p_{i}\right)-{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\ .}
One may also calculate the total differential of the Hamiltonian {\displaystyle {\mathcal {H}}} with respect to coordinates {\displaystyle q^{i}}, {\displaystyle p_{i}}, {\displaystyle t} instead of {\displaystyle q^{i}}, {\displaystyle {\dot {q}}^{i}}, {\displaystyle t}, yielding:
{\displaystyle \mathrm {d} {\mathcal {H}}=\sum _{i}\left({\frac {\partial {\mathcal {H}}}{\partial q^{i}}}\mathrm {d} q^{i}+{\frac {\partial {\mathcal {H}}}{\partial p_{i}}}\mathrm {d} p_{i}\right)+{\frac {\partial {\mathcal {H}}}{\partial t}}\,\mathrm {d} t\ .}
One may now equate these two expressions for {\displaystyle d{\mathcal {H}}}, one in terms of {\displaystyle {\mathcal {L}}}, the other in terms of {\displaystyle {\mathcal {H}}}:
{\displaystyle \sum _{i}\left(-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\mathrm {d} q^{i}+{\dot {q}}^{i}\mathrm {d} p_{i}\right)-{\frac {\partial {\mathcal {L}}}{\partial t}}\,\mathrm {d} t\ =\ \sum _{i}\left({\frac {\partial {\mathcal {H}}}{\partial q^{i}}}\mathrm {d} q^{i}+{\frac {\partial {\mathcal {H}}}{\partial p_{i}}}\mathrm {d} p_{i}\right)+{\frac {\partial {\mathcal {H}}}{\partial t}}\,\mathrm {d} t\ .}
Since these calculations are off-shell, one can equate the respective coefficients of {\displaystyle \mathrm {d} q^{i}}, {\displaystyle \mathrm {d} p_{i}}, {\displaystyle \mathrm {d} t} on the two sides:
{\displaystyle {\frac {\partial {\mathcal {H}}}{\partial q^{i}}}=-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}\quad ,\quad {\frac {\partial {\mathcal {H}}}{\partial p_{i}}}={\dot {q}}^{i}\quad ,\quad {\frac {\partial {\mathcal {H}}}{\partial t}}=-{\partial {\mathcal {L}} \over \partial t}\ .}
On-shell, one substitutes parametric functions {\displaystyle q^{i}=q^{i}(t)} which define a trajectory in phase space with velocities {\displaystyle {\dot {q}}^{i}={\tfrac {d}{dt}}q^{i}(t)}, obeying Lagrange's equations:
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial {\mathcal {L}}}{\partial {\dot {q}}^{i}}}-{\frac {\partial {\mathcal {L}}}{\partial q^{i}}}=0\ .}
Rearranging and writing in terms of the on-shell {\displaystyle p_{i}=p_{i}(t)} gives:
{\displaystyle {\frac {\partial {\mathcal {L}}}{\partial q^{i}}}={\dot {p}}_{i}\ .}
Thus Lagrange's equations are equivalent to Hamilton's equations:
{\displaystyle {\frac {\partial {\mathcal {H}}}{\partial q^{i}}}=-{\dot {p}}_{i}\quad ,\quad {\frac {\partial {\mathcal {H}}}{\partial p_{i}}}={\dot {q}}^{i}\quad ,\quad {\frac {\partial {\mathcal {H}}}{\partial t}}=-{\frac {\partial {\mathcal {L}}}{\partial t}}\,.}
In the case of time-independent {\displaystyle {\mathcal {H}}} and {\displaystyle {\mathcal {L}}}, i.e. {\displaystyle \partial {\mathcal {H}}/\partial t=-\partial {\mathcal {L}}/\partial t=0}, Hamilton's equations consist of 2n first-order differential equations, while Lagrange's equations consist of n second-order equations. Hamilton's equations usually do not reduce the difficulty of finding explicit solutions, but important theoretical results can be derived from them, because coordinates and momenta are independent variables with nearly symmetric roles.
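As a concrete illustration of the 2n first-order equations, the following minimal Python sketch integrates Hamilton's equations for a one-dimensional harmonic oscillator, H = p²/(2m) + kq²/2, with the semi-implicit (symplectic) Euler method; the mass, stiffness, and step size are illustrative choices, not part of the original text.

```python
import math

# Hedged sketch: integrate dq/dt = ∂H/∂p, dp/dt = -∂H/∂q for a 1-D harmonic
# oscillator H = p^2/(2m) + k q^2/2 using semi-implicit (symplectic) Euler.
# m, k, dt, and the step count are illustrative choices.
m, k = 1.0, 1.0
dHdp = lambda q, p: p / m          # equals q̇
dHdq = lambda q, p: k * q          # equals -ṗ

def integrate(q, p, dt, steps):
    for _ in range(steps):
        p -= dt * dHdq(q, p)       # "kick": ṗ = -∂H/∂q
        q += dt * dHdp(q, p)       # "drift": q̇ = ∂H/∂p
    return q, p

q0, p0 = 1.0, 0.0
# one full period is 2π for m = k = 1; 62832 steps of 1e-4 ≈ one period
q, p = integrate(q0, p0, dt=1e-4, steps=62832)
```

After one period the state should return close to (q0, p0), and the symplectic method keeps the energy bounded rather than drifting.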
Hamilton's equations have another advantage over Lagrange's equations: if a system has a symmetry, so that some coordinate {\displaystyle q_{i}} does not occur in the Hamiltonian (i.e. a cyclic coordinate), the corresponding momentum coordinate {\displaystyle p_{i}} is conserved along each trajectory, and that coordinate can be reduced to a constant in the other equations of the set. This effectively reduces the problem from n coordinates to (n − 1) coordinates: this is the basis of symplectic reduction in geometry. In the Lagrangian framework, the conservation of momentum also follows immediately; however, all the generalized velocities {\displaystyle {\dot {q}}_{i}} still occur in the Lagrangian, and a system of equations in n coordinates still has to be solved.
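The reduction afforded by a cyclic coordinate can be sketched numerically. Assuming planar motion in the Kepler potential V(r) = −1/r with unit mass (an illustrative choice, not from the original text), the azimuth φ is cyclic, so p_φ is a constant and only the reduced radial pair (r, p_r) needs to be integrated:

```python
# Hedged sketch: for H = (p_r^2 + p_φ^2/r^2)/(2m) + V(r) with V(r) = -1/r,
# φ is cyclic (∂H/∂φ = 0), so ṗ_φ = 0 and p_φ enters the reduced radial
# problem only as a constant.  The values below pick the circular orbit r = 1,
# where the effective radial force p_φ²/(m r³) − V'(r) vanishes identically.
m = 1.0
p_phi = 1.0                        # conserved angular momentum
dVdr = lambda r: 1.0 / r**2        # V(r) = -1/r

r, p_r, dt = 1.0, 0.0, 1e-3
for _ in range(10000):             # symplectic Euler on the reduced (r, p_r)
    p_r += dt * (p_phi**2 / (m * r**3) - dVdr(r))
    r += dt * p_r / m
```

Because the effective force is exactly zero on the circular orbit, the reduced one-dimensional system stays at its equilibrium, illustrating how the n-coordinate problem shrank to a single radial equation.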
The Lagrangian and Hamiltonian approaches provide the groundwork for deeper results in classical mechanics, and suggest analogous formulations in quantum mechanics: the path integral formulation and the Schrödinger equation.
== Properties of the Hamiltonian ==
The value of the Hamiltonian {\displaystyle {\mathcal {H}}} is the total energy of the system if and only if the energy function {\displaystyle E_{\mathcal {L}}} has the same property. (See definition of {\displaystyle {\mathcal {H}}}.)
{\displaystyle {\frac {d{\mathcal {H}}}{dt}}={\frac {\partial {\mathcal {H}}}{\partial t}}}
when {\displaystyle \mathbf {p} (t)}, {\displaystyle \mathbf {q} (t)} form a solution of Hamilton's equations. Indeed,
{\textstyle {\frac {d{\mathcal {H}}}{dt}}={\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {p}}}}\cdot {\dot {\boldsymbol {p}}}+{\frac {\partial {\mathcal {H}}}{\partial {\boldsymbol {q}}}}\cdot {\dot {\boldsymbol {q}}}+{\frac {\partial {\mathcal {H}}}{\partial t}},}
and everything but the final term cancels out.
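This property can be spot-checked numerically: for a time-independent Hamiltonian, H should be constant along any solution of Hamilton's equations. The sketch below uses a pendulum Hamiltonian, H = p²/2 − cos q (unit mass and length, an illustrative choice), integrated with a classical Runge–Kutta step:

```python
import math

# Hedged numerical check of dH/dt = ∂H/∂t for a time-independent Hamiltonian:
# pendulum H = p^2/2 - cos(q).  RK4 step size and duration are illustrative.
H = lambda q, p: 0.5 * p * p - math.cos(q)

def rhs(q, p):
    return p, -math.sin(q)           # (∂H/∂p, -∂H/∂q)

def rk4_step(q, p, dt):
    k1q, k1p = rhs(q, p)
    k2q, k2p = rhs(q + 0.5 * dt * k1q, p + 0.5 * dt * k1p)
    k3q, k3p = rhs(q + 0.5 * dt * k2q, p + 0.5 * dt * k2p)
    k4q, k4p = rhs(q + dt * k3q, p + dt * k3p)
    return (q + dt * (k1q + 2 * k2q + 2 * k3q + k4q) / 6,
            p + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6)

q, p = 1.0, 0.0
E0 = H(q, p)
for _ in range(10000):               # integrate to t = 10
    q, p = rk4_step(q, p, 1e-3)
energy_drift = abs(H(q, p) - E0)
```

The drift should be limited only by the integrator's truncation error, consistent with dH/dt = 0 along the exact flow.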
{\displaystyle {\mathcal {H}}} does not change under point transformations, i.e. smooth changes {\displaystyle {\boldsymbol {q}}\leftrightarrow {\boldsymbol {q'}}} of space coordinates. (This follows from the invariance of the energy function {\displaystyle E_{\mathcal {L}}} under point transformations. The invariance of {\displaystyle E_{\mathcal {L}}} can be established directly.)
{\displaystyle {\frac {\partial {\mathcal {H}}}{\partial t}}=-{\frac {\partial {\mathcal {L}}}{\partial t}}.}
(See § Deriving Hamilton's equations).
{\displaystyle -{\frac {\partial {\mathcal {H}}}{\partial q^{i}}}={\dot {p}}_{i}={\frac {\partial {\mathcal {L}}}{\partial q^{i}}}}. (Compare Hamilton's and Euler–Lagrange equations or see § Deriving Hamilton's equations.)
{\displaystyle {\frac {\partial {\mathcal {H}}}{\partial q^{i}}}=0} if and only if {\displaystyle {\frac {\partial {\mathcal {L}}}{\partial q^{i}}}=0}. A coordinate for which the last equation holds is called cyclic (or ignorable). Every cyclic coordinate {\displaystyle q^{i}} reduces the number of degrees of freedom by one, causes the corresponding momentum {\displaystyle p_{i}} to be conserved, and makes Hamilton's equations easier to solve.
== Hamiltonian as the total system energy ==
In its application to a given system, the Hamiltonian is often taken to be {\displaystyle {\mathcal {H}}=T+V} where {\displaystyle T} is the kinetic energy and {\displaystyle V} is the potential energy. Using this relation can be simpler than first calculating the Lagrangian and then deriving the Hamiltonian from the Lagrangian. However, the relation is not true for all systems.
The relation holds true for nonrelativistic systems when all of the following conditions are satisfied:
{\displaystyle {\frac {\partial V({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial {\dot {q}}_{i}}}=0\;,\quad \forall i}
{\displaystyle {\frac {\partial T({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial t}}=0}
{\displaystyle T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})=\sum _{i=1}^{n}\sum _{j=1}^{n}{\biggl (}c_{ij}({\boldsymbol {q}}){\dot {q}}_{i}{\dot {q}}_{j}{\biggr )}}
where {\displaystyle t} is time, {\displaystyle n} is the number of degrees of freedom of the system, and each {\displaystyle c_{ij}({\boldsymbol {q}})} is an arbitrary scalar function of {\displaystyle {\boldsymbol {q}}}.
In words, this means that the relation {\displaystyle {\mathcal {H}}=T+V} holds true if {\displaystyle T} does not contain time as an explicit variable (it is scleronomic), {\displaystyle V} does not contain generalised velocity as an explicit variable, and each term of {\displaystyle T} is quadratic in generalised velocity.
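Under these conditions the Legendre transform H = p·q̇ − L indeed evaluates to T + V, which can be checked numerically for a simple one-dimensional case (the mass and potential below are illustrative choices, not from the original text):

```python
# Hedged sketch: for L = T - V with T = (1/2) m q̇² (quadratic in q̇, time-
# independent) and V = V(q) (velocity-independent), the Legendre transform
# H = p q̇ - L should equal T + V.  m, V, and the sample point are illustrative.
m = 2.0
V = lambda q: q**4 - q**2
T = lambda qdot: 0.5 * m * qdot**2
L = lambda q, qdot: T(qdot) - V(q)

q, qdot = 0.7, -1.3
h = 1e-6
# canonical momentum p = ∂L/∂q̇ by central finite difference
p = (L(q, qdot + h) - L(q, qdot - h)) / (2 * h)
H_legendre = p * qdot - L(q, qdot)   # Legendre transform of L
H_TplusV = T(qdot) + V(q)            # the claimed total energy
```

Because T is quadratic, p·q̇ = 2T, so H = 2T − (T − V) = T + V, which the two computed values confirm.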
=== Proof ===
Preliminary to this proof, it is important to address an ambiguity in the related mathematical notation. While a change of variables can be used to equate {\displaystyle {\mathcal {L}}({\boldsymbol {p}},{\boldsymbol {q}},t)={\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}, it is important to note that {\displaystyle {\frac {\partial {\mathcal {L}}({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial {\dot {q}}_{i}}}\neq {\frac {\partial {\mathcal {L}}({\boldsymbol {p}},{\boldsymbol {q}},t)}{\partial {\dot {q}}_{i}}}}. In this case, the right hand side always evaluates to 0. To perform a change of variables inside of a partial derivative, the multivariable chain rule should be used. Hence, to avoid ambiguity, the function arguments of any term inside of a partial derivative should be stated.
Additionally, this proof uses the notation {\displaystyle f(a,b,c)=f(a,b)} to imply that {\displaystyle {\frac {\partial f(a,b,c)}{\partial c}}=0}.
=== Application to systems of point masses ===
For a system of point masses, the requirement for {\displaystyle T} to be quadratic in generalised velocity is always satisfied for the case where {\displaystyle T({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)=T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})}, which is a requirement for {\displaystyle {\mathcal {H}}=T+V} anyway.
=== Conservation of energy ===
If the conditions for {\displaystyle {\mathcal {H}}=T+V} are satisfied, then conservation of the Hamiltonian implies conservation of energy. This requires the additional condition that {\displaystyle V} does not contain time as an explicit variable:
{\displaystyle {\frac {\partial V({\boldsymbol {q}},{\boldsymbol {\dot {q}}},t)}{\partial t}}=0}
In summary, the requirements for {\displaystyle {\mathcal {H}}=T+V={\text{constant of time}}} to be satisfied for a nonrelativistic system are
{\displaystyle V=V({\boldsymbol {q}})}
{\displaystyle T=T({\boldsymbol {q}},{\boldsymbol {\dot {q}}})}
{\displaystyle T} is a homogeneous quadratic function in {\displaystyle {\boldsymbol {\dot {q}}}}
Regarding extensions to the Euler–Lagrange formulation which use dissipation functions (see Lagrangian mechanics § Extensions to include non-conservative forces), e.g. the Rayleigh dissipation function, energy is not conserved when a dissipation function has effect. The link to the former requirements can be made explicit by relating the extended and conventional Euler–Lagrange equations: grouping the extended terms into the potential function produces a velocity-dependent potential. Hence, the requirements are not satisfied when a dissipation function has effect.
== Hamiltonian of a charged particle in an electromagnetic field ==
A sufficient illustration of Hamiltonian mechanics is given by the Hamiltonian of a charged particle in an electromagnetic field. In Cartesian coordinates the Lagrangian of a non-relativistic classical particle in an electromagnetic field is (in SI Units):
{\displaystyle {\mathcal {L}}=\sum _{i}{\tfrac {1}{2}}m{\dot {x}}_{i}^{2}+\sum _{i}q{\dot {x}}_{i}A_{i}-q\varphi ,}
where q is the electric charge of the particle, φ is the electric scalar potential, and the Ai are the components of the magnetic vector potential, all of which may explicitly depend on {\displaystyle x_{i}} and {\displaystyle t}.
This Lagrangian, combined with the Euler–Lagrange equation, produces the Lorentz force law
{\displaystyle m{\ddot {\mathbf {x} }}=q\mathbf {E} +q{\dot {\mathbf {x} }}\times \mathbf {B} \,,}
and is called minimal coupling.
The canonical momenta are given by:
{\displaystyle p_{i}={\frac {\partial {\mathcal {L}}}{\partial {\dot {x}}_{i}}}=m{\dot {x}}_{i}+qA_{i}.}
The Hamiltonian, as the Legendre transformation of the Lagrangian, is therefore:
{\displaystyle {\mathcal {H}}=\sum _{i}{\dot {x}}_{i}p_{i}-{\mathcal {L}}=\sum _{i}{\frac {\left(p_{i}-qA_{i}\right)^{2}}{2m}}+q\varphi .}
This equation is used frequently in quantum mechanics.
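As a hedged numerical sketch (not part of the original text), Hamilton's equations derived from this Hamiltonian can be integrated for a uniform field B ẑ in the symmetric gauge A = (−By/2, Bx/2, 0), with q = m = B = 1 as illustrative unit choices; the motion should close into a cyclotron circle after one period 2πm/(qB):

```python
import math

# Hedged sketch: H = |p - qA|^2/(2m) for a uniform magnetic field B = B ẑ in
# the symmetric gauge A = (-B y/2, B x/2, 0).  Units q = m = B = 1 are
# illustrative; the cyclotron period is 2π m/(q B).
q_c, m, B = 1.0, 1.0, 1.0

def rhs(x, y, px, py):
    ax, ay = -0.5 * B * y, 0.5 * B * x                    # vector potential
    vx, vy = (px - q_c * ax) / m, (py - q_c * ay) / m     # kinetic velocity
    dpx = 0.5 * q_c * B * vy                              # -∂H/∂x
    dpy = -0.5 * q_c * B * vx                             # -∂H/∂y
    return vx, vy, dpx, dpy

def rk4(state, dt):
    def add(s, k, f):
        return tuple(si + f * ki for si, ki in zip(s, k))
    k1 = rhs(*state)
    k2 = rhs(*add(state, k1, dt / 2))
    k3 = rhs(*add(state, k2, dt / 2))
    k4 = rhs(*add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (0.0, 0.0, 1.0, 0.0)        # x, y, px, py; initial kinetic velocity x̂
dt, period = 1e-4, 2 * math.pi * m / (q_c * B)
for _ in range(int(round(period / dt))):
    state = rk4(state, dt)
x, y, px, py = state
```

After one cyclotron period the particle should return near the origin with its kinetic speed unchanged, as the magnetic field does no work.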
Under the gauge transformation:
{\displaystyle \mathbf {A} \rightarrow \mathbf {A} +\nabla f\,,\quad \varphi \rightarrow \varphi -{\dot {f}}\,,}
where f(r, t) is any scalar function of space and time. The aforementioned Lagrangian, the canonical momenta, and the Hamiltonian transform like:
{\displaystyle L\rightarrow L'=L+q{\frac {df}{dt}}\,,\quad \mathbf {p} \rightarrow \mathbf {p'} =\mathbf {p} +q\nabla f\,,\quad H\rightarrow H'=H-q{\frac {\partial f}{\partial t}}\,,}
which still produces the same Hamilton's equation:
{\displaystyle {\begin{aligned}\left.{\frac {\partial H'}{\partial {x_{i}}}}\right|_{p'_{i}}&=\left.{\frac {\partial }{\partial {x_{i}}}}\right|_{p'_{i}}({\dot {x}}_{i}p'_{i}-L')=-\left.{\frac {\partial L'}{\partial {x_{i}}}}\right|_{p'_{i}}\\&=-\left.{\frac {\partial L}{\partial {x_{i}}}}\right|_{p'_{i}}-q\left.{\frac {\partial }{\partial {x_{i}}}}\right|_{p'_{i}}{\frac {df}{dt}}\\&=-{\frac {d}{dt}}\left(\left.{\frac {\partial L}{\partial {{\dot {x}}_{i}}}}\right|_{p'_{i}}+q\left.{\frac {\partial f}{\partial {x_{i}}}}\right|_{p'_{i}}\right)\\&=-{\dot {p}}'_{i}\end{aligned}}}
In quantum mechanics, the wave function will also undergo a local U(1) group transformation during the gauge transformation, which implies that all physical results must be invariant under local U(1) transformations.
=== Relativistic charged particle in an electromagnetic field ===
The relativistic Lagrangian for a particle (rest mass {\displaystyle m} and charge {\displaystyle q}) is given by:
{\displaystyle {\mathcal {L}}(t)=-mc^{2}{\sqrt {1-{\frac {{{\dot {\mathbf {x} }}(t)}^{2}}{c^{2}}}}}+q{\dot {\mathbf {x} }}(t)\cdot \mathbf {A} \left(\mathbf {x} (t),t\right)-q\varphi \left(\mathbf {x} (t),t\right)}
Thus the particle's canonical momentum is
{\displaystyle \mathbf {p} (t)={\frac {\partial {\mathcal {L}}}{\partial {\dot {\mathbf {x} }}}}={\frac {m{\dot {\mathbf {x} }}}{\sqrt {1-{\frac {{\dot {\mathbf {x} }}^{2}}{c^{2}}}}}}+q\mathbf {A} }
that is, the sum of the kinetic momentum and the potential momentum.
Solving for the velocity, we get
{\displaystyle {\dot {\mathbf {x} }}(t)={\frac {\mathbf {p} -q\mathbf {A} }{\sqrt {m^{2}+{\frac {1}{c^{2}}}{\left(\mathbf {p} -q\mathbf {A} \right)}^{2}}}}}
So the Hamiltonian is
{\displaystyle {\mathcal {H}}(t)={\dot {\mathbf {x} }}\cdot \mathbf {p} -{\mathcal {L}}=c{\sqrt {m^{2}c^{2}+{\left(\mathbf {p} -q\mathbf {A} \right)}^{2}}}+q\varphi }
This results in the force equation (equivalent to the Euler–Lagrange equation)
{\displaystyle {\dot {\mathbf {p} }}=-{\frac {\partial {\mathcal {H}}}{\partial \mathbf {x} }}=q{\dot {\mathbf {x} }}\cdot ({\boldsymbol {\nabla }}\mathbf {A} )-q{\boldsymbol {\nabla }}\varphi =q{\boldsymbol {\nabla }}({\dot {\mathbf {x} }}\cdot \mathbf {A} )-q{\boldsymbol {\nabla }}\varphi }
from which one can derive
{\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {m{\dot {\mathbf {x} }}}{\sqrt {1-{\frac {{\dot {\mathbf {x} }}^{2}}{c^{2}}}}}}\right)&={\frac {\mathrm {d} }{\mathrm {d} t}}(\mathbf {p} -q\mathbf {A} )={\dot {\mathbf {p} }}-q{\frac {\partial \mathbf {A} }{\partial t}}-q({\dot {\mathbf {x} }}\cdot \nabla )\mathbf {A} \\&=q{\boldsymbol {\nabla }}({\dot {\mathbf {x} }}\cdot \mathbf {A} )-q{\boldsymbol {\nabla }}\varphi -q{\frac {\partial \mathbf {A} }{\partial t}}-q({\dot {\mathbf {x} }}\cdot \nabla )\mathbf {A} \\&=q\mathbf {E} +q{\dot {\mathbf {x} }}\times \mathbf {B} \end{aligned}}}
The above derivation makes use of the vector calculus identity:
{\displaystyle {\tfrac {1}{2}}\nabla \left(\mathbf {A} \cdot \mathbf {A} \right)=\mathbf {A} \cdot \mathbf {J} _{\mathbf {A} }=\mathbf {A} \cdot (\nabla \mathbf {A} )=(\mathbf {A} \cdot \nabla )\mathbf {A} +\mathbf {A} \times (\nabla \times \mathbf {A} ).}
An equivalent expression for the Hamiltonian as a function of the relativistic (kinetic) momentum, {\displaystyle \mathbf {P} =\gamma m{\dot {\mathbf {x} }}(t)=\mathbf {p} -q\mathbf {A} }, is
{\displaystyle {\mathcal {H}}(t)={\dot {\mathbf {x} }}(t)\cdot \mathbf {P} (t)+{\frac {mc^{2}}{\gamma }}+q\varphi (\mathbf {x} (t),t)=\gamma mc^{2}+q\varphi (\mathbf {x} (t),t)=E+V}
This has the advantage that the kinetic momentum {\displaystyle \mathbf {P} } can be measured experimentally whereas the canonical momentum {\displaystyle \mathbf {p} } cannot. Notice that the Hamiltonian (total energy) can be viewed as the sum of the relativistic energy (kinetic + rest), {\displaystyle E=\gamma mc^{2}}, plus the potential energy, {\displaystyle V=q\varphi }.
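The equivalence of the two Hamiltonian expressions can be checked numerically in one spatial dimension; the field values and masses below are purely illustrative, with c set to 1:

```python
import math

# Hedged numerical check of the relativistic relations above, in one spatial
# dimension with c = 1.  Given canonical momentum p and potentials (A, φ),
# the velocity formula ẋ = (p - qA)/sqrt(m² + (p - qA)²/c²) should reproduce
# P = γ m ẋ = p - qA, and the two Hamiltonian forms should agree.
c, m, q = 1.0, 2.0, 0.5
p, A, phi = 3.0, 1.2, 0.7             # illustrative numbers

P = p - q * A                          # kinetic momentum
v = P / math.sqrt(m**2 + (P / c) ** 2) # velocity solved from the momentum
gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)

H_sqrt = c * math.sqrt(m**2 * c**2 + P**2) + q * phi   # square-root form
H_gamma = gamma * m * c**2 + q * phi                   # γmc² + qφ form
```

Both forms give the same energy, and γmẋ recovers the kinetic momentum P, as claimed.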
== From symplectic geometry to Hamilton's equations ==
=== Geometry of Hamiltonian systems ===
The Hamiltonian can induce a symplectic structure on a smooth even-dimensional manifold M2n in several equivalent ways, the best known being the following:
As a closed nondegenerate symplectic 2-form ω. According to Darboux's theorem, in a small neighbourhood around any point on M there exist suitable local coordinates {\displaystyle p_{1},\cdots ,p_{n},\ q_{1},\cdots ,q_{n}} (canonical or symplectic coordinates) in which the symplectic form becomes:
{\displaystyle \omega =\sum _{i=1}^{n}dp_{i}\wedge dq_{i}\,.}
The form {\displaystyle \omega } induces a natural isomorphism of the tangent space with the cotangent space: {\displaystyle T_{x}M\cong T_{x}^{*}M}. This is done by mapping a vector {\displaystyle \xi \in T_{x}M} to the 1-form {\displaystyle \omega _{\xi }\in T_{x}^{*}M}, where {\displaystyle \omega _{\xi }(\eta )=\omega (\eta ,\xi )} for all {\displaystyle \eta \in T_{x}M}. Due to the bilinearity and non-degeneracy of {\displaystyle \omega }, and the fact that {\displaystyle \dim T_{x}M=\dim T_{x}^{*}M}, the mapping {\displaystyle \xi \to \omega _{\xi }} is indeed a linear isomorphism. This isomorphism is natural in that it does not change with change of coordinates on {\displaystyle M.}
Repeating over all {\displaystyle x\in M}, we end up with an isomorphism {\displaystyle J^{-1}:{\text{Vect}}(M)\to \Omega ^{1}(M)} between the infinite-dimensional space of smooth vector fields and that of smooth 1-forms. For every {\displaystyle f,g\in C^{\infty }(M,\mathbb {R} )} and {\displaystyle \xi ,\eta \in {\text{Vect}}(M)},
{\displaystyle J^{-1}(f\xi +g\eta )=fJ^{-1}(\xi )+gJ^{-1}(\eta ).}
(In algebraic terms, one would say that the {\displaystyle C^{\infty }(M,\mathbb {R} )}-modules {\displaystyle {\text{Vect}}(M)} and {\displaystyle \Omega ^{1}(M)} are isomorphic). If {\displaystyle H\in C^{\infty }(M\times \mathbb {R} _{t},\mathbb {R} )}, then, for every fixed {\displaystyle t\in \mathbb {R} _{t}}, {\displaystyle dH\in \Omega ^{1}(M)}, and {\displaystyle J(dH)\in {\text{Vect}}(M)}. {\displaystyle J(dH)}
is known as a Hamiltonian vector field. The respective differential equation on
{\displaystyle M}
{\displaystyle {\dot {x}}=J(dH)(x)}
is called Hamilton's equation. Here {\displaystyle x=x(t)} and {\displaystyle J(dH)(x)\in T_{x}M} is the (time-dependent) value of the vector field {\displaystyle J(dH)} at {\displaystyle x\in M}.
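In canonical coordinates the map J is represented by the standard symplectic matrix, so Hamilton's equation reads ẋ = J∇H. A minimal sketch (the Hamiltonian and evaluation point are illustrative choices, not from the original text) for n = 1 with H = (q² + p²)/2:

```python
# Hedged illustration: in canonical coordinates x = (q, p) for n = 1, the
# coordinate form of ẋ = J(dH)(x) uses the standard symplectic matrix
# J = [[0, 1], [-1, 0]], which sends dH to the Hamiltonian vector field.
n = 1
J = [[0.0, 1.0], [-1.0, 0.0]]

def grad_H(x):
    # ∇H for the illustrative Hamiltonian H = (q² + p²)/2
    return [x[0], x[1]]

def hamiltonian_vector_field(x):
    g = grad_H(x)
    return [sum(J[i][k] * g[k] for k in range(2 * n)) for i in range(2 * n)]

xdot = hamiltonian_vector_field([2.0, 3.0])   # at q = 2, p = 3
```

The result is (q̇, ṗ) = (∂H/∂p, −∂H/∂q) = (3, −2), matching the canonical equations.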
A Hamiltonian system may be understood as a fiber bundle E over time R, with the fiber Et being the position space at time t ∈ R. The Lagrangian is thus a function on the jet bundle J over E; taking the fiberwise Legendre transform of the Lagrangian produces a function on the dual bundle over time whose fiber at t is the cotangent space T∗Et, which comes equipped with a natural symplectic form, and this latter function is the Hamiltonian. The correspondence between Lagrangian and Hamiltonian mechanics is achieved with the tautological one-form.
Any smooth real-valued function H on a symplectic manifold can be used to define a Hamiltonian system. The function H is known as "the Hamiltonian" or "the energy function." The symplectic manifold is then called the phase space. The Hamiltonian induces a special vector field on the symplectic manifold, known as the Hamiltonian vector field.
The Hamiltonian vector field induces a Hamiltonian flow on the manifold. This is a one-parameter family of transformations of the manifold (the parameter of the curves is commonly called "the time"); in other words, an isotopy of symplectomorphisms, starting with the identity. By Liouville's theorem, each symplectomorphism preserves the volume form on the phase space. The collection of symplectomorphisms induced by the Hamiltonian flow is commonly called "the Hamiltonian mechanics" of the Hamiltonian system.
The symplectic structure induces a Poisson bracket. The Poisson bracket gives the space of functions on the manifold the structure of a Lie algebra.
If F and G are smooth functions on M then the smooth function ω(J(dF), J(dG)) is properly defined; it is called a Poisson bracket of functions F and G and is denoted {F, G}. The Poisson bracket has the following properties:
bilinearity
antisymmetry
Leibniz rule: {\displaystyle \{F_{1}\cdot F_{2},G\}=F_{1}\{F_{2},G\}+F_{2}\{F_{1},G\}}
Jacobi identity: {\displaystyle \{\{H,F\},G\}+\{\{F,G\},H\}+\{\{G,H\},F\}\equiv 0}
non-degeneracy: if the point x on M is not critical for F then a smooth function G exists such that {\displaystyle \{F,G\}(x)\neq 0}.
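The listed properties can be spot-checked with a finite-difference implementation of the canonical bracket {F, G} = ∂F/∂q ∂G/∂p − ∂F/∂p ∂G/∂q for one degree of freedom; the sample functions and the evaluation point below are illustrative choices:

```python
# Hedged sketch: canonical Poisson bracket for one degree of freedom via
# central finite differences, with a spot-check of antisymmetry and the
# Leibniz rule at a single phase-space point.
h = 1e-5

def d_dq(F, q, p):
    return (F(q + h, p) - F(q - h, p)) / (2 * h)

def d_dp(F, q, p):
    return (F(q, p + h) - F(q, p - h)) / (2 * h)

def bracket(F, G, q, p):
    return d_dq(F, q, p) * d_dp(G, q, p) - d_dp(F, q, p) * d_dq(G, q, p)

F1 = lambda q, p: q * q          # illustrative test functions
F2 = lambda q, p: p
G = lambda q, p: q * p
q0, p0 = 0.8, -0.3

lhs = bracket(lambda q, p: F1(q, p) * F2(q, p), G, q0, p0)
rhs_leibniz = (F1(q0, p0) * bracket(F2, G, q0, p0)
               + F2(q0, p0) * bracket(F1, G, q0, p0))
anti = bracket(F1, G, q0, p0) + bracket(G, F1, q0, p0)
```

For these low-degree polynomials the central differences are exact up to roundoff, so both identities hold to high precision.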
Given a function f, its total time derivative along the Hamiltonian flow is
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}f={\frac {\partial }{\partial t}}f+\left\{f,{\mathcal {H}}\right\},}
if there is a probability distribution ρ, then (since the phase space velocity {\displaystyle ({\dot {p}}_{i},{\dot {q}}_{i})} has zero divergence and probability is conserved) its convective derivative can be shown to be zero and so
{\displaystyle {\frac {\partial }{\partial t}}\rho =-\left\{\rho ,{\mathcal {H}}\right\}}
This is called Liouville's theorem. Every smooth function G over the symplectic manifold generates a one-parameter family of symplectomorphisms and if {G, H} = 0, then G is conserved and the symplectomorphisms are symmetry transformations.
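As a numerical illustration of a conserved quantity with {G, H} = 0, the angular momentum G = x·p_y − y·p_x Poisson-commutes with any central-potential Hamiltonian; the bracket below is evaluated by central finite differences at an arbitrary phase-space point (the potential −1/r and the point are illustrative choices, not from the original text):

```python
# Hedged spot-check that {G, H} = 0 for a symmetry generator: for the
# central-potential Hamiltonian H = (px² + py²)/2 - 1/r (illustrative choice),
# the angular momentum G = x·py - y·px should Poisson-commute with H.
h = 1e-5

def grad(F, z):                     # z = (x, y, px, py)
    g = []
    for i in range(4):
        zp, zm = list(z), list(z)
        zp[i] += h
        zm[i] -= h
        g.append((F(zp) - F(zm)) / (2 * h))
    return g

def bracket(F, G, z):
    gF, gG = grad(F, z), grad(G, z)
    # {F, G} = Σ_i ∂F/∂q_i ∂G/∂p_i - ∂F/∂p_i ∂G/∂q_i, q = (x, y), p = (px, py)
    return sum(gF[i] * gG[i + 2] - gF[i + 2] * gG[i] for i in range(2))

H = lambda z: 0.5 * (z[2] ** 2 + z[3] ** 2) - 1.0 / (z[0] ** 2 + z[1] ** 2) ** 0.5
Lz = lambda z: z[0] * z[3] - z[1] * z[2]
residual = abs(bracket(Lz, H, [1.2, -0.4, 0.3, 0.9]))
```

The residual is zero up to finite-difference error, consistent with angular momentum being conserved along the flow and generating the rotational symmetry.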
A Hamiltonian may have multiple conserved quantities Gi. If the symplectic manifold has dimension 2n and there are n functionally independent conserved quantities Gi which are in involution (i.e., {Gi, Gj} = 0), then the Hamiltonian is Liouville integrable. The Liouville–Arnold theorem says that, locally, any Liouville integrable Hamiltonian can be transformed via a symplectomorphism into a new Hamiltonian with the conserved quantities Gi as coordinates; the new coordinates are called action–angle coordinates. The transformed Hamiltonian depends only on the Gi, and hence the equations of motion have the simple form
{\displaystyle {\dot {G}}_{i}=0\quad ,\quad {\dot {\varphi }}_{i}=F_{i}(G)}
for some function F. There is an entire field focusing on small deviations from integrable systems governed by the KAM theorem.
The integrability of Hamiltonian vector fields is an open question. In general, Hamiltonian systems are chaotic; concepts of measure, completeness, integrability and stability are poorly defined.
=== Riemannian manifolds ===
An important special case consists of those Hamiltonians that are quadratic forms, that is, Hamiltonians that can be written as
{\displaystyle {\mathcal {H}}(q,p)={\tfrac {1}{2}}\langle p,p\rangle _{q}}
where ⟨ , ⟩q is a smoothly varying inner product on the fibers T∗qQ, the cotangent space to the point q in the configuration space, sometimes called a cometric. This Hamiltonian consists entirely of the kinetic term.
If one considers a Riemannian manifold or a pseudo-Riemannian manifold, the Riemannian metric induces a linear isomorphism between the tangent and cotangent bundles. (See Musical isomorphism). Using this isomorphism, one can define a cometric. (In coordinates, the matrix defining the cometric is the inverse of the matrix defining the metric.) The solutions to the Hamilton–Jacobi equations for this Hamiltonian are then the same as the geodesics on the manifold. In particular, the Hamiltonian flow in this case is the same thing as the geodesic flow. The existence of such solutions, and the completeness of the set of solutions, are discussed in detail in the article on geodesics. See also Geodesics as Hamiltonian flows.
=== Sub-Riemannian manifolds ===
When the cometric is degenerate, then it is not invertible. In this case, one does not have a Riemannian manifold, as one does not have a metric. However, the Hamiltonian still exists. In the case where the cometric is degenerate at every point q of the configuration space manifold Q, so that the rank of the cometric is less than the dimension of the manifold Q, one has a sub-Riemannian manifold.
The Hamiltonian in this case is known as a sub-Riemannian Hamiltonian. Every such Hamiltonian uniquely determines the cometric, and vice versa. This implies that every sub-Riemannian manifold is uniquely determined by its sub-Riemannian Hamiltonian, and that the converse is true: every sub-Riemannian manifold has a unique sub-Riemannian Hamiltonian. The existence of sub-Riemannian geodesics is given by the Chow–Rashevskii theorem.
The continuous, real-valued Heisenberg group provides a simple example of a sub-Riemannian manifold. For the Heisenberg group, the Hamiltonian is given by
{\displaystyle {\mathcal {H}}\left(x,y,z,p_{x},p_{y},p_{z}\right)={\tfrac {1}{2}}\left(p_{x}^{2}+p_{y}^{2}\right).}
pz is not involved in the Hamiltonian.
=== Poisson algebras ===
Hamiltonian systems can be generalized in various ways. Instead of simply looking at the algebra of smooth functions over a symplectic manifold, Hamiltonian mechanics can be formulated on general commutative unital real Poisson algebras. A state is a continuous linear functional on the Poisson algebra (equipped with some suitable topology) such that for any element A of the algebra, A2 maps to a nonnegative real number.
A further generalization is given by Nambu dynamics.
=== Generalization to quantum mechanics through Poisson bracket ===
Hamilton's equations above work well for classical mechanics, but not for quantum mechanics, since the differential equations discussed assume that one can specify the exact position and momentum of the particle simultaneously at any point in time. However, the equations can be generalized to apply to quantum mechanics as well as to classical mechanics, through the deformation of the Poisson algebra over p and q to the algebra of Moyal brackets.
Specifically, the more general form of the Hamilton's equation reads
{\displaystyle {\frac {\mathrm {d} f}{\mathrm {d} t}}=\left\{f,{\mathcal {H}}\right\}+{\frac {\partial f}{\partial t}},}
where f is some function of p and q, and H is the Hamiltonian. To find out the rules for evaluating a Poisson bracket without resorting to differential equations, see Lie algebra; a Poisson bracket is the name for the Lie bracket in a Poisson algebra. These Poisson brackets can then be extended to Moyal brackets comporting to an inequivalent Lie algebra, as proven by Hilbrand J. Groenewold, and thereby describe quantum mechanical diffusion in phase space (See Phase space formulation and Wigner–Weyl transform). This more algebraic approach not only permits ultimately extending probability distributions in phase space to Wigner quasi-probability distributions, but, at the mere Poisson bracket classical setting, also provides more power in helping analyze the relevant conserved quantities in a system.
== External links ==
Binney, James J., Classical Mechanics (lecture notes) (PDF), University of Oxford, retrieved 27 October 2010
Tong, David, Classical Dynamics (Cambridge lecture notes), University of Cambridge, retrieved 27 October 2010
Hamilton, William Rowan, On a General Method in Dynamics, Trinity College Dublin
Malham, Simon J.A. (2016), An introduction to Lagrangian and Hamiltonian mechanics (lecture notes) (PDF)
Morin, David (2008), Introduction to Classical Mechanics (Additional material: The Hamiltonian method) (PDF)
In physics and materials science, elasticity is the ability of a body to resist a distorting influence and to return to its original size and shape when that influence or force is removed. Solid objects will deform when adequate loads are applied to them; if the material is elastic, the object will return to its initial shape and size after removal. This is in contrast to plasticity, in which the object fails to do so and instead remains in its deformed state.
The physical reasons for elastic behavior can be quite different for different materials. In metals, the atomic lattice changes size and shape when forces are applied (energy is added to the system). When forces are removed, the lattice goes back to the original lower energy state. For rubbers and other polymers, elasticity is caused by the stretching of polymer chains when forces are applied.
Hooke's law states that the force required to deform elastic objects should be directly proportional to the distance of deformation, regardless of how large that distance becomes. This is known as perfect elasticity, in which a given object will return to its original shape no matter how strongly it is deformed. This is an ideal concept only; most materials which possess elasticity in practice remain purely elastic only up to very small deformations, after which plastic (permanent) deformation occurs.
In engineering, the elasticity of a material is quantified by the elastic modulus such as the Young's modulus, bulk modulus or shear modulus which measure the amount of stress needed to achieve a unit of strain; a higher modulus indicates that the material is harder to deform. The SI unit of this modulus is the pascal (Pa). The material's elastic limit or yield strength is the maximum stress that can arise before the onset of plastic deformation. Its SI unit is also the pascal (Pa).
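The relation between modulus, stress, and strain can be made concrete with a short calculation. The numbers below are assumed for illustration (a steel-like rod in uniaxial tension), not taken from the article.

```python
# Illustrative numbers (assumed): a steel-like rod in uniaxial tension.
force = 10_000.0      # applied force, N
area = 1.0e-4         # cross-sectional area, m^2
length = 2.0          # original length, m
extension = 0.001     # measured extension, m

stress = force / area             # Pa  (here 100 MPa)
strain = extension / length       # dimensionless (here 5e-4)
youngs_modulus = stress / strain  # Pa  (here 200 GPa)

print(f"E = {youngs_modulus / 1e9:.0f} GPa")
```

A modulus of about 200 GPa is typical of structural steel, illustrating the gigapascal scale mentioned below.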
== Overview ==
When an elastic material is deformed due to an external force, it experiences internal resistance to the deformation and restores it to its original state if the external force is no longer applied. There are various elastic moduli, such as Young's modulus, the shear modulus, and the bulk modulus, all of which are measures of the inherent elastic properties of a material as a resistance to deformation under an applied load. The various moduli apply to different kinds of deformation. For instance, Young's modulus applies to extension/compression of a body, whereas the shear modulus applies to its shear. Young's modulus and shear modulus are only for solids, whereas the bulk modulus is for solids, liquids, and gases.
The elasticity of materials is described by a stress–strain curve, which shows the relation between stress (the average restorative internal force per unit area) and strain (the relative deformation). The curve is generally nonlinear, but it can (by use of a Taylor series) be approximated as linear for sufficiently small deformations (in which higher-order terms are negligible). If the material is isotropic, the linearized stress–strain relationship is called Hooke's law, which is often presumed to apply up to the elastic limit for most metals or crystalline materials whereas nonlinear elasticity is generally required to model large deformations of rubbery materials even in the elastic range. For even higher stresses, materials exhibit plastic behavior, that is, they deform irreversibly and do not return to their original shape after stress is no longer applied. For rubber-like materials such as elastomers, the slope of the stress–strain curve increases with stress, meaning that rubbers progressively become more difficult to stretch, while for most metals, the gradient decreases at very high stresses, meaning that they progressively become easier to stretch. Elasticity is not exhibited only by solids; non-Newtonian fluids, such as viscoelastic fluids, will also exhibit elasticity in certain conditions quantified by the Deborah number. In response to a small, rapidly applied and removed strain, these fluids may deform and then return to their original shape. Under larger strains, or strains applied for longer periods of time, these fluids may start to flow like a viscous liquid.
Because the elasticity of a material is described in terms of a stress–strain relation, it is essential that the terms stress and strain be defined without ambiguity. Typically, two types of relation are considered. The first type deals with materials that are elastic only for small strains. The second deals with materials that are not limited to small strains. Clearly, the second type of relation is more general in the sense that it must include the first type as a special case.
For small strains, the measure of stress that is used is the Cauchy stress while the measure of strain that is used is the infinitesimal strain tensor; the resulting (predicted) material behavior is termed linear elasticity, which (for isotropic media) is called the generalized Hooke's law. Cauchy elastic materials and hypoelastic materials are models that extend Hooke's law to allow for the possibility of large rotations, large distortions, and intrinsic or induced anisotropy.
For more general situations, any of a number of stress measures can be used, and it is generally desired (but not required) that the elastic stress–strain relation be phrased in terms of a finite strain measure that is work conjugate to the selected stress measure, i.e., the time integral of the inner product of the stress measure with the rate of the strain measure should be equal to the change in internal energy for any adiabatic process that remains below the elastic limit.
== Units ==
=== International System ===
The SI unit for elasticity and the elastic modulus is the pascal (Pa). This unit is defined as force per unit area, generally a measurement of pressure, which in mechanics corresponds to stress. The pascal and therefore elasticity have the dimension L−1⋅M⋅T−2.
For most commonly used engineering materials, the elastic modulus is on the scale of gigapascals (GPa, 109 Pa).
== Linear elasticity ==
As noted above, for small deformations, most elastic materials such as springs exhibit linear elasticity and can be described by a linear relation between the stress and strain. This relationship is known as Hooke's law. A geometry-dependent version of the idea was first formulated by Robert Hooke in 1675 as a Latin anagram, "ceiiinosssttuv". He published the answer in 1678: "Ut tensio, sic vis" meaning "As the extension, so the force", a linear relationship commonly referred to as Hooke's law. This law can be stated as a relationship between tensile force F and corresponding extension displacement
x,

F = kx,
where k is a constant known as the rate or spring constant. It can also be stated as a relationship between stress
σ and strain ε:

σ = Eε,
where E is known as the Young's modulus.
Although the general proportionality constant between stress and strain in three dimensions is a 4th-order tensor called stiffness, systems that exhibit symmetry, such as a one-dimensional rod, can often be reduced to applications of Hooke's law.
== Finite elasticity ==
The elastic behavior of objects that undergo finite deformations has been described using a number of models, such as Cauchy elastic material models, Hypoelastic material models, and Hyperelastic material models. The deformation gradient (F) is the primary deformation measure used in finite strain theory.
=== Cauchy elastic materials ===
A material is said to be Cauchy-elastic if the Cauchy stress tensor σ is a function of the deformation gradient F alone:
σ = G(F)
It is generally incorrect to state that Cauchy stress is a function of merely a strain tensor, as such a model lacks crucial information about material rotation needed to produce correct results for an anisotropic medium subjected to vertical extension in comparison to the same extension applied horizontally and then subjected to a 90-degree rotation; both these deformations have the same spatial strain tensors yet must produce different values of the Cauchy stress tensor.
Even though the stress in a Cauchy-elastic material depends only on the state of deformation, the work done by stresses might depend on the path of deformation. Therefore, Cauchy elasticity includes non-conservative "non-hyperelastic" models (in which work of deformation is path dependent) as well as conservative "hyperelastic material" models (for which stress can be derived from a scalar "elastic potential" function).
=== Hypoelastic materials ===
A hypoelastic material can be rigorously defined as one that is modeled using a constitutive equation satisfying the following two criteria:
The Cauchy stress σ at time t depends only on the order in which the body has occupied its past configurations, but not on the time rate at which these past configurations were traversed. As a special case, this criterion includes a Cauchy elastic material, for which the current stress depends only on the current configuration rather than the history of past configurations.
There is a tensor-valued function G such that

σ̇ = G(σ, L),

in which σ̇ is the material rate of the Cauchy stress tensor and L is the spatial velocity gradient tensor.
If only these two original criteria are used to define hypoelasticity, then hyperelasticity would be included as a special case, which prompts some constitutive modelers to append a third criterion that specifically requires a hypoelastic model to not be hyperelastic (i.e., hypoelasticity implies that stress is not derivable from an energy potential). If this third criterion is adopted, it follows that a hypoelastic material might admit nonconservative adiabatic loading paths that start and end with the same deformation gradient but do not start and end at the same internal energy.
Note that the second criterion requires only that the function G exists. As detailed in the main hypoelastic material article, specific formulations of hypoelastic models typically employ so-called objective rates, so that the function G exists only implicitly and is typically needed explicitly only for numerical stress updates performed via direct integration of the actual (not objective) stress rate.
=== Hyperelastic materials ===
Hyperelastic materials (also called Green elastic materials) are conservative models that are derived from a strain energy density function (W). A model is hyperelastic if and only if it is possible to express the Cauchy stress tensor as a function of the deformation gradient via a relationship of the form
σ = (1/J) (∂W/∂F) Fᵀ, where J := det F.
This formulation takes the energy potential (W) as a function of the deformation gradient (F). By also requiring satisfaction of material objectivity, the energy potential may alternatively be regarded as a function of the Cauchy–Green deformation tensor (C := FᵀF), in which case the hyperelastic model may be written alternatively as
σ = (2/J) F (∂W/∂C) Fᵀ, where J := det F.
== Applications ==
Linear elasticity is used widely in the design and analysis of structures such as beams, plates and shells, and sandwich composites. This theory is also the basis of much of fracture mechanics.
Hyperelasticity is primarily used to determine the response of elastomer-based objects such as gaskets and of biological materials such as soft tissues and cell membranes.
== Factors affecting elasticity ==
In a given isotropic solid, with known theoretical elasticity for the bulk material in terms of Young's modulus, the effective elasticity will be governed by porosity. Generally, a more porous material will exhibit lower stiffness. More specifically, the fraction of pores, their distribution at different sizes, and the nature of the fluid with which they are filled give rise to different elastic behaviours in solids.
For isotropic materials containing cracks, the presence of fractures affects the Young and the shear moduli perpendicular to the planes of the cracks, which decrease (Young's modulus faster than the shear modulus) as the fracture density increases, indicating that the presence of cracks makes bodies more brittle. Microscopically, the stress–strain relationship of materials is in general governed by the Helmholtz free energy, a thermodynamic quantity. Molecules settle in the configuration which minimizes the free energy, subject to constraints derived from their structure, and, depending on whether the energy or the entropy term dominates the free energy, materials can broadly be classified as energy-elastic and entropy-elastic. As such, microscopic factors affecting the free energy, such as the equilibrium distance between molecules, can affect the elasticity of materials: for instance, in inorganic materials, as the equilibrium distance between molecules at 0 K increases, the bulk modulus decreases. The effect of temperature on elasticity is difficult to isolate, because there are numerous factors affecting it. For instance, the bulk modulus of a material is dependent on the form of its lattice, its behavior under expansion, as well as the vibrations of the molecules, all of which are dependent on temperature.
== See also ==
== Notes ==
== References ==
== External links ==
The Feynman Lectures on Physics Vol. II Ch. 38: Elasticity
In physics, the special theory of relativity, or special relativity for short, is a scientific theory of the relationship between space and time. In Albert Einstein's 1905 paper, "On the Electrodynamics of Moving Bodies", the theory is presented as being based on just two postulates:
The laws of physics are invariant (identical) in all inertial frames of reference (that is, frames of reference with no acceleration). This is known as the principle of relativity.
The speed of light in vacuum is the same for all observers, regardless of the motion of light source or observer. This is known as the principle of light constancy, or the principle of light speed invariance.
The first postulate was first formulated by Galileo Galilei (see Galilean invariance).
== Background ==
Special relativity builds upon important physics ideas. The non-technical ideas include:
speed or velocity, how the relative distance between an object and a reference point changes with time.: 25
speed of light, the maximum speed of information, independent of the speed of the source and receiver,: 39
event: something that happens at a definite place and time. For example, an explosion or a flash of light from an atom;: 10 a generalization of a point in geometrical space,: 43
clocks, relativity is all about time; in relativity observers read clocks.: 39
Two observers in relative motion receive information about two events via light signals traveling at constant speed, independent of either observer's speed. Their motion during the transit time causes them to get the information at different times on their local clock.
The more technical background ideas include:
invariance: when physical laws do not change when a specific circumstance changes, such as observations at different uniform velocities;: 2
spacetime: a union of geometrical space and time.: 18
spacetime interval between two events: a measure of separation that generalizes distance:: 9
(interval)² = [event separation in time]² − [event separation in space]²
coordinate system or reference frame: a mechanism to specify points in space with respect to common reference axes,
inertial reference frame: a region of spacetime within which all objects move with the same acceleration,: 44
relative velocity: the amount and direction of uniform, relative motion of objects in two reference frames,
coordinate transformation: a procedure to respecify points against a different coordinate system.
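The spacetime interval defined in the list above can be checked with a few lines of arithmetic. Units with c = 1 are assumed here so that time and space separations can be combined directly (a convention chosen for this sketch).

```python
# Units with c = 1 are assumed so time and space separations share units.
def interval_squared(dt, dx, dy, dz):
    """(interval)^2 = (time separation)^2 - (space separation)^2."""
    return dt**2 - (dx**2 + dy**2 + dz**2)

# A timelike separation: positive squared interval, the same value for
# all inertial observers.
print(interval_squared(5.0, 3.0, 0.0, 0.0))  # 16.0
```

The minus sign on the spatial part is what distinguishes this "distance" from the Euclidean one computed with the Pythagorean theorem.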
The spacetime interval is an invariant between inertial frames, demonstrating the physical unity of spacetime.: 15 Coordinate systems are not invariant between inertial frames and require transformations.: 95
== Overview ==
=== Basis ===
Unusual among modern topics in physics, the theory of special relativity needs only mathematics at high school level and yet it fundamentally alters our understanding, especially our understanding of the concept of time.: ix Built on just two postulates or assumptions, many interesting consequences follow.
The two postulates both concern observers moving at a constant speed relative to each other. The first postulate, the § principle of relativity, says the laws of physics do not depend on objects being at absolute rest: an observer on a moving train sees natural phenomena on that train that look the same whether the train is moving or not.: 5 The second postulate, the constant speed of light, says observers on a moving train or in the train station see light travel at the same speed. A light signal from the station to the train has the same speed, no matter how fast the train goes.: 25
In the theory of special relativity, the two postulates combine to change the definition of "relative speed". Rather than the simple concept of distance traveled divided by time spent, the new theory incorporates the speed of light as the maximum possible speed. In special relativity, covering ten times more distance on the ground in the same amount of time according to a moving watch does not result in a speed up as seen from the ground by a factor of ten.: 28
=== Consequences ===
Special relativity has a wide range of consequences that have been experimentally verified. The conceptual effects include:
the § relativity of simultaneity, events that appear simultaneous to one observer may not be simultaneous to an observer in motion,: 49
§ time dilation, the time measured between two events by observers in motion differs,
§ length contraction, the distance between two events measured by observers in motion differs,
the § Lorentz transformation of velocities, velocities no longer simply add,
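The last consequence listed, that velocities no longer simply add, can be sketched with the relativistic composition formula for collinear velocities; the 0.75c example values are chosen for illustration.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def add_velocities(u, v):
    """Relativistic composition of collinear velocities."""
    return (u + v) / (1 + u * v / C**2)

# Two boosts of 0.75c combine to 0.96c, not 1.5c:
print(add_velocities(0.75 * C, 0.75 * C) / C)
```

No matter how close u and v are to c, the composed speed stays below c, consistent with the speed of light being the maximum possible speed.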
Combined with other laws of physics, the two postulates of special relativity predict the equivalence of mass and energy, as expressed in the mass–energy equivalence formula
E = mc², where c is the speed of light in vacuum.
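The mass–energy equivalence formula is a one-line computation; the 1 kg example mass is an assumption for illustration.

```python
c = 299_792_458.0  # speed of light in vacuum, m/s

def rest_energy(m):
    """Mass-energy equivalence E = m c^2 (m in kg, result in joules)."""
    return m * c**2

# One kilogram of mass corresponds to roughly 9e16 J:
print(rest_energy(1.0))
```

The enormous conversion factor c² is why even tiny mass differences correspond to large energies.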
Special relativity replaced the conventional notion of an absolute, universal time with the notion of a time that is local to each observer.: 33 Information about distant objects can arrive no faster than the speed of light so visual observations always report events that have happened in the past. This effect makes visual descriptions of the effects of special relativity especially prone to mistakes.
Special relativity also has profound technical consequences.
A defining feature of special relativity is the replacement of Euclidean geometry with Lorentzian geometry.: 8 Distances in Euclidean geometry are calculated with the Pythagorean theorem and involve only spatial coordinates. In Lorentzian geometry, 'distances' become 'intervals' and include a time coordinate with a minus sign. Unlike spatial distances, the interval between two events has the same value for all observers, independent of their relative velocity. When comparing two sets of coordinates in relative motion, Lorentz transformations replace the Galilean transformations of Newtonian mechanics.: 98
Other effects include the relativistic corrections to the Doppler effect and the Thomas precession.
It also explains how electricity and magnetism are related.
== History ==
The principle of relativity, forming one of the two postulates of special relativity, was described by Galileo Galilei in 1632 using a thought experiment involving observing natural phenomena on a moving ship. His conclusions were summarized as Galilean relativity and used as the basis of Newtonian mechanics.: 1 This principle can be expressed as a coordinate transformation, between two coordinate systems. Isaac Newton noted that many transformations, such as those involving rotation or acceleration, will not preserve the observation of physical phenomena. Newton considered only those transformations involving motion with respect to an immovable absolute space, now called transformations between inertial frames.: 17
In 1864 James Clerk Maxwell presented a theory of electromagnetism which did not obey Galilean relativity. The theory specifically predicted a constant speed of light in vacuum, no matter the motion (velocity, acceleration, etc.) of the light emitter or receiver or its frequency, wavelength, direction, polarization, or phase. This as yet untested theory was thought at the time to be valid only in inertial frames fixed in an aether. Numerous experiments followed, attempting to measure the speed of light as Earth moved through the proposed fixed aether, culminating in the 1887 Michelson–Morley experiment, which only confirmed the constant speed of light.: 18
Several fixes to the aether theory were proposed, with those of George Francis FitzGerald, Hendrik Antoon Lorentz, and Jules Henri Poincaré all pointing in the direction of a result similar to the theory of special relativity. The final important step was taken by Albert Einstein in a paper published on 26 September 1905 titled "On the Electrodynamics of Moving Bodies". Einstein applied the Lorentz transformations, known to be compatible with Maxwell's equations for electrodynamics, to the classical laws of mechanics. This changed Newton's mechanics in situations involving all motions, especially velocities close to that of light: 18 (known as relativistic velocities).
Another way to describe the advance made by the special theory is to say Einstein extended the Galilean principle so that it accounted for the constant speed of light, a phenomenon that had been observed in the Michelson–Morley experiment. He also postulated that it holds for all the laws of physics, including both the laws of mechanics and of electrodynamics.
The theory became essentially complete in 1907, with Hermann Minkowski's papers on spacetime.
Special relativity has proven to be the most accurate model of motion at any speed when gravitational and quantum effects are negligible. Even so, the Newtonian model remains accurate at low velocities relative to the speed of light, for example, everyday motion on Earth.
When updating his 1911 book on relativity to include general relativity in 1920, Robert Daniel Carmichael called the earlier work the "restricted theory" as a "special case" of the new general theory; he also used the phrase "special theory of relativity". In comparing to the general theory in 1923, Einstein specifically called his earlier work "the special theory of relativity", saying he meant a restriction to frames in uniform motion.: 111
Just as Galilean relativity is accepted as an approximation of special relativity that is valid for low speeds, special relativity is considered an approximation of general relativity that is valid for weak gravitational fields, that is, at a sufficiently small scale (e.g., when tidal forces are negligible) and in conditions of free fall. But general relativity incorporates non-Euclidean geometry to represent gravitational effects as the geometric curvature of spacetime. Special relativity is restricted to the flat spacetime known as Minkowski space. As long as the universe can be modeled as a pseudo-Riemannian manifold, a Lorentz-invariant frame that abides by special relativity can be defined for a sufficiently small neighborhood of each point in this curved spacetime.
== Traditional "two postulates" approach to special relativity ==
Einstein discerned two fundamental propositions that seemed to be the most assured, regardless of the exact validity of the (then) known laws of either mechanics or electrodynamics. These propositions were the constancy of the speed of light in vacuum and the independence of physical laws (especially the constancy of the speed of light) from the choice of inertial system. In his initial presentation of special relativity in 1905 he expressed these postulates as:
The principle of relativity – the laws by which the states of physical systems undergo change are not affected, whether these changes of state be referred to the one or the other of two systems in uniform translatory motion relative to each other.
The principle of invariant light speed – "... light is always propagated in empty space with a definite velocity [speed] c which is independent of the state of motion of the emitting body" (from the preface). That is, light in vacuum propagates with the speed c (a fixed constant, independent of direction) in at least one system of inertial coordinates (the "stationary system"), regardless of the state of motion of the light source.
The constancy of the speed of light was motivated by Maxwell's theory of electromagnetism and the lack of evidence for the luminiferous ether. There is conflicting evidence on the extent to which Einstein was influenced by the null result of the Michelson–Morley experiment. In any case, the null result of the Michelson–Morley experiment helped the notion of the constancy of the speed of light gain widespread and rapid acceptance.
The derivation of special relativity depends not only on these two explicit postulates, but also on several tacit assumptions (made in almost all theories of physics), including the isotropy and homogeneity of space and the independence of measuring rods and clocks from their past history.
== Principle of relativity ==
=== Reference frames and relative motion ===
Reference frames play a crucial role in relativity theory. The term reference frame as used here is an observational perspective in space that is not undergoing any change in motion (acceleration), from which a position can be measured along 3 spatial axes (so, at rest or constant velocity). In addition, a reference frame has the ability to determine measurements of the time of events using a "clock" (any reference device with uniform periodicity).
An event is an occurrence that can be assigned a single unique moment and location in space relative to a reference frame: it is a "point" in spacetime. Since the speed of light is constant in relativity irrespective of the reference frame, pulses of light can be used to unambiguously measure distances and refer back to the times that events occurred to the clock, even though light takes time to reach the clock after the event has transpired.
For example, the explosion of a firecracker may be considered to be an "event". We can completely specify an event by its four spacetime coordinates: The time of occurrence and its 3-dimensional spatial location define a reference point. Let's call this reference frame S.
In relativity theory, we often want to calculate the coordinates of an event from differing reference frames. The equations that relate measurements made in different frames are called transformation equations.
=== Standard configuration ===
To gain insight into how the spacetime coordinates measured by observers in different reference frames compare with each other, it is useful to work with a simplified setup with frames in a standard configuration.: 107 With care, this allows simplification of the math with no loss of generality in the conclusions that are reached. In Fig. 2-1, two Galilean reference frames (i.e., conventional 3-space frames) are displayed in relative motion. Frame S belongs to a first observer O, and frame S′ (pronounced "S prime" or "S dash") belongs to a second observer O′.
The x, y, z axes of frame S are oriented parallel to the respective primed axes of frame S′.
Frame S′ moves, for simplicity, in a single direction: the x-direction of frame S with a constant velocity v as measured in frame S.
The origins of frames S and S′ are coincident when time t = 0 for frame S and t′ = 0 for frame S′.
Since there is no absolute reference frame in relativity theory, a concept of "moving" does not strictly exist, as everything may be moving with respect to some other reference frame. Instead, any two frames that move at the same speed in the same direction are said to be comoving. Therefore, S and S′ are not comoving.
=== Lack of an absolute reference frame ===
The principle of relativity, which states that physical laws have the same form in each inertial reference frame, dates back to Galileo, and was incorporated into Newtonian physics. But in the late 19th century the existence of electromagnetic waves led some physicists to suggest that the universe was filled with a substance they called "aether", which, they postulated, would act as the medium through which these waves, or vibrations, propagated (in many respects similar to the way sound propagates through air). The aether was thought to be an absolute reference frame against which all speeds could be measured, and could be considered fixed and motionless relative to Earth or some other fixed reference point. The aether was supposed to be sufficiently elastic to support electromagnetic waves, while those waves could interact with matter, yet offering no resistance to bodies passing through it (its one property was that it allowed electromagnetic waves to propagate). The results of various experiments, including the Michelson–Morley experiment in 1887 (subsequently verified with more accurate and innovative experiments), led to the theory of special relativity, by showing that the aether did not exist. Einstein's solution was to discard the notion of an aether and the absolute state of rest. In relativity, any reference frame moving with uniform motion will observe the same laws of physics. In particular, the speed of light in vacuum is always measured to be c, even when measured by multiple systems that are moving at different (but constant) velocities.
=== Relativity without the second postulate ===
From the principle of relativity alone without assuming the constancy of the speed of light (i.e., using the isotropy of space and the symmetry implied by the principle of special relativity) it can be shown that the spacetime transformations between inertial frames are either Euclidean, Galilean, or Lorentzian. In the Lorentzian case, one can then obtain relativistic interval conservation and a certain finite limiting speed. Experiments suggest that this speed is the speed of light in vacuum.
== Lorentz invariance as the essential core of special relativity ==
=== Two- vs one- postulate approaches ===
In Einstein's own view, the two postulates of relativity and the invariance of the speed of light lead to a single postulate, the Lorentz transformation:
The insight fundamental for the special theory of relativity is this: The assumptions relativity and light speed invariance are compatible if relations of a new type ("Lorentz transformation") are postulated for the conversion of coordinates and times of events ... The universal principle of the special theory of relativity is contained in the postulate: The laws of physics are invariant with respect to Lorentz transformations (for the transition from one inertial system to any other arbitrarily chosen inertial system). This is a restricting principle for natural laws ...
Following Einstein's original presentation of special relativity in 1905, many different sets of postulates have been proposed in various alternative derivations, but Einstein stuck to his approach throughout his work.
Henri Poincaré provided the mathematical framework for relativity theory by proving that Lorentz transformations are a subset of his Poincaré group of symmetry transformations. Einstein later derived these transformations from his axioms.
While the traditional two-postulate approach to special relativity is presented in innumerable college textbooks and popular presentations, other treatments of special relativity base it on the single postulate of universal Lorentz covariance, or, equivalently, on the single postulate of Minkowski spacetime. Textbooks starting with the single postulate of Minkowski spacetime include those by Taylor and Wheeler and by Callahan.
=== Lorentz transformation and its inverse ===
Define an event to have spacetime coordinates (t, x, y, z) in system S and (t′, x′, y′, z′) in a second frame S′ moving at velocity v along the x-axis with respect to S. Then the Lorentz transformation specifies that these coordinates are related in the following way:
{\displaystyle {\begin{aligned}t'&=\gamma \ (t-vx/c^{2})\\x'&=\gamma \ (x-vt)\\y'&=y\\z'&=z,\end{aligned}}}
where
{\displaystyle \gamma ={\frac {1}{\sqrt {1-v^{2}/c^{2}}}}}
is the Lorentz factor and c is the speed of light in vacuum, and the velocity v of S′, relative to S, is parallel to the x-axis. For simplicity, the y and z coordinates are unaffected; only the x and t coordinates are transformed. These Lorentz transformations form a one-parameter group of linear mappings, that parameter being called rapidity.
Solving the four transformation equations above for the unprimed coordinates yields the inverse Lorentz transformation:
{\displaystyle {\begin{aligned}t&=\gamma (t'+vx'/c^{2})\\x&=\gamma (x'+vt')\\y&=y'\\z&=z'.\end{aligned}}}
This shows that the unprimed frame is moving with the velocity −v, as measured in the primed frame.
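As a numerical check on this statement, the forward and inverse transformations can be composed: applying one and then the other should recover the original event coordinates. The following is an illustrative Python sketch; the function names and sample values are chosen here for demonstration and are not part of the text.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def boost(t, x, v):
    """Forward Lorentz boost along x: (t, x) in S -> (t', x') in S'."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (t - v * x / C**2), gamma * (x - v * t)

def inverse_boost(tp, xp, v):
    """Inverse boost: (t', x') in S' -> (t, x) in S; same as boost with -v."""
    return boost(tp, xp, -v)

# Round trip: transforming to S' and back recovers the original event.
t, x, v = 1.0, 2.0e8, 0.6 * C
tp, xp = boost(t, x, v)
t2, x2 = inverse_boost(tp, xp, v)
assert math.isclose(t2, t) and math.isclose(x2, x)
```

The round trip succeeding for arbitrary events is exactly the statement that the inverse transformation is the forward transformation with v replaced by −v.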
There is nothing special about the x-axis. The transformation can apply to the y- or z-axis, or indeed in any direction, by decomposing the motion into components parallel to the relative velocity (which are affected by the γ factor) and components perpendicular to it (which are not); see the article Lorentz transformation for details.
A quantity that is invariant under Lorentz transformations is known as a Lorentz scalar.
Writing the Lorentz transformation and its inverse in terms of coordinate differences, where one event has coordinates (x1, t1) and (x′1, t′1), another event has coordinates (x2, t2) and (x′2, t′2), and the differences are defined as
Eq. 1:
{\displaystyle \Delta x'=x'_{2}-x'_{1}\ ,\ \Delta t'=t'_{2}-t'_{1}\ .}
Eq. 2:
{\displaystyle \Delta x=x_{2}-x_{1}\ ,\ \ \Delta t=t_{2}-t_{1}\ .}
we get
Eq. 3:
{\displaystyle \Delta x'=\gamma \ (\Delta x-v\,\Delta t)\ ,\ \ \Delta t'=\gamma \ \left(\Delta t-v\ \Delta x/c^{2}\right)\ .}
Eq. 4:
{\displaystyle \Delta x=\gamma \ (\Delta x'+v\,\Delta t')\ ,\ \ \Delta t=\gamma \ \left(\Delta t'+v\ \Delta x'/c^{2}\right)\ .}
If we take differentials instead of taking differences, we get
Eq. 5:
{\displaystyle dx'=\gamma \ (dx-v\,dt)\ ,\ \ dt'=\gamma \ \left(dt-v\ dx/c^{2}\right)\ .}
Eq. 6:
{\displaystyle dx=\gamma \ (dx'+v\,dt')\ ,\ \ dt=\gamma \ \left(dt'+v\ dx'/c^{2}\right)\ .}
=== Graphical representation of the Lorentz transformation ===
Spacetime diagrams (Minkowski diagrams) are an extremely useful aid to visualizing how coordinates transform between different reference frames. Although it is not as easy to perform exact computations using them as directly invoking the Lorentz transformations, their main power is their ability to provide an intuitive grasp of the results of a relativistic scenario.
To draw a spacetime diagram, begin by considering two Galilean reference frames, S and S′, in standard configuration, as shown in Fig. 3-1.: 155–199
Fig. 3-1a. Draw the x and t axes of frame S. The x axis is horizontal and the t (actually ct) axis is vertical, which is the opposite of the usual convention in kinematics. The ct axis is scaled by a factor of c so that both axes have common units of length. In the diagram shown, the gridlines are spaced one unit distance apart. The 45° diagonal lines represent the worldlines of two photons passing through the origin at time t = 0. The slope of these worldlines is 1 because the photons advance one unit in space per unit of time. Two events, A and B, have been plotted on this graph so that their coordinates may be compared in the S and S′ frames.
Fig. 3-1b. Draw the x′ and ct′ axes of frame S′. The ct′ axis represents the worldline of the origin of the S′ coordinate system as measured in frame S. In this figure, v = c/2. Both the ct′ and x′ axes are tilted from the unprimed axes by an angle α = tan−1(β), where β = v/c. The primed and unprimed axes share a common origin because frames S and S′ had been set up in standard configuration, so that t = 0 when t′ = 0.
Fig. 3-1c. Units in the primed axes have a different scale from units in the unprimed axes. From the Lorentz transformations, we observe that (x′, ct′) coordinates of (0, 1) in the primed coordinate system transform to (βγ, γ) in the unprimed coordinate system. Likewise, (x′, ct′) coordinates of (1, 0) in the primed coordinate system transform to (γ, βγ) in the unprimed system. Draw gridlines parallel with the ct′ axis through points (kγ, kβγ) as measured in the unprimed frame, where k is an integer. Likewise, draw gridlines parallel with the x′ axis through (kβγ, kγ) as measured in the unprimed frame. Using the Pythagorean theorem, we observe that the spacing between ct′ units equals √((1 + β2)/(1 − β2)) times the spacing between ct units, as measured in frame S. This ratio is always greater than 1, and ultimately it approaches infinity as β → 1.
Fig. 3-1d. Since the speed of light is an invariant, the worldlines of two photons passing through the origin at time t′ = 0 still plot as 45° diagonal lines. The primed coordinates of A and B are related to the unprimed coordinates through the Lorentz transformations and could be approximately measured from the graph (assuming that it has been plotted accurately enough), but the real merit of a Minkowski diagram is its granting us a geometric view of the scenario. For example, in this figure, we observe that the two timelike-separated events that had different x-coordinates in the unprimed frame are now at the same position in space.
While the unprimed frame is drawn with space and time axes that meet at right angles, the primed frame is drawn with axes that meet at acute or obtuse angles. This asymmetry is due to unavoidable distortions in how spacetime coordinates map onto a Cartesian plane, but the frames are actually equivalent.
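The construction in Fig. 3-1c can be verified numerically: a unit tick on the ct′ axis, i.e. the event (x′, ct′) = (0, 1), maps to (βγ, γ) in the unprimed frame, and its Euclidean distance from the origin on the page gives the scale factor between primed and unprimed units. A short Python sketch, using the β = 0.5 value of the figure (units with c = 1 are assumed here):

```python
import math

beta = 0.5                          # v/c for the primed frame, as in Fig. 3-1b
gamma = 1.0 / math.sqrt(1.0 - beta**2)

# One time unit on the ct' axis, i.e. (x', ct') = (0, 1), seen from S.
# Inverse boost with c = 1: x = gamma(x' + beta*ct'), ct = gamma(ct' + beta*x').
xp, ctp = 0.0, 1.0
x = gamma * (xp + beta * ctp)
ct = gamma * (ctp + beta * xp)
assert math.isclose(x, beta * gamma) and math.isclose(ct, gamma)

# Spacing of ct' unit marks, measured with the Euclidean metric of the page.
spacing = math.hypot(x, ct)
assert math.isclose(spacing, math.sqrt((1 + beta**2) / (1 - beta**2)))
```

The last assertion reproduces the Pythagorean-theorem argument of Fig. 3-1c: γ√(1 + β2) = √((1 + β2)/(1 − β2)), a ratio greater than 1 for any nonzero β.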
== Consequences derived from the Lorentz transformation ==
The consequences of special relativity can be derived from the Lorentz transformation equations. These transformations, and hence special relativity, lead to different physical predictions than those of Newtonian mechanics at all relative velocities, with the differences becoming most pronounced when relative velocities become comparable to the speed of light. The speed of light is so much larger than anything most humans encounter that some of the effects predicted by relativity are initially counterintuitive.
=== Invariant interval ===
In Galilean relativity, the spatial separation Δr and the temporal separation Δt between two events are independent invariants, the values of which do not change when observed from different frames of reference.
In special relativity, however, the interweaving of spatial and temporal coordinates generates the concept of an invariant interval, denoted as Δs2:
{\displaystyle \Delta s^{2}\;{\overset {\text{def}}{=}}\;c^{2}\Delta t^{2}-(\Delta x^{2}+\Delta y^{2}+\Delta z^{2})}
In considering the physical significance of Δs2, there are three cases to note:: 25–39
Δs2 > 0: In this case, the two events are separated by more time than space, and they are hence said to be timelike separated. This implies that |Δx/Δt| < c, and given the Lorentz transformation Δx′ = γ(Δx − v Δt), it is evident that there exists a v less than c for which Δx′ = 0 (in particular, v = Δx/Δt). In other words, given two events that are timelike separated, it is possible to find a frame in which the two events happen at the same place. In this frame, the separation in time, Δs/c, is called the proper time.
Δs2 < 0: In this case, the two events are separated by more space than time, and they are hence said to be spacelike separated. This implies that |Δx/Δt| > c, and given the Lorentz transformation Δt′ = γ(Δt − v Δx/c2), there exists a v less than c for which Δt′ = 0 (in particular, v = c2Δt/Δx). In other words, given two events that are spacelike separated, it is possible to find a frame in which the two events happen at the same time. In this frame, the separation in space, √(−Δs2), is called the proper distance, or proper length. For values of v greater than and less than c2Δt/Δx, the sign of Δt′ changes, meaning that the temporal order of spacelike-separated events changes depending on the frame in which the events are viewed. But the temporal order of timelike-separated events is absolute, since the only way that v could be greater than c2Δt/Δx would be if v > c.
Δs2 = 0: In this case, the two events are said to be lightlike separated. This implies that |Δx/Δt| = c, and this relationship is frame independent due to the invariance of s2. From this, we observe that the speed of light is c in every inertial frame. In other words, starting from the assumption of universal Lorentz covariance, the constant speed of light is a derived result, rather than a postulate as in the two-postulates formulation of the special theory.
The interweaving of space and time revokes the implicitly assumed concepts of absolute simultaneity and synchronization across non-comoving frames.
The form of Δs2, being the difference of the squared time lapse and the squared spatial distance, demonstrates a fundamental discrepancy between Euclidean and spacetime distances. The invariance of this interval is a property of the general Lorentz transform (also called the Poincaré transformation), making it an isometry of spacetime. The general Lorentz transform extends the standard Lorentz transform (which deals with translations without rotation, that is, Lorentz boosts, in the x-direction) with all other translations, reflections, and rotations between any Cartesian inertial frame.: 33–34
In the analysis of simplified scenarios, such as spacetime diagrams, a reduced-dimensionality form of the invariant interval is often employed:
{\displaystyle \Delta s^{2}\,=\,c^{2}\Delta t^{2}-\Delta x^{2}}
Demonstrating that the interval is invariant is straightforward for the reduced-dimensionality case and with frames in standard configuration:
{\displaystyle {\begin{aligned}c^{2}\Delta t^{2}-\Delta x^{2}&=c^{2}\gamma ^{2}\left(\Delta t'+{\dfrac {v\Delta x'}{c^{2}}}\right)^{2}-\gamma ^{2}\ (\Delta x'+v\Delta t')^{2}\\&=\gamma ^{2}\left(c^{2}\Delta t'^{\,2}+2v\Delta x'\Delta t'+{\dfrac {v^{2}\Delta x'^{\,2}}{c^{2}}}\right)-\gamma ^{2}\ (\Delta x'^{\,2}+2v\Delta x'\Delta t'+v^{2}\Delta t'^{\,2})\\&=\gamma ^{2}c^{2}\Delta t'^{\,2}-\gamma ^{2}v^{2}\Delta t'^{\,2}-\gamma ^{2}\Delta x'^{\,2}+\gamma ^{2}{\dfrac {v^{2}\Delta x'^{\,2}}{c^{2}}}\\&=\gamma ^{2}c^{2}\Delta t'^{\,2}\left(1-{\dfrac {v^{2}}{c^{2}}}\right)-\gamma ^{2}\Delta x'^{\,2}\left(1-{\dfrac {v^{2}}{c^{2}}}\right)\\&=c^{2}\Delta t'^{\,2}-\Delta x'^{\,2}\end{aligned}}}
The value of Δs2 is hence independent of the frame in which it is measured.
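The invariance demonstrated algebraically above can also be confirmed numerically: boosting a pair of events to various frames leaves c2Δt2 − Δx2 unchanged. A minimal Python sketch in units where c = 1 (the function names and sample values are illustrative):

```python
import math

def interval2(dt, dx):
    """Squared invariant interval, reduced-dimensionality form, c = 1."""
    return dt**2 - dx**2

def boost(dt, dx, beta):
    """Lorentz boost of coordinate differences, c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (dt - beta * dx), gamma * (dx - beta * dt)

dt, dx = 5.0, 3.0                      # a timelike pair of events: interval2 = 16
for beta in (0.0, 0.3, 0.6, 0.9, -0.9):
    dtp, dxp = boost(dt, dx, beta)
    assert math.isclose(interval2(dtp, dxp), interval2(dt, dx))
```

Note that at β = Δx/Δt = 0.6 the boost sends Δx′ to zero, illustrating the timelike case discussed above: a frame exists in which both events happen at the same place.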
=== Relativity of simultaneity ===
Consider two events happening in two different locations that occur simultaneously in the reference frame of one inertial observer. They may occur non-simultaneously in the reference frame of another inertial observer (lack of absolute simultaneity).
From Equation 3 (the forward Lorentz transformation in terms of coordinate differences)
{\displaystyle \Delta t'=\gamma \left(\Delta t-{\frac {v\,\Delta x}{c^{2}}}\right)}
It is clear that the two events that are simultaneous in frame S (satisfying Δt = 0), are not necessarily simultaneous in another inertial frame S′ (satisfying Δt′ = 0). Only if these events are additionally co-local in frame S (satisfying Δx = 0), will they be simultaneous in another frame S′.
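This can be made concrete with a small numerical example (illustrative values, c = 1 units): two events that are simultaneous but spatially separated in S acquire a nonzero time separation in S′, while co-local simultaneous events remain simultaneous.

```python
import math

beta = 0.5                             # v/c of frame S' relative to S
gamma = 1.0 / math.sqrt(1.0 - beta**2)

# Two events simultaneous in S (dt = 0) but one light-second apart (dx = 1).
dt, dx = 0.0, 1.0
dtp = gamma * (dt - beta * dx)         # c = 1
assert dtp < 0                         # not simultaneous in S'
assert math.isclose(dtp, -beta * gamma)

# If the events are additionally co-local in S (dx = 0), they stay simultaneous.
assert gamma * (0.0 - beta * 0.0) == 0.0
```

The residual time offset −βγΔx is the same first-order-in-v effect that underlies the Sagnac-based instruments mentioned below.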
The Sagnac effect can be considered a manifestation of the relativity of simultaneity. Since relativity of simultaneity is a first order effect in v, instruments based on the Sagnac effect for their operation, such as ring laser gyroscopes and fiber optic gyroscopes, are capable of extreme levels of sensitivity.
=== Time dilation ===
The time lapse between two events is not invariant from one observer to another, but is dependent on the relative speeds of the observers' reference frames.
Suppose a clock is at rest in the unprimed system S. The location of the clock on two different ticks is then characterized by Δx = 0. To find the relation between the times between these ticks as measured in both systems, Equation 3 can be used to find:
{\displaystyle \Delta t'=\gamma \,\Delta t}
for events satisfying
{\displaystyle \Delta x=0\ .}
This shows that the time (Δt′) between the two ticks as seen in the frame in which the clock is moving (S′), is longer than the time (Δt) between these ticks as measured in the rest frame of the clock (S). Time dilation explains a number of physical phenomena; for example, the lifetime of high speed muons created by the collision of cosmic rays with particles in the Earth's outer atmosphere and moving towards the surface is greater than the lifetime of slowly moving muons, created and decaying in a laboratory.
Whenever one hears a statement to the effect that "moving clocks run slow", one should envision an inertial reference frame thickly populated with identical, synchronized clocks. As a moving clock travels through this array, its reading at any particular point is compared with a stationary clock at the same point.: 149–152
The measurements that we would get if we actually looked at a moving clock would, in general, not at all be the same thing, because the time that we would see would be delayed by the finite speed of light, i.e. the times that we see would be distorted by the Doppler effect. Measurements of relativistic effects must always be understood as having been made after finite speed-of-light effects have been factored out.: 149–152
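The muon example can be quantified. A muon's mean proper lifetime is about 2.2 μs; the speed below is an illustrative value for a cosmic-ray muon, chosen here rather than taken from the text.

```python
import math

tau = 2.2e-6               # muon mean proper lifetime, seconds
beta = 0.995               # illustrative speed for a cosmic-ray muon, v/c
c = 299_792_458.0
gamma = 1.0 / math.sqrt(1.0 - beta**2)

dilated = gamma * tau                  # mean lifetime in the Earth frame
range_naive = beta * c * tau           # distance without time dilation, ~660 m
range_dilated = beta * c * dilated     # with dilation, several kilometres

assert dilated > 9 * tau               # gamma is roughly 10 at this speed
```

Without time dilation a typical muon would decay after a few hundred metres; with it, the mean decay length stretches to several kilometres, which is consistent with muons created in the upper atmosphere reaching the surface in quantity.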
==== Langevin's light-clock ====
Paul Langevin, an early proponent of the theory of relativity, did much to popularize the theory in the face of resistance by many physicists to Einstein's revolutionary concepts. Among his numerous contributions to the foundations of special relativity were independent work on the mass–energy relationship, a thorough examination of the twin paradox, and investigations into rotating coordinate systems. His name is frequently attached to a hypothetical construct called a "light-clock" (originally developed by Lewis and Tolman in 1909), which he used to perform a novel derivation of the Lorentz transformation.
A light-clock is imagined to be a box of perfectly reflecting walls wherein a light signal reflects back and forth from opposite faces. The concept of time dilation is frequently taught using a light-clock that is traveling in uniform inertial motion perpendicular to a line connecting the two mirrors. (Langevin himself made use of a light-clock oriented parallel to its line of motion.)
Consider the scenario illustrated in Fig. 4-3A. Observer A holds a light-clock of length L as well as an electronic timer with which she measures how long it takes a pulse to make a round trip up and down along the light-clock. Although observer A is traveling rapidly along a train, from her point of view the emission and receipt of the pulse occur at the same place, and she measures the interval using a single clock located at the precise position of these two events. For the interval between these two events, observer A finds tA = 2L/c. A time interval measured using a single clock that is motionless in a particular reference frame is called a proper time interval.
Fig. 4-3B illustrates these same two events from the standpoint of observer B, who is parked by the tracks as the train goes by at a speed of v. Instead of making straight up-and-down motions, observer B sees the pulses moving along a zig-zag line. However, because of the postulate of the constancy of the speed of light, the speed of the pulses along these diagonal lines is the same c that observer A saw for her up-and-down pulses. B measures the speed of the vertical component of these pulses as ±√(c2 − v2), so that the total round-trip time of the pulses is tB = 2L/√(c2 − v2) = tA/√(1 − v2/c2). Note that for observer B, the emission and receipt of the light pulse occurred at different places, and he measured the interval using two stationary and synchronized clocks located at two different positions in his reference frame. The interval that B measured was therefore not a proper time interval because he did not measure it with a single resting clock.
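The light-clock relation tB = tA/√(1 − v2/c2) = γ tA is easy to confirm numerically. A short Python sketch (units where c = 1 and an illustrative v = 0.6c are assumed):

```python
import math

L = 1.0                    # mirror separation, in light-seconds (so c = 1)
c = 1.0
v = 0.6 * c

t_A = 2 * L / c                          # proper round-trip time, frame of A
t_B = 2 * L / math.sqrt(c**2 - v**2)     # round-trip time measured by B
gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)

assert math.isclose(t_B, gamma * t_A)    # t_B = gamma * t_A: time dilation
```

At v = 0.6c the factor γ is 1.25, so B's two synchronized clocks record a round trip 25% longer than A's proper time interval.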
==== Reciprocal time dilation ====
In the above description of the Langevin light-clock, the labeling of one observer as stationary and the other as in motion was completely arbitrary. One could just as well have observer B carrying the light-clock and moving at a speed of v to the left, in which case observer A would perceive B's clock as running slower than her local clock.
There is no paradox here, because there is no independent observer C who will agree with both A and B. Observer C necessarily makes his measurements from his own reference frame. If that reference frame coincides with A's reference frame, then C will agree with A's measurement of time. If C's reference frame coincides with B's reference frame, then C will agree with B's measurement of time. If C's reference frame coincides with neither A's frame nor B's frame, then C's measurement of time will disagree with both A's and B's measurement of time.
=== Twin paradox ===
The reciprocity of time dilation between two observers in separate inertial frames leads to the so-called twin paradox, articulated in its present form by Langevin in 1911. Langevin imagined an adventurer wishing to explore the future of the Earth. This traveler boards a projectile capable of traveling at 99.995% of the speed of light. After making a round-trip journey to and from a nearby star lasting only two years of his own life, he returns to an Earth that is two hundred years older.
This result appears puzzling because both the traveler and an Earthbound observer would see the other as moving, and so, because of the reciprocity of time dilation, one might initially expect that each should have found the other to have aged less. In reality, there is no paradox at all, because in order for the two observers to perform side-by-side comparisons of their elapsed proper times, the symmetry of the situation must be broken: At least one of the two observers must change their state of motion to match that of the other.
Knowing the general resolution of the paradox, however, does not immediately yield the ability to calculate correct quantitative results. Many solutions to this puzzle have been provided in the literature and have been reviewed in the Twin paradox article. We will examine in the following one such solution to the paradox.
Our basic aim will be to demonstrate that, after the trip, both twins are in perfect agreement about who aged by how much, regardless of their different experiences. Fig 4-4 illustrates a scenario where the traveling twin flies at 0.6 c to and from a star 3 ly distant. During the trip, each twin sends yearly time signals (measured in their own proper times) to the other. After the trip, the cumulative counts are compared. On the outward phase of the trip, each twin receives the other's signals at the lowered rate of f′ = f √((1 − β)/(1 + β)). Initially, the situation is perfectly symmetric: note that each twin receives the other's one-year signal at two years measured on their own clock. The symmetry is broken when the traveling twin turns around at the four-year mark as measured by her clock. During the remaining four years of her trip, she receives signals at the enhanced rate of f″ = f √((1 + β)/(1 − β)). The situation is quite different with the stationary twin. Because of light-speed delay, he does not see his sister turn around until eight years have passed on his own clock. Thus, he receives enhanced-rate signals from his sister for only a relatively brief period. Although the twins disagree in their respective measures of total time, we see in the following table, as well as by simple observation of the Minkowski diagram, that each twin is in total agreement with the other as to the total number of signals sent from one to the other. There is hence no paradox.: 152–159
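The bookkeeping of the Fig 4-4 scenario can be checked directly. At β = 0.6 the Doppler factors are exactly 1/2 and 2, and counting signals at those rates over the intervals described above reproduces each twin's elapsed proper time:

```python
import math

beta = 0.6
f = 1.0                                          # one signal per year of proper time
f_low = f * math.sqrt((1 - beta) / (1 + beta))   # recession Doppler factor
f_high = f * math.sqrt((1 + beta) / (1 - beta))  # approach Doppler factor
assert math.isclose(f_low, 0.5) and math.isclose(f_high, 2.0)

# Traveling twin: 4 proper years receiving at the low rate, 4 at the high rate.
received_by_traveler = 4 * f_low + 4 * f_high    # 2 + 8 = 10 signals
# Earth twin: light delay means he sees the turnaround only at year 8 of his 10.
received_by_earth = 8 * f_low + 2 * f_high       # 4 + 4 = 8 signals

assert received_by_traveler == 10.0              # Earth twin aged 10 years
assert received_by_earth == 8.0                  # traveling twin aged 8 years
```

Each twin's total received count equals the other's elapsed proper time, which is precisely the agreement asserted in the text: the asymmetry lies entirely in when each twin sees the rate change, not in any disagreement about the totals.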
=== Length contraction ===
The dimensions (e.g., length) of an object as measured by one observer may be smaller than the results of measurements of the same object made by another observer (e.g., the ladder paradox involves a long ladder traveling near the speed of light and being contained within a smaller garage).
Similarly, suppose a measuring rod is at rest and aligned along the x-axis in the unprimed system S. In this system, the length of this rod is written as Δx. To measure the length of this rod in the system S′, in which the rod is moving, the distances x′ to the end points of the rod must be measured simultaneously in that system S′. In other words, the measurement is characterized by Δt′ = 0, which can be combined with Equation 4 to find the relation between the lengths Δx and Δx′:
{\displaystyle \Delta x'={\frac {\Delta x}{\gamma }}}
for events satisfying
{\displaystyle \Delta t'=0\ .}
This shows that the length (Δx′) of the rod as measured in the frame in which it is moving (S′), is shorter than its length (Δx) in its own rest frame (S).
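A quick numerical instance of Δx′ = Δx/γ, with an illustrative speed of 0.8c chosen here:

```python
import math

proper_length = 1.0        # metres; rod at rest in frame S
v_over_c = 0.8
gamma = 1.0 / math.sqrt(1.0 - v_over_c**2)

contracted = proper_length / gamma     # length measured in S', where the rod moves
assert math.isclose(contracted, 0.6)   # gamma = 5/3 at 0.8c, so 1 m -> 0.6 m
```

At 0.8c the Lorentz factor is exactly 5/3, so a metre stick measures 60 cm in the frame in which it moves.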
Time dilation and length contraction are not merely appearances. Time dilation is explicitly related to our way of measuring time intervals between events that occur at the same place in a given coordinate system (called "co-local" events). These time intervals (which can be, and are, actually measured experimentally by relevant observers) are different in another coordinate system moving with respect to the first, unless the events, in addition to being co-local, are also simultaneous. Similarly, length contraction relates to our measured distances between separated but simultaneous events in a given coordinate system of choice. If these events are not co-local, but are separated by distance (space), they will not occur at the same spatial distance from each other when seen from another moving coordinate system.
=== Lorentz transformation of velocities ===
Consider two frames S and S′ in standard configuration. A particle in S moves in the x direction with velocity vector u. What is its velocity u′ in frame S′?
We can write
{\displaystyle u={\frac {dx}{dt}}\ ,\qquad u'={\frac {dx'}{dt'}}\ .}
Substituting expressions for dx′ and dt′ from Equation 5 into the expression for u′, followed by straightforward mathematical manipulations and back-substitution, yields the Lorentz transformation of the speed u to u′:
{\displaystyle u'={\frac {u-v}{1-uv/c^{2}}}\ .}
The inverse relation is obtained by interchanging the primed and unprimed symbols and replacing v with −v:
{\displaystyle u={\frac {u'+v}{1+u'v/c^{2}}}\ .}
For u not aligned along the x-axis, we write the velocity as the sum of components parallel and perpendicular to the relative velocity v:: 47–49
{\displaystyle \mathbf {u} =\mathbf {u} _{\parallel }+\mathbf {u} _{\perp }\ .}
The forward and inverse transformations for this case are:
{\displaystyle \mathbf {u'} _{\parallel }={\frac {\mathbf {u} _{\parallel }-\mathbf {v} }{1-\mathbf {u} \cdot \mathbf {v} /c^{2}}}\ ,\qquad \mathbf {u'} _{\perp }={\frac {\mathbf {u} _{\perp }}{\gamma \left(1-\mathbf {u} \cdot \mathbf {v} /c^{2}\right)}}\ .}
{\displaystyle \mathbf {u} _{\parallel }={\frac {\mathbf {u'} _{\parallel }+\mathbf {v} }{1+\mathbf {u'} \cdot \mathbf {v} /c^{2}}}\ ,\qquad \mathbf {u} _{\perp }={\frac {\mathbf {u'} _{\perp }}{\gamma \left(1+\mathbf {u'} \cdot \mathbf {v} /c^{2}\right)}}\ .}
The inverse velocity transformations above can be interpreted as giving the resultant u of the two velocities v and u′, and they replace the formula u = u′ + v, which is valid in Galilean relativity. Interpreted in such a fashion, they are commonly referred to as the relativistic velocity addition (or composition) formulas, valid for the three axes of S and S′ being aligned with each other (although not necessarily in standard configuration).: 47–49
We note the following points:
If an object (e.g., a photon) were moving at the speed of light in one frame (i.e., u = ±c or u′ = ±c), then it would also be moving at the speed of light in any other frame, moving at |v| < c.
The resultant speed of two velocities with magnitude less than c is always a velocity with magnitude less than c.
If both |u| and |v| (and then also |u′| and |v′|) are small with respect to the speed of light (that is, e.g., |u/c| ≪ 1), then the intuitive Galilean transformations are recovered from the transformation equations for special relativity.
Attaching a frame to a photon (riding a light beam like Einstein considers) requires special treatment of the transformations.
There is nothing special about the x direction in the standard configuration. The above formalism applies to any direction; and three orthogonal directions allow dealing with all directions in space by decomposing the velocity vectors to their components in these directions. See Velocity-addition formula for details.
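The points above can be verified with a small sketch of the collinear composition formula u = (u′ + v)/(1 + u′v/c2), in units where c = 1 (the function name is chosen here for illustration):

```python
import math

def add_velocities(u_prime, v):
    """Relativistic composition of collinear velocities u' and v, with c = 1."""
    return (u_prime + v) / (1.0 + u_prime * v)

# Two 0.8c velocities compose to less than c, not the Galilean 1.6c.
u = add_velocities(0.8, 0.8)
assert math.isclose(u, 1.6 / 1.64)
assert u < 1.0

# Light speed is preserved: c composed with any |v| < c is still c.
assert math.isclose(add_velocities(1.0, 0.5), 1.0)

# For small speeds the Galilean sum is recovered to excellent approximation.
assert math.isclose(add_velocities(1e-6, 2e-6), 3e-6, rel_tol=1e-5)
```

The first assertion gives u ≈ 0.976c, showing how the denominator keeps any composition of subluminal speeds below c.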
=== Thomas rotation ===
The composition of two non-collinear Lorentz boosts (i.e., two non-collinear Lorentz transformations, neither of which involve rotation) results in a Lorentz transformation that is not a pure boost but is the composition of a boost and a rotation.
Thomas rotation results from the relativity of simultaneity. In Fig. 4-5a, a rod of length L in its rest frame (i.e., having a proper length of L) rises vertically along the y-axis in the ground frame.
In Fig. 4-5b, the same rod is observed from the frame of a rocket moving at speed v to the right. If we imagine two clocks situated at the left and right ends of the rod that are synchronized in the frame of the rod, relativity of simultaneity causes the observer in the rocket frame to observe (not see) the clock at the right end of the rod as being advanced in time by Lv/c2, and the rod is correspondingly observed as tilted.: 98–99
Unlike second-order relativistic effects such as length contraction or time dilation, this effect becomes quite significant even at fairly low velocities. For example, this can be seen in the spin of moving particles, where Thomas precession is a relativistic correction that applies to the spin of an elementary particle or the rotation of a macroscopic gyroscope, relating the angular velocity of the spin of a particle following a curvilinear orbit to the angular velocity of the orbital motion.: 169–174
Thomas rotation provides the resolution to the well-known "meter stick and hole paradox".: 98–99
=== Causality and prohibition of motion faster than light ===
In Fig. 4-6, the time interval between the events A (the "cause") and B (the "effect") is 'timelike'; that is, there is a frame of reference in which events A and B occur at the same location in space, separated only by occurring at different times. If A precedes B in that frame, then A precedes B in all frames accessible by a Lorentz transformation. It is possible for matter (or information) to travel (below light speed) from the location of A, starting at the time of A, to the location of B, arriving at the time of B, so there can be a causal relationship (with A the cause and B the effect).
The interval AC in the diagram is 'spacelike'; that is, there is a frame of reference in which events A and C occur simultaneously, separated only in space. There are also frames in which A precedes C (as shown) and frames in which C precedes A. But no frames are accessible by a Lorentz transformation, in which events A and C occur at the same location. If it were possible for a cause-and-effect relationship to exist between events A and C, paradoxes of causality would result.
For example, if signals could be sent faster than light, then signals could be sent into the sender's past (observer B in the diagrams). A variety of causal paradoxes could then be constructed.
Consider the spacetime diagrams in Fig. 4-7. A and B stand alongside a railroad track, when a high-speed train passes by, with C riding in the last car of the train and D riding in the leading car. The world lines of A and B are vertical (ct), distinguishing the stationary position of these observers on the ground, while the world lines of C and D are tilted forwards (ct′), reflecting the rapid motion of the observers C and D stationary in their train, as observed from the ground.
Fig. 4-7a. The event of "B passing a message to D", as the leading car passes by, is at the origin of D's frame. D sends the message along the train to C in the rear car, using a fictitious "instantaneous communicator". The worldline of this message is the fat red arrow along the −x′ axis, which is a line of simultaneity in the primed frames of C and D. In the (unprimed) ground frame the signal arrives earlier than it was sent.
Fig. 4-7b. The event of "C passing the message to A", who is standing by the railroad tracks, is at the origin of their frames. Now A sends the message along the tracks to B via an "instantaneous communicator". The worldline of this message is the blue fat arrow, along the +x axis, which is a line of simultaneity for the frames of A and B. As seen from the spacetime diagram, in the primed frames of C and D, B will receive the message before it was sent out, a violation of causality.
It is not necessary for signals to be instantaneous to violate causality. Even if the signal from D to C were slightly shallower than the x′ axis (and the signal from A to B slightly steeper than the x axis), it would still be possible for B to receive his message before he had sent it. By increasing the speed of the train to near light speeds, the ct′ and x′ axes can be squeezed very close to the dashed line representing the speed of light. With this modified setup, it can be demonstrated that even signals only slightly faster than the speed of light will result in causality violation.
Therefore, if causality is to be preserved, one of the consequences of special relativity is that no information signal or material object can travel faster than light in vacuum.
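The argument above can be sketched numerically. The following Python snippet (an illustration, not from the text; the numbers are arbitrary sample values) applies the Lorentz transformation to the emission and reception events of a superluminal signal and shows that observers moving faster than c²/u see the reception before the emission:

```python
# Illustration: a signal sent faster than light in one frame arrives
# before it was sent in another physically allowed frame.
import math

c = 1.0                      # work in units where c = 1

def delta_t_prime(dt, dx, v):
    """Time separation of two events after a Lorentz boost by speed v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (dt - v * dx / c**2)

# Event A: signal emitted; event B: signal received after crossing dx = 1
# at superluminal speed u = 2c, so dt = dx/u = 0.5 in the original frame.
u = 2.0 * c
dx, dt = 1.0, 1.0 / u

# Any observer moving faster than c**2/u (here 0.5c) sees B before A.
print(delta_t_prime(dt, dx, v=0.8))   # negative: effect precedes cause
print(delta_t_prime(dt, dx, v=0.3))   # positive: ordering preserved
```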
This is not to say that all faster than light speeds are impossible. Various trivial situations can be described where some "things" (not actual matter or energy) move faster than light. For example, the location where the beam of a search light hits the bottom of a cloud can move faster than light when the search light is turned rapidly (although this does not violate causality or any other relativistic phenomenon).
== Optical effects ==
=== Dragging effects ===
In 1850, Hippolyte Fizeau and Léon Foucault independently established that light travels more slowly in water than in air, thus validating a prediction of Fresnel's wave theory of light and invalidating the corresponding prediction of Newton's corpuscular theory. The speed of light was measured in still water. What would be the speed of light in flowing water?
In 1851, Fizeau conducted an experiment to answer this question, a simplified representation of which is illustrated in Fig. 5-1. A beam of light is divided by a beam splitter, and the split beams are passed in opposite directions through a tube of flowing water. They are recombined to form interference fringes, indicating a difference in optical path length, that an observer can view. The experiment demonstrated that dragging of the light by the flowing water caused a displacement of the fringes, showing that the motion of the water had affected the speed of the light.
According to the theories prevailing at the time, light traveling through a moving medium would travel at a simple sum of its speed through the medium plus the speed of the medium. Contrary to expectation, Fizeau found that although light appeared to be dragged by the water, the magnitude of the dragging was much lower than expected. If u′ = c/n is the speed of light in still water, v is the speed of the water, and u± is the water-borne speed of light in the lab frame with the flow of water adding to or subtracting from the speed of light, then
{\displaystyle u_{\pm }={\frac {c}{n}}\pm v\left(1-{\frac {1}{n^{2}}}\right)\ .}
Fizeau's results, although consistent with Fresnel's earlier hypothesis of partial aether dragging, were extremely disconcerting to physicists of the time. Among other things, the presence of an index of refraction term meant that, since n depends on wavelength, the aether must be capable of sustaining different motions at the same time. A variety of theoretical explanations were proposed to explain Fresnel's dragging coefficient, that were completely at odds with each other. Even before the Michelson–Morley experiment, Fizeau's experimental results were among a number of observations that created a critical situation in explaining the optics of moving bodies.
From the point of view of special relativity, Fizeau's result is nothing but an approximation to Equation 10, the relativistic formula for composition of velocities.
{\displaystyle u_{\pm }={\frac {u'\pm v}{1\pm u'v/c^{2}}}={\frac {c/n\pm v}{1\pm v/cn}}\approx c\left({\frac {1}{n}}\pm {\frac {v}{c}}\right)\left(1\mp {\frac {v}{cn}}\right)\approx {\frac {c}{n}}\pm v\left(1-{\frac {1}{n^{2}}}\right)}
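The approximation can be checked numerically. This Python sketch (not from the text; the refractive index and flow speed are assumed sample values) compares the exact relativistic composition of velocities with Fizeau's drag formula:

```python
# Illustration: exact relativistic velocity composition reproduces
# Fizeau's measured drag formula to first order in v/c.
import math

c = 299_792_458.0        # speed of light, m/s
n = 1.333                # refractive index of water (assumed value)
v = 5.0                  # flow speed of the water, m/s (assumed value)

u_medium = c / n                                  # light speed in still water

# Exact relativistic composition of u' = c/n with the flow speed v:
u_exact = (u_medium + v) / (1 + u_medium * v / c**2)

# Fizeau / Fresnel drag approximation:
u_fizeau = c / n + v * (1 - 1 / n**2)

print(u_exact - u_fizeau)   # difference is tiny (higher-order in v/c)
```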
=== Relativistic aberration of light ===
Because of the finite speed of light, if the relative motions of a source and receiver include a transverse component, then the direction from which light arrives at the receiver will be displaced from the geometric position in space of the source relative to the receiver. The classical calculation of the displacement takes two forms and makes different predictions depending on whether the receiver, the source, or both are in motion with respect to the medium. (1) If the receiver is in motion, the displacement would be the consequence of the aberration of light. The incident angle of the beam relative to the receiver would be calculable from the vector sum of the receiver's motions and the velocity of the incident light. (2) If the source is in motion, the displacement would be the consequence of light-time correction. The displacement of the apparent position of the source from its geometric position would be the result of the source's motion during the time that its light takes to reach the receiver.
The classical explanation failed experimental test. Since the aberration angle depends on the relationship between the velocity of the receiver and the speed of the incident light, passage of the incident light through a refractive medium should change the aberration angle. In 1810, Arago used this expected phenomenon in a failed attempt to measure the speed of light, and in 1870, George Airy tested the hypothesis using a water-filled telescope, finding that, against expectation, the measured aberration was identical to the aberration measured with an air-filled telescope. A "cumbrous" attempt to explain these results used the hypothesis of partial aether-drag, but was incompatible with the results of the Michelson–Morley experiment, which apparently demanded complete aether-drag.
Assuming inertial frames, the relativistic expression for the aberration of light is applicable to both the receiver moving and source moving cases. A variety of trigonometrically equivalent formulas have been published. Expressed in terms of the variables in Fig. 5-2, these include: 57–60
{\displaystyle \cos \theta '={\frac {\cos \theta +v/c}{1+(v/c)\cos \theta }}}
OR
{\displaystyle \sin \theta '={\frac {\sin \theta }{\gamma [1+(v/c)\cos \theta ]}}}
OR
{\displaystyle \tan {\frac {\theta '}{2}}=\left({\frac {c-v}{c+v}}\right)^{1/2}\tan {\frac {\theta }{2}}}
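The claimed trigonometric equivalence of the three formulas can be verified for a sample angle and speed. A minimal Python check (not from the text; β and θ are arbitrary assumed values, and the arcsine branch is valid here because θ′ < 90°):

```python
# Illustration: the three published aberration formulas agree.
import math

beta = 0.6                       # v/c (assumed value)
gamma = 1.0 / math.sqrt(1.0 - beta**2)
theta = 1.1                      # radians, arbitrary sample angle

# Formula 1: cosine form
cos_tp = (math.cos(theta) + beta) / (1 + beta * math.cos(theta))
theta_p1 = math.acos(cos_tp)

# Formula 2: sine form
sin_tp = math.sin(theta) / (gamma * (1 + beta * math.cos(theta)))
theta_p2 = math.asin(sin_tp)

# Formula 3: half-angle tangent form
tan_half = math.sqrt((1 - beta) / (1 + beta)) * math.tan(theta / 2)
theta_p3 = 2 * math.atan(tan_half)

print(theta_p1, theta_p2, theta_p3)   # all three agree
```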
=== Relativistic Doppler effect ===
==== Relativistic longitudinal Doppler effect ====
The classical Doppler effect depends on whether the source, receiver, or both are in motion with respect to the medium. The relativistic Doppler effect is independent of any medium. Nevertheless, relativistic Doppler shift for the longitudinal case, with source and receiver moving directly towards or away from each other, can be derived as if it were the classical phenomenon, but modified by the addition of a time dilation term, and that is the treatment described here.
Assume the receiver and the source are moving away from each other with a relative speed v as measured by an observer on the receiver or the source (the sign convention adopted here is that v is negative if the receiver and the source are moving towards each other). Assume that the source is stationary in the medium. Then
{\displaystyle f_{r}=\left(1-{\frac {v}{c_{s}}}\right)f_{s}}
where c_s is the speed of sound.
For light, and with the receiver moving at relativistic speeds, clocks on the receiver are time dilated relative to clocks at the source. The receiver will measure the received frequency to be
{\displaystyle f_{r}=\gamma \left(1-\beta \right)f_{s}={\sqrt {\frac {1-\beta }{1+\beta }}}\,f_{s}.}
where β = v/c and
{\displaystyle \gamma ={\frac {1}{\sqrt {1-\beta ^{2}}}}}
is the Lorentz factor.
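The algebraic identity between the two forms of the longitudinal Doppler formula is easy to confirm numerically. A Python sketch (not from the text; the source frequency and recession speed are assumed sample values):

```python
# Illustration: "classical recession factor times time dilation" equals
# the square-root form of the relativistic Doppler shift.
import math

def doppler_gamma_form(beta, f_source):
    """gamma * (1 - beta) * f_s for a receding source/receiver pair."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (1.0 - beta) * f_source

def doppler_sqrt_form(beta, f_source):
    """sqrt((1 - beta)/(1 + beta)) * f_s, the standard quoted form."""
    return math.sqrt((1.0 - beta) / (1.0 + beta)) * f_source

f_s = 5.0e14                 # source frequency in Hz (assumed value)
beta = 0.25                  # recession speed as a fraction of c

print(doppler_gamma_form(beta, f_s))                              # redshifted
print(doppler_gamma_form(beta, f_s) - doppler_sqrt_form(beta, f_s))  # ~0
```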
An identical expression for relativistic Doppler shift is obtained when performing the analysis in the reference frame of the receiver with a moving source.
==== Transverse Doppler effect ====
The transverse Doppler effect is one of the main novel predictions of the special theory of relativity.
Classically, one might expect that if source and receiver are moving transversely with respect to each other with no longitudinal component to their relative motions, that there should be no Doppler shift in the light arriving at the receiver.
Special relativity predicts otherwise. Fig. 5-3 illustrates two common variants of this scenario. Both variants can be analyzed using simple time dilation arguments. In Fig. 5-3a, the receiver observes light from the source as being blueshifted by a factor of γ. In Fig. 5-3b, the light is redshifted by the same factor.
=== Measurement versus visual appearance ===
Time dilation and length contraction are not optical illusions, but genuine effects. Measurements of these effects are not an artifact of Doppler shift, nor are they the result of neglecting to take into account the time it takes light to travel from an event to an observer.
Scientists make a fundamental distinction between measurement or observation on the one hand, versus visual appearance, or what one sees. The measured shape of an object is a hypothetical snapshot of all of the object's points as they exist at a single moment in time. But the visual appearance of an object is affected by the varying lengths of time that light takes to travel from different points on the object to one's eye.
For many years, the distinction between the two had not been generally appreciated, and it had generally been thought that a length-contracted object passing by an observer would in fact actually be seen as length contracted. In 1959, James Terrell and Roger Penrose independently pointed out that differential time-lag effects in signals reaching the observer from the different parts of a moving object result in a fast-moving object's visual appearance being quite different from its measured shape. For example, a receding object would appear contracted, an approaching object would appear elongated, and a passing object would have a skew appearance that has been likened to a rotation. A sphere in motion retains its circular outline at all speeds, at any distance, and from all view angles, although the surface of the sphere and the images on it will appear distorted.
Both Fig. 5-4 and Fig. 5-5 illustrate objects moving transversely to the line of sight. In Fig. 5-4, a cube is viewed from a distance of four times the length of its sides. At high speeds, the sides of the cube that are perpendicular to the direction of motion appear hyperbolic in shape. The cube is actually not rotated. Rather, light from the rear of the cube takes longer to reach one's eyes compared with light from the front, during which time the cube has moved to the right. At high speeds, the sphere in Fig. 5-5 takes on the appearance of a flattened disk tilted up to 45° from the line of sight. If the objects' motions are not strictly transverse but instead include a longitudinal component, exaggerated distortions in perspective may be seen. This illusion has come to be known as Terrell rotation or the Terrell–Penrose effect.
Another example where visual appearance is at odds with measurement comes from the observation of apparent superluminal motion in various radio galaxies, BL Lac objects, quasars, and other astronomical objects that eject relativistic-speed jets of matter at narrow angles with respect to the viewer. An apparent optical illusion results giving the appearance of faster than light travel. In Fig. 5-6, galaxy M87 streams out a high-speed jet of subatomic particles almost directly towards us, but Penrose–Terrell rotation causes the jet to appear to be moving laterally in the same manner that the appearance of the cube in Fig. 5-4 has been stretched out.
== Dynamics ==
Section § Consequences derived from the Lorentz transformation dealt strictly with kinematics, the study of the motion of points, bodies, and systems of bodies without considering the forces that caused the motion. This section discusses masses, forces, energy and so forth, and as such requires consideration of physical effects beyond those encompassed by the Lorentz transformation itself.
=== Equivalence of mass and energy ===
Mass–energy equivalence is a consequence of special relativity. The energy and momentum, which are separate in Newtonian mechanics, form a four-vector in relativity, and this relates the time component (the energy) to the space components (the momentum) in a non-trivial way. For an object at rest, the energy–momentum four-vector is (E/c, 0, 0, 0): it has a time component, which is the energy, and three space components, which are zero. By changing frames with a Lorentz transformation in the x direction with a small value of the velocity v, the energy momentum four-vector becomes (E/c, Ev/c2, 0, 0). The momentum is equal to the energy multiplied by the velocity divided by c2. As such, the Newtonian mass of an object, which is the ratio of the momentum to the velocity for slow velocities, is equal to E/c2.
The energy and momentum are properties of matter and radiation, and it is impossible to deduce that they form a four-vector just from the two basic postulates of special relativity by themselves, because these do not talk about matter or radiation, they only talk about space and time. The derivation therefore requires some additional physical reasoning. In his 1905 paper, Einstein used the additional principles that Newtonian mechanics should hold for slow velocities, so that there is one energy scalar and one three-vector momentum at slow velocities, and that the conservation law for energy and momentum is exactly true in relativity. Furthermore, he assumed that the energy of light is transformed by the same Doppler-shift factor as its frequency, which he had previously shown to be true based on Maxwell's equations. The first of Einstein's papers on this subject was "Does the Inertia of a Body Depend upon its Energy Content?" in 1905. Although Einstein's argument in this paper is nearly universally accepted by physicists as correct, even self-evident, many authors over the years have suggested that it is wrong. Other authors suggest that the argument was merely inconclusive because it relied on some implicit assumptions.
Einstein acknowledged the controversy over his derivation in his 1907 survey paper on special relativity. There he notes that it is problematic to rely on Maxwell's equations for the heuristic mass–energy argument. The argument in his 1905 paper can be carried out with the emission of any massless particles, but the Maxwell equations are implicitly used to make it obvious that the emission of light in particular can be achieved only by doing work. To emit electromagnetic waves, all you have to do is shake a charged particle, and this is clearly doing work, so the emission is one of energy.
=== Einstein's 1905 demonstration of E = mc2 ===
In his fourth of his 1905 Annus mirabilis papers, Einstein presented a heuristic argument for the equivalence of mass and energy. Although, as discussed above, subsequent scholarship has established that his arguments fell short of a broadly definitive proof, the conclusions that he reached in this paper have stood the test of time.
Einstein took as starting assumptions his recently discovered formula for relativistic Doppler shift, the laws of conservation of energy and conservation of momentum, and the relationship between the frequency of light and its energy as implied by Maxwell's equations.
Fig. 6-1 (top). Consider a system of plane waves of light having frequency f traveling in direction φ relative to the x-axis of reference frame S. The frequency (and hence energy) of the waves as measured in frame S′ that is moving along the x-axis at velocity v is given by the relativistic Doppler shift formula that Einstein had developed in his 1905 paper on special relativity:
{\displaystyle {\frac {f'}{f}}={\frac {1-(v/c)\cos {\phi }}{\sqrt {1-v^{2}/c^{2}}}}}
Fig. 6-1 (bottom). Consider an arbitrary body that is stationary in reference frame S. Let this body emit a pair of equal-energy light-pulses in opposite directions at angle φ with respect to the x-axis. Each pulse has energy L/2. Because of conservation of momentum, the body remains stationary in S after emission of the two pulses. Let E0 be the energy of the body before emission of the two pulses and E1 after their emission.
Next, consider the same system observed from frame S′ that is moving along the x-axis at speed v relative to frame S. In this frame, light from the forwards and reverse pulses will be relativistically Doppler-shifted. Let H0 be the energy of the body measured in reference frame S′ before emission of the two pulses and H1 after their emission. We obtain the following relationships:
{\displaystyle {\begin{aligned}E_{0}&=E_{1}+{\tfrac {1}{2}}L+{\tfrac {1}{2}}L=E_{1}+L\\[5mu]H_{0}&=H_{1}+{\tfrac {1}{2}}L{\frac {1-(v/c)\cos {\phi }}{\sqrt {1-v^{2}/c^{2}}}}+{\tfrac {1}{2}}L{\frac {1+(v/c)\cos {\phi }}{\sqrt {1-v^{2}/c^{2}}}}=H_{1}+{\frac {L}{\sqrt {1-v^{2}/c^{2}}}}\end{aligned}}}
From the above equations, we obtain the following:
{\displaystyle (H_{0}-E_{0})-(H_{1}-E_{1})=L\left({\frac {1}{\sqrt {1-v^{2}/c^{2}}}}-1\right)}
The two differences of form H − E seen in the above equation have a straightforward physical interpretation. Since H and E are the energies of the arbitrary body in the moving and stationary frames, H0 − E0 and H1 − E1 represent the kinetic energies of the body before and after the emission of light (except for an additive constant that fixes the zero point of energy and is conventionally set to zero). Hence,
{\displaystyle K_{0}-K_{1}=L\left({\frac {1}{\sqrt {1-v^{2}/c^{2}}}}-1\right).}
Taking a Taylor series expansion and neglecting higher order terms, he obtained
{\displaystyle K_{0}-K_{1}={\frac {1}{2}}{\frac {L}{c^{2}}}v^{2}.}
Comparing the above expression with the classical expression for kinetic energy, K.E. = 1/2mv2, Einstein then noted: "If a body gives off the energy L in the form of radiation, its mass diminishes by L/c2."
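The Taylor-series step can be sketched numerically. This Python fragment (an illustration, not Einstein's calculation; L and v are assumed sample values) compares the exact kinetic-energy difference L(γ − 1) with its low-speed approximation (1/2)(L/c²)v²:

```python
# Illustration: the exact kinetic-energy difference approaches
# (1/2)*(L/c**2)*v**2 at low speed, the step behind "mass diminishes
# by L/c**2".
import math

c = 299_792_458.0        # m/s
L = 1.0                  # emitted radiation energy in joules (assumed)

def exact_ke_difference(v):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return L * (gamma - 1.0)

def taylor_ke_difference(v):
    return 0.5 * (L / c**2) * v**2

v = 1000.0               # 1 km/s, well below c
print(exact_ke_difference(v))
print(taylor_ke_difference(v))   # nearly identical at low speed
```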
Rindler has observed that Einstein's heuristic argument suggested merely that energy contributes to mass. In 1905, Einstein's cautious expression of the mass–energy relationship allowed for the possibility that "dormant" mass might exist that would remain behind after all the energy of a body was removed. By 1907, however, Einstein was ready to assert that all inertial mass represented a reserve of energy. "To equate all mass with energy required an act of aesthetic faith, very characteristic of Einstein.": 81–84 Einstein's bold hypothesis has been amply confirmed in the years subsequent to his original proposal.
For a variety of reasons, Einstein's original derivation is currently seldom taught. Besides the vigorous debate that continues until this day as to the formal correctness of his original derivation, the recognition of special relativity as being what Einstein called a "principle theory" has led to a shift away from reliance on electromagnetic phenomena to purely dynamic methods of proof.
=== How far can you travel from the Earth? ===
Since nothing can travel faster than light, one might conclude that a human can never travel farther from Earth than the distance light travels in a human lifetime, roughly 100 light-years, and so could never reach more than the few star systems within that limit. However, because of time dilation, a hypothetical spaceship can travel thousands of light years during a passenger's lifetime. If a spaceship could be built that accelerates at a constant 1g, it will, after one year, be travelling at almost the speed of light as seen from Earth. This is described by:
{\displaystyle v(t)={\frac {at}{\sqrt {1+a^{2}t^{2}/c^{2}}}},}
where v(t) is the velocity at a time t, a is the acceleration of the spaceship and t is the coordinate time as measured by people on Earth. Therefore, after one year of accelerating at 9.81 m/s², the spaceship will be travelling at v = 0.712c relative to Earth, and at 0.946c after three years. After three years of this acceleration, with the spaceship achieving a velocity of 94.6% of the speed of light relative to Earth, time dilation will result in each second experienced on the spaceship corresponding to 3.1 seconds back on Earth. During the journey, people on Earth will experience more time than the travellers, since Earth clocks (and all physical processes on Earth) really would tick 3.1 times faster than those of the spaceship. A 5-year round trip for the traveller will take 6.5 Earth years and cover a distance of over 6 light-years. A 20-year round trip for them (5 years accelerating, 5 decelerating, twice each) will land them back on Earth having travelled for 335 Earth years and a distance of 331 light years. A full 40-year trip at 1g will appear on Earth to last 58,000 years and cover a distance of 55,000 light years. A 40-year trip at 1.1g will take 148,000 years and cover about 140,000 light years. A one-way 28-year trip (14 years accelerating, 14 decelerating, as measured with the astronaut's clock) at 1g acceleration could reach 2,000,000 light-years, to the Andromeda Galaxy. This same time dilation is why a muon travelling close to c is observed to travel much farther than c times its half-life (when at rest).
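The one- and three-year figures can be reproduced from the formula above. A Python sketch (not from the text; small differences from the quoted values come from the choice of year length and of g):

```python
# Illustration: coordinate velocity of a ship under constant proper
# acceleration of 1 g, using v(t) = at / sqrt(1 + (at/c)**2).
import math

c = 299_792_458.0            # m/s
g = 9.81                     # proper acceleration, m/s**2
year = 365.25 * 24 * 3600    # one Julian year in seconds

def v_over_c(t):
    """v(t)/c for constant proper acceleration g, Earth coordinate time t."""
    at = g * t
    return (at / c) / math.sqrt(1.0 + (at / c) ** 2)

beta1 = v_over_c(1 * year)                 # about 0.72 after one Earth year
beta3 = v_over_c(3 * year)                 # about 0.95 after three Earth years
gamma3 = 1.0 / math.sqrt(1.0 - beta3**2)   # time dilation factor, roughly 3

print(beta1, beta3, gamma3)
```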
=== Elastic collisions ===
Examination of the collision products generated by particle accelerators around the world provides scientists evidence of the structure of the subatomic world and the natural laws governing it. Analysis of the collision products, the sum of whose masses may vastly exceed the masses of the incident particles, requires special relativity.
In Newtonian mechanics, analysis of collisions involves use of the conservation laws for mass, momentum and energy. In relativistic mechanics, mass is not independently conserved, because it has been subsumed into the total relativistic energy. We illustrate the differences that arise between the Newtonian and relativistic treatments of particle collisions by examining the simple case of two perfectly elastic colliding particles of equal mass. (Inelastic collisions are discussed in Spacetime#Conservation laws. Radioactive decay may be considered a sort of time-reversed inelastic collision.)
Elastic scattering of charged elementary particles deviates from ideality due to the production of Bremsstrahlung radiation.
==== Newtonian analysis ====
Fig. 6-2 provides a demonstration of the result, familiar to billiard players, that if a stationary ball is struck elastically by another one of the same mass (assuming no sidespin, or "English"), then after collision, the diverging paths of the two balls will subtend a right angle. (a) In the stationary frame, an incident sphere traveling at 2v strikes a stationary sphere. (b) In the center of momentum frame, the two spheres approach each other symmetrically at ±v. After elastic collision, the two spheres rebound from each other with equal and opposite velocities ±u. Energy conservation requires that |u| = |v|. (c) Reverting to the stationary frame, the rebound velocities are v ± u. The dot product (v + u) ⋅ (v − u) = v2 − u2 = 0, indicating that the vectors are orthogonal.: 26–27
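The orthogonality claim in step (c) can be sketched in a few lines of Python (an illustration, not from the text; the center-of-momentum speed and rebound angle are arbitrary sample values):

```python
# Illustration of the billiard-ball argument: the rebound velocities
# v + u and v - u are orthogonal whenever |u| = |v|.
import math

def dot(p, q):
    return sum(a * b for a, b in zip(p, q))

# Center-of-momentum speed v along x; the rebound direction is arbitrary,
# but energy conservation fixes |u| = |v|.
v = (3.0, 0.0)
angle = 1.0                              # arbitrary rebound angle, radians
speed = math.hypot(*v)
u = (speed * math.cos(angle), speed * math.sin(angle))

ball1 = (v[0] + u[0], v[1] + u[1])       # rebound velocity of one ball
ball2 = (v[0] - u[0], v[1] - u[1])       # rebound velocity of the other

print(dot(ball1, ball2))   # (v+u).(v-u) = |v|**2 - |u|**2 = 0
```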
==== Relativistic analysis ====
Consider the elastic collision scenario in Fig. 6-3 between a moving particle colliding with an equal mass stationary particle. Unlike the Newtonian case, the angle between the two particles after collision is less than 90°, is dependent on the angle of scattering, and becomes smaller and smaller as the velocity of the incident particle approaches the speed of light:
The relativistic momentum and total relativistic energy of a particle are given by p = γmv and E = γmc².
Conservation of momentum dictates that the sum of the momenta of the incoming particle and the stationary particle (which initially has momentum = 0) equals the sum of the momenta of the emergent particles:
Likewise, the sum of the total relativistic energies of the incoming particle and the stationary particle (which initially has total energy mc2) equals the sum of the total energies of the emergent particles:
Breaking down (6-5) into its components, replacing v with the dimensionless β, and factoring out common terms from (6-5) and (6-6) yields the following:
From these we obtain the following relationships:
For the symmetrical case in which φ = θ and β2 = β3, (6-12) takes on the simpler form:
== Beyond the basics ==
=== Rapidity ===
Lorentz transformations relate coordinates of events in one reference frame to those of another frame. Relativistic composition of velocities is used to add two velocities together. The formulas to perform the latter computations are nonlinear, making them more complex than the corresponding Galilean formulas.
This nonlinearity is an artifact of our choice of parameters.: 47–59 We have previously noted that in an x–ct spacetime diagram, the points at some constant spacetime interval from the origin form an invariant hyperbola. We have also noted that the coordinate systems of two spacetime reference frames in standard configuration are hyperbolically rotated with respect to each other.
The natural functions for expressing these relationships are the hyperbolic analogs of the trigonometric functions. Fig. 7-1a shows a unit circle with sin(a) and cos(a), the only difference between this diagram and the familiar unit circle of elementary trigonometry being that a is interpreted, not as the angle between the ray and the x-axis, but as twice the area of the sector swept out by the ray from the x-axis. Numerically, the angle and 2 × area measures for the unit circle are identical. Fig. 7-1b shows a unit hyperbola with sinh(a) and cosh(a), where a is likewise interpreted as twice the tinted area. Fig. 7-2 presents plots of the sinh, cosh, and tanh functions.
For the unit circle, the slope of the ray is given by
{\displaystyle {\text{slope}}=\tan a={\frac {\sin a}{\cos a}}.}
In the Cartesian plane, rotation of point (x, y) into point (x′, y′) by angle θ is given by
{\displaystyle {\begin{pmatrix}x'\\y'\\\end{pmatrix}}={\begin{pmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \\\end{pmatrix}}{\begin{pmatrix}x\\y\\\end{pmatrix}}.}
In a spacetime diagram, the velocity parameter β is the analog of slope. The rapidity, φ, is defined by: 96–99
{\displaystyle \beta \equiv \tanh \phi \equiv {\frac {v}{c}},}
where
{\displaystyle \tanh \phi ={\frac {\sinh \phi }{\cosh \phi }}={\frac {e^{\phi }-e^{-\phi }}{e^{\phi }+e^{-\phi }}}.}
The rapidity defined above is very useful in special relativity because many expressions take on a considerably simpler form when expressed in terms of it. For example, rapidity is simply additive in the collinear velocity-addition formula;: 47–59
{\displaystyle \beta ={\frac {\beta _{1}+\beta _{2}}{1+\beta _{1}\beta _{2}}}={\frac {\tanh \phi _{1}+\tanh \phi _{2}}{1+\tanh \phi _{1}\tanh \phi _{2}}}=\tanh(\phi _{1}+\phi _{2}),}
or in other words, φ = φ1 + φ2.
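The additivity of rapidity can be verified directly. A Python sketch (not from the text; the two sample speeds are assumed values):

```python
# Illustration: rapidities add linearly where velocities compose
# nonlinearly, and the composed speed stays below c.
import math

def compose_velocities(b1, b2):
    """Relativistic collinear velocity addition, speeds in units of c."""
    return (b1 + b2) / (1.0 + b1 * b2)

b1, b2 = 0.6, 0.7
phi1, phi2 = math.atanh(b1), math.atanh(b2)     # rapidities

beta_composed = compose_velocities(b1, b2)
beta_from_rapidity = math.tanh(phi1 + phi2)     # simple sum of rapidities

print(beta_composed)                         # about 0.915, still below 1
print(beta_composed - beta_from_rapidity)    # ~0: the two routes agree
```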
The Lorentz transformations take a simple form when expressed in terms of rapidity. The γ factor can be written as
{\displaystyle \gamma ={\frac {1}{\sqrt {1-\beta ^{2}}}}={\frac {1}{\sqrt {1-\tanh ^{2}\phi }}}=\cosh \phi ,}
{\displaystyle \gamma \beta ={\frac {\beta }{\sqrt {1-\beta ^{2}}}}={\frac {\tanh \phi }{\sqrt {1-\tanh ^{2}\phi }}}=\sinh \phi .}
Transformations describing relative motion with uniform velocity and without rotation of the space coordinate axes are called boosts.
Substituting γ and γβ into the transformations as previously presented and rewriting in matrix form, the Lorentz boost in the x-direction may be written as
{\displaystyle {\begin{pmatrix}ct'\\x'\end{pmatrix}}={\begin{pmatrix}\cosh \phi &-\sinh \phi \\-\sinh \phi &\cosh \phi \end{pmatrix}}{\begin{pmatrix}ct\\x\end{pmatrix}},}
and the inverse Lorentz boost in the x-direction may be written as
{\displaystyle {\begin{pmatrix}ct\\x\end{pmatrix}}={\begin{pmatrix}\cosh \phi &\sinh \phi \\\sinh \phi &\cosh \phi \end{pmatrix}}{\begin{pmatrix}ct'\\x'\end{pmatrix}}.}
In other words, Lorentz boosts represent hyperbolic rotations in Minkowski spacetime.: 96–99
The advantages of using hyperbolic functions are such that some textbooks such as the classic ones by Taylor and Wheeler introduce their use at a very early stage.
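The boost and inverse-boost matrices behave like a rotation pair: applying one after the other returns the identity. A minimal Python sketch (not from the text; the sample speed is an assumed value):

```python
# Illustration: the hyperbolic boost matrix and its inverse (phi -> -phi)
# multiply to the identity, just as a rotation and its inverse would.
import math

def boost(phi):
    """2x2 Lorentz boost in (ct, x) coordinates for rapidity phi."""
    ch, sh = math.cosh(phi), math.sinh(phi)
    return [[ch, -sh], [-sh, ch]]

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

phi = math.atanh(0.6)          # rapidity for beta = 0.6
forward = boost(phi)
inverse = boost(-phi)          # matches the inverse boost quoted above

product = matmul(inverse, forward)
print(product)                 # identity matrix, up to rounding
```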
=== 4‑vectors ===
Four‑vectors have been mentioned above in context of the energy–momentum 4‑vector, but without any great emphasis. Indeed, none of the elementary derivations of special relativity require them. But once understood, 4‑vectors, and more generally tensors, greatly simplify the mathematics and conceptual understanding of special relativity. Working exclusively with such objects leads to formulas that are manifestly relativistically invariant, which is a considerable advantage in non-trivial contexts. For instance, demonstrating relativistic invariance of Maxwell's equations in their usual form is not trivial, while it is merely a routine calculation, really no more than an observation, using the field strength tensor formulation.
On the other hand, general relativity, from the outset, relies heavily on 4‑vectors, and more generally tensors, representing physically relevant entities. Relating these via equations that do not rely on specific coordinates requires tensors, capable of connecting such 4‑vectors even within a curved spacetime, and not just within a flat one as in special relativity. The study of tensors is outside the scope of this article, which provides only a basic discussion of spacetime.
==== Definition of 4-vectors ====
A 4-tuple, A = (A0, A1, A2, A3) is a "4-vector" if its components Ai transform between frames according to the Lorentz transformation.
If using (ct, x, y, z) coordinates, A is a 4-vector if it transforms (in the x-direction) according to
{\displaystyle {\begin{aligned}A_{0}'&=\gamma \left(A_{0}-(v/c)A_{1}\right)\\A_{1}'&=\gamma \left(A_{1}-(v/c)A_{0}\right)\\A_{2}'&=A_{2}\\A_{3}'&=A_{3}\end{aligned}},}
which comes from simply replacing ct with A0 and x with A1 in the earlier presentation of the Lorentz transformation.
As usual, when we write x, t, etc. we generally mean Δx, Δt etc.
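As an illustrative numerical check (not part of the original text), the component transformation above can be coded directly; the function name boost_x and the sample numbers are assumptions for this sketch, with beta = v/c:

```python
import math

def boost_x(A, beta):
    """Apply the x-direction Lorentz transformation above to a 4-vector
    A = (A0, A1, A2, A3); beta = v/c. (Illustrative helper, not from the text.)"""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    A0, A1, A2, A3 = A
    return (gamma * (A0 - beta * A1),
            gamma * (A1 - beta * A0),
            A2,
            A3)

# The event (ct, x, y, z) = (1, 1, 0, 0) lies on a light ray; after a boost
# it still satisfies A0' == A1', i.e. it stays on the light ray.
A_prime = boost_x((1.0, 1.0, 0.0, 0.0), 0.6)
```

A lightlike 4-tuple remains lightlike in every frame, which previews the inner-product invariance discussed below.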
The last three components of a 4-vector must be a standard vector in three-dimensional space. Therefore, a 4-vector must transform like
{\displaystyle (c\Delta t,\Delta x,\Delta y,\Delta z)}
under Lorentz transformations as well as rotations.: 36–59
==== Properties of 4-vectors ====
Closure under linear combination: If A and B are 4-vectors and a and b are scalars, then
{\displaystyle C=aA+bB}
is also a 4-vector.
Inner-product invariance: If A and B are 4-vectors, then their inner product (scalar product) is invariant, i.e. their inner product is independent of the frame in which it is calculated. Note how the calculation of inner product differs from the calculation of the inner product of a 3-vector. In the following, {\displaystyle {\vec {A}}} and {\displaystyle {\vec {B}}} are 3-vectors:
{\displaystyle A\cdot B\equiv A_{0}B_{0}-A_{1}B_{1}-A_{2}B_{2}-A_{3}B_{3}\equiv A_{0}B_{0}-{\vec {A}}\cdot {\vec {B}}}
In addition to being invariant under Lorentz transformation, the above inner product is also invariant under rotation in 3-space.
Two vectors are said to be orthogonal if {\displaystyle A\cdot B=0}. Unlike the case with 3-vectors, orthogonal 4-vectors are not necessarily at right angles to each other. The rule is that two 4-vectors are orthogonal if they are offset by equal and opposite angles from the 45° line, which is the world line of a light ray. This implies that a lightlike 4-vector is orthogonal to itself.
Invariance of the magnitude of a vector: The magnitude of a vector is the inner product of a 4-vector with itself, and is a frame-independent property. As with intervals, the magnitude may be positive, negative or zero, so that the vectors are referred to as timelike, spacelike or null (lightlike). Note that a null vector is not the same as a zero vector. A null vector is one for which {\displaystyle A\cdot A=0}, while a zero vector is one whose components are all zero. Special cases illustrating the invariance of the norm include the invariant interval {\displaystyle c^{2}t^{2}-x^{2}} and the invariant length of the relativistic momentum vector {\displaystyle E^{2}-p^{2}c^{2}}.: 178–181 : 36–59
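A short numerical check (a sketch with assumed helper names and sample components, not from the original text) shows the inner product defined above coming out the same before and after a boost:

```python
import math

def boost_x(A, beta):
    # x-direction Lorentz boost of a 4-vector; beta = v/c (illustrative helper).
    g = 1.0 / math.sqrt(1.0 - beta**2)
    return (g * (A[0] - beta * A[1]), g * (A[1] - beta * A[0]), A[2], A[3])

def dot(A, B):
    # Minkowski inner product with the signs used in the text: + - - -
    return A[0]*B[0] - A[1]*B[1] - A[2]*B[2] - A[3]*B[3]

A = (2.0, 1.0, 0.5, 0.0)
B = (1.0, 3.0, 0.0, 2.0)
before = dot(A, B)
after = dot(boost_x(A, 0.8), boost_x(B, 0.8))
# 'before' and 'after' agree to rounding error: the inner product is invariant.
```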
==== Examples of 4-vectors ====
Displacement 4-vector: Otherwise known as the spacetime separation, this is (Δt, Δx, Δy, Δz), or for infinitesimal separations, (dt, dx, dy, dz).
{\displaystyle dS\equiv (dt,dx,dy,dz)}
Velocity 4-vector: This results when the displacement 4-vector is divided by dτ, where dτ is the proper time between the two events that yield dt, dx, dy, and dz.
{\displaystyle V\equiv {\frac {dS}{d\tau }}={\frac {(dt,dx,dy,dz)}{dt/\gamma }}=\gamma \left(1,{\frac {dx}{dt}},{\frac {dy}{dt}},{\frac {dz}{dt}}\right)=(\gamma ,\gamma {\vec {v}})}
The 4-velocity is tangent to the world line of a particle, and has a length equal to one unit of time in the frame of the particle.
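In units where c = 1, so that the 4-velocity has the components (γ, γv) derived above, its unit magnitude can be verified numerically. This is an illustrative sketch; the helper names and the sample 3-velocity are assumptions:

```python
import math

def four_velocity(vx, vy, vz):
    # V = gamma * (1, vx, vy, vz) in units where c = 1, as derived above.
    g = 1.0 / math.sqrt(1.0 - (vx*vx + vy*vy + vz*vz))
    return (g, g*vx, g*vy, g*vz)

def dot(A, B):
    # Minkowski inner product with signature + - - -
    return A[0]*B[0] - A[1]*B[1] - A[2]*B[2] - A[3]*B[3]

V = four_velocity(0.3, 0.4, 0.0)
# dot(V, V) is 1 for any 3-velocity: one unit of time in the particle's frame.
```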
An accelerated particle does not have an inertial frame in which it is always at rest. However, an inertial frame can always be found that is momentarily comoving with the particle. This frame, the momentarily comoving reference frame (MCRF), enables application of special relativity to the analysis of accelerated particles.
Since photons move on null lines, dτ = 0 for a photon, and a 4-velocity cannot be defined. There is no frame in which a photon is at rest, and no MCRF can be established along a photon's path.
Energy–momentum 4-vector:
{\displaystyle P\equiv (E/c,{\vec {p}})=(E/c,p_{x},p_{y},p_{z})}
As indicated before, there are varying treatments for the energy–momentum 4-vector, so that one may also see it expressed as {\displaystyle (E,{\vec {p}})} or {\displaystyle (E,{\vec {p}}c)}. The first component is the total energy (including mass) of the particle (or system of particles) in a given frame, while the remaining components are its spatial momentum. The energy–momentum 4-vector is a conserved quantity.
Acceleration 4-vector: This results from taking the derivative of the velocity 4-vector with respect to τ.
{\displaystyle A\equiv {\frac {dV}{d\tau }}={\frac {d}{d\tau }}(\gamma ,\gamma {\vec {v}})=\gamma \left({\frac {d\gamma }{dt}},{\frac {d(\gamma {\vec {v}})}{dt}}\right)}
Force 4-vector: This is the derivative of the momentum 4-vector with respect to τ.
{\displaystyle F\equiv {\frac {dP}{d\tau }}=\gamma \left({\frac {dE}{dt}},{\frac {d{\vec {p}}}{dt}}\right)=\gamma \left({\frac {dE}{dt}},{\vec {f}}\right)}
As expected, the final components of the above 4-vectors are all standard 3-vectors corresponding to spatial 3-momentum, 3-force etc.: 178–181 : 36–59
==== 4-vectors and physical law ====
The first postulate of special relativity declares the equivalence of all inertial frames. A physical law holding in one frame must apply in all frames, since otherwise it would be possible to differentiate between frames. Newtonian momenta fail to behave properly under Lorentzian transformation, and Einstein preferred to change the definition of momentum to one involving 4-vectors rather than give up on conservation of momentum.
Physical laws must be based on constructs that are frame independent. This means that physical laws may take the form of equations connecting scalars, which are always frame independent. However, equations involving 4-vectors require the use of tensors with appropriate rank, which themselves can be thought of as being built up from 4-vectors.: 186
=== Acceleration ===
Special relativity does accommodate accelerations as well as accelerating frames of reference.
It is a common misconception that special relativity is applicable only to inertial frames, and that it is unable to handle accelerating objects or accelerating reference frames. It is only when gravitation is significant that general relativity is required.
Properly handling accelerating frames does require some care, however. The difference between special and general relativity is that (1) In special relativity, all velocities are relative, but acceleration is absolute. (2) In general relativity, all motion is relative, whether inertial, accelerating, or rotating. To accommodate this difference, general relativity uses curved spacetime.
In this section, we analyze several scenarios involving accelerated reference frames.
==== Dewan–Beran–Bell spaceship paradox ====
The Dewan–Beran–Bell spaceship paradox (Bell's spaceship paradox) is a good example of a problem where intuitive reasoning unassisted by the geometric insight of the spacetime approach can lead to issues.
In Fig. 7-4, two identical spaceships float in space and are at rest relative to each other. They are connected by a string that is capable of only a limited amount of stretching before breaking. At a given instant in our frame, the observer frame, both spaceships accelerate in the same direction along the line between them with the same constant proper acceleration. Will the string break?
When the paradox was new and relatively unknown, even professional physicists had difficulty working out the solution. Two lines of reasoning lead to opposite conclusions. Both arguments, which are presented below, are flawed even though one of them yields the correct answer.: 106, 120–122
To observers in the rest frame, the spaceships start a distance L apart and remain the same distance apart during acceleration. During acceleration, L is the length-contracted version of the distance L′ = γL in the frame of the accelerating spaceships. After a sufficiently long time, γ will increase enough that the string must break.
Let A and B be the rear and front spaceships. In the frame of the spaceships, each spaceship sees the other spaceship doing the same thing that it is doing. A says that B has the same acceleration that he has, and B sees that A matches her every move. So the spaceships stay the same distance apart, and the string does not break.: 106, 120–122
The problem with the first argument is that there is no "frame of the spaceships." There cannot be, because the two spaceships measure a growing distance between the two. Because there is no common frame of the spaceships, the length of the string is ill-defined. Nevertheless, the conclusion is correct, and the argument is mostly right. The second argument, however, completely ignores the relativity of simultaneity.: 106, 120–122
A spacetime diagram (Fig. 7-5) makes the correct solution to this paradox almost immediately evident. Two observers in Minkowski spacetime accelerate with acceleration of constant magnitude k for proper time σ (acceleration and elapsed time measured by the observers themselves, not some inertial observer). They are comoving and inertial before and after this phase. In Minkowski geometry, the length along the line of simultaneity A′B″ turns out to be greater than the length along the line of simultaneity AB.
The length increase can be calculated with the help of the Lorentz transformation. If, as illustrated in Fig. 7-5, the acceleration is finished, the ships will remain at a constant offset in some frame S′. If xA and xB = xA + L are the ships' positions in S, the positions in frame S′ are:
{\displaystyle {\begin{aligned}x'_{A}&=\gamma \left(x_{A}-vt\right)\\x'_{B}&=\gamma \left(x_{A}+L-vt\right)\\L'&=x'_{B}-x'_{A}=\gamma L\end{aligned}}}
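The result L′ = γL is easy to evaluate numerically; the following sketch uses illustrative numbers not taken from the text (a final speed of 0.6c, so γ = 1.25):

```python
import math

def rest_frame_separation(L, beta):
    """Separation of the ships in their final comoving frame S', given a
    fixed separation L in the launch frame S and final speed beta = v/c.
    (Function name and sample values are illustrative, not from the text.)"""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * L

# At v = 0.6c the string would have to span 1.25 times its relaxed length,
# which is why it breaks.
L_prime = rest_frame_separation(1.0, 0.6)
```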
The "paradox", as it were, comes from the way that Bell constructed his example. In the usual discussion of Lorentz contraction, the rest length is fixed and the moving length shortens as measured in frame S. As shown in Fig. 7-5, Bell's example asserts the moving lengths AB and A′B′ measured in frame S to be fixed, thereby forcing the rest frame length A′B″ in frame S′ to increase.
==== Accelerated observer with horizon ====
Certain special relativity problem setups can lead to insight about phenomena normally associated with general relativity, such as event horizons. In the text accompanying Section "Invariant hyperbola" of the article Spacetime, the magenta hyperbolae represented actual paths that are tracked by a constantly accelerating traveler in spacetime. During periods of positive acceleration, the traveler's velocity approaches, but never reaches, the speed of light, while, measured in our frame, the traveler's acceleration constantly decreases.
Fig. 7-6 details various features of the traveler's motions with more specificity. At any given moment, her space axis is formed by a line passing through the origin and her current position on the hyperbola, while her time axis is the tangent to the hyperbola at her position. The velocity parameter β approaches a limit of one as ct increases. Likewise, γ approaches infinity.
The shape of the invariant hyperbola corresponds to a path of constant proper acceleration. This is demonstrable as follows:
1. We remember that {\displaystyle \beta =ct/x}.
2. Since {\displaystyle c^{2}t^{2}-x^{2}=s^{2}}, we conclude that {\displaystyle \beta (ct)=ct/{\sqrt {c^{2}t^{2}-s^{2}}}}.
3. {\displaystyle \gamma =1/{\sqrt {1-\beta ^{2}}}={\sqrt {c^{2}t^{2}-s^{2}}}/s}
4. From the relativistic force law, {\displaystyle F=dp/dt=dpc/d(ct)=d(\beta \gamma mc^{2})/d(ct)}.
5. Substituting β(ct) from step 2 and the expression for γ from step 3 yields {\displaystyle F=mc^{2}/s}, which is a constant expression.: 110–113
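Steps 2 and 3 combine to βγ = ct/s, so the force law can be checked numerically. This sketch (with illustrative names and numbers, in units m = c = 1, not from the text) differentiates βγmc² and finds the same value mc²/s at every point of the hyperbola:

```python
def proper_force(ct, s, m=1.0, c=1.0, h=1e-6):
    """F = d(beta*gamma*m*c**2)/d(ct) by a central difference.
    Along the invariant hyperbola, steps 2 and 3 above combine to
    beta*gamma = ct/s, so F should equal the constant m*c**2/s."""
    beta_gamma = lambda u: u / s
    return m * c**2 * (beta_gamma(ct + h) - beta_gamma(ct - h)) / (2 * h)

# The same force at every coordinate time: the hyperbola is a path of
# constant proper acceleration.
forces = [proper_force(ct, 2.0) for ct in (0.5, 1.0, 5.0, 50.0)]
```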
Fig. 7-6 illustrates a specific calculated scenario. Terence (A) and Stella (B) initially stand together 100 light hours from the origin. Stella lifts off at time 0, her spacecraft accelerating at 0.01 c per hour. Every twenty hours, Terence radios updates to Stella about the situation at home (solid green lines). Stella receives these regular transmissions, but the increasing distance (offset in part by time dilation) causes her to receive Terence's communications later and later as measured on her clock, and she never receives any communications from Terence after 100 hours on his clock (dashed green lines).: 110–113
After 100 hours according to Terence's clock, Stella enters a dark region. She has traveled outside Terence's timelike future. On the other hand, Terence can continue to receive Stella's messages to him indefinitely. He just has to wait long enough. Spacetime has been divided into distinct regions separated by an apparent event horizon. So long as Stella continues to accelerate, she can never know what takes place behind this horizon.: 110–113
== Relativity and unifying electromagnetism ==
Theoretical investigation in classical electromagnetism led to the discovery of wave propagation. Equations generalizing the electromagnetic effects showed that the finite propagation speed of the E and B fields required certain behaviors of charged particles. The general study of moving charges leads to the Liénard–Wiechert potential, which is a step towards special relativity.
The Lorentz transformation of the electric field of a moving charge into a non-moving observer's reference frame results in the appearance of a mathematical term commonly called the magnetic field. Conversely, the magnetic field generated by a moving charge disappears and becomes a purely electrostatic field in a comoving frame of reference. Maxwell's equations are thus simply an empirical fit to special relativistic effects in a classical model of the Universe. As electric and magnetic fields are reference frame dependent and thus intertwined, one speaks of electromagnetic fields. Special relativity provides the transformation rules for how an electromagnetic field in one inertial frame appears in another inertial frame.
Maxwell's equations in the 3D form are already consistent with the physical content of special relativity, although they are easier to manipulate in a manifestly covariant form, that is, in the language of tensor calculus.
== Theories of relativity and quantum mechanics ==
Special relativity can be combined with quantum mechanics to form relativistic quantum mechanics and quantum electrodynamics. How general relativity and quantum mechanics can be unified is one of the unsolved problems in physics; quantum gravity and a "theory of everything", which require a unification including general relativity too, are active and ongoing areas in theoretical research.
The early Bohr–Sommerfeld atomic model explained the fine structure of alkali metal atoms using both special relativity and the preliminary knowledge on quantum mechanics of the time.
In 1928, Paul Dirac constructed an influential relativistic wave equation, now known as the Dirac equation in his honour, that is fully compatible both with special relativity and with the final version of quantum theory existing after 1926. This equation not only described the intrinsic angular momentum of the electrons called spin, it also led to the prediction of the antiparticle of the electron (the positron), and fine structure could only be fully explained with special relativity. It was the first foundation of relativistic quantum mechanics.
On the other hand, the existence of antiparticles leads to the conclusion that relativistic quantum mechanics is not enough for a more accurate and complete theory of particle interactions. Instead, a theory of particles interpreted as quantized fields, called quantum field theory, becomes necessary; in which particles can be created and destroyed throughout space and time.
== Status ==
Special relativity in its Minkowski spacetime is accurate only when the absolute value of the gravitational potential is much less than c2 in the region of interest. In a strong gravitational field, one must use general relativity. General relativity becomes special relativity at the limit of a weak field. At very small scales, such as at the Planck length and below, quantum effects must be taken into consideration resulting in quantum gravity. But at macroscopic scales and in the absence of strong gravitational fields, special relativity is experimentally tested to extremely high degree of accuracy (10−20)
and thus accepted by the physics community. Experimental results that appear to contradict it are not reproducible and are thus widely believed to be due to experimental errors.
Special relativity is mathematically self-consistent, and it is an organic part of all modern physical theories, most notably quantum field theory, string theory, and general relativity (in the limiting case of negligible gravitational fields).
Newtonian mechanics mathematically follows from special relativity at small velocities (compared to the speed of light) – thus Newtonian mechanics can be considered as a special relativity of slow moving bodies. See Classical mechanics for a more detailed discussion.
Several experiments predating Einstein's 1905 paper are now interpreted as evidence for relativity. Of these it is known Einstein was aware of the Fizeau experiment before 1905, and historians have concluded that Einstein was at least aware of the Michelson–Morley experiment as early as 1899 despite claims he made in his later years that it played no role in his development of the theory.
The Fizeau experiment (1851, repeated by Michelson and Morley in 1886) measured the speed of light in moving media, with results that are consistent with relativistic addition of collinear velocities.
The famous Michelson–Morley experiment (1881, 1887) gave further support to the postulate that detecting an absolute reference velocity was not achievable. It should be stated here that, contrary to many alternative claims, it said little about the invariance of the speed of light with respect to the source and observer's velocity, as both source and observer were travelling together at the same velocity at all times.
The Trouton–Noble experiment (1903) showed that the torque on a capacitor is independent of position and inertial reference frame.
The Experiments of Rayleigh and Brace (1902, 1904) showed that length contraction does not lead to birefringence for a co-moving observer, in accordance with the relativity principle.
Particle accelerators accelerate and measure the properties of particles moving at near the speed of light, where their behavior is consistent with relativity theory and inconsistent with the earlier Newtonian mechanics. These machines would simply not work if they were not engineered according to relativistic principles. In addition, a considerable number of modern experiments have been conducted to test special relativity. Some examples:
Tests of relativistic energy and momentum – testing the limiting speed of particles
Ives–Stilwell experiment – testing relativistic Doppler effect and time dilation
Experimental testing of time dilation – relativistic effects on a fast-moving particle's half-life
Kennedy–Thorndike experiment – time dilation in accordance with Lorentz transformations
Hughes–Drever experiment – testing isotropy of space and mass
Modern searches for Lorentz violation – various modern tests
Experiments to test emission theory demonstrated that the speed of light is independent of the speed of the emitter.
Experiments to test the aether drag hypothesis – no "aether flow obstruction".
== Technical discussion of spacetime ==
=== Geometry of spacetime ===
==== Comparison between flat Euclidean space and Minkowski space ====
Special relativity uses a "flat" 4-dimensional Minkowski space – an example of a spacetime. Minkowski spacetime appears to be very similar to the standard 3-dimensional Euclidean space, but there is a crucial difference with respect to time.
In 3D space, the differential of distance (line element) ds is defined by
{\displaystyle ds^{2}=d\mathbf {x} \cdot d\mathbf {x} =dx_{1}^{2}+dx_{2}^{2}+dx_{3}^{2},}
where dx = (dx1, dx2, dx3) are the differentials of the three spatial dimensions. In Minkowski geometry, there is an extra dimension with coordinate X0 derived from time, such that the distance differential fulfills
{\displaystyle ds^{2}=-dX_{0}^{2}+dX_{1}^{2}+dX_{2}^{2}+dX_{3}^{2},}
where dX = (dX0, dX1, dX2, dX3) are the differentials of the four spacetime dimensions. This suggests a deep theoretical insight: special relativity is simply a rotational symmetry of our spacetime, analogous to the rotational symmetry of Euclidean space (see Fig. 10-1). Just as Euclidean space uses a Euclidean metric, so spacetime uses a Minkowski metric. Basically, special relativity can be stated as the invariance of any spacetime interval (that is the 4D distance between any two events) when viewed from any inertial reference frame. All equations and effects of special relativity can be derived from this rotational symmetry (the Poincaré group) of Minkowski spacetime.
The actual form of ds above depends on the metric and on the choices for the X0 coordinate.
To make the time coordinate look like the space coordinates, it can be treated as imaginary: X0 = ict (this is called a Wick rotation).
According to Misner, Thorne and Wheeler (1971, §2.3), ultimately the deeper understanding of both special and general relativity will come from the study of the Minkowski metric (described below) and to take X0 = ct, rather than a "disguised" Euclidean metric using ict as the time coordinate.
Some authors use X0 = t, with factors of c elsewhere to compensate; for instance, spatial coordinates are divided by c or factors of c±2 are included in the metric tensor.
These numerous conventions can be superseded by using natural units where c = 1. Then space and time have equivalent units, and no factors of c appear anywhere.
==== 3D spacetime ====
If we reduce the spatial dimensions to 2, so that we can represent the physics in a 3D space
{\displaystyle ds^{2}=dx_{1}^{2}+dx_{2}^{2}-c^{2}dt^{2},}
we see that the null geodesics lie along a dual-cone (see Fig. 10-2) defined by the equation:
{\displaystyle ds^{2}=0=dx_{1}^{2}+dx_{2}^{2}-c^{2}dt^{2}}
or simply
{\displaystyle dx_{1}^{2}+dx_{2}^{2}=c^{2}dt^{2},}
which is the equation of a circle of radius c dt.
==== 4D spacetime ====
If we extend this to three spatial dimensions, the null geodesics are the 4-dimensional cone:
{\displaystyle ds^{2}=0=dx_{1}^{2}+dx_{2}^{2}+dx_{3}^{2}-c^{2}dt^{2}}
so
{\displaystyle dx_{1}^{2}+dx_{2}^{2}+dx_{3}^{2}=c^{2}dt^{2}.}
As illustrated in Fig. 10-3, the null geodesics can be visualized as a set of continuous concentric spheres with radii = c dt.
This null dual-cone represents the "line of sight" of a point in space. That is, when we look at the stars and say "The light from that star that I am receiving is X years old", we are looking down this line of sight: a null geodesic. We are looking at an event a distance {\textstyle d={\sqrt {x_{1}^{2}+x_{2}^{2}+x_{3}^{2}}}} away and a time d/c in the past. For this reason the null dual cone is also known as the "light cone". (The point in the lower left of Fig. 10-2 represents the star, the origin represents the observer, and the line represents the null geodesic "line of sight".)
The cone in the −t region is the information that the point is "receiving", while the cone in the +t section is the information that the point is "sending".
The geometry of Minkowski space can be depicted using Minkowski diagrams, which are useful also in understanding many of the thought experiments in special relativity.
=== Physics in spacetime ===
==== Transformations of physical quantities between reference frames ====
Above, the Lorentz transformation for the time coordinate and three space coordinates illustrates that they are intertwined. This is true more generally: certain pairs of "timelike" and "spacelike" quantities naturally combine on equal footing under the same Lorentz transformation.
The Lorentz transformation in standard configuration above, that is, for a boost in the x-direction, can be recast into matrix form as follows:
{\displaystyle {\begin{pmatrix}ct'\\x'\\y'\\z'\end{pmatrix}}={\begin{pmatrix}\gamma &-\beta \gamma &0&0\\-\beta \gamma &\gamma &0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}{\begin{pmatrix}ct\\x\\y\\z\end{pmatrix}}={\begin{pmatrix}\gamma ct-\gamma \beta x\\\gamma x-\beta \gamma ct\\y\\z\end{pmatrix}}.}
In Newtonian mechanics, quantities that have magnitude and direction are mathematically described as 3d vectors in Euclidean space, and in general they are parametrized by time. In special relativity, this notion is extended by adding the appropriate timelike quantity to a spacelike vector quantity, and we have 4d vectors, or "four-vectors", in Minkowski spacetime. The components of vectors are written using tensor index notation, as this has numerous advantages. The notation makes it clear the equations are manifestly covariant under the Poincaré group, thus bypassing the tedious calculations to check this fact. In constructing such equations, we often find that equations previously thought to be unrelated are, in fact, closely connected being part of the same tensor equation. Recognizing other physical quantities as tensors simplifies their transformation laws. Throughout, upper indices (superscripts) are contravariant indices rather than exponents except when they indicate a square (this should be clear from the context), and lower indices (subscripts) are covariant indices. For simplicity and consistency with the earlier equations, Cartesian coordinates will be used.
The simplest example of a four-vector is the position of an event in spacetime, which constitutes a timelike component ct and spacelike component x = (x, y, z), in a contravariant position four-vector with components:
{\displaystyle X^{\nu }=(X^{0},X^{1},X^{2},X^{3})=(ct,x,y,z)=(ct,\mathbf {x} ).}
where we define X0 = ct so that the time coordinate has the same dimension of distance as the spatial coordinates, and space and time are treated equally. Now the transformation of the contravariant components of the position 4-vector can be compactly written as:
{\displaystyle X^{\mu '}=\Lambda ^{\mu '}{}_{\nu }X^{\nu }}
where there is an implied summation on ν from 0 to 3, and {\displaystyle \Lambda ^{\mu '}{}_{\nu }} is a matrix.
More generally, all contravariant components of a four-vector {\displaystyle T^{\nu }} transform from one frame to another frame by a Lorentz transformation:
{\displaystyle T^{\mu '}=\Lambda ^{\mu '}{}_{\nu }T^{\nu }}
Examples of other 4-vectors include the four-velocity {\displaystyle U^{\mu }}, defined as the derivative of the position 4-vector with respect to proper time:
{\displaystyle U^{\mu }={\frac {dX^{\mu }}{d\tau }}=\gamma (v)(c,v_{x},v_{y},v_{z})=\gamma (v)(c,\mathbf {v} ).}
where the Lorentz factor is:
{\displaystyle \gamma (v)={\frac {1}{\sqrt {1-v^{2}/c^{2}}}}\qquad v^{2}=v_{x}^{2}+v_{y}^{2}+v_{z}^{2}.}
The relativistic energy {\displaystyle E=\gamma (v)mc^{2}} and relativistic momentum {\displaystyle \mathbf {p} =\gamma (v)m\mathbf {v} } of an object are respectively the timelike and spacelike components of a contravariant four-momentum vector:
{\displaystyle P^{\mu }=mU^{\mu }=m\gamma (v)(c,v_{x},v_{y},v_{z})=\left({\frac {E}{c}},p_{x},p_{y},p_{z}\right)=\left({\frac {E}{c}},\mathbf {p} \right).}
where m is the invariant mass.
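With the (+, −, −, −) inner product used earlier in this article, the magnitude of the four-momentum is the invariant (mc)², independent of the chosen 3-velocity. A minimal numerical sketch (the helper names and sample values are illustrative assumptions):

```python
import math

def four_momentum(m, vx, vy, vz, c=1.0):
    # P^mu = m * gamma(v) * (c, vx, vy, vz) = (E/c, p), as in the text above.
    g = 1.0 / math.sqrt(1.0 - (vx*vx + vy*vy + vz*vz) / c**2)
    return (m * g * c, m * g * vx, m * g * vy, m * g * vz)

def dot(A, B):
    # Inner product with signature + - - -
    return A[0]*B[0] - A[1]*B[1] - A[2]*B[2] - A[3]*B[3]

P = four_momentum(2.0, 0.6, 0.0, 0.0)
# dot(P, P) equals (m*c)**2 = 4.0 regardless of the chosen 3-velocity.
```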
The four-acceleration is the proper time derivative of 4-velocity:
{\displaystyle A^{\mu }={\frac {dU^{\mu }}{d\tau }}.}
The transformation rules for three-dimensional velocities and accelerations are very awkward; even above in standard configuration the velocity equations are quite complicated owing to their non-linearity. On the other hand, the transformation of four-velocity and four-acceleration are simpler by means of the Lorentz transformation matrix.
The four-gradient of a scalar field φ transforms covariantly rather than contravariantly:
{\displaystyle {\begin{pmatrix}{\dfrac {1}{c}}{\dfrac {\partial \phi }{\partial t'}}&{\dfrac {\partial \phi }{\partial x'}}&{\dfrac {\partial \phi }{\partial y'}}&{\dfrac {\partial \phi }{\partial z'}}\end{pmatrix}}={\begin{pmatrix}{\dfrac {1}{c}}{\dfrac {\partial \phi }{\partial t}}&{\dfrac {\partial \phi }{\partial x}}&{\dfrac {\partial \phi }{\partial y}}&{\dfrac {\partial \phi }{\partial z}}\end{pmatrix}}{\begin{pmatrix}\gamma &+\beta \gamma &0&0\\+\beta \gamma &\gamma &0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}},}
which is the transpose of:
{\displaystyle (\partial _{\mu '}\phi )=\Lambda _{\mu '}{}^{\nu }(\partial _{\nu }\phi )\qquad \partial _{\mu }\equiv {\frac {\partial }{\partial x^{\mu }}}.}
This holds only in Cartesian coordinates. It is the covariant derivative that transforms with manifest covariance; in Cartesian coordinates this happens to reduce to the partial derivatives, but not in other coordinates.
More generally, the covariant components of a 4-vector transform according to the inverse Lorentz transformation:
{\displaystyle T_{\mu '}=\Lambda _{\mu '}{}^{\nu }T_{\nu },}
where {\displaystyle \Lambda _{\mu '}{}^{\nu }} is the reciprocal matrix of {\displaystyle \Lambda ^{\mu '}{}_{\nu }}.
The postulates of special relativity constrain the exact form the Lorentz transformation matrices take.
More generally, most physical quantities are best described as (components of) tensors. So to transform from one frame to another, we use the well-known tensor transformation law
T
θ
′
ι
′
⋯
κ
′
α
′
β
′
⋯
ζ
′
=
Λ
α
′
μ
Λ
β
′
ν
⋯
Λ
ζ
′
ρ
Λ
θ
′
σ
Λ
ι
′
υ
⋯
Λ
κ
′
ϕ
T
σ
υ
⋯
ϕ
μ
ν
⋯
ρ
{\displaystyle T_{\theta '\iota '\cdots \kappa '}^{\alpha '\beta '\cdots \zeta '}=\Lambda ^{\alpha '}{}_{\mu }\Lambda ^{\beta '}{}_{\nu }\cdots \Lambda ^{\zeta '}{}_{\rho }\Lambda _{\theta '}{}^{\sigma }\Lambda _{\iota '}{}^{\upsilon }\cdots \Lambda _{\kappa '}{}^{\phi }T_{\sigma \upsilon \cdots \phi }^{\mu \nu \cdots \rho }}
where {\displaystyle \Lambda _{\chi '}{}^{\psi }} is the reciprocal matrix of {\displaystyle \Lambda ^{\chi '}{}_{\psi }}. All tensors transform by this rule.
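As an illustrative sketch (not part of the original article; the boost speed and tensor components below are arbitrary), this rule can be checked numerically for a mixed tensor T^μ_ν, using one Λ matrix for the contravariant index and its reciprocal for the covariant index:

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Boost along x: Lambda^{mu'}_{nu}, acting on contravariant indices.
Lam = np.array([[ gamma,      -beta*gamma, 0.0, 0.0],
                [-beta*gamma,  gamma,      0.0, 0.0],
                [ 0.0,         0.0,        1.0, 0.0],
                [ 0.0,         0.0,        0.0, 1.0]])

# The reciprocal matrix Lambda_{mu'}^{nu}, acting on covariant indices,
# is the matrix inverse of Lam.
Lam_inv = np.linalg.inv(Lam)

# An arbitrary mixed tensor T^{mu}_{nu}.
T = np.arange(16.0).reshape(4, 4)

# T'^{mu'}_{nu'} = Lambda^{mu'}_{mu} T^{mu}_{nu} Lambda_{nu'}^{nu},
# which in matrix form is Lam @ T @ Lam_inv.
T_prime = Lam @ T @ Lam_inv
```

A quick sanity check of the rule is that the trace T^α_α, a scalar, is unchanged by the transformation.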
An example of a four-dimensional second order antisymmetric tensor is the relativistic angular momentum, which has six components: three are the classical angular momentum, and the other three are related to the boost of the center of mass of the system. The derivative of the relativistic angular momentum with respect to proper time is the relativistic torque, also a second order antisymmetric tensor.
The electromagnetic field tensor is another second order antisymmetric tensor field, with six components: three for the electric field and another three for the magnetic field. There is also the stress–energy tensor for the electromagnetic field, namely the electromagnetic stress–energy tensor.
==== Metric ====
The metric tensor allows one to define the inner product of two vectors, which in turn allows one to assign a magnitude to the vector. Given the four-dimensional nature of spacetime the Minkowski metric η has components (valid with suitably chosen coordinates), which can be arranged in a 4 × 4 matrix:
{\displaystyle \eta _{\alpha \beta }={\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}},}
which is equal to its reciprocal, {\displaystyle \eta ^{\alpha \beta }}, in those frames. Throughout we use the signs as above; different authors use different conventions – see Minkowski metric alternative signs.
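That the metric with this signature equals its own reciprocal can be verified in one line (a NumPy sketch, not from the original article):

```python
import numpy as np

# Minkowski metric with signature (-,+,+,+).
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# The contravariant components eta^{alpha beta} are the matrix inverse,
# and for this diagonal +/-1 metric they are numerically identical.
eta_inv = np.linalg.inv(eta)
```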
The Poincaré group is the most general group of transformations that preserves the Minkowski metric:
{\displaystyle \eta _{\alpha \beta }=\eta _{\mu '\nu '}\Lambda ^{\mu '}{}_{\alpha }\Lambda ^{\nu '}{}_{\beta }}
and this is the physical symmetry underlying special relativity.
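The defining property above can be checked for any particular Lorentz transformation; in matrix form it reads ΛᵀηΛ = η. A NumPy sketch (the boost speed is an arbitrary choice, not from the original article):

```python
import numpy as np

beta = 0.8
gamma = 1.0 / np.sqrt(1.0 - beta**2)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# Boost along x with speed beta*c.
Lam = np.array([[ gamma,      -beta*gamma, 0.0, 0.0],
                [-beta*gamma,  gamma,      0.0, 0.0],
                [ 0.0,         0.0,        1.0, 0.0],
                [ 0.0,         0.0,        0.0, 1.0]])

# eta_{alpha beta} = eta_{mu' nu'} Lambda^{mu'}_{alpha} Lambda^{nu'}_{beta}
preserved = Lam.T @ eta @ Lam
```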
The metric can be used for raising and lowering indices on vectors and tensors. Invariants can be constructed using the metric, the inner product of a 4-vector T with another 4-vector S is:
{\displaystyle T^{\alpha }S_{\alpha }=T^{\alpha }\eta _{\alpha \beta }S^{\beta }=T_{\alpha }\eta ^{\alpha \beta }S_{\beta }={\text{invariant scalar}}}
Invariant means that it takes the same value in all inertial frames, because it is a scalar (0 rank tensor), and so no Λ appears in its trivial transformation. The magnitude of the 4-vector T is the positive square root of the inner product with itself:
{\displaystyle |\mathbf {T} |={\sqrt {T^{\alpha }T_{\alpha }}}}
One can extend this idea to tensors of higher order, for a second order tensor we can form the invariants:
{\displaystyle T^{\alpha }{}_{\alpha },T^{\alpha }{}_{\beta }T^{\beta }{}_{\gamma }T^{\gamma }{}_{\alpha }={\text{invariant scalars}},}
similarly for higher order tensors. Invariant expressions, particularly inner products of 4-vectors with themselves, provide equations that are useful for calculations, because one does not need to perform Lorentz transformations to determine the invariants.
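Frame-independence of such inner products can be demonstrated numerically (a NumPy sketch; the boost speed and 4-vector components are arbitrary choices, not from the original article):

```python
import numpy as np

beta = 0.5
gamma = 1.0 / np.sqrt(1.0 - beta**2)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# Boost along x.
Lam = np.array([[ gamma,      -beta*gamma, 0.0, 0.0],
                [-beta*gamma,  gamma,      0.0, 0.0],
                [ 0.0,         0.0,        1.0, 0.0],
                [ 0.0,         0.0,        0.0, 1.0]])

# Two arbitrary 4-vectors (contravariant components).
T = np.array([2.0,  1.0, 0.0, 3.0])
S = np.array([5.0, -1.0, 2.0, 0.0])

# Inner product T^alpha eta_{alpha beta} S^beta in the original frame...
inner_before = T @ eta @ S
# ...and computed again from the boosted components: the same scalar.
inner_after = (Lam @ T) @ eta @ (Lam @ S)
```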
==== Relativistic kinematics and invariance ====
The coordinate differentials transform also contravariantly:
{\displaystyle dX^{\mu '}=\Lambda ^{\mu '}{}_{\nu }dX^{\nu }}
so the squared length of the differential of the position four-vector dXμ constructed using
{\displaystyle d\mathbf {X} ^{2}=dX^{\mu }\,dX_{\mu }=\eta _{\mu \nu }\,dX^{\mu }\,dX^{\nu }=-(c\,dt)^{2}+(dx)^{2}+(dy)^{2}+(dz)^{2}}
is an invariant. Notice that when the line element dX2 is negative, √−dX2/c is the differential of proper time, while when dX2 is positive, √dX2 is the differential of the proper distance.
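For a timelike displacement, the proper time elapsed follows directly from the line element (a NumPy sketch; the displacement values are arbitrary, not from the original article):

```python
import numpy as np

c = 299_792_458.0                           # speed of light, m/s
dt, dx, dy, dz = 1.0e-6, 100.0, 0.0, 0.0    # timelike: c*dt > |dx|

# Squared line element with signature (-,+,+,+).
dX2 = -(c*dt)**2 + dx**2 + dy**2 + dz**2

# Timelike interval (dX2 < 0): the elapsed proper time is sqrt(-dX2)/c,
# which is always less than the coordinate time dt (time dilation).
dtau = np.sqrt(-dX2) / c
```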
The 4-velocity Uμ has an invariant form:
{\displaystyle \mathbf {U} ^{2}=\eta _{\nu \mu }U^{\nu }U^{\mu }=-c^{2}\,,}
which means all velocity four-vectors have a magnitude of c. This is an expression of the fact that there is no such thing as being at coordinate rest in relativity: at the least, you are always moving forward through time. Differentiating the above equation by τ produces:
{\displaystyle 2\eta _{\mu \nu }A^{\mu }U^{\nu }=0.}
So in special relativity, the acceleration four-vector and the velocity four-vector are orthogonal.
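Both facts can be checked on a concrete worldline. The sketch below (not from the original article) uses hyperbolic motion, i.e. constant proper acceleration, in natural units c = 1, with arbitrary parameter values:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
a, tau = 2.0, 0.7    # arbitrary proper acceleration and proper time (c = 1)

# Hyperbolic motion: U^mu = (cosh(a tau), sinh(a tau), 0, 0),
# and A^mu = dU/dtau.
U = np.array([np.cosh(a*tau), np.sinh(a*tau), 0.0, 0.0])
A = a * np.array([np.sinh(a*tau), np.cosh(a*tau), 0.0, 0.0])

U2 = U @ eta @ U    # always -c^2 = -1
AU = A @ eta @ U    # always 0: the 4-acceleration is orthogonal to the 4-velocity
```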
==== Relativistic dynamics and invariance ====
The invariant magnitude of the momentum 4-vector generates the energy–momentum relation:
{\displaystyle \mathbf {P} ^{2}=\eta ^{\mu \nu }P_{\mu }P_{\nu }=-\left({\frac {E}{c}}\right)^{2}+p^{2}.}
We can work out what this invariant is by first arguing that, since it is a scalar, it does not matter in which reference frame we calculate it, and then by transforming to a frame where the total momentum is zero.
{\displaystyle \mathbf {P} ^{2}=-\left({\frac {E_{\text{rest}}}{c}}\right)^{2}=-(mc)^{2}.}
We see that the rest energy is an independent invariant. A rest energy can be calculated even for particles and systems in motion, by translating to a frame in which momentum is zero.
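The rest mass can indeed be recovered from the energy and momentum measured in any frame, since P² is invariant. A NumPy sketch (the mass and speed are arbitrary choices, in natural units c = 1; not from the original article):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
c = 1.0
m, v = 2.0, 0.8                      # arbitrary rest mass and speed (c = 1)
gamma = 1.0 / np.sqrt(1.0 - v**2)

E = gamma * m * c**2                 # energy in the moving frame
p = gamma * m * v                    # momentum magnitude in the moving frame
P = np.array([E/c, p, 0.0, 0.0])     # 4-momentum components

P2 = P @ eta @ P                     # invariant: -(E/c)^2 + p^2 = -(mc)^2
m_from_P = np.sqrt(-P2) / c          # recovers the rest mass in any frame
```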
The rest energy is related to the mass according to the celebrated equation discussed above:
{\displaystyle E_{\text{rest}}=mc^{2}.}
The mass of systems measured in their center of momentum frame (where total momentum is zero) is given by the total energy of the system in this frame. It may not be equal to the sum of individual system masses measured in other frames.
To use Newton's third law of motion, both forces must be defined as the rate of change of momentum with respect to the same time coordinate. That is, it requires the 3D force defined above. Unfortunately, there is no tensor in 4D that contains the components of the 3D force vector among its components.
If a particle is not traveling at c, one can transform the 3D force from the particle's co-moving reference frame into the observer's reference frame. This yields a 4-vector called the four-force. It is the rate of change of the above energy momentum four-vector with respect to proper time. The covariant version of the four-force is:
{\displaystyle F_{\nu }={\frac {dP_{\nu }}{d\tau }}=mA_{\nu }}
In the rest frame of the object, the time component of the four-force is zero unless the "invariant mass" of the object is changing (this requires a non-closed system in which energy/mass is being directly added or removed from the object) in which case it is the negative of that rate of change of mass, times c. In general, though, the components of the four-force are not equal to the components of the three-force, because the three force is defined by the rate of change of momentum with respect to coordinate time, that is, dp/dt while the four-force is defined by the rate of change of momentum with respect to proper time, that is, dp/dτ.
In a continuous medium, the 3D density of force combines with the density of power to form a covariant 4-vector. The spatial part is the result of dividing the force on a small cell (in 3-space) by the volume of that cell. The time component is −1/c times the power transferred to that cell divided by the volume of the cell. This will be used below in the section on electromagnetism.
== See also ==
People
Max Planck
Hermann Minkowski
Max von Laue
Arnold Sommerfeld
Max Born
Mileva Marić
Relativity
History of special relativity
Doubly special relativity
Bondi k-calculus
Einstein synchronisation
Rietdijk–Putnam argument
Special relativity (alternative formulations)
Relativity priority dispute
Physics
Einstein's thought experiments
physical cosmology
Relativistic Euler equations
Lorentz ether theory
Moving magnet and conductor problem
Shape waves
Relativistic heat conduction
Relativistic disk
Born rigidity
Born coordinates
Mathematics
Lorentz group
Relativity in the APS formalism
Philosophy
actualism
conventionalism
Paradoxes
Ehrenfest paradox
Bell's spaceship paradox
Velocity composition paradox
Lighthouse paradox
== Notes ==
== Primary sources ==
== References ==
== Further reading ==
=== Texts by Einstein and text about history of special relativity ===
Einstein, Albert (1920). Relativity: The Special and General Theory.
Einstein, Albert (1996). The Meaning of Relativity. Fine Communications. ISBN 1-56731-136-9
Logunov, Anatoly A. (2005). Henri Poincaré and the Relativity Theory (transl. from Russian by G. Pontocorvo and V. O. Soloviev, edited by V. A. Petrov). Nauka, Moscow.
=== Textbooks ===
Charles Misner, Kip Thorne, and John Archibald Wheeler (1971) Gravitation. W. H. Freeman & Co. ISBN 0-7167-0334-3
Post, E.J., 1997 (1962) Formal Structure of Electromagnetics: General Covariance and Electromagnetics. Dover Publications.
Wolfgang Rindler (1991). Introduction to Special Relativity (2nd ed.), Oxford University Press. ISBN 978-0-19-853952-0; ISBN 0-19-853952-5
Harvey R. Brown (2005). Physical relativity: space–time structure from a dynamical perspective, Oxford University Press, ISBN 0-19-927583-1; ISBN 978-0-19-927583-0
Qadir, Asghar (1989). Relativity: An Introduction to the Special Theory. Singapore: World Scientific Publications. p. 128. Bibcode:1989rist.book.....Q. ISBN 978-9971-5-0612-4.
French, A. P. (1968). Special Relativity (M.I.T. Introductory Physics) (1st ed.). W. W. Norton & Company. ISBN 978-0393097931.
Silberstein, Ludwik (1914). The Theory of Relativity.
Lawrence Sklar (1977). Space, Time and Spacetime. University of California Press. ISBN 978-0-520-03174-6.
Lawrence Sklar (1992). Philosophy of Physics. Westview Press. ISBN 978-0-8133-0625-4.
Sergey Stepanov (2018). Relativistic World. De Gruyter. ISBN 9783110515879.
Taylor, Edwin, and John Archibald Wheeler (1992). Spacetime Physics (2nd ed.). W. H. Freeman & Co. ISBN 0-7167-2327-1.
Tipler, Paul, and Llewellyn, Ralph (2002). Modern Physics (4th ed.). W. H. Freeman & Co. ISBN 0-7167-4345-0.
=== Journal articles ===
Alvager, T.; Farley, F. J. M.; Kjellman, J.; Wallin, L.; et al. (1964). "Test of the Second Postulate of Special Relativity in the GeV region". Physics Letters. 12 (3): 260–262. Bibcode:1964PhL....12..260A. doi:10.1016/0031-9163(64)91095-9.
Darrigol, Olivier (2004). "The Mystery of the Poincaré–Einstein Connection". Isis. 95 (4): 614–26. doi:10.1086/430652. PMID 16011297. S2CID 26997100.
Wolf, Peter; Petit, Gerard (1997). "Satellite test of Special Relativity using the Global Positioning System". Physical Review A. 56 (6): 4405–09. Bibcode:1997PhRvA..56.4405W. doi:10.1103/PhysRevA.56.4405.
Special Relativity Scholarpedia
Rindler, Wolfgang (2011). "Special relativity: Kinematics". Scholarpedia. 6 (2): 8520. Bibcode:2011SchpJ...6.8520R. doi:10.4249/scholarpedia.8520.
== External links ==
=== Original works ===
Zur Elektrodynamik bewegter Körper Einstein's original work in German, Annalen der Physik, Bern 1905
On the Electrodynamics of Moving Bodies English Translation as published in the 1923 book The Principle of Relativity.
=== Special relativity for a general audience (no mathematical knowledge required) ===
Einstein Light An award-winning, non-technical introduction (film clips and demonstrations) supported by dozens of pages of further explanations and animations, at levels with or without mathematics.
Einstein Online Archived 2010-02-01 at the Wayback Machine Introduction to relativity theory, from the Max Planck Institute for Gravitational Physics.
Audio: Cain/Gay (2006) – Astronomy Cast. Einstein's Theory of Special Relativity
=== Special relativity explained (using simple or more advanced mathematics) ===
Bondi K-Calculus – A simple introduction to the special theory of relativity.
Greg Egan's Foundations Archived 2013-04-25 at the Wayback Machine.
The Hogg Notes on Special Relativity A good introduction to special relativity at the undergraduate level, using calculus.
Relativity Calculator: Special Relativity Archived 2013-03-21 at the Wayback Machine – An algebraic and integral calculus derivation for E = mc2.
MathPages – Reflections on Relativity A complete online book on relativity with an extensive bibliography.
Special Relativity An introduction to special relativity at the undergraduate level.
Relativity: the Special and General Theory at Project Gutenberg, by Albert Einstein
Special Relativity Lecture Notes is a standard introduction to special relativity containing illustrative explanations based on drawings and spacetime diagrams from Virginia Polytechnic Institute and State University.
Understanding Special Relativity The theory of special relativity in an easily understandable way.
An Introduction to the Special Theory of Relativity (1964) by Robert Katz, "an introduction ... that is accessible to any student who has had an introduction to general physics and some slight acquaintance with the calculus" (130 pp; pdf format).
Lecture Notes on Special Relativity by J D Cresser Department of Physics Macquarie University.
SpecialRelativity.net – An overview with visualizations and minimal mathematics.
Relativity 4-ever? The problem of superluminal motion is discussed in an entertaining manner.
=== Visualization ===
Raytracing Special Relativity Software visualizing several scenarios under the influence of special relativity.
Real Time Relativity Archived 2013-05-08 at the Wayback Machine The Australian National University. Relativistic visual effects experienced through an interactive program.
Spacetime travel A variety of visualizations of relativistic effects, from relativistic motion to black holes.
Through Einstein's Eyes Archived 2013-05-14 at the Wayback Machine The Australian National University. Relativistic visual effects explained with movies and images.
Warp Special Relativity Simulator A computer program to show the effects of traveling close to the speed of light.
Animation clip on YouTube visualizing the Lorentz transformation.
Original interactive FLASH Animations from John de Pillis illustrating Lorentz and Galilean frames, Train and Tunnel Paradox, the Twin Paradox, Wave Propagation, Clock Synchronization, etc.
lightspeed An OpenGL-based program developed to illustrate the effects of special relativity on the appearance of moving objects.
Animation showing the stars near Earth, as seen from a spacecraft accelerating rapidly to light speed.
Scientific scholarship during the Byzantine Empire played an important role in the transmission of classical knowledge to the Islamic world and to Renaissance Italy, and also in the transmission of Islamic science to Renaissance Italy. Its rich historiographical tradition preserved ancient knowledge upon which splendid art, architecture, literature and technological achievements were built. The Byzantines were responsible for several technological advances.
== Classical and ecclesiastical studies ==
Byzantine science was essentially classical science. Therefore, Byzantine science was in every period closely connected with ancient-pagan philosophy and metaphysics. Despite some opposition to pagan learning, many of the most distinguished classical scholars held high office in the Church. The writings of antiquity never ceased to be cultivated in the Byzantine Empire because of the impetus given to classical studies by the Academy of Athens in the 4th and 5th centuries AD, the vigor of the philosophical academy of Alexandria, and the services of the University of Constantinople, which concerned itself entirely with secular subjects, to the exclusion of theology, which was taught in the Patriarchical Academy. Even the latter offered instruction in the ancient classics and included literary, philosophical, and scientific texts in its curriculum. The monastic schools concentrated upon the Bible, theology, and liturgy. Therefore, the monastic scriptoria expended most of their efforts upon the transcription of ecclesiastical manuscripts, while ancient-pagan literature was transcribed, summarized, excerpted, and annotated by laymen or clergy like Photios, Arethas of Caesarea, Eustathius of Thessalonica, and Bessarion.
Historian John Julius Norwich says that "much of what we know about antiquity—especially Hellenic and Roman literature and Roman law—would have been lost forever if it weren't for the scholars and scribes of Constantinople."
== Architecture ==
Pendentive architecture, a specific spherical form in the upper corners to support a dome, is a Byzantine invention. Although the first experimentation was made in the 200s, it was in the 6th century in the Byzantine Empire that its potential was fully achieved.
== Mathematics ==
Byzantine scientists preserved and continued the legacy of the great Ancient Greek mathematicians and put mathematics in practice. In early Byzantium (5th to 7th century) the architects and mathematicians Isidore of Miletus and Anthemius of Tralles developed mathematical formulas to construct the great Hagia Sophia church, a technological breakthrough for its time and for centuries afterwards because of its striking geometry, bold design and height. In middle Byzantium (8th to 12th century) mathematicians like Michael Psellos considered mathematics as a way to interpret the world.
== Physics ==
John Philoponus, also known as John the Grammarian, was an Alexandrian philologist, Aristotelian commentator and Christian theologian, and the author of philosophical treatises and theological works. He was the first to criticize Aristotle and to attack Aristotle's theory of free fall. His criticism of Aristotelian physics was an inspiration for Galileo Galilei many centuries later; Galileo cited Philoponus substantially in his works and followed him in refuting Aristotelian physics.
In his Commentaries on Aristotle, Philoponus wrote:
But this is completely erroneous, and our view may be corroborated by actual observation more effectively than by any sort of verbal argument. For if you let fall from the same height two weights of which one is many times as heavy as the other, you will see that the ratio of the times required for the motion does not depend on the ratio of the weights, but that the difference in time is a very small one. And so, if the difference in the weights is not considerable, that is, if one is, let us say, double the other, there will be no difference, or else an imperceptible difference, in time, though the difference in weight is by no means negligible, with one body weighing twice as much as the other.

The theory of impetus was invented in the Byzantine Empire. The ship mill is a Byzantine invention, constructed to mill grain using the energy of a stream of water. The technology eventually spread to the rest of Europe and was in use until c. 1800. The Byzantines knew and used the concept of hydraulics: in the 10th century the diplomat Liutprand of Cremona, when visiting the Byzantine emperor, explained that he saw the emperor sitting on a hydraulic throne and that it was "made in such a cunning manner that at one moment it was down on the ground, while at another it rose higher and was seen to be up in the air".
Paper, which the Muslims received from China in the 8th century, was being used in the Byzantine Empire by the 9th century. There were very large private libraries, and monasteries possessed huge libraries with hundreds of books that were lent to people in each monastery's region. Thus were preserved the works of classical antiquity.
== Astronomy ==
Emmanuel A. Paschos says: "A Byzantine (Roman) article from the 13th century contains advanced astronomical ideas and pre-Copernican diagrams. The models are geocentric but contain improvements on the trajectories of the Moon and Mercury." One known astronomer was Nicephorus Gregoras, who was active in the 14th century.
== Medicine ==
Medicine was one of the sciences in which the Byzantines improved on their Greco-Roman predecessors, starting from Galen. As a result, Byzantine medicine had an influence on Islamic medicine as well as the medicine of the Renaissance. The concept of the hospital appeared in the Byzantine Empire as an institution to offer medical care and the possibility of a cure for patients, because of the ideals of Christian charity.
Although the concept of uroscopy was known to Galen, he did not see the importance of using it to diagnose disease. It was Byzantine physicians, such as Theophilus Protospatharius, who realised the diagnostic potential of uroscopy in a time when no microscope or stethoscope existed. That practice eventually spread to the rest of Europe. The illuminated manuscript Vienna Dioscurides (6th century), and the works of Byzantine doctors such as Paul of Aegina (7th century) and Nicholas Myrepsos (late 13th century), continued to be used as the authoritative texts by Europeans through the Renaissance. Myrepsos invented the Aurea Alexandrina, which was a kind of opiate or antidote.
The first known example of separating conjoined twins happened in the Byzantine Empire in the 10th century when a pair of conjoined twins from Armenia came to Constantinople. Many years later one of them died, so the surgeons in Constantinople decided to remove the body of the dead one. The result was partly successful, as the surviving twin lived three days before dying, a result so impressive that it was mentioned a century and a half later by historians. The next case of separating conjoined twins did not occur until 1689 in Germany.
== Weaponry ==
Greek fire was an incendiary weapon used by the Byzantine Empire. The Byzantines typically used it in naval battles to great effect as it could continue burning even on water. It provided a technological advantage and was responsible for many key Byzantine military victories, most notably the salvation of Constantinople from two Arab sieges, thus securing the empire's survival. Greek fire proper however was invented in c. 672 and is ascribed by the chronicler Theophanes to Kallinikos, an architect from Heliopolis in the former province of Phoenice. It has been argued that no single person invented the Greek fire but that it was "invented by the chemists in Constantinople who had inherited the discoveries of the Alexandrian chemical school...".
The grenade first appeared in the Byzantine Empire, where rudimentary incendiary grenades made of ceramic jars holding glass or nails were made and used on battlefields. The first examples of hand-held flamethrowers occurred in the Byzantine Empire in the 10th century, where infantry units were equipped with hand pumps and swivel tubes used to project the flame. The counterweight trebuchet was invented in the Byzantine Empire during the reign of Alexios I Komnenos (1081–1118) under the Komnenian restoration when the Byzantines used this new-developed siege weaponry to devastate citadels and fortifications. This siege artillery marked the apogee of siege weaponry before the use of the cannon. From the Byzantines, the armies of Europe and Asia eventually learned and adopted this siege weaponry.
== Byzantine and Islamic science ==
During the Middle Ages, there was frequently an exchange of works between Byzantine and Islamic science. The Byzantine Empire initially provided the medieval Islamic world with Ancient and early Medieval Greek texts on astronomy, mathematics and philosophy for translation into Arabic as the Byzantine Empire was the leading center of scientific scholarship in the region at the beginning of the Middle Ages. Later as the caliphate and other medieval Islamic cultures became the leading centers of scientific knowledge, Byzantine scientists such as Gregory Chioniades, who had visited the famous Maragheh observatory, translated books on Islamic astronomy, mathematics and science into Medieval Greek, including for example the works of Ja'far ibn Muhammad Abu Ma'shar al-Balkhi, Ibn Yunus, Al-Khazini (who was of Byzantine Greek descent), Muhammad ibn Mūsā al-Khwārizmī and Nasīr al-Dīn al-Tūsī (such as the Zij-i Ilkhani and other Zij treatises) among others.
There were also some Byzantine scientists who used Arabic transliterations to describe certain scientific concepts instead of the equivalent Ancient Greek terms (such as the use of the Arabic talei instead of the Ancient Greek horoscopus). Byzantine science thus played an important role in transmitting ancient Greek knowledge to Western Europe and the Islamic world, and also transmitting Arabic knowledge to Western Europe. Some historians suspect that Copernicus or another European author had access to an Arabic astronomical text, resulting in the transmission of the Tusi couple, an astronomical model developed by Nasir al-Din al-Tusi that later appeared in the work of Nicolaus Copernicus. Byzantine scientists also became acquainted with Sassanid and Indian astronomy through citations in some Arabic works.
A mechanical sundial device consisting of complex gears made by the Byzantines has been excavated which indicates that the Antikythera mechanism, a sort of analogue device used in astronomy and invented around the late second century BC, was utilized in the Byzantine period. J. R. Partington writes that
Constantinople was full of inventors and craftsmen. The "philosopher" Leo of Thessalonika made for the Emperor Theophilos (829–842) a golden tree, the branches of which carried artificial birds which flapped their wings and sang, a model lion which moved and roared, and a bejewelled clockwork lady who walked. These mechanical toys continued the tradition represented in the treatise of Heron of Alexandria (c. A.D. 125), which was well-known to the Byzantines.
Such mechanical devices reached a high level of sophistication and were made to impress visitors. Leo the Mathematician has also been credited with the system of beacons, a sort of optical telegraph, stretching across Anatolia from Cilicia to Constantinople, which gave warning of enemy raids and was used as diplomatic communication.
== Humanism and Renaissance ==
During the 12th century the Byzantines produced their model of early Renaissance humanism as a renaissance of interest in classical authors. During the preceding centuries (9th–12th), however, humanism and a desire for classical learning were already prominent in the Macedonian Renaissance, and continued into what is now seen as the 12th-century Renaissance under the Komnenoi. In Eustathius of Thessalonica Byzantine humanism found its most characteristic expression. During the 13th and 14th centuries, a period of intense creative activity, Byzantine humanism approached its zenith, and manifested a striking analogy to the contemporaneous Italian humanism. Byzantine humanism believed in the vitality of classical civilization, and of its sciences, and its proponents occupied themselves with the sciences.
Despite the political and military decline of these last two centuries, the empire saw a flourishing of science and literature, often described as the "Palaeologan" or "Last Byzantine Renaissance". Some of this era's most eminent representatives are: Maximus Planudes, Manuel Moschopoulus, Demetrius Triclinius and Thomas Magister. The academy at Trebizond, highly influenced by Persian sciences, became a renowned center for the study of astronomy, mathematics, and medicine, attracting the interest of almost all scholars. In the final century of the empire, Byzantine grammarians were those principally responsible for carrying, in person and in writing, ancient Greek grammatical and literary studies to early Renaissance Italy; among them, Manuel Chrysoloras was involved in the never-achieved union of the Churches.
== See also ==
List of Byzantine inventions
Byzantine scholars in Renaissance
List of Byzantine scholars
Science in the Middle Ages
== References ==
== Sources ==
Lazaris, Stavros, ed. (2020). A Companion to Byzantine Science. Brill. ISBN 978-90-0441460-0.
Medical physics deals with the application of the concepts and methods of physics to the prevention, diagnosis and treatment of human diseases with a specific goal of improving human health and well-being. Since 2008, medical physics has been included as a health profession according to International Standard Classification of Occupation of the International Labour Organization.
Although medical physics may sometimes also be referred to as biomedical physics, medical biophysics, applied physics in medicine, physics applications in medical science, radiological physics or hospital radio-physics, a "medical physicist" is specifically a health professional with specialist education and training in the concepts and techniques of applying physics in medicine and competent to practice independently in one or more of the subfields of medical physics. Traditionally, medical physicists are found in the following healthcare specialties: radiation oncology (also known as radiotherapy or radiation therapy), diagnostic and interventional radiology (also known as medical imaging), nuclear medicine, and radiation protection. Medical physics of radiation therapy can involve work such as dosimetry, linac quality assurance, and brachytherapy. Medical physics of diagnostic and interventional radiology involves medical imaging techniques such as magnetic resonance imaging, ultrasound, computed tomography and x-ray. Nuclear medicine will include positron emission tomography and radionuclide therapy. However one can find Medical Physicists in many other areas such as physiological monitoring, audiology, neurology, neurophysiology, cardiology and others.
Medical physics departments may be found in institutions such as universities, hospitals, and laboratories. University departments are of two types. The first type are mainly concerned with preparing students for a career as a hospital Medical Physicist and research focuses on improving the practice of the profession. A second type (increasingly called 'biomedical physics') has a much wider scope and may include research in any applications of physics to medicine from the study of biomolecular structure to microscopy and nanomedicine.
== Mission statement of medical physicists ==
In hospital medical physics departments, the mission statement for medical physicists as adopted by the European Federation of Organisations for Medical Physics (EFOMP) is the following:
Medical Physicists will contribute to maintaining and improving the quality, safety and cost-effectiveness of healthcare services through patient-oriented activities requiring expert action, involvement or advice regarding the specification, selection, acceptance testing, commissioning, quality assurance/control and optimised clinical use of medical devices and regarding patient risks and protection from associated physical agents (e.g., x-rays, electromagnetic fields, laser light, radionuclides) including the prevention of unintended or accidental exposures; all activities will be based on current best evidence or own scientific research when the available evidence is not sufficient. The scope includes risks to volunteers in biomedical research, carers and comforters. The scope often includes risks to workers and public, particularly when these impact patient risk.
The term "physical agents" refers to ionising and non-ionising electromagnetic radiations, static electric and magnetic fields, ultrasound, laser light and any other Physical Agent associated with medical e.g., x-rays in computerised tomography (CT), gamma rays/radionuclides in nuclear medicine, magnetic fields and radio-frequencies in magnetic resonance imaging (MRI), ultrasound in ultrasound imaging and Doppler measurements.
This mission includes the following 11 key activities:
Scientific problem solving service: Comprehensive problem solving service involving recognition of less than optimal performance or optimised use of medical devices, identification and elimination of possible causes or misuse, and confirmation that proposed solutions have restored device performance and use to acceptable status. All activities are to be based on current best scientific evidence or own research when the available evidence is not sufficient.
Dosimetry measurements: Measurement of doses received by patients, volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures (e.g., for legal or employment purposes); selection, calibration and maintenance of dosimetry related instrumentation; independent checking of dose related quantities provided by dose reporting devices (including software devices); measurement of dose related quantities required as inputs to dose reporting or estimating devices (including software). Measurements to be based on current recommended techniques and protocols. Includes dosimetry of all physical agents.
Patient safety/risk management (including volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures). Surveillance of medical devices and evaluation of clinical protocols to ensure the ongoing protection of patients, volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures from the deleterious effects of physical agents in accordance with the latest published evidence or own research when the available evidence is not sufficient. Includes the development of risk assessment protocols.
Occupational and public safety/risk management (when there is an impact on medical exposure or own safety). Surveillance of medical devices and evaluation of clinical protocols with respect to protection of workers and public when impacting the exposure of patients, volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures or responsibility with respect to own safety. Includes the development of risk assessment protocols in conjunction with other experts involved in occupational / public risks.
Clinical medical device management: Specification, selection, acceptance testing, commissioning and quality assurance/ control of medical devices in accordance with the latest published European or International recommendations and the management and supervision of associated programmes. Testing to be based on current recommended techniques and protocols.
Clinical involvement: Carrying out, participating in and supervising everyday radiation protection and quality control procedures to ensure ongoing effective and optimised use of medical radiological devices and including patient specific optimization.
Development of service quality and cost-effectiveness: Leading the introduction of new medical radiological devices into clinical service, the introduction of new medical physics services and participating in the introduction/development of clinical protocols/techniques whilst giving due attention to economic issues.
Expert consultancy: Provision of expert advice to outside clients (e.g., clinics with no in-house medical physics expertise).
Education of healthcare professionals (including medical physics trainees): Contributing to quality healthcare professional education through knowledge transfer activities concerning the technical-scientific knowledge, skills and competences supporting the clinically effective, safe, evidence-based and economical use of medical radiological devices. Participation in the education of medical physics students and organisation of medical physics residency programmes.
Health technology assessment (HTA): Taking responsibility for the physics component of health technology assessments related to medical radiological devices and /or the medical uses of radioactive substances/sources.
Innovation: Developing new or modifying existing devices (including software) and protocols for the solution of hitherto unresolved clinical problems.
== Medical biophysics and biomedical physics ==
Some education institutions house departments or programs bearing the title "medical biophysics" or "biomedical physics" or "applied physics in medicine". Generally, these fall into one of two categories: interdisciplinary departments that house biophysics, radiobiology, and medical physics under a single umbrella; and undergraduate programs that prepare students for further study in medical physics, biophysics, or medicine.
== Areas of specialty ==
The International Organization for Medical Physics (IOMP) recognizes main areas of medical physics employment and focus.
=== Medical imaging physics ===
Medical imaging physics is also known as diagnostic and interventional radiology physics.
Clinical (both "in-house" and "consulting") physicists typically deal with areas of testing, optimization, and quality assurance of diagnostic radiology physics areas such as radiographic X-rays, fluoroscopy, mammography, angiography, and computed tomography, as well as non-ionizing radiation modalities such as ultrasound, and MRI. They may also be engaged with radiation protection issues such as dosimetry (for staff and patients). In addition, many imaging physicists are often also involved with nuclear medicine systems, including single photon emission computed tomography (SPECT) and positron emission tomography (PET).
Sometimes, imaging physicists may be engaged in clinical areas, but for research and teaching purposes, such as quantifying intravascular ultrasound as a possible method of imaging a particular vascular object.
=== Therapeutic medical physics ===
Radiation therapeutic physics is also known as radiotherapy physics or radiation oncology physics.
The majority of medical physicists currently working in the US, Canada, and some western countries are of this group. A radiation therapy physicist typically deals with linear accelerator (Linac) systems and kilovoltage x-ray treatment units on a daily basis, as well as other modalities such as TomoTherapy, gamma knife, Cyberknife, proton therapy, and brachytherapy.
The academic and research side of therapeutic physics may encompass fields such as boron neutron capture therapy, sealed source radiotherapy, terahertz radiation, high-intensity focused ultrasound (including lithotripsy), optical radiation lasers, ultraviolet etc. including photodynamic therapy, as well as nuclear medicine including unsealed source radiotherapy, and photomedicine, which is the use of light to treat and diagnose disease.
=== Nuclear medicine physics ===
Nuclear medicine is a branch of medicine that uses radiation to provide information about the functioning of a person's specific organs or to treat disease. The thyroid, bones, heart, liver and many other organs can be easily imaged, and disorders in their function revealed. In some cases radiation sources can be used to treat diseased organs, or tumours. Five Nobel laureates have been intimately involved with the use of radioactive tracers in medicine.
Over 10,000 hospitals worldwide use radioisotopes in medicine, and about 90% of the procedures are for diagnosis. The most common radioisotope used in diagnosis is technetium-99m, with some 30 million procedures per year, accounting for 80% of all nuclear medicine procedures worldwide.
=== Health physics ===
Health physics is also known as radiation safety or radiation protection. Health physics is the applied physics of radiation protection for health and health care purposes. It is the science concerned with the recognition, evaluation, and control of health hazards to permit the safe use and application of ionizing radiation. Health physics professionals promote excellence in the science and practice of radiation protection and safety.
Background radiation
Radiation protection
Dosimetry
Health physics
Radiological protection of patients
=== Non-ionizing medical radiation physics ===
Some aspects of non-ionizing radiation physics may be considered under radiation protection or diagnostic imaging physics. Imaging modalities include MRI, optical imaging and ultrasound. Safety considerations include these areas as well as lasers.
Lasers and applications in medicine
=== Physiological measurement ===
Physiological measurements have also been used to monitor and measure various physiological parameters. Many physiological measurement techniques are non-invasive and can be used in conjunction with, or as an alternative to, other invasive methods. Measurement methods include electrocardiography. Many of these areas may be covered by other specialities, for example medical engineering or vascular science.
=== Healthcare informatics and computational physics ===
Other closely related fields to medical physics include fields which deal with medical data, information technology and computer science for medicine.
Information and communication in medicine
Medical informatics
Image processing, display and visualization
Computer-aided diagnosis
Picture archiving and communication systems (PACS)
Standards: DICOM, ISO, IHE
Hospital information systems
e-Health
Telemedicine
Digital operating room
Workflow, patient-specific modeling
Medicine on the Internet of Things
Distant monitoring and telehomecare
=== Areas of research and academic development ===
Non-clinical physicists may or may not focus on the above areas from an academic and research point of view, but their scope of specialization may also encompass lasers and ultraviolet systems (such as photodynamic therapy), fMRI and other methods for functional imaging as well as molecular imaging, electrical impedance tomography, diffuse optical imaging, optical coherence tomography, and dual energy X-ray absorptiometry.
== Legislative and advisory bodies ==
=== International ===
ICRU: International Commission on Radiation Units and Measurements
ICRP: International Commission on Radiological Protection
IOMP: International Organization for Medical Physics
IAEA: International Atomic Energy Agency
=== United States of America ===
NCRP: National Council on Radiation Protection & Measurements
NRC: Nuclear Regulatory Commission
FDA: Food and Drug Administration
AAPM: American Association of Physicists in Medicine
=== United Kingdom ===
IPEM: Institute of Physics and Engineering in Medicine
MHRA: Medicines and Healthcare products Regulatory Agency
=== Other ===
AMPI: Association of Medical Physicists of India
CCPM: Canadian College of Physicists in Medicine
EFOMP: European Federation of Organisations for Medical Physics
ACPSEM: Australasian College of Physical Scientists and Engineers in Medicine
== References ==
== External links ==
Human Health Campus, The official website of the International Atomic Energy Agency dedicated to Professionals in Radiation Medicine. This site is managed by the Division of Human Health, Department of Nuclear Sciences and Applications
Australasian College of Physical Scientists and Engineers in Medicine (ACPSEM)
Canadian Organization of Medical Physicists - Organisation canadienne des physiciens médicaux
The American Association of Physicists in Medicine
Romanian College of Medical Physicists
medicalphysicsweb.org from the Institute of Physics
AIP Medical Physics portal
Institute of Physics & Engineering in Medicine (IPEM) - UK
European Federation of Organizations for Medical Physics (EFOMP)
International Organization for Medical Physics (IOMP)
Experimental physics is the category of disciplines and sub-disciplines in the field of physics that are concerned with the observation of physical phenomena and experiments. Methods vary from discipline to discipline, from simple experiments and observations, such as experiments by Galileo Galilei, to more complicated ones, such as the Large Hadron Collider.
== Overview ==
Experimental physics is a branch of physics that is concerned with data acquisition, data-acquisition methods, and the detailed conceptualization (beyond simple thought experiments) and realization of laboratory experiments. It is often contrasted with theoretical physics, which is more concerned with predicting and explaining the physical behaviour of nature than with acquiring empirical data.
Although experimental and theoretical physics are concerned with different aspects of nature, they both share the same goal of understanding it and have a symbiotic relationship. The former provides data about the universe, which can then be analyzed in order to be understood, while the latter provides explanations for the data and thus offers insight into how to better acquire data and set up experiments. Theoretical physics can also offer insight into what data is needed in order to gain a better understanding of the universe, and into what experiments to devise in order to obtain it.
The tension between experimental and theoretical aspects of physics was expressed by James Clerk Maxwell as "It is not till we attempt to bring the theoretical part of our training into contact with the practical that we begin to experience the full effect of what Faraday has called 'mental inertia' - not only the difficulty of recognizing, among the concrete objects before us, the abstract relation which we have learned from books, but the distracting pain of wrenching the mind away from the symbols to the objects, and from the objects back to the symbols. This however is the price we have to pay for new ideas."
== History ==
As a distinct field, experimental physics was established in early modern Europe, during what is known as the Scientific Revolution, by physicists such as Galileo Galilei, Christiaan Huygens, Johannes Kepler, Blaise Pascal and Sir Isaac Newton. In the early 17th century, Galileo made extensive use of experimentation to validate physical theories, which is the key idea in the modern scientific method. Galileo formulated and successfully tested several results in dynamics, in particular the law of inertia, which later became the first law in Newton's laws of motion. In Galileo's Two New Sciences, the characters Simplicio and Salviati discuss the motion of a ship (as a moving frame) and how that ship's cargo is indifferent to its motion. Huygens used the motion of a boat along a Dutch canal to illustrate an early form of the conservation of momentum.
Experimental physics is considered to have reached a high point with the publication of the Philosophiae Naturalis Principia Mathematica in 1687 by Sir Isaac Newton (1643–1727). In 1687, Newton published the Principia, detailing two comprehensive and successful physical laws: Newton's laws of motion, from which arise classical mechanics; and Newton's law of universal gravitation, which describes the fundamental force of gravity. Both laws agreed well with experiment. The Principia also included several theories in fluid dynamics.
From the late 17th century onward, thermodynamics was developed by physicist and chemist Robert Boyle, Thomas Young, and many others. In 1733, Daniel Bernoulli used statistical arguments with classical mechanics to derive thermodynamic results, initiating the field of statistical mechanics. In 1798, Benjamin Thompson (Count Rumford) demonstrated the conversion of mechanical work into heat, and in 1847 James Prescott Joule stated the law of conservation of energy, in the form of heat as well as mechanical energy. Ludwig Boltzmann, in the nineteenth century, is responsible for the modern form of statistical mechanics.
Besides classical mechanics and thermodynamics, another great field of experimental inquiry within physics was the nature of electricity. Observations in the 17th and 18th centuries by scientists such as Boyle, Stephen Gray, and Benjamin Franklin created a foundation for later work. These observations also established our basic understanding of electrical charge and current. By 1808 John Dalton had discovered that atoms of different elements have different weights and proposed the modern theory of the atom.
It was Hans Christian Ørsted who first proposed the connection between electricity and magnetism after observing the deflection of a compass needle by a nearby electric current. By the early 1830s Michael Faraday had demonstrated that magnetic fields and electricity could generate each other. In 1864 James Clerk Maxwell presented to the Royal Society a set of equations that described this relationship between electricity and magnetism. Maxwell's equations also predicted correctly that light is an electromagnetic wave. Starting with astronomy, the principles of natural philosophy crystallized into fundamental laws of physics which were enunciated and improved in the succeeding centuries. By the 19th century, the sciences had segmented into multiple fields with specialized researchers and the field of physics, although logically pre-eminent, no longer could claim sole ownership of the entire field of scientific research.
== Current experiments ==
Some examples of prominent experimental physics projects are:
Relativistic Heavy Ion Collider which collides heavy ions such as gold ions (it is the first heavy ion collider) and protons, it is located at Brookhaven National Laboratory, on Long Island, USA.
HERA, which collides electrons or positrons and protons, and is part of DESY, located in Hamburg, Germany.
LHC, or the Large Hadron Collider, which completed construction in 2008 but suffered a series of setbacks. The LHC began operations in 2008, but was shut down for maintenance until the summer of 2009. It is the world's most energetic collider and is located at CERN, on the French-Swiss border near Geneva. The collider became fully operational on March 29, 2010, a year and a half later than originally planned.
LIGO, the Laser Interferometer Gravitational-Wave Observatory, is a large-scale physics experiment and observatory to detect cosmic gravitational waves and to develop gravitational-wave observations as an astronomical tool. Currently two LIGO observatories exist: LIGO Livingston Observatory in Livingston, Louisiana, and LIGO Hanford Observatory near Richland, Washington.
JWST, or the James Webb Space Telescope, launched in 2021 as the successor to the Hubble Space Telescope. It surveys the sky in the infrared region. The main goals of the JWST are to understand the initial stages of the universe, galaxy formation, the formation of stars and planets, and the origins of life.
Mississippi State Axion Search (2016 completion), a Light Shining Through a Wall (LSW) experiment; EM source: 0.7 m, 50 W continuous radio wave emitter
== Method ==
Experimental physics uses two main methods of experimental research, controlled experiments, and natural experiments. Controlled experiments are often used in laboratories as laboratories can offer a controlled environment. Natural experiments are used, for example, in astrophysics when observing celestial objects where control of the variables in effect is impossible.
== Famous experiments ==
Famous experiments include:
== Experimental techniques ==
Some well-known experimental techniques include:
== Prominent experimental physicists ==
Famous experimental physicists include:
== Timelines ==
See the timelines below for listings of physics experiments.
Timeline of atomic and subatomic physics
Timeline of classical mechanics
Timeline of electromagnetism and classical optics
Timeline of gravitational physics and relativity
Timeline of nuclear fusion
Timeline of particle discoveries
Timeline of particle physics technology
Timeline of states of matter and phase transitions
Timeline of thermodynamics
== See also ==
Physics
Engineering
Experimental science
Measuring instrument
Pulse programming
== References ==
== Further reading ==
Taylor, John R. (1987). An Introduction to Error Analysis (2nd ed.). University Science Books. ISBN 978-0-935702-75-0.
== External links ==
Media related to Experimental physics at Wikimedia Commons
Mathematical Methods in the Physical Sciences is a 1966 textbook by mathematician Mary L. Boas intended to develop skills in mathematical problem solving needed for junior to senior-graduate courses in engineering, physics, and chemistry. The book provides a comprehensive survey of analytic techniques and provides careful statements of important theorems while omitting most detailed proofs. Each section contains a large number of problems, with selected answers. Numerical computational approaches using computers are outside the scope of the book.
The book, now in its third edition, was still widely used in university classrooms as of 1999 and is frequently cited in other textbooks and scientific papers.
== Chapters ==
Infinite series, power series
Complex numbers
Linear algebra
Partial differentiation
Multiple integrals
Vector analysis
Fourier series and transforms
Ordinary differential equations
Calculus of variations
Tensor analysis
Special functions
Series solution of differential equations; Legendre, Bessel, Hermite, and Laguerre functions
Partial differential equations
Functions of a complex variable
Integral transforms
Probability and statistics
== References ==
== Further reading ==
In physics, relativistic mechanics refers to mechanics compatible with special relativity (SR) and general relativity (GR). It provides a non-quantum mechanical description of a system of particles, or of a fluid, in cases where the velocities of moving objects are comparable to the speed of light c. As a result, classical mechanics is extended correctly to particles traveling at high velocities and energies, and provides a consistent inclusion of electromagnetism with the mechanics of particles. This was not possible in Galilean relativity, where particles and light would be permitted to travel at any speed, including faster than light. The foundations of relativistic mechanics are the postulates of special relativity and general relativity. The unification of SR with quantum mechanics is relativistic quantum mechanics, while the corresponding attempt for GR is quantum gravity, an unsolved problem in physics.
As with classical mechanics, the subject can be divided into "kinematics", the description of motion by specifying positions, velocities and accelerations, and "dynamics", a full description by considering energies, momenta, and angular momenta and their conservation laws, and forces acting on particles or exerted by particles. There is however a subtlety: what appears to be "moving" and what is "at rest"—which is termed "statics" in classical mechanics—depends on the relative motion of observers who measure in frames of reference.
Some definitions and concepts from classical mechanics do carry over to SR, such as force as the time derivative of momentum (Newton's second law), the work done by a particle as the line integral of force exerted on the particle along a path, and power as the time derivative of work done. However, there are a number of significant modifications to the remaining definitions and formulae. SR states that motion is relative and the laws of physics are the same for all experimenters irrespective of their inertial reference frames. In addition to modifying notions of space and time, SR forces one to reconsider the concepts of mass, momentum, and energy all of which are important constructs in Newtonian mechanics. SR shows that these concepts are all different aspects of the same physical quantity in much the same way that it shows space and time to be interrelated.
The equations become more complicated in the more familiar three-dimensional vector calculus formalism, due to the nonlinearity in the Lorentz factor, which accurately accounts for relativistic velocity dependence and the speed limit of all particles and fields. However, they have a simpler and elegant form in four-dimensional spacetime, which includes flat Minkowski space (SR) and curved spacetime (GR), because three-dimensional vectors derived from space and scalars derived from time can be collected into four vectors, or four-dimensional tensors. The six-component angular momentum tensor is sometimes called a bivector because in the 3D viewpoint it is two vectors (one of these, the conventional angular momentum, being an axial vector).
== Relativistic kinematics ==
The relativistic four-velocity, that is the four-vector representing velocity in relativity, is defined as follows:
{\displaystyle {\boldsymbol {\mathbf {U} }}={\frac {d{\boldsymbol {\mathbf {X} }}}{d\tau }}=\left({\frac {cdt}{d\tau }},{\frac {d\mathbf {x} }{d\tau }}\right)}
In the above, {\displaystyle \tau } is the proper time of the path through spacetime, called the world-line, followed by the object whose velocity the above represents, and {\displaystyle {\boldsymbol {\mathbf {X} }}=(ct,\mathbf {x} )} is the four-position, the coordinates of an event. Due to time dilation, the proper time is the time between two events in a frame of reference where they take place at the same location. The proper time is related to coordinate time t by:
{\displaystyle {\frac {d\tau }{dt}}={\frac {1}{\gamma (\mathbf {v} )}}}
where {\displaystyle \gamma (\mathbf {v} )} is the Lorentz factor:
{\displaystyle \gamma (\mathbf {v} )={\frac {1}{\sqrt {1-\mathbf {v} \cdot \mathbf {v} /c^{2}}}}\,\rightleftharpoons \,\gamma (v)={\frac {1}{\sqrt {1-(v/c)^{2}}}}.}
(either version may be quoted) so it follows:
{\displaystyle {\boldsymbol {\mathbf {U} }}=\gamma (\mathbf {v} )(c,\mathbf {v} )}
The last three components, apart from the factor of {\displaystyle \gamma (\mathbf {v} )}, give the velocity as seen by the observer in their own reference frame. The {\displaystyle \gamma (\mathbf {v} )} is determined by the velocity {\displaystyle \mathbf {v} } between the observer's reference frame and the object's frame, which is the frame in which its proper time is measured. This quantity is invariant under Lorentz transformation, so to check what an observer in a different reference frame sees, one simply multiplies the velocity four-vector by the Lorentz transformation matrix between the two reference frames.
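As a concrete check of the definitions above, the sketch below (plain Python, with c set to 1 — a unit choice made here for convenience, not part of the text) builds the four-velocity U = γ(v)(c, v) for a few 3-velocities and verifies that its Minkowski norm U·U, taken with signature (+, −, −, −), equals c² regardless of the speed.

```python
import math

C = 1.0  # speed of light in natural units (a convenience assumption)

def gamma(v):
    """Lorentz factor for a 3-velocity vector v."""
    v2 = sum(vi * vi for vi in v)
    return 1.0 / math.sqrt(1.0 - v2 / C**2)

def four_velocity(v):
    """Four-velocity U = gamma(v) * (c, v) for 3-velocity v."""
    g = gamma(v)
    return (g * C, g * v[0], g * v[1], g * v[2])

def minkowski_norm_sq(u):
    """U . U with signature (+, -, -, -)."""
    return u[0]**2 - u[1]**2 - u[2]**2 - u[3]**2

# The norm of the four-velocity is c^2 for any subluminal velocity.
for v in [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (0.3, 0.4, 0.6)]:
    assert abs(minkowski_norm_sq(four_velocity(v)) - C**2) < 1e-9
```

This invariance is exactly why multiplying U by a Lorentz transformation matrix is all that is needed to change observers.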
== Relativistic dynamics ==
=== Rest mass and relativistic mass ===
The mass of an object as measured in its own frame of reference is called its rest mass or invariant mass and is sometimes written {\displaystyle m_{0}}. If an object moves with velocity {\displaystyle \mathbf {v} } in some other reference frame, the quantity {\displaystyle m=\gamma (\mathbf {v} )m_{0}} is often called the object's "relativistic mass" in that frame.
Some authors use {\displaystyle m} to denote rest mass, but for the sake of clarity this article will follow the convention of using {\displaystyle m} for relativistic mass and {\displaystyle m_{0}} for rest mass.
Lev Okun has suggested that the concept of relativistic mass "has no rational justification today" and should no longer be taught. Other physicists, including Wolfgang Rindler and T. R. Sandin, contend that the concept is useful. See mass in special relativity for more information on this debate.
A particle whose rest mass is zero is called massless. Photons and gravitons are thought to be massless, and neutrinos are nearly so.
=== Relativistic energy and momentum ===
There are a couple of (equivalent) ways to define momentum and energy in SR. One method uses conservation laws. If these laws are to remain valid in SR they must be true in every possible reference frame. However, if one does some simple thought experiments using the Newtonian definitions of momentum and energy, one sees that these quantities are not conserved in SR. One can rescue the idea of conservation by making some small modifications to the definitions to account for relativistic velocities. It is these new definitions which are taken as the correct ones for momentum and energy in SR.
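The failure of the Newtonian definition can be made concrete. In the sketch below (plain Python with c = 1, a unit choice made here for convenience), two equal masses collide head-on and stick together in their centre-of-momentum frame; viewed from a boosted frame, the Newtonian total momentum Σmv is not conserved, while the relativistic total Σγ(v)m₀v is. (The merged object's rest mass is taken as 2γ(u)m, since the incoming kinetic energy ends up as rest energy.)

```python
import math

C = 1.0  # c = 1 units (a convenience assumption)
m = 1.0  # rest mass of each incoming particle
u = 0.6  # speed of each particle in the centre-of-momentum frame
w = 0.5  # velocity of the boosted observer's frame

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def boost(v, w):
    """Relativistic velocity addition: velocity v as seen from a frame moving at w."""
    return (v - w) / (1.0 - v * w / C**2)

# Velocities in the boosted frame, before and after the collision.
v1, v2 = boost(u, w), boost(-u, w)
v_final = boost(0.0, w)              # merged object is at rest in the COM frame
M = 2.0 * gamma(u) * m               # its rest mass includes the kinetic energy

newtonian_before = m * v1 + m * v2
newtonian_after = 2.0 * m * v_final  # Newtonian bookkeeping: masses just add

relativistic_before = gamma(v1) * m * v1 + gamma(v2) * m * v2
relativistic_after = gamma(v_final) * M * v_final

assert abs(newtonian_before - newtonian_after) > 1e-3        # NOT conserved
assert abs(relativistic_before - relativistic_after) < 1e-9  # conserved
```

The particular numbers (u = 0.6, w = 0.5) are arbitrary; any subluminal choice shows the same pattern.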
The four-momentum of an object is straightforward, identical in form to the classical momentum, but replacing 3-vectors with 4-vectors:
{\displaystyle {\boldsymbol {\mathbf {P} }}=m_{0}{\boldsymbol {\mathbf {U} }}=(E/c,\mathbf {p} )}
The energy and momentum of an object with invariant mass {\displaystyle m_{0}}, moving with velocity {\displaystyle \mathbf {v} } with respect to a given frame of reference, are respectively given by
{\displaystyle {\begin{aligned}E&=\gamma (\mathbf {v} )m_{0}c^{2}\\\mathbf {p} &=\gamma (\mathbf {v} )m_{0}\mathbf {v} \end{aligned}}}
The factor {\displaystyle \gamma } comes from the definition of the four-velocity described above. The appearance of {\displaystyle \gamma } may be stated in an alternative way, which will be explained in the next section.
The kinetic energy, {\displaystyle K}, is defined as
{\displaystyle K=(\gamma -1)m_{0}c^{2}=E-m_{0}c^{2}\,,}
and the speed as a function of kinetic energy is given by
{\displaystyle v=c{\sqrt {1-\left({\frac {m_{0}c^{2}}{K+m_{0}c^{2}}}\right)^{2}}}={\frac {c{\sqrt {K(K+2m_{0}c^{2})}}}{K+m_{0}c^{2}}}={\frac {c{\sqrt {(E-m_{0}c^{2})(E+m_{0}c^{2})}}}{E}}={\frac {pc^{2}}{E}}\,.}
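Both the definition of K and the speed formula above can be checked numerically: at low speed K reduces to the Newtonian ½m₀v², and for any speed the first form of v(K) exactly inverts K = (γ − 1)m₀c². A minimal sketch (plain Python with c = 1, a unit convention chosen here for convenience):

```python
import math

C = 1.0   # c = 1 units (a convenience assumption)
m0 = 1.0  # rest mass

def kinetic_energy(v):
    """K = (gamma - 1) m0 c^2."""
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (g - 1.0) * m0 * C**2

def speed_from_kinetic_energy(K):
    """v = c sqrt(1 - (m0 c^2 / (K + m0 c^2))^2), the first form above."""
    rest = m0 * C**2
    return C * math.sqrt(1.0 - (rest / (K + rest)) ** 2)

# Round trip: speed -> kinetic energy -> speed.
for v in (0.1, 0.5, 0.9, 0.999):
    assert abs(speed_from_kinetic_energy(kinetic_energy(v)) - v) < 1e-9

# Low-speed limit: K ~ (1/2) m0 v^2, with leading relative correction ~ 3v^2/4c^2.
v = 0.01 * C
assert abs(kinetic_energy(v) - 0.5 * m0 * v**2) / (0.5 * m0 * v**2) < 1e-4
```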
The spatial momentum may be written as {\displaystyle \mathbf {p} =m\mathbf {v} }, preserving the form from Newtonian mechanics with relativistic mass substituted for Newtonian mass. However, this substitution fails for some quantities, including force and kinetic energy. Moreover, the relativistic mass is not invariant under Lorentz transformations, while the rest mass is. For this reason, many people prefer to use the rest mass and account for {\displaystyle \gamma } explicitly through the 4-velocity or coordinate time.
A simple relation between energy, momentum, and velocity may be obtained from the definitions of energy and momentum by multiplying the energy by {\displaystyle \mathbf {v} }, multiplying the momentum by {\displaystyle c^{2}}, and noting that the two expressions are equal. This yields
{\displaystyle \mathbf {p} c^{2}=E\mathbf {v} }
{\displaystyle \mathbf {v} } may then be eliminated by dividing this equation by {\displaystyle c} and squaring,
{\displaystyle (pc)^{2}=E^{2}(v/c)^{2}}
dividing the definition of energy by {\displaystyle \gamma } and squaring,
{\displaystyle E^{2}\left(1-(v/c)^{2}\right)=\left(m_{0}c^{2}\right)^{2}}
and substituting:
{\displaystyle E^{2}-(pc)^{2}=\left(m_{0}c^{2}\right)^{2}}
This is the relativistic energy–momentum relation.
While the energy {\displaystyle E} and the momentum {\displaystyle \mathbf {p} } depend on the frame of reference in which they are measured, the quantity {\displaystyle E^{2}-(pc)^{2}} is invariant. Its value is {\displaystyle -c^{2}} times the squared magnitude of the 4-momentum vector.
The invariant mass of a system may be written as
{\displaystyle {m_{0}}_{\text{tot}}={\frac {\sqrt {E_{\text{tot}}^{2}-(p_{\text{tot}}c)^{2}}}{c^{2}}}}
Due to kinetic energy and binding energy, this quantity is different from the sum of the rest masses of the particles of which the system is composed. Rest mass is not a conserved quantity in special relativity, unlike the situation in Newtonian physics. However, even if an object is changing internally, so long as it does not exchange energy or momentum with its surroundings, its rest mass will not change and can be calculated with the same result in any reference frame.
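For instance, two photons moving in opposite directions each have zero rest mass, yet the system they form has a nonzero invariant mass; a short illustrative sketch in natural units (c = 1):

```python
import math

def invariant_mass(particles):
    """Invariant mass of a system from total energy and total momentum.

    particles: list of (E, px, py, pz) tuples in natural units (c = 1).
    """
    E_tot = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E_tot ** 2 - (px ** 2 + py ** 2 + pz ** 2))

# Two photons (rest mass zero, E = |p|) moving in opposite directions:
photons = [(1.0, 1.0, 0.0, 0.0), (1.0, -1.0, 0.0, 0.0)]
print(invariant_mass(photons))   # 2.0, although each photon alone is massless
```

The momenta cancel while the energies add, so the system's invariant mass exceeds the sum of the constituents' rest masses, exactly as described above.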
=== Mass–energy equivalence ===
The relativistic energy–momentum equation holds for all particles, even for massless particles for which m0 = 0. In this case:
{\displaystyle E=pc}
When substituted into Ev = c2p, this gives v = c: massless particles (such as photons) always travel at the speed of light.
Notice that the rest mass of a composite system will generally be slightly different from the sum of the rest masses of its parts since, in its rest frame, their kinetic energy will increase its mass and their (negative) binding energy will decrease its mass. In particular, a hypothetical "box of light" would have rest mass even though made of particles which do not since their momenta would cancel.
Looking at the above formula for invariant mass of a system, one sees that, when a single massive object is at rest (v = 0, p = 0), there is a non-zero mass remaining: m0 = E/c2.
The corresponding energy, which is also the total energy when a single particle is at rest, is referred to as "rest energy". In systems of particles which are seen from a moving inertial frame, the total energy increases and so does the momentum. However, for single particles the rest mass remains constant, and for systems of particles the invariant mass remains constant, because in both cases the increases in energy and momentum subtract from each other and cancel. Thus, the invariant mass of systems of particles is a calculated constant for all observers, as is the rest mass of single particles.
=== The mass of systems and conservation of invariant mass ===
For systems of particles, the energy–momentum equation requires summing the momentum vectors of the particles:
{\displaystyle E^{2}-\mathbf {p} \cdot \mathbf {p} c^{2}=m_{0}^{2}c^{4}}
The inertial frame in which the momenta of all particles sums to zero is called the center of momentum frame. In this special frame, the relativistic energy–momentum equation has p = 0, and thus gives the invariant mass of the system as merely the total energy of all parts of the system, divided by c2
{\displaystyle m_{0,\,{\rm {system}}}=\sum _{n}E_{n}/c^{2}}
This is the invariant mass of any system which is measured in a frame where it has zero total momentum, such as a bottle of hot gas on a scale. In such a system, the mass which the scale weighs is the invariant mass, and it depends on the total energy of the system. It is thus more than the sum of the rest masses of the molecules; it also includes all the totaled energies in the system. Like energy and momentum, the invariant mass of isolated systems cannot be changed so long as the system remains totally closed (no mass or energy allowed in or out), because the total relativistic energy of the system remains constant so long as nothing can enter or leave it.
Translating such a system to an inertial frame which is not the center of momentum frame causes an increase in its energy and momentum without an increase in invariant mass. E = m0c2, however, applies only to isolated systems in their center-of-momentum frame where momentum sums to zero.
Taking this formula at face value, we see that in relativity, mass is simply energy by another name (and measured in different units). In 1927 Einstein remarked about special relativity, "Under this theory mass is not an unalterable magnitude, but a magnitude dependent on (and, indeed, identical with) the amount of energy."
=== Closed (isolated) systems ===
In a "totally-closed" system (i.e., isolated system) the total energy, the total momentum, and hence the total invariant mass are conserved. Einstein's formula for change in mass translates to its simplest ΔE = Δmc2 form, however, only in non-closed systems in which energy is allowed to escape (for example, as heat and light), and thus invariant mass is reduced. Einstein's equation shows that such systems must lose mass, in accordance with the above formula, in proportion to the energy they lose to the surroundings. Conversely, if one can measure the differences in mass between a system before it undergoes a reaction which releases heat and light, and the system after the reaction when heat and light have escaped, one can estimate the amount of energy which escapes the system.
==== Chemical and nuclear reactions ====
In both nuclear and chemical reactions, such energy represents the difference in binding energies of electrons in atoms (for chemistry) or between nucleons in nuclei (in nuclear reactions). In both cases, the mass difference between reactants and (cooled) products measures the mass of heat and light which will escape the reaction, and thus (using the equation) gives the equivalent energy of heat and light which may be emitted if the reaction proceeds.
In chemistry, the mass differences associated with the emitted energy are around 10−9 of the molecular mass. However, in nuclear reactions the energies are so large that they are associated with mass differences, which can be estimated in advance, if the products and reactants have been weighed (atoms can be weighed indirectly by using atomic masses, which are always the same for each nuclide). Thus, Einstein's formula becomes important when one has measured the masses of different atomic nuclei. By looking at the difference in masses, one can predict which nuclei have stored energy that can be released by certain nuclear reactions, providing important information which was useful in the development of nuclear energy and, consequently, the nuclear bomb. Historically, for example, Lise Meitner was able to use the mass differences in nuclei to estimate that there was enough energy available to make nuclear fission a favorable process. The implications of this special form of Einstein's formula have thus made it one of the most famous equations in all of science.
==== Center of momentum frame ====
The equation E = m0c2 applies only to isolated systems in their center of momentum frame. It has been popularly misunderstood to mean that mass may be converted to energy, after which the mass disappears. However, popular explanations of the equation as applied to systems include open (non-isolated) systems for which heat and light are allowed to escape, when they otherwise would have contributed to the mass (invariant mass) of the system.
Historically, confusion about mass being "converted" to energy has been aided by confusion between mass and "matter", where matter is defined as fermion particles. In such a definition, electromagnetic radiation and kinetic energy (or heat) are not considered "matter". In some situations, matter may indeed be converted to non-matter forms of energy (see above), but in all these situations, the matter and non-matter forms of energy still retain their original mass.
For isolated systems (closed to all mass and energy exchange), mass never disappears in the center of momentum frame, because energy cannot disappear. Instead, this equation, in context, means only that when any energy is added to, or escapes from, a system in the center-of-momentum frame, the system will be measured as having gained or lost mass, in proportion to energy added or removed. Thus, in theory, if an atomic bomb were placed in a box strong enough to hold its blast, and detonated upon a scale, the mass of this closed system would not change, and the scale would not move. Only when a transparent "window" was opened in the super-strong plasma-filled box, and light and heat were allowed to escape in a beam, and the bomb components to cool, would the system lose the mass associated with the energy of the blast. In a 21 kiloton bomb, for example, about a gram of light and heat is created. If this heat and light were allowed to escape, the remains of the bomb would lose a gram of mass, as it cooled. In this thought-experiment, the light and heat carry away the gram of mass, and would therefore deposit this gram of mass in the objects that absorb them.
=== Angular momentum ===
In relativistic mechanics, the time-varying mass moment
{\displaystyle \mathbf {N} =m\left(\mathbf {x} -t\mathbf {v} \right)}
and orbital 3-angular momentum
{\displaystyle \mathbf {L} =\mathbf {x} \times \mathbf {p} }
of a point-like particle are combined into a four-dimensional bivector in terms of the 4-position X and the 4-momentum P of the particle:
{\displaystyle \mathbf {M} =\mathbf {X} \wedge \mathbf {P} }
where ∧ denotes the exterior product. This tensor is additive: the total angular momentum of a system is the sum of the angular momentum tensors for each constituent of the system. So, for an assembly of discrete particles one sums the angular momentum tensors over the particles, or integrates the density of angular momentum over the extent of a continuous mass distribution.
Each of the six components forms a conserved quantity when aggregated with the corresponding components for other objects and fields.
=== Force ===
In special relativity, Newton's second law does not hold in the form F = ma, but it does if it is expressed as
{\displaystyle \mathbf {F} ={\frac {d\mathbf {p} }{dt}}}
where p = γ(v)m0v is the momentum as defined above and m0 is the invariant mass. Thus, the force is given by
{\displaystyle \mathbf {F} =\gamma ^{3}m_{0}\,\mathbf {a} _{\parallel }+\gamma m_{0}\,\mathbf {a} _{\perp }\ \mathrm {where} \ \gamma =\gamma (\mathbf {v} )}
Consequently, in some old texts, γ(v)3m0 is referred to as the longitudinal mass, and γ(v)m0 is referred to as the transverse mass, which is numerically the same as the relativistic mass. See mass in special relativity.
If one inverts this to calculate acceleration from force, one gets
{\displaystyle \mathbf {a} ={\frac {1}{m_{0}\gamma (\mathbf {v} )}}\left(\mathbf {F} -{\frac {(\mathbf {v} \cdot \mathbf {F} )\mathbf {v} }{c^{2}}}\right)\,.}
The force described in this section is the classical 3-D force which is not a four-vector. This 3-D force is the appropriate concept of force since it is the force which obeys Newton's third law of motion. It should not be confused with the so-called four-force which is merely the 3-D force in the comoving frame of the object transformed as if it were a four-vector. However, the density of 3-D force (linear momentum transferred per unit four-volume) is a four-vector (density of weight +1) when combined with the negative of the density of power transferred.
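One consequence of the acceleration formula above is that, unlike in Newtonian mechanics, the acceleration is generally not parallel to the applied force. A small numerical sketch in natural units (c = 1), with arbitrary sample values:

```python
import math

def acceleration(F, v, m0, c=1.0):
    """Relativistic 3-acceleration a = (F - (v.F) v / c^2) / (m0 * gamma(v))."""
    g = 1.0 / math.sqrt(1.0 - sum(vi * vi for vi in v) / c ** 2)
    vdotF = sum(vi * Fi for vi, Fi in zip(v, F))
    return [(Fi - vdotF * vi / c ** 2) / (m0 * g) for Fi, vi in zip(F, v)]

v = [0.8, 0.0, 0.0]   # particle moving along x at 0.8 c
F = [1.0, 1.0, 0.0]   # force applied at 45 degrees to the motion
a = acceleration(F, v, m0=1.0)
print(a)              # the x-component is suppressed: a tilts away from F
```

The component of acceleration along the motion is reduced by the extra factor of γ² relative to the transverse component, which is exactly the longitudinal-versus-transverse-mass distinction mentioned below.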
=== Torque ===
The torque acting on a point-like particle is defined as the derivative of the angular momentum tensor given above with respect to proper time:
{\displaystyle {\boldsymbol {\Gamma }}={\frac {d\mathbf {M} }{d\tau }}=\mathbf {X} \wedge \mathbf {F} }
or in tensor components:
{\displaystyle \Gamma _{\alpha \beta }=X_{\alpha }F_{\beta }-X_{\beta }F_{\alpha }}
where F is the 4d force acting on the particle at the event X. As with angular momentum, torque is additive, so for an extended object one sums or integrates over the distribution of mass.
=== Kinetic energy ===
The work-energy theorem says the change in kinetic energy is equal to the work done on the body. In special relativity:
{\displaystyle \Delta K=W=[\gamma _{1}-\gamma _{0}]m_{0}c^{2}.}
If in the initial state the body was at rest, so v0 = 0 and γ0(v0) = 1, and in the final state it has speed v1 = v, setting γ1(v1) = γ(v), the kinetic energy is then:
{\displaystyle K=[\gamma (v)-1]m_{0}c^{2}\,,}
a result that can be directly obtained by subtracting the rest energy m0c2 from the total relativistic energy γ(v)m0c2.
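A quick numerical check shows that this kinetic energy approaches the Newtonian value ½m0v² at low speed but far exceeds it near c (an illustrative sketch in natural units, c = 1):

```python
import math

def kinetic_energy(m0, v, c=1.0):
    """Relativistic kinetic energy K = (gamma - 1) m0 c^2."""
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (g - 1.0) * m0 * c ** 2

m0 = 1.0
for v in (0.01, 0.5, 0.99):
    K = kinetic_energy(m0, v)
    K_newton = 0.5 * m0 * v ** 2
    print(v, K, K_newton)   # the ratio K / K_newton grows without bound as v -> c
```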
=== Newtonian limit ===
The Lorentz factor γ(v) can be expanded into a Taylor series or binomial series for (v/c)2 < 1, obtaining:
{\displaystyle \gamma ={\dfrac {1}{\sqrt {1-(v/c)^{2}}}}=\sum _{n=0}^{\infty }\left({\dfrac {v}{c}}\right)^{2n}\prod _{k=1}^{n}\left({\dfrac {2k-1}{2k}}\right)=1+{\dfrac {1}{2}}\left({\dfrac {v}{c}}\right)^{2}+{\dfrac {3}{8}}\left({\dfrac {v}{c}}\right)^{4}+{\dfrac {5}{16}}\left({\dfrac {v}{c}}\right)^{6}+\cdots }
and consequently
{\displaystyle E-m_{0}c^{2}={\frac {1}{2}}m_{0}v^{2}+{\frac {3}{8}}{\frac {m_{0}v^{4}}{c^{2}}}+{\frac {5}{16}}{\frac {m_{0}v^{6}}{c^{4}}}+\cdots ;}
{\displaystyle \mathbf {p} =m_{0}\mathbf {v} +{\frac {1}{2}}{\frac {m_{0}v^{2}\mathbf {v} }{c^{2}}}+{\frac {3}{8}}{\frac {m_{0}v^{4}\mathbf {v} }{c^{4}}}+{\frac {5}{16}}{\frac {m_{0}v^{6}\mathbf {v} }{c^{6}}}+\cdots .}
For velocities much smaller than that of light, one can neglect the terms with c2 and higher in the denominator. These formulas then reduce to the standard definitions of Newtonian kinetic energy and momentum. This is as it should be, for special relativity must agree with Newtonian mechanics at low velocities.
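The quality of the low-velocity approximation can be seen by comparing the truncated binomial series with the exact Lorentz factor (a sketch truncating after the (v/c)^6 term, as in the expansion above):

```python
import math

def gamma_exact(beta):
    """Exact Lorentz factor as a function of beta = v / c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def gamma_series(beta, terms=4):
    """Partial sum of the binomial series: sum of beta^(2n) * prod (2k-1)/(2k)."""
    total, coeff = 0.0, 1.0
    for n in range(terms):
        total += coeff * beta ** (2 * n)
        coeff *= (2 * n + 1) / (2 * n + 2)   # running product of (2k-1)/(2k)
    return total

for beta in (0.1, 0.5, 0.9):
    exact, approx = gamma_exact(beta), gamma_series(beta)
    print(beta, exact, approx, abs(exact - approx))
# the error is negligible at beta = 0.1 and grows rapidly as beta -> 1
```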
== See also ==
== References ==
== Further reading ==
General scope and special/general relativity
P.M. Whelan; M.J. Hodgeson (1978). Essential Principles of Physics (2nd ed.). John Murray. ISBN 0-7195-3382-1.
G. Woan (2010). The Cambridge Handbook of Physics Formulas. Cambridge University Press. ISBN 978-0-521-57507-2.
P.A. Tipler; G. Mosca (2008). Physics for Scientists and Engineers: With Modern Physics (6th ed.). W.H. Freeman and Co. ISBN 978-1-4292-0265-7.
R.G. Lerner; G.L. Trigg (2005). Encyclopaedia of Physics (2nd ed.). VHC Publishers, Hans Warlimont, Springer. ISBN 978-0-07-025734-4.
A. Beiser (1987). Concepts of Modern Physics (4th ed.). McGraw-Hill (International). ISBN 0-07-100144-1.
C.B. Parker (1994). McGraw Hill Encyclopaedia of Physics (2nd ed.). McGraw Hill. ISBN 0-07-051400-3.
T. Frankel (2012). The Geometry of Physics (3rd ed.). Cambridge University Press. ISBN 978-1-107-60260-1.
L.H. Greenberg (1978). Physics with Modern Applications. Holt-Saunders International W.B. Saunders and Co. ISBN 0-7216-4247-0.
A. Halpern (1988). 3000 Solved Problems in Physics, Schaum Series. Mc Graw Hill. ISBN 978-0-07-025734-4.
Electromagnetism and special relativity
G.A.G. Bennet (1974). Electricity and Modern Physics (2nd ed.). Edward Arnold (UK). ISBN 0-7131-2459-8.
I.S. Grant; W.R. Phillips; Manchester Physics (2008). Electromagnetism (2nd ed.). John Wiley & Sons. ISBN 978-0-471-92712-9.
D.J. Griffiths (2007). Introduction to Electrodynamics (3rd ed.). Pearson Education, Dorling Kindersley. ISBN 978-81-7758-293-2.
Classical mechanics and special relativity
J.R. Forshaw; A.G. Smith (2009). Dynamics and Relativity. Wiley. ISBN 978-0-470-01460-8.
D. Kleppner; R.J. Kolenkow (2010). An Introduction to Mechanics. Cambridge University Press. ISBN 978-0-521-19821-9.
L.N. Hand; J.D. Finch (2008). Analytical Mechanics. Cambridge University Press. ISBN 978-0-521-57572-0.
P.J. O'Donnell (2015). Essential Dynamics and Relativity. CRC Press. ISBN 978-1-4665-8839-4.
General relativity
D. McMahon (2006). Relativity DeMystified. Mc Graw Hill. ISBN 0-07-145545-0.
J.A. Wheeler; C. Misner; K.S. Thorne (1973). Gravitation. W.H. Freeman & Co. ISBN 0-7167-0344-0.
J.A. Wheeler; I. Ciufolini (1995). Gravitation and Inertia. Princeton University Press. ISBN 978-0-691-03323-5.
R.J.A. Lambourne (2010). Relativity, Gravitation, and Cosmology. Cambridge University Press. ISBN 978-0-521-13138-4. | Wikipedia/Relativistic_mechanics |
The World, also called Treatise on the Light (French title: Traité du monde et de la lumière), is a book by René Descartes (1596–1650). Written between 1629 and 1633, it contains a nearly complete version of his philosophy, from method, to metaphysics, to physics and biology.
Descartes espoused mechanical philosophy, a form of natural philosophy popular in the 17th century. He thought everything physical in the universe to be made of tiny "corpuscles" of matter. Corpuscularianism is closely related to atomism. The main difference was that Descartes maintained that there could be no vacuum, and all matter was constantly swirling to prevent a void as corpuscles moved through other matter. The World presents a corpuscularian cosmology in which swirling vortices explain, among other phenomena, the creation of the Solar System and the circular motion of planets around the Sun.
The World rests on the heliocentric view, first explicated in Western Europe by Copernicus. Descartes delayed the book's release upon news of the Roman Inquisition's conviction of Galileo for "suspicion of heresy" and sentencing to house arrest. Descartes discussed his work on the book, and his decision not to release it, in letters with another philosopher, Marin Mersenne.
Some material from The World was revised for publication as Principia philosophiae or Principles of Philosophy (1644), a Latin textbook at first intended by Descartes to replace the Aristotelian textbooks then used in universities. In the Principles the heliocentric tone was softened slightly with a relativist frame of reference. The last chapter of The World was published separately as De Homine (On Man) in 1662. The rest of The World was finally published in 1664, and the entire text in 1677.
== Contents of The World ==
On the Difference Between our Sensations and the Things That Produce Them
In What the Heat and Light of Fire Consists
On Hardness and Liquidity
On the Void, and How it Happens that Our Senses Are Not Aware of Certain Bodies
On the Number of Elements and on Their Qualities
Description of a New World, and on the Qualities of the Matter of Which it is Composed
On the Laws of Nature of this New World
On the Formation of the Sun and the Stars of the New World
On the Origin and the Course of the Planets and Comets in General; and of Comets in Particular
On the Planets in General, and in Particular on the Earth and Moon
On Weight
On the Ebb and Flow of the Sea
On Light
On the Properties of Light
That the Face of the Heaven of That New World Must Appear to Its Inhabitants Completely like That of Our World
== The void and particles in nature ==
Before Descartes begins to describe his theories in physics, he introduces the reader to the idea that there is no relationship between our sensations and what creates these sensations, thereby casting doubt on the Aristotelian belief that such a relationship existed. Next he describes how fire is capable of breaking wood apart into its minuscule parts through the rapid motion of the particles of fire within the flames. This rapid motion of particles is what gives fire its heat, since Descartes claims heat is nothing more than just the motion of particles, and what causes it to produce light.
According to Descartes, the motion, or agitation, of these particles is what gives substances their properties (i.e. their fluidity and hardness). Fire is the most fluid and has enough energy to render most other bodies fluid whereas the particles of air lack the force necessary to do the same. Hard bodies have particles that are all equally hard to separate from the whole.
Based on his observations of how resistant nature is to a vacuum, Descartes deduced that all particles in nature are packed together such that there is no void or empty space between them.
Descartes describes substances as consisting only of three elementary elements: fire, air and earth, from which the properties of any substance can be characterized by its composition of these elements, the size and arrangement of the particles in the substance, and the motion of its particles.
== Cartesian laws of motion ==
Descartes asserts several laws governing the motion of these particles and all other objects in nature:
“…each particular part of matter always continues in the same state unless collision with others forces it to change its state.”
“…when one of these bodies pushes another, it cannot give the 2nd any motion, except by losing as much of its own motion at the same time…”
“…when a body is moving…each of its parts individually tends always to continue moving along a straight line” (Gaukroger)
Descartes in Principles of Philosophy added to these his laws on elastic collision.
== The Cartesian universe ==
Descartes elaborates on how the universe could have started from utter chaos and with these basic laws could have had its particles arranged so as to resemble the universe we observe today. Once the particles in the chaotic universe began to move, the overall motion would have been circular because there is no void in nature, so whenever a single particle moves, another particle must also move to occupy the space where the previous particle once was. This type of circular motion, or vortex, would have created what Descartes observed to be the orbits of the planets about the Sun, with the heavier objects spinning out towards the outside of the vortex and the lighter objects remaining closer to the center. To explain this, Descartes used the analogy of a river that carried both floating debris (leaves, feathers, etc.) and heavy boats. If the river abruptly arrived at a sharp bend, the boats would follow Descartes' third law of motion and hit the shore of the river, since the flow of the particles in the river would not have enough force to change the direction of the boat. However, the much lighter floating debris would follow the river, since the particles in the river would have sufficient force to change the direction of the debris. In the heavens, it is the circular flow of celestial particles, or aether, that causes the motion of the planets to be circular.
As to the reason why heavy objects on Earth fall, Descartes explained this through the agitation of the particles in the atmosphere. The particles of the aether have greater agitation than the particles of air, which in turn have greater agitation than the particles that compose terrestrial objects (e.g. stones). The greater agitation of the aether prevents the particles of air from escaping into the heavens, just as the agitation of air particles forces terrestrial bodies, whose particles have far less agitation than those of air, to descend towards the Earth.
== Cartesian theory on light ==
With his laws of motion set forth and the universe operating under these laws, Descartes next begins to describe his theory on the nature of light. Descartes believed that light traveled instantaneously (a common belief at the time) as an impulse across all the adjacent particles in nature, since Descartes believed nature was without a void. To illustrate this, Descartes used the example of a stick being pushed against some body. Just as the force which is felt at one end of the stick is instantly transferred and felt at the other end, so is the impulse of light sent across the heavens and through the atmosphere from luminous bodies to our eyes. Descartes attributed 12 distinct properties to light:
Light extends radially in all directions from luminous bodies
Light extends out to any distance
Light travels instantaneously
Light travels ordinarily in straight lines or rays
Several rays can come from different points and meet at the same point
Several rays can start at the same point and travel in different directions
Several rays can pass through the same point without impeding each other
If the rays are of very unequal force, then they can sometimes impede one another
Also:
9) and 10) Rays can be diverted by reflection or by refraction
11) and 12) The force of a ray can be augmented or diminished by the disposition of the matter that receives it.
== Notes ==
== References ==
Descartes, René, Le Monde, L'Homme, critical edition with an introduction and notes by Annie Bitbol-Hespériès, Paris: Seuil, 1996.
Descartes, René. Le Monde, ou Traite de la lumiere. Translation and introduction by Michael Sean Mahoney. New York: Abaris Books, 1979. (French and English text on facing pages) Mahoney's English translation
Descartes, René. The World and Other Writings. Trans. Stephen Gaukroger. New York: Cambridge University Press, 1998.
Melchert, Norman (2002). The Great Conversation: A Historical Introduction to Philosophy. McGraw Hill. ISBN 0-19-517510-7.
== External links ==
Online version | Wikipedia/Cartesian_physics |
In mathematical analysis and in probability theory, a σ-algebra ("sigma algebra") is part of the formalism for defining sets that can be measured. In calculus and analysis, for example, σ-algebras are used to define the concept of sets with area or volume. In probability theory, they are used to define events with a well-defined probability. In this way, σ-algebras help to formalize the notion of size.
In formal terms, a σ-algebra (also σ-field, where the σ comes from the German "Summe", meaning "sum") on a set X is a nonempty collection Σ of subsets of X closed under complement, countable unions, and countable intersections. The ordered pair
{\displaystyle (X,\Sigma )}
is called a measurable space.
The set X is understood to be an ambient space (such as the 2D plane or the set of outcomes when rolling a six-sided die {1,2,3,4,5,6}), and the collection Σ is a choice of subsets declared to have a well-defined size. The closure requirements for σ-algebras are designed to capture our intuitive ideas about how sizes combine: if there is a well-defined probability that an event occurs, there should be a well-defined probability that it does not occur (closure under complements); if several sets have a well-defined size, so should their combination (countable unions); if several events have a well-defined probability of occurring, so should the event where they all occur simultaneously (countable intersections).
The definition of σ-algebra resembles other mathematical structures such as a topology (which is required to be closed under all unions but only finite intersections, and which doesn't necessarily contain all complements of its sets) or a set algebra (which is closed only under finite unions and intersections).
== Examples of σ-algebras ==
If {\displaystyle X=\{a,b,c,d\}} one possible σ-algebra on {\displaystyle X} is
{\displaystyle \Sigma =\{\varnothing ,\{a,b\},\{c,d\},\{a,b,c,d\}\},}
where {\displaystyle \varnothing } is the empty set. In general, a finite algebra is always a σ-algebra.
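For a finite ambient set, the defining closure properties can be checked mechanically. The sketch below (the function name and set representation are my own) verifies the four-set example; in the finite case countable unions reduce to finite ones, and closure under intersection follows from complements and unions by De Morgan's laws:

```python
def is_sigma_algebra(X, collection):
    """Check the sigma-algebra axioms for a collection of subsets of a finite X."""
    sigma = {frozenset(s) for s in collection}
    X = frozenset(X)
    if X not in sigma:                                  # contains the total space
        return False
    if any(X - A not in sigma for A in sigma):          # closed under complement
        return False
    if any(A | B not in sigma for A in sigma for B in sigma):  # closed under union
        return False
    return True

X = {"a", "b", "c", "d"}
Sigma = [set(), {"a", "b"}, {"c", "d"}, {"a", "b", "c", "d"}]
print(is_sigma_algebra(X, Sigma))              # True
print(is_sigma_algebra(X, [set(), {"a"}, X]))  # False: complement of {"a"} missing
```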
If {\displaystyle \{A_{1},A_{2},A_{3},\ldots \}} is a countable partition of {\displaystyle X} then the collection of all unions of sets in the partition (including the empty set) is a σ-algebra.
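For a finite partition this generated σ-algebra can be enumerated directly as all unions of blocks (a sketch; the function name is my own):

```python
from itertools import chain, combinations

def sigma_algebra_from_partition(blocks):
    """The sigma-algebra generated by a finite partition: all unions of blocks."""
    blocks = [frozenset(b) for b in blocks]
    all_combos = chain.from_iterable(
        combinations(blocks, r) for r in range(len(blocks) + 1))
    # the empty combination yields frozenset(), i.e. the empty set
    return {frozenset().union(*combo) for combo in all_combos}

partition = [{1, 2}, {3}, {4, 5, 6}]   # a partition of {1, ..., 6}
sigma = sigma_algebra_from_partition(partition)
print(len(sigma))                      # 8 = 2**3 sets, one per subset of blocks
```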
A more useful example is the set of subsets of the real line formed by starting with all open intervals and adding in all countable unions, countable intersections, and relative complements and continuing this process (by transfinite iteration through all countable ordinals) until the relevant closure properties are achieved (a construction known as the Borel hierarchy).
== Motivation ==
There are at least three key motivators for σ-algebras: defining measures, manipulating limits of sets, and managing partial information characterized by sets.
=== Measure ===
A measure on {\displaystyle X} is a function that assigns a non-negative real number to subsets of {\displaystyle X;} this can be thought of as making precise a notion of "size" or "volume" for sets. We want the size of the union of disjoint sets to be the sum of their individual sizes, even for an infinite sequence of disjoint sets.
One would like to assign a size to every subset of {\displaystyle X,} but in many natural settings, this is not possible. For example, the axiom of choice implies that when the size under consideration is the ordinary notion of length for subsets of the real line, then there exist sets for which no size exists, for example, the Vitali sets. For this reason, one considers instead a smaller collection of privileged subsets of {\displaystyle X.}
These subsets will be called the measurable sets. They are closed under operations that one would expect for measurable sets, that is, the complement of a measurable set is a measurable set and the countable union of measurable sets is a measurable set. Non-empty collections of sets with these properties are called σ-algebras.
=== Limits of sets ===
Many uses of measure, such as the probability concept of almost sure convergence, involve limits of sequences of sets. For this, closure under countable unions and intersections is paramount. Set limits are defined as follows on σ-algebras.
The limit supremum or outer limit of a sequence {\displaystyle A_{1},A_{2},A_{3},\ldots } of subsets of {\displaystyle X} is
{\displaystyle \limsup _{n\to \infty }A_{n}=\bigcap _{n=1}^{\infty }\bigcup _{m=n}^{\infty }A_{m}=\bigcap _{n=1}^{\infty }\left(A_{n}\cup A_{n+1}\cup \cdots \right).}
It consists of all points {\displaystyle x} that are in infinitely many of these sets (or equivalently, that are in cofinally many of them). That is, {\displaystyle x\in \limsup _{n\to \infty }A_{n}} if and only if there exists an infinite subsequence {\displaystyle A_{n_{1}},A_{n_{2}},\ldots } (where {\displaystyle n_{1}<n_{2}<\cdots }) of sets that all contain {\displaystyle x;} that is, such that {\displaystyle x\in A_{n_{1}}\cap A_{n_{2}}\cap \cdots .}
The limit infimum or inner limit of a sequence {\displaystyle A_{1},A_{2},A_{3},\ldots } of subsets of {\displaystyle X} is
{\displaystyle \liminf _{n\to \infty }A_{n}=\bigcup _{n=1}^{\infty }\bigcap _{m=n}^{\infty }A_{m}=\bigcup _{n=1}^{\infty }\left(A_{n}\cap A_{n+1}\cap \cdots \right).}
It consists of all points that are in all but finitely many of these sets (or equivalently, that are eventually in all of them). That is, {\displaystyle x\in \liminf _{n\to \infty }A_{n}} if and only if there exists an index {\displaystyle N\in \mathbb {N} } such that {\displaystyle A_{N},A_{N+1},\ldots } all contain {\displaystyle x;} that is, such that {\displaystyle x\in A_{N}\cap A_{N+1}\cap \cdots .}
The inner limit is always a subset of the outer limit:
{\displaystyle \liminf _{n\to \infty }A_{n}~\subseteq ~\limsup _{n\to \infty }A_{n}.}
If these two sets are equal then their limit {\displaystyle \lim _{n\to \infty }A_{n}} exists and is equal to this common set:
{\displaystyle \lim _{n\to \infty }A_{n}:=\liminf _{n\to \infty }A_{n}=\limsup _{n\to \infty }A_{n}.}
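A hands-on way to see these definitions at work is to compute tail unions and intersections for an eventually periodic sequence of sets (a sketch; for finitely many distinct sets the tails stabilize, so a finite truncation suffices):

```python
def lim_sup(sets):
    """Points in infinitely many sets: intersection over n of the union of tails.

    Valid for eventually periodic sequences, where a finite tail suffices.
    """
    n = len(sets)
    return set.intersection(*(set.union(*sets[k:]) for k in range(n // 2)))

def lim_inf(sets):
    """Points in all but finitely many sets: union over n of the intersection of tails."""
    n = len(sets)
    return set.union(*(set.intersection(*sets[k:]) for k in range(n // 2)))

# A_n alternates between {1, 2} and {2, 3}:
A = [{1, 2}, {2, 3}] * 4
print(lim_sup(A))   # {1, 2, 3}: every point appears infinitely often
print(lim_inf(A))   # {2}: only 2 is in all but finitely many sets
```

Note that the inner limit is a subset of the outer limit, as stated above, and the two differ here, so the limit of this alternating sequence does not exist.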
=== Sub σ-algebras ===
In much of probability, especially when conditional expectation is involved, one is concerned with sets that represent only part of all the possible information that can be observed. This partial information can be characterized with a smaller σ-algebra which is a subset of the principal σ-algebra; it consists of the collection of subsets relevant only to and determined only by the partial information. Formally, if
Formally, if $\Sigma ,\Sigma '$ are σ-algebras on $X$, then $\Sigma '$ is a sub σ-algebra of $\Sigma$ if $\Sigma '\subseteq \Sigma .$
The Bernoulli process provides a simple example. This consists of a sequence of random coin flips, coming up Heads ($H$) or Tails ($T$), of unbounded length. The sample space Ω consists of all possible infinite sequences of $H$ or $T$:
$$\Omega =\{H,T\}^{\infty }=\{(x_{1},x_{2},x_{3},\dots ):x_{i}\in \{H,T\},i\geq 1\}.$$
The full σ-algebra can be generated from an ascending sequence of subalgebras, by considering the information that might be obtained after observing some or all of the first $n$ coin flips. This sequence of subalgebras is given by
$${\mathcal {G}}_{n}=\{A\times \{H,T\}^{\infty }:A\subseteq \{H,T\}^{n}\},$$
the events determined by the first $n$ flips.
Each of these is finer than the last, and so they can be ordered as a filtration
$${\mathcal {G}}_{0}\subseteq {\mathcal {G}}_{1}\subseteq {\mathcal {G}}_{2}\subseteq \cdots \subseteq {\mathcal {G}}_{\infty }.$$
The first subalgebra ${\mathcal {G}}_{0}=\{\varnothing ,\Omega \}$ is the trivial algebra: it has only two elements, the empty set and the total space. The second subalgebra ${\mathcal {G}}_{1}$ has four elements: the two in ${\mathcal {G}}_{0}$ plus two more, the sequences that start with $H$ and the sequences that start with $T$. Each subalgebra is finer than the last. The $n$th subalgebra contains $2^{2^{n}}$ elements: it divides the total space $\Omega$ into the $2^{n}$ possible outcomes of the first $n$ flips and contains every union of these, including the events that ignore some of the flips.
The limiting algebra ${\mathcal {G}}_{\infty }$ is the smallest σ-algebra containing all the others. It is the algebra generated by the product topology or weak topology on the product space $\{H,T\}^{\infty }.$
== Definition and properties ==
=== Definition ===
Let $X$ be some set, and let $P(X)$ represent its power set, the set of all subsets of $X$. Then a subset $\Sigma \subseteq P(X)$ is called a σ-algebra if and only if it satisfies the following three properties:

1. $X$ is in $\Sigma$.
2. $\Sigma$ is closed under complementation: if some set $A$ is in $\Sigma ,$ then so is its complement, $X\setminus A.$
3. $\Sigma$ is closed under countable unions: if $A_{1},A_{2},A_{3},\ldots$ are in $\Sigma ,$ then so is $A=A_{1}\cup A_{2}\cup A_{3}\cup \cdots.$
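On a finite universe, closure under countable unions reduces to closure under pairwise unions, so the three defining properties can be verified mechanically. A sketch (function name is illustrative):

```python
def is_sigma_algebra(X, family):
    # Check the three defining properties over a finite universe X.
    X = frozenset(X)
    fam = {frozenset(s) for s in family}
    if X not in fam:                                     # property 1
        return False
    if any(X - A not in fam for A in fam):               # property 2: complements
        return False
    if any(A | B not in fam for A in fam for B in fam):  # property 3 (finite case)
        return False
    return True
```

For example, $\{\varnothing ,\{1\},\{2,3\},\{1,2,3\}\}$ passes, while dropping $\{2,3\}$ breaks closure under complementation.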
From these properties, it follows that the σ-algebra is also closed under countable intersections (by applying De Morgan's laws).
It also follows that the empty set $\varnothing$ is in $\Sigma ,$ since by (1) $X$ is in $\Sigma$ and (2) asserts that its complement, the empty set, is also in $\Sigma .$
Moreover, since $\{X,\varnothing \}$ satisfies all three conditions, it follows that $\{X,\varnothing \}$ is the smallest possible σ-algebra on $X.$ The largest possible σ-algebra on $X$ is $P(X).$
Elements of the σ-algebra are called measurable sets. An ordered pair $(X,\Sigma ),$ where $X$ is a set and $\Sigma$ is a σ-algebra over $X,$ is called a measurable space. A function between two measurable spaces is called a measurable function if the preimage of every measurable set is measurable. The collection of measurable spaces forms a category, with the measurable functions as morphisms. Measures are defined as certain types of functions from a σ-algebra to $[0,\infty ].$
A σ-algebra is both a π-system and a Dynkin system (λ-system). The converse is true as well, by Dynkin's theorem (see below).
=== Dynkin's π-λ theorem ===
This theorem (or the related monotone class theorem) is an essential tool for proving many results about properties of specific σ-algebras. It capitalizes on the nature of two simpler classes of sets, namely the following.
A π-system $P$ is a collection of subsets of $X$ that is closed under finitely many intersections, and
a Dynkin system (or λ-system) $D$ is a collection of subsets of $X$ that contains $X$ and is closed under complement and under countable unions of disjoint subsets.
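For finite universes the two classes can likewise be checked directly; note that a λ-system need not be a π-system. A sketch with illustrative names:

```python
def is_pi_system(family):
    # Closed under pairwise (hence finite) intersections.
    fam = {frozenset(s) for s in family}
    return all(A & B in fam for A in fam for B in fam)

def is_lambda_system(X, family):
    # Contains X, closed under complements and disjoint unions
    # (countable disjoint unions reduce to pairwise ones on a finite universe).
    X = frozenset(X)
    fam = {frozenset(s) for s in family}
    if X not in fam or any(X - A not in fam for A in fam):
        return False
    return all(A | B in fam for A in fam for B in fam if not A & B)
```

On $X=\{1,2,3,4\}$ the family $\{\varnothing ,X,\{1,2\},\{3,4\},\{1,3\},\{2,4\}\}$ is a λ-system but not a π-system, since $\{1,2\}\cap \{1,3\}=\{1\}$ is missing.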
Dynkin's π-λ theorem says that if $P$ is a π-system and $D$ is a Dynkin system that contains $P,$ then the σ-algebra $\sigma (P)$ generated by $P$ is contained in $D.$
Since certain π-systems are relatively simple classes, it may not be hard to verify that all sets in $P$ enjoy the property under consideration while, on the other hand, showing that the collection $D$ of all subsets with the property is a Dynkin system can also be straightforward. Dynkin's π-λ theorem then implies that all sets in $\sigma (P)$ enjoy the property, avoiding the task of checking it for an arbitrary set in $\sigma (P).$
One of the most fundamental uses of the π-λ theorem is to show equivalence of separately defined measures or integrals. For example, it is used to equate a probability for a random variable $X$ with the Lebesgue–Stieltjes integral typically associated with computing the probability:
$$\mathbb {P} (X\in A)=\int _{A}\,F(dx)$$
for all $A$ in the Borel σ-algebra on $\mathbb {R} ,$ where $F(x)$ is the cumulative distribution function for $X,$ defined on $\mathbb {R} ,$ while $\mathbb {P}$ is a probability measure, defined on a σ-algebra $\Sigma$ of subsets of some sample space $\Omega .$
=== Combining σ-algebras ===
Suppose $\left\{\Sigma _{\alpha }:\alpha \in {\mathcal {A}}\right\}$ is a collection of σ-algebras on a space $X.$

Meet

The intersection of a collection of σ-algebras is a σ-algebra. To emphasize its character as a σ-algebra, it is often denoted by
$$\bigwedge _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }.$$
Sketch of proof: Let $\Sigma ^{*}$ denote the intersection. Since $X$ is in every $\Sigma _{\alpha },$ $\Sigma ^{*}$ is not empty. Closure under complement and countable unions for every $\Sigma _{\alpha }$ implies that the same must be true for $\Sigma ^{*}.$ Therefore, $\Sigma ^{*}$ is a σ-algebra.
Join

The union of a collection of σ-algebras is not generally a σ-algebra, or even an algebra, but it generates a σ-algebra known as the join, which is typically denoted
$$\bigvee _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }=\sigma \left(\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\right).$$
A π-system that generates the join is
$${\mathcal {P}}=\left\{\bigcap _{i=1}^{n}A_{i}:A_{i}\in \Sigma _{\alpha _{i}},\alpha _{i}\in {\mathcal {A}},\ n\geq 1\right\}.$$
Sketch of proof: By the case $n=1,$ it is seen that each $\Sigma _{\alpha }\subseteq {\mathcal {P}},$ so
$$\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\subseteq {\mathcal {P}}.$$
This implies
$$\sigma \left(\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\right)\subseteq \sigma ({\mathcal {P}})$$
by the definition of a σ-algebra generated by a collection of subsets. On the other hand,
$${\mathcal {P}}\subseteq \sigma \left(\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\right),$$
which, by Dynkin's π-λ theorem, implies
$$\sigma ({\mathcal {P}})\subseteq \sigma \left(\bigcup _{\alpha \in {\mathcal {A}}}\Sigma _{\alpha }\right).$$
=== σ-algebras for subspaces ===
Suppose $Y$ is a subset of $X$ and let $(X,\Sigma )$ be a measurable space.

The collection $\{Y\cap B:B\in \Sigma \}$ is a σ-algebra of subsets of $Y.$
Suppose $(Y,\Lambda )$ is a measurable space. The collection $\{A\subseteq X:A\cap Y\in \Lambda \}$ is a σ-algebra of subsets of $X.$
=== Relation to σ-ring ===
A σ-algebra $\Sigma$ is just a σ-ring that contains the universal set $X.$ A σ-ring need not be a σ-algebra; for example, the measurable subsets of zero Lebesgue measure in the real line are a σ-ring but not a σ-algebra, since the real line has infinite measure and thus cannot be obtained as a countable union of such sets. If, instead of zero measure, one takes measurable subsets of finite Lebesgue measure, those form a ring but not a σ-ring, since the real line can be obtained as a countable union of such sets yet its measure is not finite.
=== Typographic note ===
σ-algebras are sometimes denoted using calligraphic capital letters, or the Fraktur typeface. Thus $(X,\Sigma )$ may be denoted as $(X,\,{\mathcal {F}})$ or $(X,\,{\mathfrak {F}}).$
== Particular cases and examples ==
=== Separable σ-algebras ===
A separable σ-algebra (or separable σ-field) is a σ-algebra ${\mathcal {F}}$ that is a separable space when considered as a metric space with metric $\rho (A,B)=\mu (A\,\triangle \,B)$ for $A,B\in {\mathcal {F}}$ and a given finite measure $\mu$ (and with $\triangle$ being the symmetric difference operator). Any σ-algebra generated by a countable collection of sets is separable, but the converse need not hold. For example, the Lebesgue σ-algebra is separable (since every Lebesgue measurable set is equivalent to some Borel set) but not countably generated (since its cardinality is higher than that of the continuum).
A separable measure space has a natural pseudometric that renders it separable as a pseudometric space. The distance between two sets is defined as the measure of the symmetric difference of the two sets. The symmetric difference of two distinct sets can have measure zero; hence the pseudometric as defined above need not be a true metric. However, if sets whose symmetric difference has measure zero are identified into a single equivalence class, the resulting quotient set can be properly metrized by the induced metric. If the measure space is separable, it can be shown that the corresponding metric space is, too.
=== Simple set-based examples ===
Let $X$ be any set.

- The family consisting only of the empty set and the set $X,$ called the minimal or trivial σ-algebra over $X.$
- The power set of $X,$ called the discrete σ-algebra.
- The collection $\{\varnothing ,A,X\setminus A,X\}$ is a simple σ-algebra generated by the subset $A.$
- The collection of subsets of $X$ which are countable or whose complements are countable is a σ-algebra (which is distinct from the power set of $X$ if and only if $X$ is uncountable). This is the σ-algebra generated by the singletons of $X.$ Note: "countable" includes finite or empty.
- The collection of all unions of sets in a countable partition of $X$ is a σ-algebra.
=== Stopping time sigma-algebras ===
A stopping time $\tau$ can define a σ-algebra ${\mathcal {F}}_{\tau },$ the so-called stopping time sigma-algebra, which in a filtered probability space describes the information up to the random time $\tau$ in the sense that, if the filtered probability space is interpreted as a random experiment, the maximum information that can be found out about the experiment from arbitrarily often repeating it until the time $\tau$ is ${\mathcal {F}}_{\tau }.$
== σ-algebras generated by families of sets ==
=== σ-algebra generated by an arbitrary family ===
Let $F$ be an arbitrary family of subsets of $X.$ Then there exists a unique smallest σ-algebra which contains every set in $F$ (even though $F$ may or may not itself be a σ-algebra). It is, in fact, the intersection of all σ-algebras containing $F.$ (See intersections of σ-algebras above.) This σ-algebra is denoted $\sigma (F)$ and is called the σ-algebra generated by $F.$
If $F$ is empty, then $\sigma (\varnothing )=\{\varnothing ,X\}.$ Otherwise $\sigma (F)$ consists of all the subsets of $X$ that can be made from elements of $F$ by a countable number of complement, union and intersection operations.
For a simple example, consider the set $X=\{1,2,3\}.$ Then the σ-algebra generated by the single subset $\{1\}$ is
$$\sigma (\{1\})=\{\varnothing ,\{1\},\{2,3\},\{1,2,3\}\}.$$
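For a finite universe, $\sigma (F)$ can be computed by closing the family under complement and pairwise union until a fixed point is reached (intersections then follow by De Morgan's laws). A sketch reproducing the example above (names are illustrative):

```python
def generated_sigma_algebra(X, family):
    # Iteratively close under complement and pairwise union until stable.
    X = frozenset(X)
    fam = {frozenset(s) for s in family} | {X, frozenset()}
    while True:
        new = fam | {X - A for A in fam} | {A | B for A in fam for B in fam}
        if new == fam:
            return fam
        fam = new
```

Running it on $X=\{1,2,3\}$ with the single generator $\{1\}$ returns exactly the four sets listed above.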
By an abuse of notation, when a collection of subsets contains only one element, $A,$ one may write $\sigma (A)$ instead of $\sigma (\{A\});$ in the prior example, $\sigma (\{1\})$ instead of $\sigma (\{\{1\}\}).$ Indeed, using $\sigma \left(A_{1},A_{2},\ldots \right)$ to mean $\sigma \left(\left\{A_{1},A_{2},\ldots \right\}\right)$ is also quite common.
There are many families of subsets that generate useful σ-algebras. Some of these are presented here.
=== σ-algebra generated by a function ===
If $f$ is a function from a set $X$ to a set $Y$ and $B$ is a σ-algebra of subsets of $Y,$ then the σ-algebra generated by the function $f,$ denoted by $\sigma (f),$ is the collection of all inverse images $f^{-1}(S)$ of the sets $S$ in $B.$ That is,
$$\sigma (f)=\left\{f^{-1}(S)\,:\,S\in B\right\}.$$
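For finite sets the definition can be evaluated literally by collecting preimages. A sketch, using the parity map as a hypothetical example:

```python
def sigma_of_f(f, domain, codomain_algebra):
    # σ(f) = { f⁻¹(S) : S in the given σ-algebra on the codomain }.
    return {frozenset(x for x in domain if f(x) in S) for S in codomain_algebra}
```

Taking $f(x)=x\bmod 2$ on $\{1,2,3,4\}$ and the power set of $\{0,1\}$ as $B$ yields the four sets $\varnothing ,\{2,4\},\{1,3\},\{1,2,3,4\}$; one can check that this collection is itself a σ-algebra.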
A function $f$ from a set $X$ to a set $Y$ is measurable with respect to a σ-algebra $\Sigma$ of subsets of $X$ if and only if $\sigma (f)$ is a subset of $\Sigma .$
One common situation, understood by default if $B$ is not specified explicitly, is when $Y$ is a metric or topological space and $B$ is the collection of Borel sets on $Y.$
If $f$ is a function from $X$ to $\mathbb {R} ^{n}$ then $\sigma (f)$ is generated by the family of subsets which are inverse images of intervals/rectangles in $\mathbb {R} ^{n}:$
$$\sigma (f)=\sigma \left(\left\{f^{-1}(\left[a_{1},b_{1}\right]\times \cdots \times \left[a_{n},b_{n}\right]):a_{i},b_{i}\in \mathbb {R} \right\}\right).$$
A useful property is the following. Assume $f$ is a measurable map from $\left(X,\Sigma _{X}\right)$ to $\left(S,\Sigma _{S}\right)$ and $g$ is a measurable map from $\left(X,\Sigma _{X}\right)$ to $\left(T,\Sigma _{T}\right).$ If there exists a measurable map $h$ from $\left(T,\Sigma _{T}\right)$ to $\left(S,\Sigma _{S}\right)$ such that $f(x)=h(g(x))$ for all $x,$ then $\sigma (f)\subseteq \sigma (g).$
If $S$ is finite or countably infinite or, more generally, $\left(S,\Sigma _{S}\right)$ is a standard Borel space (for example, a separable complete metric space with its associated Borel sets), then the converse is also true. Examples of standard Borel spaces include $\mathbb {R} ^{n}$ with its Borel sets and $\mathbb {R} ^{\infty }$ with the cylinder σ-algebra described below.
=== Borel and Lebesgue σ-algebras ===
An important example is the Borel algebra over any topological space: the σ-algebra generated by the open sets (or, equivalently, by the closed sets). This σ-algebra is not, in general, the whole power set. For a non-trivial example that is not a Borel set, see the Vitali set or Non-Borel sets.
On the Euclidean space $\mathbb {R} ^{n},$ another σ-algebra is of importance: that of all Lebesgue measurable sets. This σ-algebra contains more sets than the Borel σ-algebra on $\mathbb {R} ^{n}$ and is preferred in integration theory, as it gives a complete measure space.
=== Product σ-algebra ===
Let $\left(X_{1},\Sigma _{1}\right)$ and $\left(X_{2},\Sigma _{2}\right)$ be two measurable spaces. The σ-algebra for the corresponding product space $X_{1}\times X_{2}$ is called the product σ-algebra and is defined by
$$\Sigma _{1}\times \Sigma _{2}=\sigma \left(\left\{B_{1}\times B_{2}:B_{1}\in \Sigma _{1},B_{2}\in \Sigma _{2}\right\}\right).$$
Observe that $\{B_{1}\times B_{2}:B_{1}\in \Sigma _{1},B_{2}\in \Sigma _{2}\}$ is a π-system.
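The π-system property rests on the rectangle identity $(B_{1}\times B_{2})\cap (C_{1}\times C_{2})=(B_{1}\cap C_{1})\times (B_{2}\cap C_{2}),$ which can be spot-checked on small finite sets (the particular sets below are arbitrary):

```python
from itertools import product

def rect(B1, B2):
    # The measurable rectangle B1 × B2, represented as a set of pairs.
    return frozenset(product(B1, B2))

# Intersecting two rectangles gives the rectangle of the intersections,
# so the rectangles are closed under finite intersections.
```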
The Borel σ-algebra for $\mathbb {R} ^{n}$ is generated by half-infinite rectangles and by finite rectangles. For example,
$${\mathcal {B}}(\mathbb {R} ^{n})=\sigma \left(\left\{(-\infty ,b_{1}]\times \cdots \times (-\infty ,b_{n}]:b_{i}\in \mathbb {R} \right\}\right)=\sigma \left(\left\{\left(a_{1},b_{1}\right]\times \cdots \times \left(a_{n},b_{n}\right]:a_{i},b_{i}\in \mathbb {R} \right\}\right).$$
For each of these two examples, the generating family is a π-system.
=== σ-algebra generated by cylinder sets ===
Suppose $X\subseteq \mathbb {R} ^{\mathbb {T} }=\{f:f(t)\in \mathbb {R} ,\ t\in \mathbb {T} \}$ is a set of real-valued functions. Let ${\mathcal {B}}(\mathbb {R} )$ denote the Borel subsets of $\mathbb {R} .$ A cylinder subset of $X$ is a finitely restricted set defined as
$$C_{t_{1},\dots ,t_{n}}(B_{1},\dots ,B_{n})=\left\{f\in X:f(t_{i})\in B_{i},1\leq i\leq n\right\}.$$
Each
$$\left\{C_{t_{1},\dots ,t_{n}}\left(B_{1},\dots ,B_{n}\right):B_{i}\in {\mathcal {B}}(\mathbb {R} ),1\leq i\leq n\right\}$$
is a π-system that generates a σ-algebra $\Sigma _{t_{1},\dots ,t_{n}}.$
Then the family of subsets
$${\mathcal {F}}_{X}=\bigcup _{n=1}^{\infty }\bigcup _{t_{i}\in \mathbb {T} ,i\leq n}\Sigma _{t_{1},\dots ,t_{n}}$$
is an algebra that generates the cylinder σ-algebra for $X.$ This σ-algebra is a subalgebra of the Borel σ-algebra determined by the product topology of $\mathbb {R} ^{\mathbb {T} }$ restricted to $X.$
An important special case is when $\mathbb {T}$ is the set of natural numbers and $X$ is a set of real-valued sequences. In this case, it suffices to consider the cylinder sets
$$C_{n}\left(B_{1},\dots ,B_{n}\right)=\left(B_{1}\times \cdots \times B_{n}\times \mathbb {R} ^{\infty }\right)\cap X=\left\{\left(x_{1},x_{2},\ldots ,x_{n},x_{n+1},\ldots \right)\in X:x_{i}\in B_{i},1\leq i\leq n\right\},$$
for which
$$\Sigma _{n}=\sigma \left(\{C_{n}\left(B_{1},\dots ,B_{n}\right):B_{i}\in {\mathcal {B}}(\mathbb {R} ),1\leq i\leq n\}\right)$$
is a non-decreasing sequence of σ-algebras.
=== Ball σ-algebra ===
The ball σ-algebra is the smallest σ-algebra containing all the open (and/or closed) balls. This is never larger than the Borel σ-algebra. The two σ-algebras are equal for separable spaces. For some nonseparable spaces, some maps are ball measurable even though they are not Borel measurable, making use of the ball σ-algebra useful in the analysis of such maps.
=== σ-algebra generated by random variable or vector ===
Suppose $(\Omega ,\Sigma ,\mathbb {P} )$ is a probability space. If $Y:\Omega \to \mathbb {R} ^{n}$ is measurable with respect to the Borel σ-algebra on $\mathbb {R} ^{n}$ then $Y$ is called a random variable ($n=1$) or random vector ($n>1$). The σ-algebra generated by $Y$ is
$$\sigma (Y)=\left\{Y^{-1}(A):A\in {\mathcal {B}}\left(\mathbb {R} ^{n}\right)\right\}.$$
=== σ-algebra generated by a stochastic process ===
Suppose $(\Omega ,\Sigma ,\mathbb {P} )$ is a probability space and $\mathbb {R} ^{\mathbb {T} }$ is the set of real-valued functions on $\mathbb {T} .$ If $Y:\Omega \to X\subseteq \mathbb {R} ^{\mathbb {T} }$ is measurable with respect to the cylinder σ-algebra $\sigma \left({\mathcal {F}}_{X}\right)$ (see above) for $X$ then $Y$ is called a stochastic process or random process. The σ-algebra generated by $Y$ is
$$\sigma (Y)=\left\{Y^{-1}(A):A\in \sigma \left({\mathcal {F}}_{X}\right)\right\}=\sigma \left(\left\{Y^{-1}(A):A\in {\mathcal {F}}_{X}\right\}\right),$$
the σ-algebra generated by the inverse images of cylinder sets.
== See also ==
Measurable function – Kind of mathematical function
Sample space – Set of all possible outcomes or results of a statistical trial or experiment
Sigma-additive set function – Mapping function
Sigma-ring – Family of sets closed under countable unions
== References ==
== External links ==
"Algebra of sets", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Sigma Algebra from PlanetMath.
In category theory, a branch of mathematics, a monad is a triple $(T,\eta ,\mu )$ consisting of a functor T from a category to itself and two natural transformations $\eta ,\mu$ that satisfy conditions analogous to associativity and unit laws. For example, if $F,G$ are functors adjoint to each other, then $T=G\circ F$ together with $\eta ,\mu$ determined by the adjoint relation is a monad.
In concise terms, a monad is a monoid in the category of endofunctors of some fixed category (an endofunctor is a functor mapping a category to itself). According to John Baez, a monad can be considered at least in two ways:
A monad as a generalized monoid; this is clear since a monad is a monoid in a certain category,
A monad as a tool for studying algebraic gadgets; for example, a group can be described by a certain monad.
Monads are used in the theory of pairs of adjoint functors, and they generalize closure operators on partially ordered sets to arbitrary categories. Monads are also useful in the theory of datatypes, the denotational semantics of imperative programming languages, and in functional programming languages, allowing languages without mutable state to do things such as simulate for-loops; see Monad (functional programming).
A monad is also called, especially in old literature, a triple, triad, standard construction and fundamental construction.
== Introduction and definition ==
A monad is a certain type of endofunctor. For example, if $F$ and $G$ are a pair of adjoint functors, with $F$ left adjoint to $G$, then the composition $G\circ F$ is a monad. If $F$ and $G$ are inverse to each other, the corresponding monad is the identity functor. In general, adjunctions are not equivalences; they relate categories of different natures. Monad theory matters as part of the effort to capture what it is that adjunctions 'preserve'. The other half of the theory, of what can be learned likewise from consideration of $F\circ G$, is discussed under the dual theory of comonads.
=== Formal definition ===
Throughout this article, $C$ denotes a category. A monad on $C$ consists of an endofunctor $T\colon C\to C$ together with two natural transformations: $\eta \colon 1_{C}\to T$ (where $1_{C}$ denotes the identity functor on $C$) and $\mu \colon T^{2}\to T$ (where $T^{2}$ is the functor $T\circ T$ from $C$ to $C$). These are required to fulfill the following conditions (sometimes called coherence conditions):
- $\mu \circ T\mu =\mu \circ \mu T$ (as natural transformations $T^{3}\to T$); here $T\mu$ and $\mu T$ are formed by "horizontal composition".
- $\mu \circ T\eta =\mu \circ \eta T=1_{T}$ (as natural transformations $T\to T$; here $1_{T}$ denotes the identity transformation from $T$ to $T$).
These conditions can be rewritten as commutative diagrams (not reproduced here); see the article on natural transformations for the explanation of the notations $T\mu$ and $\mu T$.
The first axiom is akin to the associativity in monoids if we think of $\mu$ as the monoid's binary operation, and the second axiom is akin to the existence of an identity element (which we think of as given by $\eta$). Indeed, a monad on $C$ can alternatively be defined as a monoid in the category $\mathbf {End} _{C}$ whose objects are the endofunctors of $C$ and whose morphisms are the natural transformations between them, with the monoidal structure induced by the composition of endofunctors.
=== The power set monad ===
The power set monad is a monad ${\mathcal {P}}$ on the category $\mathbf {Set}$: For a set $A$ let $T(A)$ be the power set of $A$ and for a function $f\colon A\to B$ let $T(f)$ be the function between the power sets induced by taking direct images under $f$. For every set $A$, we have a map $\eta _{A}\colon A\to T(A)$, which assigns to every $a\in A$ the singleton $\{a\}$. The function $\mu _{A}\colon T(T(A))\to T(A)$ takes a set of sets to its union. These data describe a monad.
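This monad can be written out concretely in code and its laws spot-checked on small sets. The following sketch uses frozensets, with the functor action given by direct images (names are illustrative):

```python
def eta(a):
    # Unit η_A: sends an element to the singleton containing it.
    return frozenset([a])

def mu(S):
    # Multiplication μ_A: sends a set of sets to its union.
    return frozenset(x for s in S for x in s)

def fmap(f):
    # Functor action T(f): direct image under f.
    return lambda S: frozenset(f(x) for x in S)

# Unit laws μ ∘ η_T = id and μ ∘ Tη = id, checked on a sample element of T(A):
S = frozenset({1, 2})
assert mu(eta(S)) == S
assert mu(fmap(eta)(S)) == S

# Associativity μ ∘ Tμ = μ ∘ μT, checked on a sample element of T³(A):
SSS = frozenset({frozenset({frozenset({1}), frozenset({2})}),
                 frozenset({frozenset({2, 3})})})
assert mu(fmap(mu)(SSS)) == mu(mu(SSS)) == frozenset({1, 2, 3})
```

These checks only sample the laws at particular elements; the laws themselves are statements about all sets.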
=== Remarks ===
The axioms of a monad are formally similar to the monoid axioms. In fact, monads are special cases of monoids, namely they are precisely the monoids among endofunctors $\operatorname {End} (C)$, which is equipped with the multiplication given by composition of endofunctors.
Composition of monads is not, in general, a monad. For example, the double power set functor ${\mathcal {P}}\circ {\mathcal {P}}$ does not admit any monad structure.
=== Comonads ===
The categorical dual definition is a formal definition of a comonad (or cotriple); this can be said quickly in the terms that a comonad for a category $C$ is a monad for the opposite category $C^{\mathrm {op} }$. It is therefore a functor $U$ from $C$ to itself, with a set of axioms for counit and comultiplication that come from reversing the arrows everywhere in the definition just given.
Monads are to monoids as comonads are to comonoids. Every set is a comonoid in a unique way, so comonoids are less familiar in abstract algebra than monoids; however, comonoids in the category of vector spaces with its usual tensor product are important and widely studied under the name of coalgebras.
=== Terminological history ===
The notion of monad was invented by Roger Godement in 1958 under the name "standard construction". It has also been called "dual standard construction", "triple", "monoid" and "triad". The term "monad" was in use by 1967 at the latest, by Jean Bénabou.
== Examples ==
=== Identity ===
The identity functor on a category $C$ is a monad. Its multiplication and unit are the identity function on the objects of $C$.
=== Monads arising from adjunctions ===
Any adjunction $F:C\rightleftarrows D:G$ gives rise to a monad on $C$. This very widespread construction works as follows: the endofunctor is the composite $T=G\circ F.$ This endofunctor is quickly seen to be a monad, where the unit map stems from the unit map $\operatorname {id} _{C}\to G\circ F$ of the adjunction, and the multiplication map is constructed using the counit map of the adjunction:
$$T^{2}=G\circ F\circ G\circ F\xrightarrow {G\circ {\text{counit}}\circ F} G\circ F=T.$$
In fact, any monad can be found as an explicit adjunction of functors using the Eilenberg–Moore category
C
T
{\displaystyle C^{T}}
(the category of
T
{\displaystyle T}
-algebras).
==== Double dualization ====
The double dualization monad, for a fixed field k, arises from the adjunction {\displaystyle (-)^{*}:\mathbf {Vect} _{k}\rightleftarrows \mathbf {Vect} _{k}^{op}:(-)^{*}} where both functors are given by sending a vector space V to its dual vector space {\displaystyle V^{*}:=\operatorname {Hom} (V,k)}. The associated monad sends a vector space V to its double dual {\displaystyle V^{**}}. This monad is discussed, in much greater generality, by Kock (1970).
==== Closure operators on partially ordered sets ====
For categories arising from partially ordered sets {\displaystyle (P,\leq )} (with a single morphism from {\displaystyle x} to {\displaystyle y} if and only if {\displaystyle x\leq y}), the formalism becomes much simpler: adjoint pairs are Galois connections and monads are closure operators.
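In this setting, the monad laws specialize to the closure-operator axioms: the unit becomes extensivity, the multiplication becomes idempotence, and functoriality becomes monotonicity. A small Python illustration (the generating operation, addition modulo 12, and the universe are assumptions of this sketch):

```python
from itertools import product

def closure(s, op, universe):
    """Closure operator on subsets of `universe`: the smallest superset of s
    closed under the binary operation `op` (computed by fixpoint iteration)."""
    s = set(s) & set(universe)
    while True:
        new = {op(a, b) for a, b in product(s, repeat=2)} & set(universe)
        if new <= s:
            return frozenset(s)
        s |= new

universe = range(12)
c = lambda s: closure(s, lambda a, b: (a + b) % 12, universe)

s = {3}
# Unit (extensivity): s is contained in c(s)
assert set(s) <= c(s)
# Multiplication (idempotence): c(c(s)) == c(s)
assert c(c(s)) == c(s)
# Functoriality (monotonicity): s <= t implies c(s) <= c(t)
assert c({3}) <= c({3, 4})
```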
==== Free-forgetful adjunctions ====
For example, let {\displaystyle G} be the forgetful functor from the category Grp of groups to the category Set of sets, and let {\displaystyle F} be the free group functor from the category of sets to the category of groups. Then {\displaystyle F} is left adjoint to {\displaystyle G}. In this case, the associated monad {\displaystyle T=G\circ F} takes a set {\displaystyle X} and returns the underlying set of the free group {\displaystyle \mathrm {Free} (X)}.
The unit map of this monad is given by the maps {\displaystyle X\to T(X)} including any set {\displaystyle X} into the set {\displaystyle \mathrm {Free} (X)} in the natural way, as strings of length 1. Further, the multiplication of this monad is the map {\displaystyle T(T(X))\to T(X)} made out of a natural concatenation or 'flattening' of 'strings of strings'. This amounts to two natural transformations.
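The unit ("strings of length 1") and multiplication ("flattening") are easiest to see in the closely related free monoid case, where T(X) is the set of finite lists over X. A hedged Python sketch (the function names are assumptions of this illustration) checking the monad laws:

```python
def unit(x):
    # X -> T(X): include x as a one-element string
    return [x]

def mult(xss):
    # T(T(X)) -> T(X): flatten a string of strings by concatenation
    return [x for xs in xss for x in xs]

def tmap(f, xs):
    # functor action of T on a map f
    return [f(x) for x in xs]

xs = [1, 2, 3]
# Left and right unit laws
assert mult(unit(xs)) == xs
assert mult(tmap(unit, xs)) == xs
# Associativity: the two ways to flatten T(T(T(X))) agree
xsss = [[[1], [2, 3]], [[4]]]
assert mult(mult(xsss)) == mult(tmap(mult, xsss))
```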
The preceding example about free groups can be generalized to any type of algebra in the sense of a variety of algebras in universal algebra. Thus, every such type of algebra gives rise to a monad on the category of sets. Importantly, the algebra type can be recovered from the monad (as the category of Eilenberg–Moore algebras), so monads can also be seen as generalizing varieties of universal algebras.
Another monad arising from an adjunction is when {\displaystyle T} is the endofunctor on the category of vector spaces which maps a vector space {\displaystyle V} to its tensor algebra {\displaystyle T(V)}, and which maps linear maps to their tensor product. We then have a natural transformation corresponding to the embedding of {\displaystyle V} into its tensor algebra, and a natural transformation corresponding to the map from {\displaystyle T(T(V))} to {\displaystyle T(V)} obtained by simply expanding all tensor products.
=== Codensity monads ===
Under mild conditions, functors not admitting a left adjoint also give rise to a monad, the so-called codensity monad. For example, the inclusion {\displaystyle \mathbf {FinSet} \subset \mathbf {Set} } does not admit a left adjoint. Its codensity monad is the monad on sets sending any set X to the set of ultrafilters on X. This and similar examples are discussed in Leinster (2013).
=== Monads used in denotational semantics ===
The following monads over the category of sets are used in denotational semantics of imperative programming languages, and analogous constructions are used in functional programming.
==== The maybe monad ====
The endofunctor of the maybe or partiality monad adds a disjoint point: {\displaystyle (-)_{*}:\mathbf {Set} \to \mathbf {Set} }, {\displaystyle X\mapsto X\cup \{*\}}. The unit is given by the inclusion of a set {\displaystyle X} into {\displaystyle X_{*}}: {\displaystyle \eta _{X}:X\to X_{*}}, {\displaystyle x\mapsto x}. The multiplication maps elements of {\displaystyle X} to themselves, and the two disjoint points in {\displaystyle (X_{*})_{*}} to the one in {\displaystyle X_{*}}.
In both functional programming and denotational semantics, the maybe monad models partial computations, that is, computations that may fail.
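This behavior can be sketched in Python by using `None` as the added point (a simplified illustration; `safe_div` and this representation are assumptions of the sketch, and a faithful model would need a dedicated wrapper to distinguish nested points):

```python
def unit(x):
    # inclusion X -> X_*
    return x

def bind(mx, f):
    # Kleisli extension: propagate the failure point, otherwise apply f
    return None if mx is None else f(mx)

def safe_div(a, b):
    # a partial computation: fails (returns the added point) on division by zero
    return None if b == 0 else a / b

# Chained partial computations: failure short-circuits
assert bind(safe_div(10, 2), lambda r: safe_div(r, 5)) == 1.0
assert bind(safe_div(10, 0), lambda r: safe_div(r, 5)) is None
```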
==== The state monad ====
Given a set {\displaystyle S}, the endofunctor of the state monad maps each set {\displaystyle X} to the set of functions {\displaystyle S\to S\times X}. The component of the unit at {\displaystyle X} maps each element {\displaystyle x\in X} to the function {\displaystyle \eta _{X}(x):S\to S\times X}, {\displaystyle s\mapsto (s,x)}. The multiplication maps the function {\displaystyle f:S\to S\times (S\to S\times X),s\mapsto (s',f')} to the function {\displaystyle \mu _{X}(f):S\to S\times X}, {\displaystyle s\mapsto f'(s')}.
In functional programming and denotational semantics, the state monad models stateful computations.
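The unit and multiplication above can be sketched in Python, representing an element of T(X) as a function from a state to a (state, value) pair (a minimal sketch; the `tick` example computation is an assumption of this illustration):

```python
# A sketch of the state monad: T(X) = functions S -> S x X.

def unit(x):
    return lambda s: (s, x)          # eta_X(x): s |-> (s, x)

def mult(ff):
    # mu_X: T(T(X)) -> T(X); run the outer computation, then the inner one
    def run(s):
        s1, f1 = ff(s)               # ff: s |-> (s', f')
        return f1(s1)                # result: s |-> f'(s')
    return run

# A stateful computation over S = int counters: increment and report old value
tick = lambda s: (s + 1, s)          # element of T(int)
nested = lambda s: (s + 1, tick)     # element of T(T(int))
assert unit(42)(0) == (0, 42)
assert mult(nested)(0) == (2, 1)
```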
==== The environment monad ====
Given a set {\displaystyle E}, the endofunctor of the reader or environment monad maps each set {\displaystyle X} to the set of functions {\displaystyle E\to X}. Thus, the endofunctor of this monad is exactly the hom functor {\displaystyle \mathrm {Hom} (E,-)}. The component of the unit at {\displaystyle X} maps each element {\displaystyle x\in X} to the constant function {\displaystyle e\mapsto x}.
In functional programming and denotational semantics, the environment monad models computations with access to some read-only data.
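A hedged Python sketch of the reader monad (the `env` dictionary and `get_url` accessor are hypothetical examples, not part of any particular API):

```python
# A sketch of the reader/environment monad: T(X) = functions E -> X.

def unit(x):
    return lambda e: x               # constant function e |-> x

def mult(ff):
    # mu_X: (E -> (E -> X)) -> (E -> X); pass the same environment twice
    return lambda e: ff(e)(e)

env = {"base_url": "https://example.com"}   # hypothetical read-only config
get_url = lambda e: e["base_url"]
assert unit(7)(env) == 7
assert mult(lambda e: get_url)(env) == "https://example.com"
```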
==== The list and set monads ====
The list or nondeterminism monad maps a set X to the set of finite sequences (i.e., lists) with elements from X. The unit maps an element x in X to the singleton list [x]. The multiplication concatenates a list of lists into a single list.
In functional programming, the list monad is used to model nondeterministic computations. The covariant powerset monad is also known as the set monad, and is also used to model nondeterministic computation.
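Nondeterminism via the list monad can be sketched in Python with the Kleisli extension (bind), which applies a function to every possible value and collects all results (a minimal sketch; the two-step computation is an assumption of this illustration):

```python
def unit(x):
    # the singleton list [x]
    return [x]

def bind(xs, f):
    # nondeterministic choice: apply f to each possible value, collect results
    return [y for x in xs for y in f(x)]

# All outcomes of a two-step nondeterministic computation:
# pick a in {1, 2}, then pick b in {10, 20}, and return a + b
results = bind([1, 2], lambda a: bind([10, 20], lambda b: unit(a + b)))
assert results == [11, 21, 12, 22]
```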
== Algebras for a monad ==
Given a monad {\displaystyle (T,\eta ,\mu )} on a category {\displaystyle C}, it is natural to consider {\displaystyle T}-algebras, i.e., objects of {\displaystyle C} acted upon by {\displaystyle T} in a way which is compatible with the unit and multiplication of the monad. More formally, a {\displaystyle T}-algebra {\displaystyle (x,h)} is an object {\displaystyle x} of {\displaystyle C} together with an arrow {\displaystyle h\colon Tx\to x} of {\displaystyle C}, called the structure map of the algebra, such that the diagrams expressing compatibility with the unit and multiplication commute.
A morphism {\displaystyle f\colon (x,h)\to (x',h')} of {\displaystyle T}-algebras is an arrow {\displaystyle f\colon x\to x'} of {\displaystyle C} such that the corresponding diagram commutes. {\displaystyle T}-algebras form a category called the Eilenberg–Moore category, denoted by {\displaystyle C^{T}}.
=== Examples ===
==== Algebras over the free group monad ====
For example, for the free group monad discussed above, a {\displaystyle T}-algebra is a set {\displaystyle X} together with a map from the free group generated by {\displaystyle X} towards {\displaystyle X} subject to associativity and unitality conditions. Such a structure is equivalent to saying that {\displaystyle X} is a group itself.
==== Algebras over the distribution monad ====
Another example is the distribution monad {\displaystyle {\mathcal {D}}} on the category of sets. It is defined by sending a set {\displaystyle X} to the set of functions {\displaystyle f:X\to [0,1]} with finite support and such that their sum is equal to {\displaystyle 1}. In set-builder notation, this is the set {\displaystyle {\mathcal {D}}(X)=\left\{f:X\to [0,1]:{\begin{matrix}\#{\text{supp}}(f)<+\infty \\\sum _{x\in X}f(x)=1\end{matrix}}\right\}}
By inspection of the definitions, it can be shown that algebras over the distribution monad are equivalent to convex sets, i.e., sets equipped with operations {\displaystyle x+_{r}y} for {\displaystyle r\in [0,1]} subject to axioms resembling the behavior of convex linear combinations {\displaystyle rx+(1-r)y} in Euclidean space.
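The unit (a Dirac distribution) and multiplication (averaging a distribution of distributions) can be sketched in Python; the representation choices — a finitely supported distribution as a dict from outcomes to probabilities, and the outer distribution as a list of (distribution, weight) pairs — are assumptions of this sketch:

```python
from collections import defaultdict

def unit(x):
    # Dirac distribution concentrated at x
    return {x: 1.0}

def mult(dist_of_dists):
    # mu: D(D(X)) -> D(X); flatten by weighting each inner distribution
    out = defaultdict(float)
    for inner, p in dist_of_dists:
        for x, q in inner.items():
            out[x] += p * q
    return dict(out)

# A fair (convex) mixture of two biased coins: x +_r y with r = 1/2
coin1 = {"H": 0.9, "T": 0.1}
coin2 = {"H": 0.2, "T": 0.8}
mixed = mult([(coin1, 0.5), (coin2, 0.5)])
assert abs(mixed["H"] - 0.55) < 1e-9
assert abs(sum(mixed.values()) - 1.0) < 1e-9
```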
==== Algebras over the symmetric monad ====
Another useful example of a monad is the symmetric algebra functor on the category of {\displaystyle R}-modules for a commutative ring {\displaystyle R}: {\displaystyle {\text{Sym}}^{\bullet }(-):{\text{Mod}}(R)\to {\text{Mod}}(R)} sending an {\displaystyle R}-module {\displaystyle M} to the direct sum of symmetric tensor powers {\displaystyle {\text{Sym}}^{\bullet }(M)=\bigoplus _{k=0}^{\infty }{\text{Sym}}^{k}(M)} where {\displaystyle {\text{Sym}}^{0}(M)=R}. For example, {\displaystyle {\text{Sym}}^{\bullet }(R^{\oplus n})\cong R[x_{1},\ldots ,x_{n}]} where the {\displaystyle R}-algebra on the right is considered as a module. Then, algebras over this monad are commutative {\displaystyle R}-algebras. There are also algebras over the monads for the alternating tensors {\displaystyle {\text{Alt}}^{\bullet }(-)} and total tensor functors {\displaystyle T^{\bullet }(-)}, giving anti-symmetric {\displaystyle R}-algebras and free {\displaystyle R}-algebras respectively, so {\displaystyle {\begin{aligned}{\text{Alt}}^{\bullet }(R^{\oplus n})&=R(x_{1},\ldots ,x_{n})\\{\text{T}}^{\bullet }(R^{\oplus n})&=R\langle x_{1},\ldots ,x_{n}\rangle \end{aligned}}} where the first ring is the free anti-symmetric algebra over {\displaystyle R} in {\displaystyle n} generators and the second ring is the free algebra over {\displaystyle R} in {\displaystyle n} generators.
==== Commutative algebras in E-infinity ring spectra ====
There is an analogous construction for commutative {\displaystyle \mathbb {S} }-algebras which gives commutative {\displaystyle A}-algebras for a commutative {\displaystyle \mathbb {S} }-algebra {\displaystyle A}. If {\displaystyle {\mathcal {M}}_{A}} is the category of {\displaystyle A}-modules, then the functor {\displaystyle \mathbb {P} :{\mathcal {M}}_{A}\to {\mathcal {M}}_{A}} is the monad given by {\displaystyle \mathbb {P} (M)=\bigvee _{j\geq 0}M^{j}/\Sigma _{j}} where {\displaystyle M^{j}=M\wedge _{A}\cdots \wedge _{A}M} ({\displaystyle j} times). Then there is an associated category {\displaystyle {\mathcal {C}}_{A}} of commutative {\displaystyle A}-algebras from the category of algebras over this monad.
== Monads and adjunctions ==
As was mentioned above, any adjunction gives rise to a monad. Conversely, every monad arises from some adjunction, namely the free–forgetful adjunction {\displaystyle T(-):C\rightleftarrows C^{T}:{\text{forget}}} whose left adjoint sends an object X to the free T-algebra T(X). However, there are usually several distinct adjunctions giving rise to a given monad: let {\displaystyle \mathbf {Adj} (C,T)} be the category whose objects are the adjunctions {\displaystyle (F,G,e,\varepsilon )} such that {\displaystyle (GF,e,G\varepsilon F)=(T,\eta ,\mu )} and whose arrows are the morphisms of adjunctions that are the identity on {\displaystyle C}. Then the above free–forgetful adjunction involving the Eilenberg–Moore category {\displaystyle C^{T}} is a terminal object in {\displaystyle \mathbf {Adj} (C,T)}. An initial object is the Kleisli category, which is by definition the full subcategory of {\displaystyle C^{T}} consisting only of free T-algebras, i.e., T-algebras of the form {\displaystyle T(x)} for some object x of C.
=== Monadic adjunctions ===
Given any adjunction {\displaystyle (F:C\to D,G:D\to C,\eta ,\varepsilon )} with associated monad T, the functor G can be factored as {\displaystyle D{\overset {\widetilde {G}}{\longrightarrow }}C^{T}\xrightarrow {\text{forget}} C,} i.e., G(Y) can be naturally endowed with a T-algebra structure for any Y in D. The adjunction is called a monadic adjunction if the first functor {\displaystyle {\tilde {G}}} yields an equivalence of categories between D and the Eilenberg–Moore category {\displaystyle C^{T}}. By extension, a functor {\displaystyle G\colon D\to C} is said to be monadic if it has a left adjoint F forming a monadic adjunction. For example, the free–forgetful adjunction between groups and sets is monadic, since algebras over the associated monad are groups, as was mentioned above. In general, knowing that an adjunction is monadic allows one to reconstruct objects in D out of objects in C and the T-action.
=== Beck's monadicity theorem ===
Beck's monadicity theorem gives a necessary and sufficient condition for an adjunction to be monadic. A simplified version of this theorem states that G is monadic if it is conservative (i.e., G reflects isomorphisms: a morphism in D is an isomorphism if and only if its image under G is an isomorphism in C) and C has, and G preserves, coequalizers.
For example, the forgetful functor from the category of compact Hausdorff spaces to sets is monadic. However the forgetful functor from all topological spaces to sets is not conservative since there are continuous bijective maps (between non-compact or non-Hausdorff spaces) that fail to be homeomorphisms. Thus, this forgetful functor is not monadic.
The dual version of Beck's theorem, characterizing comonadic adjunctions, is relevant in different fields such as topos theory and topics in algebraic geometry related to descent. A first example of a comonadic adjunction is the adjunction {\displaystyle -\otimes _{A}B:\mathbf {Mod} _{A}\rightleftarrows \mathbf {Mod} _{B}:\operatorname {forget} } for a ring homomorphism {\displaystyle A\to B} between commutative rings. This adjunction is comonadic, by Beck's theorem, if and only if B is faithfully flat as an A-module. It thus allows one to descend B-modules, equipped with a descent datum (i.e., an action of the comonad given by the adjunction), to A-modules. The resulting theory of faithfully flat descent is widely applied in algebraic geometry.
== Uses ==
Monads are used in functional programming to express types of sequential computation (sometimes with side-effects). See monads in functional programming, and the more mathematically oriented Wikibook module b:Haskell/Category theory.
Monads are used in the denotational semantics of impure functional and imperative programming languages.
In categorical logic, an analogy has been drawn between the monad-comonad theory, and modal logic via closure operators, interior algebras, and their relation to models of S4 and intuitionistic logics.
== Generalization ==
It is possible to define monads in a 2-category {\displaystyle C}. The monads described above are monads for {\displaystyle C=\mathbf {Cat} }.
== See also ==
Distributive law between monads
Lawvere theory
Monad (functional programming)
Polyad
Strong monad
Giry monad
Monoidal monad
== References ==
== Further reading ==
Barr, Michael; Wells, Charles (1999), Category Theory for Computing Science (PDF)
Godement, Roger (1958), Topologie Algébrique et Théorie des Faisceaux., Actualités Sci. Ind., Publ. Math. Univ. Strasbourg, vol. 1252, Paris: Hermann, pp. viii+283 pp
Kock, Anders (1970), "On Double Dualization Monads", Mathematica Scandinavica, 27: 151, doi:10.7146/math.scand.a-10995
Leinster, Tom (2013), "Codensity and the ultrafilter monad" (PDF), Theory and Applications of Categories, 28: 332–370, arXiv:1209.3606, Bibcode:2012arXiv1209.3606L
MacLane, Saunders (1978), Categories for the Working Mathematician, Graduate Texts in Mathematics, vol. 5, doi:10.1007/978-1-4757-4721-8, ISBN 978-1-4419-3123-8
Pedicchio, Maria Cristina; Tholen, Walter, eds. (2004). Categorical Foundations. Special Topics in Order, Topology, Algebra, and Sheaf Theory. Encyclopedia of Mathematics and Its Applications. Vol. 97. Cambridge: Cambridge University Press. ISBN 0-521-83414-7. Zbl 1034.18001.
Perrone, Paolo (2024), "Chapter 5. Monads and Comonads", Starting Category Theory, World Scientific, doi:10.1142/9789811286018_0005, ISBN 978-981-12-8600-1
Riehl, Emily (2017), Category Theory in Context, Courier Dover Publications, ISBN 9780486820804
Turi, Daniele (1996–2001), Category Theory Lecture Notes (PDF)
https://mathoverflow.net/questions/55182/what-is-known-about-the-category-of-monads-on-set
Ross Street, The formal theory of monads [1]
== External links ==
Monads, a YouTube video of five short lectures (with one appendix).
John Baez's This Week's Finds in Mathematical Physics (Week 89) covers monads in 2-categories.
Monads and comonads, video tutorial.
https://medium.com/@felix.kuehl/a-monad-is-just-a-monoid-in-the-category-of-endofunctors-lets-actually-unravel-this-f5d4b7dbe5d6
Algebraic may refer to any subject related to algebra in mathematics and related branches like algebraic number theory and algebraic topology. The word algebra itself has several meanings.
Algebraic may also refer to:
Algebraic data type, a datatype in computer programming each of whose values is data from other datatypes wrapped in one of the constructors of the datatype
Algebraic number, a complex number that is a root of a non-zero polynomial in one variable with integer coefficients
Algebraic function, a function that satisfies a polynomial equation
Algebraic element, an element of a field extension which is a root of some polynomial over the base field
Algebraic extension, a field extension such that every element is an algebraic element over the base field
Algebraic definition, a definition in mathematical logic which is given using only equalities between terms
Algebraic structure, a set with one or more finitary operations defined on it
Algebraic, the order of entering operations when using a calculator (contrast reverse Polish notation)
Algebraic sum, a summation of quantities that takes into account their signs; e.g. the algebraic sum of 4, 3, and -8 is -1.
== See also ==
Algebra (disambiguation)
Algebraic notation (disambiguation)
All pages with titles beginning with algebraic
In mathematics, an algebra over a field (often simply called an algebra) is a vector space equipped with a bilinear product. Thus, an algebra is an algebraic structure consisting of a set together with operations of multiplication and addition and scalar multiplication by elements of a field and satisfying the axioms implied by "vector space" and "bilinear".
The multiplication operation in an algebra may or may not be associative, leading to the notions of associative algebras where associativity of multiplication is assumed, and non-associative algebras, where associativity is not assumed (but not excluded, either). Given an integer n, the ring of real square matrices of order n is an example of an associative algebra over the field of real numbers under matrix addition and matrix multiplication since matrix multiplication is associative. Three-dimensional Euclidean space with multiplication given by the vector cross product is an example of a nonassociative algebra over the field of real numbers since the vector cross product is nonassociative, satisfying the Jacobi identity instead.
An algebra is unital or unitary if it has an identity element with respect to the multiplication. The ring of real square matrices of order n forms a unital algebra since the identity matrix of order n is the identity element with respect to matrix multiplication. It is an example of a unital associative algebra, a (unital) ring that is also a vector space.
Many authors use the term algebra to mean associative algebra, or unital associative algebra, or in some subjects such as algebraic geometry, unital associative commutative algebra.
Replacing the field of scalars by a commutative ring leads to the more general notion of an algebra over a ring. Algebras are not to be confused with vector spaces equipped with a bilinear form, like inner product spaces, as, for such a space, the result of a product is not in the space, but rather in the field of coefficients.
== Definition and motivation ==
=== Motivating examples ===
=== Definition ===
Let K be a field, and let A be a vector space over K equipped with an additional binary operation from A × A to A, denoted here by · (that is, if x and y are any two elements of A, then x · y is an element of A that is called the product of x and y). Then A is an algebra over K if the following identities hold for all elements x, y, z in A , and all elements (often called scalars) a and b in K:
Right distributivity: (x + y) · z = x · z + y · z
Left distributivity: z · (x + y) = z · x + z · y
Compatibility with scalars: (ax) · (by) = (ab) (x · y).
These three axioms are another way of saying that the binary operation is bilinear. An algebra over K is sometimes also called a K-algebra, and K is called the base field of A. The binary operation is often referred to as multiplication in A. The convention adopted in this article is that multiplication of elements of an algebra is not necessarily associative, although some authors use the term algebra to refer to an associative algebra.
When a binary operation on a vector space is commutative, left distributivity and right distributivity are equivalent, and, in this case, only one distributivity requires a proof. In general, for non-commutative operations left distributivity and right distributivity are not equivalent, and require separate proofs.
== Basic concepts ==
=== Algebra homomorphisms ===
Given K-algebras A and B, a homomorphism of K-algebras or K-algebra homomorphism is a K-linear map f: A → B such that f(xy) = f(x) f(y) for all x, y in A. If A and B are unital, then a homomorphism satisfying f(1A) = 1B is said to be a unital homomorphism. The space of all K-algebra homomorphisms between A and B is frequently written as {\displaystyle \mathbf {Hom} _{K{\text{-alg}}}(A,B).} A K-algebra isomorphism is a bijective K-algebra homomorphism.
=== Subalgebras and ideals ===
A subalgebra of an algebra over a field K is a linear subspace that has the property that the product of any two of its elements is again in the subspace. In other words, a subalgebra of an algebra is a non-empty subset of elements that is closed under addition, multiplication, and scalar multiplication. In symbols, we say that a subset L of a K-algebra A is a subalgebra if for every x, y in L and c in K, we have that x · y, x + y, and cx are all in L.
In the above example of the complex numbers viewed as a two-dimensional algebra over the real numbers, the one-dimensional real line is a subalgebra.
A left ideal of a K-algebra is a linear subspace that has the property that any element of the subspace multiplied on the left by any element of the algebra produces an element of the subspace. In symbols, we say that a subset L of a K-algebra A is a left ideal if for every x and y in L, z in A and c in K, we have the following three statements.
x + y is in L (L is closed under addition),
cx is in L (L is closed under scalar multiplication),
z · x is in L (L is closed under left multiplication by arbitrary elements).
If (3) were replaced with x · z is in L, then this would define a right ideal. A two-sided ideal is a subset that is both a left and a right ideal. The term ideal on its own is usually taken to mean a two-sided ideal. Of course when the algebra is commutative, then all of these notions of ideal are equivalent. Conditions (1) and (2) together are equivalent to L being a linear subspace of A. It follows from condition (3) that every left or right ideal is a subalgebra.
This definition is different from the definition of an ideal of a ring, in that here we require the condition (2). Of course if the algebra is unital, then condition (3) implies condition (2).
=== Extension of scalars ===
If we have a field extension F/K, which is to say a bigger field F that contains K, then there is a natural way to construct an algebra over F from any algebra over K. It is the same construction one uses to make a vector space over a bigger field, namely the tensor product {\displaystyle V_{F}:=V\otimes _{K}F}. So if A is an algebra over K, then {\displaystyle A_{F}} is an algebra over F.
== Kinds of algebras and examples ==
Algebras over fields come in many different types. These types are specified by insisting on some further axioms, such as commutativity or associativity of the multiplication operation, which are not required in the broad definition of an algebra. The theories corresponding to the different types of algebras are often very different.
=== Unital algebra ===
An algebra is unital or unitary if it has a unit or identity element I with Ix = x = xI for all x in the algebra.
=== Zero algebra ===
An algebra is called a zero algebra if uv = 0 for all u, v in the algebra, not to be confused with the algebra with one element. It is inherently non-unital (except in the case of only one element), associative and commutative.
A unital zero algebra is the direct sum {\displaystyle K\oplus V} of a field {\displaystyle K} and a {\displaystyle K}-vector space {\displaystyle V}, equipped with the unique multiplication that is zero on the vector space (or module) and makes it a unital algebra.
More precisely, every element of the algebra may be uniquely written as {\displaystyle k+v} with {\displaystyle k\in K} and {\displaystyle v\in V}, and the product is the unique bilinear operation such that {\displaystyle vw=0} for every {\displaystyle v} and {\displaystyle w} in {\displaystyle V}. So, if {\displaystyle k_{1},k_{2}\in K} and {\displaystyle v_{1},v_{2}\in V}, one has {\displaystyle (k_{1}+v_{1})(k_{2}+v_{2})=k_{1}k_{2}+(k_{1}v_{2}+k_{2}v_{1}).}
A classical example of a unital zero algebra is the algebra of dual numbers, the unital zero R-algebra built from a one-dimensional real vector space.
This definition extends verbatim to the definition of a unital zero algebra over a commutative ring, with the replacement of "field" and "vector space" with "commutative ring" and "module".
Unital zero algebras allow the unification of the theory of submodules of a given module and the theory of ideals of a unital algebra. Indeed, the submodules of a module {\displaystyle V} correspond exactly to the ideals of {\displaystyle K\oplus V} that are contained in {\displaystyle V}.
For example, the theory of Gröbner bases was introduced by Bruno Buchberger for ideals in a polynomial ring R = K[x1, ..., xn] over a field. The construction of the unital zero algebra over a free R-module extends this theory to a Gröbner basis theory for submodules of a free module. This extension allows any algorithm and any software for computing Gröbner bases of ideals to be used, without modification, for computing a Gröbner basis of a submodule.
Similarly, unital zero algebras make it possible to deduce the Lasker–Noether theorem for modules (over a commutative ring) directly from the original Lasker–Noether theorem for ideals.
=== Associative algebra ===
Examples of associative algebras include
the algebra of all n-by-n matrices over a field (or commutative ring) K. Here the multiplication is ordinary matrix multiplication.
group algebras, where a group serves as a basis of the vector space and algebra multiplication extends group multiplication.
the commutative algebra K[x] of all polynomials over K (see polynomial ring).
algebras of functions, such as the R-algebra of all real-valued continuous functions defined on the interval [0,1], or the C-algebra of all holomorphic functions defined on some fixed open set in the complex plane. These are also commutative.
Incidence algebras are built on certain partially ordered sets.
algebras of linear operators, for example on a Hilbert space. Here the algebra multiplication is given by the composition of operators. These algebras also carry a topology; many of them are defined on an underlying Banach space, which turns them into Banach algebras. If an involution is given as well, we obtain B*-algebras and C*-algebras. These are studied in functional analysis.
=== Non-associative algebra ===
A non-associative algebra (or distributive algebra) over a field K is a K-vector space A equipped with a K-bilinear map {\displaystyle A\times A\rightarrow A}. The usage of "non-associative" here is meant to convey that associativity is not assumed, but it does not mean it is prohibited – that is, it means "not necessarily associative".
Examples detailed in the main article include:
Euclidean space R3 with multiplication given by the vector cross product
Octonions
Lie algebras
Jordan algebras
Alternative algebras
Flexible algebras
Power-associative algebras
== Algebras and rings ==
The definition of an associative K-algebra with unit is also frequently given in an alternative way. In this case, an algebra over a field K is a ring A together with a ring homomorphism {\displaystyle \eta \colon K\to Z(A),} where Z(A) is the center of A. Since η is a ring homomorphism, either A is the zero ring or η is injective. This definition is equivalent to the one above, with scalar multiplication {\displaystyle K\times A\to A} given by {\displaystyle (k,a)\mapsto \eta (k)a.}
Given two such associative unital K-algebras A and B, a unital K-algebra homomorphism f: A → B is a ring homomorphism that commutes with the scalar multiplication defined by η, which one may write as {\displaystyle f(ka)=kf(a)} for all {\displaystyle k\in K} and {\displaystyle a\in A}. In other words, the following diagram commutes: {\displaystyle {\begin{matrix}&&K&&\\&\eta _{A}\swarrow &\,&\eta _{B}\searrow &\\A&&{\begin{matrix}f\\\longrightarrow \end{matrix}}&&B\end{matrix}}}
== Structure coefficients ==
For algebras over a field, the bilinear multiplication from A × A to A is completely determined by the multiplication of basis elements of A.
Conversely, once a basis for A has been chosen, the products of basis elements can be set arbitrarily, and then extended in a unique way to a bilinear operator on A, i.e., so the resulting multiplication satisfies the algebra laws.
Thus, given the field K, any finite-dimensional algebra can be specified up to isomorphism by giving its dimension (say n), and specifying n3 structure coefficients ci,j,k, which are scalars.
These structure coefficients determine the multiplication in A via the following rule:
{\displaystyle \mathbf {e} _{i}\mathbf {e} _{j}=\sum _{k=1}^{n}c_{i,j,k}\mathbf {e} _{k}}
where e1,...,en form a basis of A.
Note however that several different sets of structure coefficients can give rise to isomorphic algebras.
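As a concrete illustration (a sketch with made-up names, not taken from the article), the rule above can be coded directly; the table below holds the structure coefficients of the complex numbers viewed as a 2-dimensional algebra over the reals with basis e1 = 1 and e2 = i.

```python
# Structure coefficients c[i][j][k] for C as a 2-dimensional R-algebra
# with basis e1 = 1, e2 = i (illustrative example).
c = [[[1, 0], [0, 1]],    # e1*e1 = e1,  e1*e2 = e2
     [[0, 1], [-1, 0]]]   # e2*e1 = e2,  e2*e2 = -e1

def multiply(x, y):
    """Extend the basis products bilinearly to coordinate vectors."""
    n = len(c)
    out = [0] * n
    for i in range(n):
        for j in range(n):
            for k in range(n):
                out[k] += c[i][j][k] * x[i] * y[j]
    return out

# (1 + 2i)(3 + 4i) = -5 + 10i
assert multiply([1, 2], [3, 4]) == [-5, 10]
```

Swapping in a different coefficient table gives a different (possibly still isomorphic) algebra, which is exactly the caveat noted above.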
In mathematical physics, the structure coefficients are generally written with upper and lower indices, so as to distinguish their transformation properties under coordinate transformations. Specifically, lower indices are covariant indices, and transform via pullbacks, while upper indices are contravariant, transforming under pushforwards. Thus, the structure coefficients are often written ci,jk, and their defining rule is written using the Einstein notation as
{\displaystyle \mathbf {e} _{i}\mathbf {e} _{j}=c_{i,j}{}^{k}\mathbf {e} _{k}.}
Applied to vectors written in index notation, this becomes
{\displaystyle (xy)^{k}=c_{i,j}{}^{k}x^{i}y^{j}.}
If K is only a commutative ring and not a field, then the same process works if A is a free module over K. If it is not, the multiplication is still completely determined by its action on a set that spans A; however, the structure constants cannot be specified arbitrarily in this case, and knowing only the structure constants does not specify the algebra up to isomorphism.
== Classification of low-dimensional unital associative algebras over the complex numbers ==
Two-dimensional, three-dimensional and four-dimensional unital associative algebras over the field of complex numbers were completely classified up to isomorphism by Eduard Study.
There exist two such two-dimensional algebras. Each algebra consists of linear combinations (with complex coefficients) of two basis elements, 1 (the identity element) and a. According to the definition of an identity element,
{\displaystyle \textstyle 1\cdot 1=1\,,\quad 1\cdot a=a\,,\quad a\cdot 1=a\,.}
It remains to specify
{\displaystyle \textstyle aa=1}
for the first algebra,
{\displaystyle \textstyle aa=0}
for the second algebra.
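A quick sketch (illustrative only, with an assumed encoding of an element x·1 + y·a as the pair (x, y)) shows that the two algebras differ only in the value assigned to a·a:

```python
def mul(u, v, a_sq):
    """Multiply (x1 + y1*a)(x2 + y2*a) in an algebra where a*a = a_sq * 1."""
    x1, y1 = u
    x2, y2 = v
    return (x1 * x2 + a_sq * y1 * y2, x1 * y2 + y1 * x2)

# First algebra: a*a = 1
assert mul((0, 1), (0, 1), 1) == (1, 0)
# Second algebra: a*a = 0 (the dual numbers over C)
assert mul((0, 1), (0, 1), 0) == (0, 0)
```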
There exist five such three-dimensional algebras. Each algebra consists of linear combinations of three basis elements, 1 (the identity element), a and b. Taking into account the definition of an identity element, it is sufficient to specify
{\displaystyle \textstyle aa=a\,,\quad bb=b\,,\quad ab=ba=0}
for the first algebra,
{\displaystyle \textstyle aa=a\,,\quad bb=0\,,\quad ab=ba=0}
for the second algebra,
{\displaystyle \textstyle aa=b\,,\quad bb=0\,,\quad ab=ba=0}
for the third algebra,
{\displaystyle \textstyle aa=1\,,\quad bb=0\,,\quad ab=-ba=b}
for the fourth algebra,
{\displaystyle \textstyle aa=0\,,\quad bb=0\,,\quad ab=ba=0}
for the fifth algebra.
The fourth of these algebras is non-commutative, and the others are commutative.
== Generalization: algebra over a ring ==
In some areas of mathematics, such as commutative algebra, it is common to consider the more general concept of an algebra over a ring, where a commutative ring R replaces the field K. The only part of the definition that changes is that A is assumed to be an R-module (instead of a K-vector space).
=== Associative algebras over rings ===
A ring A is always an associative algebra over its center, and over the integers. A classical example of an algebra over its center is the split-biquaternion algebra, which is isomorphic to
{\displaystyle \mathbb {H} \times \mathbb {H} }
, the direct product of two quaternion algebras. The center of that ring is
{\displaystyle \mathbb {R} \times \mathbb {R} }
, and hence it has the structure of an algebra over its center, which is not a field. Note that the split-biquaternion algebra is also naturally an 8-dimensional
{\displaystyle \mathbb {R} }
-algebra.
In commutative algebra, if A is a commutative ring, then any unital ring homomorphism
{\displaystyle R\to A}
defines an R-module structure on A, and this is what is known as the R-algebra structure. So a ring comes with a natural
{\displaystyle \mathbb {Z} }
-module structure, since one can take the unique homomorphism
{\displaystyle \mathbb {Z} \to A}
. On the other hand, not all rings can be given the structure of an algebra over a field (for example the integers). See Field with one element for a description of an attempt to give to every ring a structure that behaves like an algebra over a field.
== See also ==
Algebra over an operad
Alternative algebra
Clifford algebra
Composition algebra
Differential algebra
Free algebra
Geometric algebra
Max-plus algebra
Mutation (algebra)
Operator algebra
Zariski's lemma
== Notes ==
== References ==
Hazewinkel, Michiel; Gubareni, Nadiya; Kirichenko, Vladimir V. (2004). Algebras, rings and modules. Vol. 1. Springer. ISBN 1-4020-2690-0.
In mathematics, and more specifically in abstract algebra, a *-algebra (or involutive algebra; read as "star-algebra") is a mathematical structure consisting of two involutive rings R and A, where R is commutative and A has the structure of an associative algebra over R. Involutive algebras generalize the idea of a number system equipped with conjugation, for example the complex numbers and complex conjugation, matrices over the complex numbers and conjugate transpose, and linear operators over a Hilbert space and Hermitian adjoints.
However, it may happen that an algebra admits no involution.
== Definitions ==
=== *-ring ===
In mathematics, a *-ring is a ring with a map * : A → A that is an antiautomorphism and an involution.
More precisely, * is required to satisfy the following properties:
(x + y)* = x* + y*
(x y)* = y* x*
1* = 1
(x*)* = x
for all x, y in A.
This is also called an involutive ring, involutory ring, and ring with involution. The third axiom is implied by the second and fourth axioms, making it redundant.
Elements such that x* = x are called self-adjoint.
Archetypical examples of a *-ring are fields of complex numbers and algebraic numbers with complex conjugation as the involution. One can define a sesquilinear form over any *-ring.
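The axioms can also be spot-checked numerically; the sketch below (with made-up sample values) verifies them for the conjugate transpose on 2 × 2 complex matrices, another archetypical *-ring:

```python
# Illustrative check that the conjugate transpose satisfies the *-ring axioms.
def star(m):
    """Conjugate transpose of a 2x2 complex matrix (list of lists)."""
    return [[m[j][i].conjugate() for j in range(2)] for i in range(2)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

x = [[1 + 2j, 3], [0, 4 - 1j]]
y = [[2, 1j], [5, 6]]

assert star(matadd(x, y)) == matadd(star(x), star(y))  # (x + y)* = x* + y*
assert star(matmul(x, y)) == matmul(star(y), star(x))  # (x y)* = y* x*
assert star(star(x)) == x                              # (x*)* = x
```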
Also, one can define *-versions of algebraic objects, such as ideal and subring, with the requirement to be *-invariant: x ∈ I ⇒ x* ∈ I and so on.
*-rings are unrelated to star semirings in the theory of computation.
=== *-algebra ===
A *-algebra A is a *-ring, with involution * that is an associative algebra over a commutative *-ring R with involution ′, such that (r x)* = r′ x* ∀r ∈ R, x ∈ A.
The base *-ring R is often the complex numbers (with ′ acting as complex conjugation).
It follows from the axioms that * on A is conjugate-linear in R, meaning
(λ x + μ y)* = λ′ x* + μ′ y*
for λ, μ ∈ R, x, y ∈ A.
A *-homomorphism f : A → B is an algebra homomorphism that is compatible with the involutions of A and B, i.e.,
f(a*) = f(a)* for all a in A.
=== Philosophy of the *-operation ===
The *-operation on a *-ring is analogous to complex conjugation on the complex numbers. The *-operation on a *-algebra is analogous to taking adjoints in complex matrix algebras.
=== Notation ===
The * involution is a unary operation written with a postfixed star glyph centered above or near the mean line:
x ↦ x*, or
x ↦ x∗ (TeX: x^*),
but not as "x∗"; see the asterisk article for details.
== Examples ==
Any commutative ring becomes a *-ring with the trivial (identical) involution.
The most familiar example of a *-ring and a *-algebra over reals is the field of complex numbers C where * is just complex conjugation.
More generally, a field extension made by adjunction of a square root (such as the imaginary unit √−1) is a *-algebra over the original field, considered as a trivially-*-ring. The * flips the sign of that square root.
A quadratic integer ring (for some D) is a commutative *-ring with the * defined in the similar way; quadratic fields are *-algebras over appropriate quadratic integer rings.
Quaternions, split-complex numbers, dual numbers, and possibly other hypercomplex number systems form *-rings (with their built-in conjugation operation) and *-algebras over the reals (where * is trivial). None of them is a complex algebra.
Hurwitz quaternions form a non-commutative *-ring with the quaternion conjugation.
The matrix algebra of n × n matrices over R with * given by the transposition.
The matrix algebra of n × n matrices over C with * given by the conjugate transpose.
Its generalization, the Hermitian adjoint in the algebra of bounded linear operators on a Hilbert space also defines a *-algebra.
The polynomial ring R[x] over a commutative trivially-*-ring R is a *-algebra over R with P *(x) = P (−x).
If (A, +, ×, *) is simultaneously a *-ring, an algebra over a ring R (commutative), and (r x)* = r (x*) ∀r ∈ R, x ∈ A, then A is a *-algebra over R (where * is trivial).
As a partial case, any *-ring is a *-algebra over integers.
Any commutative *-ring is a *-algebra over itself and, more generally, over any of its *-subrings.
For a commutative *-ring R, its quotient by any of its *-ideals is a *-algebra over R.
For example, any commutative trivially-*-ring is a *-algebra over its dual numbers ring, a *-ring with non-trivial *, because the quotient by ε = 0 recovers the original ring.
The same holds for a commutative ring K and its polynomial ring K[x]: the quotient by x = 0 recovers K.
In Hecke algebra, an involution is important to the Kazhdan–Lusztig polynomial.
The endomorphism ring of an elliptic curve becomes a *-algebra over the integers, where the involution is given by taking the dual isogeny. A similar construction works for abelian varieties with a polarization, in which case it is called the Rosati involution (see Milne's lecture notes on abelian varieties).
Involutive Hopf algebras are important examples of *-algebras (with the additional structure of a compatible comultiplication); the most familiar example being:
The group Hopf algebra: a group ring, with involution given by g ↦ g−1.
== Non-Example ==
Not every algebra admits an involution:
Regard the 2×2 matrices over the complex numbers. Consider the following subalgebra:
{\displaystyle {\mathcal {A}}:=\left\{{\begin{pmatrix}a&b\\0&0\end{pmatrix}}:a,b\in \mathbb {C} \right\}}
Any nontrivial antiautomorphism necessarily has the form:
{\displaystyle \varphi _{z}\left[{\begin{pmatrix}1&0\\0&0\end{pmatrix}}\right]={\begin{pmatrix}1&z\\0&0\end{pmatrix}}\quad \varphi _{z}\left[{\begin{pmatrix}0&1\\0&0\end{pmatrix}}\right]={\begin{pmatrix}0&0\\0&0\end{pmatrix}}}
for any complex number {\displaystyle z\in \mathbb {C} }.
It follows that any nontrivial antiautomorphism fails to be involutive:
{\displaystyle \varphi _{z}^{2}\left[{\begin{pmatrix}0&1\\0&0\end{pmatrix}}\right]={\begin{pmatrix}0&0\\0&0\end{pmatrix}}\neq {\begin{pmatrix}0&1\\0&0\end{pmatrix}}}
We conclude that the subalgebra admits no involution.
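The computation can be replayed in a few lines; the sketch below encodes an element a·E11 + b·E12 of the subalgebra as a pair (a, b), an encoding chosen here purely for illustration:

```python
def phi(m, z):
    """The antiautomorphism φ_z: E11 ↦ E11 + z·E12, E12 ↦ 0."""
    a, b = m            # m represents a*E11 + b*E12
    return (a, a * z)

e12 = (0, 1)            # the matrix [[0, 1], [0, 0]]
for z in (1, 1j, 2 - 3j):
    # φ_z²(E12) = 0 ≠ E12, so φ_z is never an involution
    assert phi(phi(e12, z), z) == (0, 0)
```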
== Additional structures ==
Many properties of the transpose hold for general *-algebras:
The Hermitian elements form a Jordan algebra;
The skew Hermitian elements form a Lie algebra;
If 2 is invertible in the *-ring, then the operators 1/2(1 + *) and 1/2(1 − *) are orthogonal idempotents, called symmetrizing and anti-symmetrizing, so the algebra decomposes as a direct sum of modules (vector spaces if the *-ring is a field) of symmetric and anti-symmetric (Hermitian and skew Hermitian) elements. These spaces do not, generally, form associative algebras, because the idempotents are operators, not elements of the algebra.
=== Skew structures ===
Given a *-ring, there is also the map −* : x ↦ −x*.
It does not define a *-ring structure (unless the characteristic is 2, in which case −* is identical to the original *), since 1 ↦ −1, nor is it antimultiplicative, but it satisfies the other axioms (linear, involution) and hence is quite similar to a *-algebra where x ↦ x*.
Elements fixed by this map (i.e., such that a = −a*) are called skew Hermitian.
For the complex numbers with complex conjugation, the real numbers are the Hermitian elements, and the imaginary numbers are the skew Hermitian.
== See also ==
Semigroup with involution
B*-algebra
C*-algebra
Dagger category
von Neumann algebra
Baer ring
Operator algebra
Conjugate (algebra)
Cayley–Dickson construction
Composition algebra
== Notes ==
== References ==
Algebra Felicia Blessett (born April 9, 1976), usually known as Algebra Blessett or just Algebra, is an American contemporary R&B singer.
== Early life ==
Blessett's mother was a gospel singer and bass player, and Blessett grew up to the sounds of soul music, gospel and R&B. Like many R&B singers, she sang in a gospel choir when she was in school. However, she was not passionate about the experience, and decided to do it only because she was not good at sports, but still wanted to stay after school with her friends.
== Career ==
Blessett started doing background vocals, among others for R&B artists Monica and Bilal. This earned her a contract with Rowdy Records in Atlanta. She has toured with Anthony Hamilton, and collaborated with India.Arie. At a later age she learned to play the guitar, and started to do her own gigs in the Atlanta club scene. She writes her own songs.
Blessett released her first single, "U Do It For Me", on the Kedar Entertainment label in 2006. She released her first album, Purpose, in 2008.
In 2014, her sophomore effort Recovery was released.
=== Charts ===
Blessett's debut album Purpose was on the US Billboard R&B chart for 14 weeks, reaching No. 56. It also reached No. 37 on Heatseekers Albums.
Her second album, Recovery, was her debut on the Billboard 200, charting at No. 149. It also reached No. 2 on Heatseekers Albums and No. 23 on Top R&B/Hip-Hop Albums.
== Discography ==
=== Albums ===
=== Singles ===
2006: "U Do It For Me"
2008: "Run and Hide"
2012: "Black Gold" – credited as 'Esperanza Spalding with Algebra Blessett'
2013: "Nobody But You"
== References ==
== External links ==
2 Algebra pictures, Vibe Magazine Gallery, retrieved September 26, 2009
Algebra on NeoSoulVille, retrieved September 26, 2009
Algebra Blessett on Myspace
In database theory, relational algebra is a theory that uses algebraic structures for modeling data and defining queries on it with well founded semantics. The theory was introduced by Edgar F. Codd.
The main application of relational algebra is to provide a theoretical foundation for relational databases, particularly query languages for such databases, chief among which is SQL. Relational databases store tabular data represented as relations. Queries over relational databases often likewise return tabular data represented as relations.
The main purpose of relational algebra is to define operators that transform one or more input relations to an output relation. Given that these operators accept relations as input and produce relations as output, they can be combined and used to express complex queries that transform multiple input relations (whose data are stored in the database) into a single output relation (the query results).
Unary operators accept a single relation as input. Examples include operators to filter certain attributes (columns) or tuples (rows) from an input relation. Binary operators accept two relations as input and combine them into a single output relation. For example, taking all tuples found in either relation (union), removing tuples from the first relation found in the second relation (difference), extending the tuples of the first relation with tuples in the second relation matching certain conditions, and so forth.
== Introduction ==
Relational algebra received little attention outside of pure mathematics until the publication of E.F. Codd's relational model of data in 1970. Codd proposed such an algebra as a basis for database query languages.
Relational algebra operates on homogeneous sets of tuples
{\displaystyle S=\{(s_{j1},s_{j2},...s_{jn})|j\in 1...m\}}
where we commonly interpret m to be the number of rows of tuples in a table and n to be the number of columns. All entries in each column have the same type.
A relation also has a unique tuple called the header which gives each column a unique name or attribute inside the relation. Attributes are used in projections and selections.
== Set operators ==
The relational algebra uses set union, set difference, and Cartesian product from set theory, and adds additional constraints to these operators to create new ones.
For set union and set difference, the two relations involved must be union-compatible—that is, the two relations must have the same set of attributes. Because set intersection is defined in terms of set union and set difference, the two relations involved in set intersection must also be union-compatible.
For the Cartesian product to be defined, the two relations involved must have disjoint headers—that is, they must not have a common attribute name.
In addition, the Cartesian product is defined differently from the one in set theory in the sense that tuples are considered to be "shallow" for the purposes of the operation. That is, the Cartesian product of a set of n-tuples with a set of m-tuples yields a set of "flattened" (n + m)-tuples (whereas basic set theory would have prescribed a set of 2-tuples, each containing an n-tuple and an m-tuple). More formally, R × S is defined as follows:
{\displaystyle R\times S:=\{(r_{1},r_{2},\dots ,r_{n},s_{1},s_{2},\dots ,s_{m})|(r_{1},r_{2},\dots ,r_{n})\in R,(s_{1},s_{2},\dots ,s_{m})\in S\}}
The cardinality of the Cartesian product is the product of the cardinalities of its factors, that is, |R × S| = |R| × |S|.
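In code, with relations modeled as lists of dicts (attribute name → value; the data is made up for illustration), the flattened Cartesian product is a dict merge:

```python
# Relations modeled as lists of dicts (attribute name -> value).
R = [{"a": 1}, {"a": 2}]
S = [{"b": "x"}, {"b": "y"}]

def product(r, s):
    """Flattened Cartesian product; headers are assumed disjoint."""
    return [{**t, **u} for t in r for u in s]

rs = product(R, S)
assert len(rs) == len(R) * len(S)   # |R x S| = |R| x |S|
assert {"a": 1, "b": "y"} in rs
```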
== Projection ==
A projection (Π) is a unary operation written as
{\displaystyle \Pi _{a_{1},\ldots ,a_{n}}(R)}
where {\displaystyle a_{1},\ldots ,a_{n}} is a set of attribute names. The result of such projection is defined as the set that is obtained when all tuples in R are restricted to the set {\displaystyle \{a_{1},\ldots ,a_{n}\}}.
Note: when implemented in SQL standard the "default projection" returns a multiset instead of a set, and the Π projection to eliminate duplicate data is obtained by the addition of the DISTINCT keyword.
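A sketch of projection under the same illustrative modeling of relations as lists of dicts; note the deduplication, which mirrors the set semantics (SQL's default projection would keep duplicates unless DISTINCT is added):

```python
def project(rel, attrs):
    """Restrict every tuple to attrs, removing duplicates (set semantics)."""
    seen = []
    for t in rel:
        restricted = {a: t[a] for a in attrs}
        if restricted not in seen:
            seen.append(restricted)
    return seen

addressBook = [
    {"name": "Ann", "city": "Oslo"},
    {"name": "Bob", "city": "Oslo"},
]
assert project(addressBook, ["city"]) == [{"city": "Oslo"}]
```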
== Selection ==
A generalized selection (σ) is a unary operation written as
{\displaystyle \sigma _{\varphi }(R)}
where φ is a propositional formula that consists of atoms as allowed in the normal selection and the logical operators {\displaystyle \wedge } (and), {\displaystyle \lor } (or) and {\displaystyle \neg } (negation). This selection selects all those tuples in R for which φ holds.
To obtain a listing of all friends or business associates in an address book, the selection might be written as
{\displaystyle \sigma _{{\text{isFriend = true}}\,\lor \,{\text{isBusinessContact = true}}}({\text{addressBook}})}. The result would be a relation containing every attribute of every unique record where isFriend is true or where isBusinessContact is true.
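Under the same illustrative dict-based modeling, the address-book selection can be sketched with φ passed as a Python predicate:

```python
def select(rel, phi):
    """Generalized selection: keep tuples for which the predicate phi holds."""
    return [t for t in rel if phi(t)]

addressBook = [
    {"name": "Ann", "isFriend": True,  "isBusinessContact": False},
    {"name": "Bob", "isFriend": False, "isBusinessContact": False},
]
result = select(addressBook, lambda t: t["isFriend"] or t["isBusinessContact"])
assert [t["name"] for t in result] == ["Ann"]
```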
== Rename ==
A rename (ρ) is a unary operation written as
{\displaystyle \rho _{a/b}(R)}
where the result is identical to R except that the b attribute in all tuples is renamed to an a attribute. This is commonly used to rename the attribute of a relation for the purpose of a join.
To rename the "isFriend" attribute to "isBusinessContact" in a relation,
{\displaystyle \rho _{\text{isBusinessContact / isFriend}}({\text{addressBook}})}
might be used.
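A sketch of ρ under the same illustrative dict-based modeling:

```python
def rename(rel, new, old):
    """ρ_{new/old}: rename attribute `old` to `new` in every tuple."""
    return [{(new if k == old else k): v for k, v in t.items()} for t in rel]

addressBook = [{"name": "Ann", "isFriend": True}]
renamed = rename(addressBook, "isBusinessContact", "isFriend")
assert renamed == [{"name": "Ann", "isBusinessContact": True}]
```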
There is also the {\displaystyle \rho _{x(A_{1},\ldots ,A_{n})}(R)} notation, where R is renamed to x and the attributes {\displaystyle \{a_{1},\ldots ,a_{n}\}} are renamed to {\displaystyle \{A_{1},\ldots ,A_{n}\}}.
== Joins and join-like operators ==
=== Natural join ===
Natural join (⨝) is a binary operator that is written as (R ⨝ S) where R and S are relations. The result of the natural join is the set of all combinations of tuples in R and S that are equal on their common attribute names. For an example consider the tables Employee and Dept and their natural join:
Note that neither the employee named Mary nor the Production department appear in the result. Mary does not appear in the result because Mary's Department, "Human Resources", is not listed in the Dept relation and the Production department does not appear in the result because there are no tuples in the Employee relation that have "Production" as their DeptName attribute.
This can also be used to define composition of relations. For example, the composition of Employee and Dept is their join as shown above, projected on all but the common attribute DeptName. In category theory, the join is precisely the fiber product.
The natural join is arguably one of the most important operators since it is the relational counterpart of the logical AND operator. Note that if the same variable appears in each of two predicates that are connected by AND, then that variable stands for the same thing and both appearances must always be substituted by the same value (this is a consequence of the idempotence of the logical AND). In particular, natural join allows the combination of relations that are associated by a foreign key. For example, in the above example a foreign key probably holds from Employee.DeptName to Dept.DeptName and then the natural join of Employee and Dept combines all employees with their departments. This works because the foreign key holds between attributes with the same name. If this is not the case such as in the foreign key from Dept.Manager to Employee.Name then these columns must be renamed before taking the natural join. Such a join is sometimes also referred to as an equijoin.
More formally the semantics of the natural join are defined as follows:
where Fun(t) is a predicate that is true for a relation t (in the mathematical sense) iff t is a function (that is, t does not map any attribute to multiple values). It is usually required that R and S must have at least one common attribute, but if this constraint is omitted, and R and S have no common attributes, then the natural join becomes exactly the Cartesian product.
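The definition can be sketched directly (relations again modeled as lists of dicts; the sample data mirrors the Employee/Dept example):

```python
def natural_join(r, s):
    """Combine tuples of r and s that agree on all shared attribute names."""
    out = []
    for t in r:
        for u in s:
            shared = set(t) & set(u)
            # With no shared attributes, all() is vacuously true and this
            # degenerates to the Cartesian product, as noted in the text.
            if all(t[a] == u[a] for a in shared):
                out.append({**t, **u})
    return out

Employee = [{"Name": "Harry", "DeptName": "Finance"},
            {"Name": "Mary",  "DeptName": "Human Resources"}]
Dept = [{"DeptName": "Finance", "Manager": "George"}]

joined = natural_join(Employee, Dept)
assert joined == [{"Name": "Harry", "DeptName": "Finance", "Manager": "George"}]
```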
The natural join can be simulated with Codd's primitives as follows. Assume that c1,...,cm are the attribute names common to R and S, r1,...,rn are the attribute names unique to R, and s1,...,sk are the attribute names unique to S. Furthermore, assume that the attribute names x1,...,xm are neither in R nor in S. In a first step the common attribute names in S can be renamed:
Then we take the Cartesian product and select the tuples that are to be joined:
Finally we take a projection to get rid of the renamed attributes:
=== θ-join and equijoin ===
Consider tables Car and Boat which list models of cars and boats and their respective prices. Suppose a customer wants to buy a car and a boat, but she does not want to spend more money on the boat than on the car. The θ-join (⋈θ) on the predicate CarPrice ≥ BoatPrice produces the flattened pairs of rows which satisfy the predicate. When using a condition where the attributes are equal, for example Price, then the condition may be specified as Price=Price, or alternatively (Price) itself.
In order to combine tuples from two relations where the combination condition is not simply the equality of shared attributes it is convenient to have a more general form of join operator, which is the θ-join (or theta-join). The θ-join is a binary operator that is written as
{\displaystyle {R\ \bowtie \ S \atop a\ \theta \ b}}
or
{\displaystyle {R\ \bowtie \ S \atop a\ \theta \ v}}
where a and b are attribute names, θ is a binary relational operator in the set {<, ≤, =, ≠, >, ≥}, υ is a value constant, and R and S are relations. The result of this operation consists of all combinations of tuples in R and S that satisfy θ. The result of the θ-join is defined only if the headers of S and R are disjoint, that is, do not contain a common attribute.
The simulation of this operation in the fundamental operations is therefore as follows:
R ⋈θ S = σθ(R × S)
In case the operator θ is the equality operator (=) then this join is also called an equijoin.
Note, however, that a computer language that supports the natural join and selection operators does not need θ-join as well, as this can be achieved by selection from the result of a natural join (which degenerates to Cartesian product when there are no shared attributes).
In SQL implementations, joining on a predicate is usually called an inner join, and the on keyword allows one to specify the predicate used to filter the rows. It is important to note: forming the flattened Cartesian product then filtering the rows is conceptually correct, but an implementation would use more sophisticated data structures to speed up the join query.
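The identity R ⋈θ S = σθ(R × S) translates directly into the same illustrative dict-based modeling:

```python
def theta_join(r, s, theta):
    """θ-join as selection over the flattened product; disjoint headers assumed."""
    return [{**t, **u} for t in r for u in s if theta({**t, **u})]

Car = [{"CarModel": "A", "CarPrice": 20000},
       {"CarModel": "B", "CarPrice": 30000}]
Boat = [{"BoatModel": "X", "BoatPrice": 25000}]

# Only the car that costs at least as much as the boat qualifies.
result = theta_join(Car, Boat, lambda t: t["CarPrice"] >= t["BoatPrice"])
assert [t["CarModel"] for t in result] == ["B"]
```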
=== Semijoin ===
The left semijoin (⋉ and ⋊) is a joining similar to the natural join and written as {\displaystyle R\ltimes S} where {\displaystyle R} and {\displaystyle S} are relations. The result is the set of all tuples in {\displaystyle R} for which there is a tuple in {\displaystyle S} that is equal on their common attribute names. The difference from a natural join is that other columns of {\displaystyle S} do not appear. For example, consider the tables Employee and Dept and their semijoin:
More formally the semantics of the semijoin can be defined as follows:
{\displaystyle R\ltimes S=\{t:t\in R\land \exists s\in S(\operatorname {Fun} (t\cup s))\}}
where {\displaystyle \operatorname {Fun} (r)} is as in the definition of natural join.
The semijoin can be simulated using the natural join as follows. If {\displaystyle a_{1},\ldots ,a_{n}} are the attribute names of {\displaystyle R}, then
{\displaystyle R\ltimes S=\Pi _{a_{1},\ldots ,a_{n}}(R\bowtie S).}
Since we can simulate the natural join with the basic operators it follows that this also holds for the semijoin.
In Codd's 1970 paper, semijoin is called restriction.
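A sketch of the left semijoin in the same illustrative dict-based modeling:

```python
def semijoin(r, s):
    """Keep tuples of r that join with some tuple of s, without s's columns."""
    def matches(t, u):
        return all(t[a] == u[a] for a in set(t) & set(u))
    return [t for t in r if any(matches(t, u) for u in s)]

Employee = [{"Name": "Harry", "DeptName": "Finance"},
            {"Name": "Mary",  "DeptName": "Human Resources"}]
Dept = [{"DeptName": "Finance", "Manager": "George"}]

# Mary's department is not in Dept, so she is dropped; Harry keeps only
# his own columns, not Dept's Manager column.
assert semijoin(Employee, Dept) == [{"Name": "Harry", "DeptName": "Finance"}]
```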
=== Antijoin ===
The antijoin (▷), written as R ▷ S where R and S are relations, is similar to the semijoin, but the result of an antijoin is only those tuples in R for which there is no tuple in S that is equal on their common attribute names.
For an example consider the tables Employee and Dept and their antijoin:
The antijoin is formally defined as follows:
R ▷ S = { t : t ∈ R ∧ ¬∃s ∈ S(Fun (t ∪ s))}
or
R ▷ S = { t : t ∈ R, there is no tuple s of S that satisfies Fun (t ∪ s)}
where Fun (t ∪ s) is as in the definition of natural join.
The antijoin can also be defined as the complement of the semijoin, as follows:
Given this, the antijoin is sometimes called the anti-semijoin, and the antijoin operator is sometimes written as semijoin symbol with a bar above it, instead of ▷.
In the case where the relations have the same attributes (union-compatible), antijoin is the same as minus.
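The antijoin differs from the semijoin sketch only by negating the match test (same illustrative dict-based modeling):

```python
def antijoin(r, s):
    """Keep tuples of r that join with NO tuple of s."""
    def matches(t, u):
        return all(t[a] == u[a] for a in set(t) & set(u))
    return [t for t in r if not any(matches(t, u) for u in s)]

Employee = [{"Name": "Harry", "DeptName": "Finance"},
            {"Name": "Mary",  "DeptName": "Human Resources"}]
Dept = [{"DeptName": "Finance", "Manager": "George"}]

# Only Mary remains: her department has no matching tuple in Dept.
assert antijoin(Employee, Dept) == [{"Name": "Mary", "DeptName": "Human Resources"}]
```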
=== Division ===
The division (÷) is a binary operation that is written as R ÷ S. Division is not implemented directly in SQL. The result consists of the restrictions of tuples in R to the attribute names unique to R, i.e., in the header of R but not in the header of S, for which it holds that all their combinations with tuples in S are present in R.
==== Example ====
If DBProject contains all the tasks of the Database project, then the result of the division above contains exactly the students who have completed both of the tasks in the Database project.
More formally the semantics of the division is defined as follows:
where {a1,...,an} is the set of attribute names unique to R and t[a1,...,an] is the restriction of t to this set. It is usually required that the attribute names in the header of S are a subset of those of R because otherwise the result of the operation will always be empty.
The simulation of the division with the basic operations is as follows. We assume that a1,...,an are the attribute names unique to R and b1,...,bm are the attribute names of S. In the first step we project R on its unique attribute names and construct all combinations with tuples in S:
T := πa1,...,an(R) × S
In the prior example, T would represent a table such that every Student (because Student is the unique key / attribute of the Completed table) is combined with every given Task. So Eugene, for instance, would have two rows, Eugene → Database1 and Eugene → Database2 in T.
EG: First, let's pretend that "Completed" has a third attribute called "grade". It's unwanted baggage here, so we must always project it off. In fact, in this step we can drop "Task" from R as well; the Cartesian product puts it back.
T := πStudent(R) × S // This gives us every possible desired combination, including those that don't actually exist in R, and excluding others (eg Fred | compiler1, which is not a desired combination)
In the next step we subtract R from the T relation:
U := T − R
In U we have the possible combinations that "could have" been in R, but weren't.
EG: Again with projections — T and R need to have identical attribute names/headers.
U := T − πStudent,Task(R) // This gives us a "what's missing" list.
So if we now take the projection on the attribute names unique to R, then we have the restrictions of the tuples in R for which not all combinations with tuples in S were present in R:
V := πa1,...,an(U)
EG: Project U down to just the attribute(s) in question (Student)
V := πStudent(U)
So what remains to be done is take the projection of R on its unique attribute names and subtract those in V:
W := πa1,...,an(R) − V
EG: W := πStudent(R) − V.
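The whole walkthrough can be condensed into one function; the sketch below (with made-up Completed/DBProject data mirroring the example) checks each candidate directly rather than building the T, U, V, W intermediate relations:

```python
def divide(r, s):
    """R ÷ S: tuples over R's unique attributes whose every combination
    with a tuple of S appears in R."""
    unique = [a for a in r[0] if a not in s[0]]   # attributes unique to R
    candidates = []
    for t in r:
        key = {a: t[a] for a in unique}
        if key not in candidates:
            candidates.append(key)
    return [k for k in candidates
            if all({**k, **u} in r for u in s)]

Completed = [
    {"Student": "Eugene", "Task": "Database1"},
    {"Student": "Eugene", "Task": "Database2"},
    {"Student": "Fred",   "Task": "Database1"},
]
DBProject = [{"Task": "Database1"}, {"Task": "Database2"}]

# Only Eugene completed both tasks of the Database project.
assert divide(Completed, DBProject) == [{"Student": "Eugene"}]
```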
== Common extensions ==
In practice the classical relational algebra described above is extended with various operations such as outer joins, aggregate functions and even transitive closure.
=== Outer joins ===
Whereas the result of a join (or inner join) consists of tuples formed by combining matching tuples in the two operands, an outer join contains those tuples and additionally some tuples formed by extending an unmatched tuple in one of the operands by "fill" values for each of the attributes of the other operand. Outer joins are not considered part of the classical relational algebra discussed so far.
The operators defined in this section assume the existence of a null value, ω, which we do not define, to be used for the fill values; in practice this corresponds to the NULL in SQL. In order to make subsequent selection operations on the resulting table meaningful, a semantic meaning needs to be assigned to nulls; in Codd's approach the propositional logic used by the selection is extended to a three-valued logic, although we elide those details in this article.
Three outer join operators are defined: left outer join, right outer join, and full outer join. (The word "outer" is sometimes omitted.)
==== Left outer join ====
The left outer join (⟕) is written as R ⟕ S where R and S are relations. The result of the left outer join is the set of all combinations of tuples in R and S that are equal on their common attribute names, in addition (loosely speaking) to tuples in R that have no matching tuples in S.
For an example consider the tables Employee and Dept and their left outer join:
In the resulting relation, tuples in S which have no common values in common attribute names with tuples in R take a null value, ω.
Since there are no tuples in Dept with a DeptName of Finance or Executive, ωs occur in the resulting relation where tuples in Employee have a DeptName of Finance or Executive.
Let r1, r2, ..., rn be the attributes of the relation R and let {(ω, ..., ω)} be the singleton relation on the attributes that are unique to the relation S (those that are not attributes of R). Then the left outer join can be described in terms of the natural join (and hence using basic operators) as follows:
(R ⋈ S) ∪ ((R − πr1, r2, …, rn(R ⋈ S)) × {(ω, …, ω)})
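The identity above can be checked mechanically. The following is a minimal Python sketch (not part of the classical algebra itself) in which a relation is a set of frozensets of attribute/value pairs; the toy Employee and Dept data are illustrative only.

```python
def natural_join(R, S):
    """R ⋈ S: combine tuples that agree on all common attribute names."""
    out = set()
    for r in R:
        for s in S:
            rd, sd = dict(r), dict(s)
            if all(rd[a] == sd[a] for a in rd.keys() & sd.keys()):
                out.add(frozenset({**rd, **sd}.items()))
    return out

def left_outer_join(R, S, null="ω"):
    """R ⟕ S via the identity (R ⋈ S) ∪ ((R − π(R ⋈ S)) × {(ω, …, ω)})."""
    joined = natural_join(R, S)
    r_attrs = {a for t in R for a, _ in t}
    s_only = {a for t in S for a, _ in t} - r_attrs
    # π over R's attributes of (R ⋈ S): the R-tuples that found a match
    matched = {frozenset((a, v) for a, v in t if a in r_attrs) for t in joined}
    # unmatched R-tuples, padded with nulls on S's private attributes
    dangling = {t | frozenset((a, null) for a in s_only) for t in R - matched}
    return joined | dangling

Employee = {frozenset({"Name": "Harry", "DeptName": "Finance"}.items()),
            frozenset({"Name": "Sally", "DeptName": "Sales"}.items())}
Dept = {frozenset({"DeptName": "Sales", "Manager": "Bob"}.items())}

result = left_outer_join(Employee, Dept)
# Sally's tuple is completed with Manager=Bob; Harry's gets Manager=ω.
```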
==== Right outer join ====
The right outer join (⟖) behaves almost identically to the left outer join, but the roles of the tables are switched.
The right outer join of relations R and S is written as R ⟖ S. The result of the right outer join is the set of all combinations of tuples in R and S that are equal on their common attribute names, in addition to tuples in S that have no matching tuples in R.
For example, consider the tables Employee and Dept and their right outer join:
In the resulting relation, tuples in R which have no common values in common attribute names with tuples in S take a null value, ω.
Since there are no tuples in Employee with a DeptName of Production, ωs occur in the Name and EmpId attributes of the resulting relation where tuples in Dept had DeptName of Production.
Let s1, s2, ..., sn be the attributes of the relation S and let {(ω, ..., ω)} be the singleton
relation on the attributes that are unique to the relation R (those that are not attributes of S). Then, as with the left outer join, the right outer join can be simulated using the natural join as follows:
(R ⋈ S) ∪ ({(ω, …, ω)} × (S − πs1, s2, …, sn(R ⋈ S)))
==== Full outer join ====
The outer join (⟗) or full outer join in effect combines the results of the left and right outer joins.
The full outer join is written as R ⟗ S where R and S are relations. The result of the full outer join is the set of all combinations of tuples in R and S that are equal on their common attribute names, in addition to tuples in S that have no matching tuples in R and tuples in R that have no matching tuples in S in their common attribute names.
For an example consider the tables Employee and Dept and their full outer join:
In the resulting relation, tuples in R which have no common values in common attribute names with tuples in S take a null value, ω. Tuples in S which have no common values in common attribute names with tuples in R also take a null value, ω.
The full outer join can be simulated using the left and right outer joins (and hence the natural join and set union) as follows:
R ⟗ S = (R ⟕ S) ∪ (R ⟖ S)
=== Operations for domain computations ===
There is nothing in relational algebra introduced so far that would allow computations on the data domains (other than evaluation of propositional expressions involving equality). For example, it is not possible using only the algebra introduced so far to write an expression that would multiply the numbers from two columns, e.g. a unit price with a quantity to obtain a total price. Practical query languages have such facilities, e.g. the SQL SELECT allows arithmetic operations to define new columns in the result (SELECT unit_price * quantity AS total_price FROM t), and a similar facility is provided more explicitly by Tutorial D's EXTEND keyword. In database theory, this is called extended projection.
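As a sketch of extended projection (the helper name `extend` is hypothetical, echoing Tutorial D's keyword rather than any particular library), a computed attribute can be appended to every tuple of a relation:

```python
def extend(relation, new_attr, fn):
    """Extended projection: add a computed attribute to every tuple."""
    return [{**t, new_attr: fn(t)} for t in relation]

t = [{"unit_price": 2.5, "quantity": 4},
     {"unit_price": 10.0, "quantity": 1}]

# Mirrors: SELECT unit_price * quantity AS total_price FROM t
priced = extend(t, "total_price", lambda row: row["unit_price"] * row["quantity"])
```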
==== Aggregation ====
Furthermore, computing various functions on a column, like the summing up of its elements, is also not possible using the relational algebra introduced so far. There are five aggregate functions that are included with most relational database systems. These operations are Sum, Count, Average, Maximum and Minimum. In relational algebra the aggregation operation over a schema (A1, A2, ... An) is written as follows:
G1, G2, …, Gm g f1(A1′), f2(A2′), …, fk(Ak′) (r)
where each Aj', 1 ≤ j ≤ k, is one of the original attributes Ai, 1 ≤ i ≤ n.
The attributes preceding the g are grouping attributes, which function like a "group by" clause in SQL. Then there are an arbitrary number of aggregation functions applied to individual attributes. The operation is applied to an arbitrary relation r. The grouping attributes are optional, and if they are not supplied, the aggregation functions are applied across the entire relation to which the operation is applied.
Let's assume that we have a table named Account with three columns, namely Account_Number, Branch_Name and Balance. We wish to find the maximum balance of each branch. This is accomplished by Branch_NameGMax(Balance)(Account). To find the highest balance of all accounts regardless of branch, we could simply write GMax(Balance)(Account).
Grouping is often written as Branch_NameɣMax(Balance)(Account) instead.
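A minimal Python sketch of the grouping-and-aggregation operator (the helper name `aggregate` is illustrative); Branch_NameGMax(Balance)(Account) then becomes a call with one grouping attribute, and the no-grouping case applies the function across the whole relation:

```python
from collections import defaultdict

def aggregate(relation, group_by, attr, fn):
    """G1,…,Gm g f(A)(r): group on group_by, apply fn to attr within each group."""
    groups = defaultdict(list)
    for t in relation:
        groups[tuple(t[g] for g in group_by)].append(t[attr])
    return {key: fn(values) for key, values in groups.items()}

Account = [
    {"Account_Number": 1, "Branch_Name": "Downtown", "Balance": 500},
    {"Account_Number": 2, "Branch_Name": "Downtown", "Balance": 900},
    {"Account_Number": 3, "Branch_Name": "Uptown", "Balance": 700},
]

per_branch = aggregate(Account, ["Branch_Name"], "Balance", max)
overall = aggregate(Account, [], "Balance", max)   # no grouping attributes
```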
=== Transitive closure ===
Although relational algebra seems powerful enough for most practical purposes, there are some simple and natural operators on relations that cannot be expressed by relational algebra. One of them is the transitive closure of a binary relation. Given a domain D, let binary relation R be a subset of D×D. The transitive closure R+ of R is the smallest subset of D×D that contains R and satisfies the following condition:
∀x ∀y ∀z (((x, y) ∈ R⁺ ∧ (y, z) ∈ R⁺) ⇒ (x, z) ∈ R⁺)
It can be proved that there is no relational algebra expression E(R) taking R as a variable argument that produces R⁺.
SQL, however, has officially supported such fixpoint queries since 1999, and it had vendor-specific extensions in this direction well before that.
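Outside the algebra, the closure itself is easy to compute by iterating a join-and-union step to a fixpoint, as in this Python sketch:

```python
def transitive_closure(R):
    """Smallest superset of R where (x,y), (y,z) in the result imply (x,z)."""
    closure = set(R)
    while True:
        step = {(x, z) for (x, y) in closure for (y2, z) in closure if y == y2}
        if step <= closure:          # fixpoint reached: nothing new to add
            return closure
        closure |= step

paths = transitive_closure({(1, 2), (2, 3), (3, 4)})
# adds (1, 3), (2, 4) and then (1, 4)
```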
== Use of algebraic properties for query optimization ==
Relational database management systems often include a query optimizer which attempts to determine the most efficient way to execute a given query. Query optimizers enumerate possible query plans, estimate their cost, and pick the plan with the lowest estimated cost. If queries are represented by operators from relational algebra, the query optimizer can enumerate possible query plans by rewriting the initial query using the algebraic properties of these operators.
Queries can be represented as a tree, where
the internal nodes are operators,
leaves are relations,
subtrees are subexpressions.
The primary goal of the query optimizer is to transform expression trees into equivalent expression trees, where the average size of the relations yielded by subexpressions in the tree is smaller than it was before the optimization. The secondary goal is to try to form common subexpressions within a single query, or if there is more than one query being evaluated at the same time, in all of those queries. The rationale behind the second goal is that it is enough to compute common subexpressions once, and the results can be used in all queries that contain that subexpression.
Here are a set of rules that can be used in such transformations.
=== Selection ===
Rules about selection operators play the most important role in query optimization. Selection is an operator that very effectively decreases the number of rows in its operand, so if the selections in an expression tree are moved towards the leaves, the internal relations (yielded by subexpressions) will likely shrink.
==== Basic selection properties ====
Selection is idempotent (multiple applications of the same selection have no additional effect beyond the first one), and commutative (the order selections are applied in has no effect on the eventual result).
σA(R) = σA(σA(R))
σA(σB(R)) = σB(σA(R))
==== Breaking up selections with complex conditions ====
A selection whose condition is a conjunction of simpler conditions is equivalent to a sequence of selections with those same individual conditions, and selection whose condition is a disjunction is equivalent to a union of selections. These identities can be used to merge selections so that fewer selections need to be evaluated, or to split them so that the component selections may be moved or optimized separately.
σA∧B(R) = σA(σB(R)) = σB(σA(R))
σA∨B(R) = σA(R) ∪ σB(R)
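These identities are easy to check on a toy relation; the sketch below treats a selection condition simply as a Python predicate:

```python
def select(pred, R):
    """σ_pred(R): keep the tuples satisfying the condition."""
    return {t for t in R if pred(t)}

R = {(1, "x"), (2, "y"), (3, "x"), (4, "y")}
A = lambda t: t[0] > 1       # condition A
B = lambda t: t[1] == "x"    # condition B

conj = select(lambda t: A(t) and B(t), R)   # σ_{A∧B}(R)
disj = select(lambda t: A(t) or B(t), R)    # σ_{A∨B}(R)
```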
==== Selection and cross product ====
Cross product is the costliest operator to evaluate. If the input relations have N and M rows, the result will contain N·M rows. Therefore, it is important to decrease the size of both operands before applying the cross product operator.
This can be effectively done if the cross product is followed by a selection operator, e.g. σA(R × P). Considering the definition of join, this is the most likely case. If the cross product is not followed by a selection operator, we can try to push down a selection from higher levels of the expression tree using the other selection rules.
In the above case the condition A is broken up into conditions B, C and D using the split rules about complex selection conditions, so that A = B ∧ C ∧ D, where B contains attributes only from R, C contains attributes only from P, and D contains the part of A that involves attributes from both R and P. Note that B, C or D may be empty. Then the following holds:
σA(R × P) = σB∧C∧D(R × P) = σD(σB(R) × σC(P))
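The pushdown can be verified on a small example; here B, C and D are illustrative predicates over two single-attribute relations, so the "attributes only from R / only from P / from both" split is just which arguments each predicate reads:

```python
from itertools import product

R = {1, 2, 3, 4}
P = {10, 20, 30}
B = lambda r: r % 2 == 0        # touches only R
C = lambda p: p > 10            # touches only P
D = lambda r, p: r * p < 100    # touches both

# σ_{B∧C∧D}(R × P) versus σ_D(σ_B(R) × σ_C(P))
naive = {(r, p) for r, p in product(R, P) if B(r) and C(p) and D(r, p)}
pushed = {(r, p)
          for r, p in product({r for r in R if B(r)}, {p for p in P if C(p)})
          if D(r, p)}
```

Pushing B and C below the product means the product is taken over 2 × 2 values instead of 4 × 3, while the result is identical.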
==== Selection and set operators ====
Selection is distributive over the set difference, intersection, and union operators. The following three rules are used to push selection below set operations in the expression tree. For the set difference and the intersection operators, it is possible to apply the selection operator to just one of the operands following the transformation. This can be beneficial where one of the operands is small, and the overhead of evaluating the selection operator outweighs the benefits of using a smaller relation as an operand.
σA(R ∖ P) = σA(R) ∖ σA(P) = σA(R) ∖ P
σA(R ∪ P) = σA(R) ∪ σA(P)
σA(R ∩ P) = σA(R) ∩ σA(P) = σA(R) ∩ P = R ∩ σA(P)
==== Selection and projection ====
Selection commutes with projection if and only if the fields referenced in the selection condition are a subset of the fields in the projection. Performing selection before projection may be useful if the operand is a cross product or join. In other cases, if the selection condition is relatively expensive to compute, moving selection outside the projection may reduce the number of tuples which must be tested (since projection may produce fewer tuples due to the elimination of duplicates resulting from omitted fields).
πa1, …, an(σA(R)) = σA(πa1, …, an(R))  where fields in A ⊆ {a1, …, an}
=== Projection ===
==== Basic projection properties ====
Projection is idempotent, so that a series of (valid) projections is equivalent to the outermost projection.
πa1, …, an(πb1, …, bm(R)) = πa1, …, an(R)  where {a1, …, an} ⊆ {b1, …, bm}
==== Projection and set operators ====
Projection is distributive over set union.
πa1, …, an(R ∪ P) = πa1, …, an(R) ∪ πa1, …, an(P)
Projection does not distribute over intersection and set difference. Counterexamples are given by:
πA({⟨A = a, B = b⟩} ∩ {⟨A = a, B = b′⟩}) = ∅
πA({⟨A = a, B = b⟩}) ∩ πA({⟨A = a, B = b′⟩}) = {⟨A = a⟩}
and
πA({⟨A = a, B = b⟩} ∖ {⟨A = a, B = b′⟩}) = {⟨A = a⟩}
πA({⟨A = a, B = b⟩}) ∖ πA({⟨A = a, B = b′⟩}) = ∅,
where b is assumed to be distinct from b'.
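The counterexamples can be replayed directly in Python, with tuples as frozensets of (attribute, value) pairs and b2 playing the role of b′:

```python
def project(attrs, R):
    """π_attrs(R): keep only the named attributes; duplicates merge (set semantics)."""
    return {frozenset((a, v) for a, v in t if a in attrs) for t in R}

R1 = {frozenset({("A", "a"), ("B", "b")})}
R2 = {frozenset({("A", "a"), ("B", "b2")})}   # same A-value, different B-value

# π_A(R1 ∩ R2) = ∅, yet π_A(R1) ∩ π_A(R2) = {⟨A=a⟩}
# π_A(R1 ∖ R2) = {⟨A=a⟩}, yet π_A(R1) ∖ π_A(R2) = ∅
```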
=== Rename ===
==== Basic rename properties ====
Successive renames of a variable can be collapsed into a single rename. Rename operations which have no variables in common can be arbitrarily reordered with respect to one another, which can be exploited to make successive renames adjacent so that they can be collapsed.
ρa/b(ρb/c(R)) = ρa/c(R)
ρa/b(ρc/d(R)) = ρc/d(ρa/b(R))
==== Rename and set operators ====
Rename is distributive over set difference, union, and intersection.
ρa/b(R ∖ P) = ρa/b(R) ∖ ρa/b(P)
ρa/b(R ∪ P) = ρa/b(R) ∪ ρa/b(P)
ρa/b(R ∩ P) = ρa/b(R) ∩ ρa/b(P)
=== Product and union ===
Cartesian product is distributive over union.
(A × B) ∪ (A × C) = A × (B ∪ C)
== Implementations ==
The first query language to be based on Codd's algebra was Alpha, developed by Dr. Codd himself. Subsequently, ISBL was created, and this pioneering work has been acclaimed by many authorities as having shown the way to make Codd's idea into a useful language. Business System 12 was a short-lived industry-strength relational DBMS that followed the ISBL example.
In 1998 Chris Date and Hugh Darwen proposed a language called Tutorial D intended for use in teaching relational database theory, and its query language also draws on ISBL's ideas. Rel is an implementation of Tutorial D. Bmg is an implementation of relational algebra in Ruby which closely follows the principles of Tutorial D and The Third Manifesto.
Even the query language of SQL is loosely based on a relational algebra, though the operands in SQL (tables) are not exactly relations and several useful theorems about the relational algebra do not hold in the SQL counterpart (arguably to the detriment of optimisers and/or users). The SQL table model is a bag (multiset), rather than a set. For example, the expression
(R ∪ S) ∖ T = (R ∖ T) ∪ (S ∖ T)
is a theorem for relational algebra on sets, but not for relational algebra on bags.
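A quick way to see the failure is to model bag semantics (roughly, SQL's UNION ALL and EXCEPT ALL) with Python's Counter; this is an approximation of SQL, not SQL itself:

```python
from collections import Counter

# One copy of "a" in each of R, S and T.
R, S, T = Counter(["a"]), Counter(["a"]), Counter(["a"])

left = (R + S) - T           # bag union of R and S, then bag difference with T
right = (R - T) + (S - T)    # bag differences first, then bag union

# left keeps one copy of "a"; right keeps none, so the set identity fails for bags
```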
== See also ==
== Notes ==
== References ==
== Further reading ==
Imieliński, T.; Lipski, W. (1984). "The relational model of data and cylindric algebras". Journal of Computer and System Sciences. 28: 80–102. doi:10.1016/0022-0000(84)90077-1. (For relationship with cylindric algebras).
== External links ==
RAT Relational Algebra Translator Free software to convert relational algebra to SQL
Lecture Videos: Relational Algebra Processing - An introduction to how database systems process relational algebra
Lecture Notes: Relational Algebra – A quick tutorial to adapt SQL queries into relational algebra
Relational – A graphic implementation of the relational algebra
Query Optimization This paper is an introduction into the use of the relational algebra in optimizing queries, and includes numerous citations for more in-depth study.
Relational Algebra System for Oracle and Microsoft SQL Server
Pireal – An experimental educational tool for working with Relational Algebra
DES – An educational tool for working with Relational Algebra and other formal languages
RelaX - Relational Algebra Calculator (open-source software available as an online service without registration)
RA: A Relational Algebra Interpreter
Translating SQL to Relational Algebra
In mathematics, a field of sets is a mathematical structure consisting of a pair (X, 𝓕) of a set X and a family 𝓕 of subsets of X, called an algebra over X, that contains the empty set as an element and is closed under the operations of taking complements in X, finite unions, and finite intersections.
Fields of sets should not be confused with fields in ring theory nor with fields in physics. Similarly, the term "algebra over X" is used in the sense of a Boolean algebra and should not be confused with algebras over fields or rings in ring theory.
Fields of sets play an essential role in the representation theory of Boolean algebras. Every Boolean algebra can be represented as a field of sets.
== Definitions ==
A field of sets is a pair (X, 𝓕) consisting of a set X and a family 𝓕 of subsets of X, called an algebra over X, that has the following properties:
1. Closed under complementation in X: X ∖ F ∈ 𝓕 for all F ∈ 𝓕.
2. Contains the empty set (or contains X) as an element: ∅ ∈ 𝓕. Assuming that (1) holds, this condition (2) is equivalent to: X ∈ 𝓕.
3. Any/all of the following equivalent conditions hold:
Closed under binary unions: F ∪ G ∈ 𝓕 for all F, G ∈ 𝓕.
Closed under binary intersections: F ∩ G ∈ 𝓕 for all F, G ∈ 𝓕.
Closed under finite unions: F1 ∪ ⋯ ∪ Fn ∈ 𝓕 for all integers n ≥ 1 and all F1, …, Fn ∈ 𝓕.
Closed under finite intersections: F1 ∩ ⋯ ∩ Fn ∈ 𝓕 for all integers n ≥ 1 and all F1, …, Fn ∈ 𝓕.
In other words, 𝓕 forms a subalgebra of the power set Boolean algebra of X (with the same identity element X ∈ 𝓕). Many authors refer to 𝓕 itself as a field of sets. Elements of X are called points, while elements of 𝓕 are called complexes and are said to be the admissible sets of X.
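For finite examples these closure conditions can be checked mechanically. A small Python sketch, using only the empty-set, complement and binary-union conditions (binary intersections then follow by De Morgan's laws, per the equivalences above):

```python
def is_algebra(X, F):
    """Check that F is an algebra over X: contains the empty set, is closed
    under complements in X, and is closed under binary unions."""
    X = frozenset(X)
    F = {frozenset(s) for s in F}
    return (frozenset() in F
            and all(X - s in F for s in F)
            and all(s | t in F for s in F for t in F))

X = {1, 2, 3, 4}
F = [set(), {1, 2}, {3, 4}, {1, 2, 3, 4}]
# [set(), {1}, X] is not an algebra: the complement {2, 3, 4} is missing
```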
A field of sets (X, 𝓕) is called a σ-field of sets, and the algebra 𝓕 is called a σ-algebra, if the following additional condition (4) is satisfied:
4. Any/both of the following equivalent conditions hold:
Closed under countable unions: F1 ∪ F2 ∪ ⋯ ∈ 𝓕 for all F1, F2, … ∈ 𝓕.
Closed under countable intersections: F1 ∩ F2 ∩ ⋯ ∈ 𝓕 for all F1, F2, … ∈ 𝓕.
== Fields of sets in the representation theory of Boolean algebras ==
=== Stone representation ===
For an arbitrary set Y, its power set 2^Y (or, somewhat pedantically, the pair (Y, 2^Y) of this set and its power set) is a field of sets. If Y is finite (namely, n-element), then 2^Y is finite (namely, 2^n-element). Every finite field of sets (that is, (X, 𝓕) with 𝓕 finite, while X may be infinite) admits a representation of the form (Y, 2^Y) with finite Y; that is, there is a function f : X → Y that establishes a one-to-one correspondence between 𝓕 and 2^Y via inverse image: S = f⁻¹[B] = {x ∈ X ∣ f(x) ∈ B}, where S ∈ 𝓕 and B ∈ 2^Y (that is, B ⊆ Y). One notable consequence: the number of complexes, if finite, is always of the form 2^n.
To this end one chooses Y to be the set of all atoms of the given field of sets, and defines f by f(x) = A whenever x ∈ A for a point x ∈ X and a complex A ∈ 𝓕 that is an atom; the latter means that a nonempty subset of A different from A cannot be a complex.
In other words: the atoms are a partition of X; Y is the corresponding quotient set; and f is the corresponding canonical surjection.
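For a finite field of sets the atoms, and hence the representation, can be computed directly. A sketch on a toy family (each point's atom is the intersection of all complexes containing it, which in a finite field of sets is itself a complex):

```python
def atoms(X, F):
    """Atom of each point = intersection of all complexes containing it;
    the atoms partition X when F is a finite field of sets over X."""
    F = [frozenset(s) for s in F]
    result = set()
    for x in X:
        a = frozenset(X)
        for s in F:
            if x in s:
                a &= s
        result.add(a)
    return result

X = {1, 2, 3, 4}
F = [set(), {1, 2}, {3, 4}, {1, 2, 3, 4}]
parts = atoms(X, F)
# two atoms, and |F| = 2^(number of atoms) as the text asserts
```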
Similarly, every finite Boolean algebra can be represented as a power set – the power set of its set of atoms; each element of the Boolean algebra corresponds to the set of atoms below it (the join of which is the element). This power set representation can be constructed more generally for any complete atomic Boolean algebra.
In the case of Boolean algebras which are not complete and atomic we can still generalize the power set representation by considering fields of sets instead of whole power sets. To do this we first observe that the atoms of a finite Boolean algebra correspond to its ultrafilters and that an atom is below an element of a finite Boolean algebra if and only if that element is contained in the ultrafilter corresponding to the atom. This leads us to construct a representation of a Boolean algebra by taking its set of ultrafilters and forming complexes by associating with each element of the Boolean algebra the set of ultrafilters containing that element. This construction does indeed produce a representation of the Boolean algebra as a field of sets and is known as the Stone representation. It is the basis of Stone's representation theorem for Boolean algebras and an example of a completion procedure in order theory based on ideals or filters, similar to Dedekind cuts.
Alternatively one can consider the set of homomorphisms onto the two element Boolean algebra and form complexes by associating each element of the Boolean algebra with the set of such homomorphisms that map it to the top element. (The approach is equivalent as the ultrafilters of a Boolean algebra are precisely the pre-images of the top elements under these homomorphisms.) With this approach one sees that Stone representation can also be regarded as a generalization of the representation of finite Boolean algebras by truth tables.
=== Separative and compact fields of sets: towards Stone duality ===
A field of sets is called separative (or differentiated) if and only if for every pair of distinct points there is a complex containing one and not the other.
A field of sets is called compact if and only if for every proper filter over X the intersection of all the complexes contained in the filter is non-empty.
These definitions arise from considering the topology generated by the complexes of a field of sets. (This is just one of several notable topologies on the given set of points; it often happens that another topology is given, with quite different properties, in particular one that is not zero-dimensional.) Given a field of sets 𝐗 = (X, 𝓕), the complexes form a base for a topology. We denote by T(𝐗) the corresponding topological space (X, 𝒯), where 𝒯 is the topology formed by taking arbitrary unions of complexes. Then:
T(𝐗) is always a zero-dimensional space.
T(𝐗) is a Hausdorff space if and only if 𝐗 is separative.
T(𝐗) is a compact space with compact open sets 𝓕 if and only if 𝐗 is compact.
T(𝐗) is a Boolean space with clopen sets 𝓕 if and only if 𝐗 is both separative and compact (in which case it is described as being descriptive).
The Stone representation of a Boolean algebra is always separative and compact; the corresponding Boolean space is known as the Stone space of the Boolean algebra. The clopen sets of the Stone space are then precisely the complexes of the Stone representation. The area of mathematics known as Stone duality is founded on the fact that the Stone representation of a Boolean algebra can be recovered purely from the corresponding Stone space whence a duality exists between Boolean algebras and Boolean spaces.
== Fields of sets with additional structure ==
=== Sigma algebras and measure spaces ===
If an algebra over a set is closed under countable unions (hence also under countable intersections), it is called a sigma algebra and the corresponding field of sets is called a measurable space. The complexes of a measurable space are called measurable sets. The Loomis-Sikorski theorem provides a Stone-type duality between countably complete Boolean algebras (which may be called abstract sigma algebras) and measurable spaces.
A measure space is a triple (X, 𝓕, μ) where (X, 𝓕) is a measurable space and μ is a measure defined on it. If μ is in fact a probability measure we speak of a probability space and call its underlying measurable space a sample space. The points of a sample space are called sample points and represent potential outcomes, while the measurable sets (complexes) are called events and represent properties of outcomes for which we wish to assign probabilities. (Many use the term sample space simply for the underlying set of a probability space, particularly in the case where every subset is an event.) Measure spaces and probability spaces play a foundational role in measure theory and probability theory respectively.
In applications to physics we often deal with measure spaces and probability spaces derived from rich mathematical structures such as inner product spaces or topological groups which already have a topology associated with them; this topology should not be confused with the one generated by taking arbitrary unions of complexes.
=== Topological fields of sets ===
A topological field of sets is a triple (X, 𝒯, 𝓕) where (X, 𝒯) is a topological space and (X, 𝓕) is a field of sets that is closed under the closure operator of 𝒯, or equivalently under the interior operator, i.e. the closure and interior of every complex is also a complex. In other words, 𝓕 forms a subalgebra of the power set interior algebra on (X, 𝒯).
Topological fields of sets play a fundamental role in the representation theory of interior algebras and Heyting algebras. These two classes of algebraic structures provide the algebraic semantics for the modal logic S4 (a formal mathematical abstraction of epistemic logic) and intuitionistic logic respectively. Topological fields of sets representing these algebraic structures provide a related topological semantics for these logics.
Every interior algebra can be represented as a topological field of sets with the underlying Boolean algebra of the interior algebra corresponding to the complexes of the topological field of sets and the interior and closure operators of the interior algebra corresponding to those of the topology. Every Heyting algebra can be represented by a topological field of sets with the underlying lattice of the Heyting algebra corresponding to the lattice of complexes of the topological field of sets that are open in the topology. Moreover the topological field of sets representing a Heyting algebra may be chosen so that the open complexes generate all the complexes as a Boolean algebra. These related representations provide a well defined mathematical apparatus for studying the relationship between truth modalities (possibly true vs necessarily true, studied in modal logic) and notions of provability and refutability (studied in intuitionistic logic) and is thus deeply connected to the theory of modal companions of intermediate logics.
Given a topological space the clopen sets trivially form a topological field of sets as each clopen set is its own interior and closure. The Stone representation of a Boolean algebra can be regarded as such a topological field of sets, however in general the topology of a topological field of sets can differ from the topology generated by taking arbitrary unions of complexes and in general the complexes of a topological field of sets need not be open or closed in the topology.
==== Algebraic fields of sets and Stone fields ====
A topological field of sets is called algebraic if and only if there is a base for its topology consisting of complexes.
If a topological field of sets is both compact and algebraic then its topology is compact and its compact open sets are precisely the open complexes. Moreover, the open complexes form a base for the topology.
Topological fields of sets that are separative, compact and algebraic are called Stone fields and provide a generalization of the Stone representation of Boolean algebras. Given an interior algebra we can form the Stone representation of its underlying Boolean algebra and then extend this to a topological field of sets by taking the topology generated by the complexes corresponding to the open elements of the interior algebra (which form a base for a topology). These complexes are then precisely the open complexes and the construction produces a Stone field representing the interior algebra - the Stone representation. (The topology of the Stone representation is also known as the McKinsey–Tarski Stone topology after the mathematicians who first generalized Stone's result for Boolean algebras to interior algebras and should not be confused with the Stone topology of the underlying Boolean algebra of the interior algebra which will be a finer topology).
=== Preorder fields ===
A preorder field is a triple (X, ≤, 𝓕) where (X, ≤) is a preordered set and (X, 𝓕) is a field of sets.
Like the topological fields of sets, preorder fields play an important role in the representation theory of interior algebras. Every interior algebra can be represented as a preorder field with its interior and closure operators corresponding to those of the Alexandrov topology induced by the preorder. In other words, for all
S ∈ 𝓕:

Int(S) = {x ∈ X : there exists a y ∈ S with y ≤ x}

and

Cl(S) = {x ∈ X : there exists a y ∈ S with x ≤ y}
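The two displayed operators are easy to compute directly for a finite preorder field. The following minimal sketch (the function and variable names are illustrative, not from any library) implements the formulas for Int and Cl exactly as stated above:

```python
# Interior and closure of a complex S in a finite preorder field,
# following the two displayed formulas verbatim.

def interior(S, X, leq):
    # Int(S) = {x in X : there exists y in S with y <= x}
    return {x for x in X if any(leq(y, x) for y in S)}

def closure(S, X, leq):
    # Cl(S) = {x in X : there exists y in S with x <= y}
    return {x for x in X if any(leq(x, y) for y in S)}

# Example: X = {0, 1, 2} with the usual order as the preorder.
X = {0, 1, 2}
leq = lambda a, b: a <= b
S = {1}
print(interior(S, X, leq))  # the up-set of S: {1, 2}
print(closure(S, X, leq))   # the down-set of S: {0, 1}
```

For the usual order on {0, 1, 2}, the formulas give the up-set {1, 2} as the interior of {1} and the down-set {0, 1} as its closure.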
Similarly to topological fields of sets, preorder fields arise naturally in modal logic where the points represent the possible worlds in the Kripke semantics of a theory in the modal logic S4, the preorder represents the accessibility relation on these possible worlds in this semantics, and the complexes represent sets of possible worlds in which individual sentences in the theory hold, providing a representation of the Lindenbaum–Tarski algebra of the theory. They are a special case of the general modal frames which are fields of sets with an additional accessibility relation providing representations of modal algebras.
==== Algebraic and canonical preorder fields ====
A preorder field is called algebraic (or tight) if and only if it has a set of complexes 𝒜 which determines the preorder in the following manner: x ≤ y if and only if for every complex S ∈ 𝒜, x ∈ S implies y ∈ S. The preorder fields obtained from S4 theories are always algebraic, the complexes determining the preorder being the sets of possible worlds in which the sentences of the theory closed under necessity hold.
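To make the "algebraic" condition concrete, the following sketch (illustrative names, finite data only) recovers the preorder that a family of complexes determines:

```python
# Given a family A of complexes over X, the "algebraic" condition defines
# x <= y iff every member of A containing x also contains y.
# We recover that preorder as a set of ordered pairs.

def preorder_from_complexes(X, A):
    return {(x, y) for x in X for y in X
            if all((y in S) for S in A if x in S)}

X = {0, 1, 2}
A = [{0, 1, 2}, {1, 2}, {2}]        # the up-sets of the chain 0 < 1 < 2
order = preorder_from_complexes(X, A)
print(sorted(order))  # recovers the usual order on {0, 1, 2}
```

Here the family of up-sets of the chain 0 < 1 < 2 determines exactly the usual order, illustrating how the complexes encode the preorder.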
A separative compact algebraic preorder field is said to be canonical. Given an interior algebra, by replacing the topology of its Stone representation with the corresponding canonical preorder (specialization preorder) we obtain a representation of the interior algebra as a canonical preorder field. By replacing the preorder by its corresponding Alexandrov topology we obtain an alternative representation of the interior algebra as a topological field of sets. (The topology of this "Alexandrov representation" is just the Alexandrov bi-coreflection of the topology of the Stone representation.) While representation of modal algebras by general modal frames is possible for any normal modal algebra, it is only in the case of interior algebras (which correspond to the modal logic S4) that the general modal frame corresponds to a topological field of sets in this manner.
=== Complex algebras and fields of sets on relational structures ===
The representation of interior algebras by preorder fields can be generalized to a representation theorem for arbitrary (normal) Boolean algebras with operators. For this we consider structures (Rᵢ)ᵢ∈I together with X and 𝓕, that is, triples (X, (Rᵢ)ᵢ∈I, 𝓕) where (X, (Rᵢ)ᵢ∈I) is a relational structure, i.e. a set with an indexed family of relations defined on it, and (X, 𝓕) is a field of sets. The complex algebra (or algebra of complexes) determined by a field of sets 𝐗 = (X, (Rᵢ)ᵢ∈I, 𝓕) on a relational structure is the Boolean algebra with operators

𝒞(𝐗) = (𝓕, ∩, ∪, ′, ∅, X, (fᵢ)ᵢ∈I)

where for all i ∈ I, if Rᵢ is a relation of arity n + 1, then fᵢ is an operator of arity n and for all S₁, …, Sₙ ∈ 𝓕

fᵢ(S₁, …, Sₙ) = {x ∈ X : there exist x₁ ∈ S₁, …, xₙ ∈ Sₙ such that Rᵢ(x₁, …, xₙ, x)}
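The operator fᵢ can be computed by brute force for finite data. A minimal sketch, with illustrative names and a hypothetical binary relation chosen only for demonstration:

```python
# The operator on complexes induced by an (n+1)-ary relation R, following
# the displayed formula. R is given as a set of (n+1)-tuples over X.
from itertools import product

def complex_operator(R, X, *complexes):
    # f(S1, ..., Sn) = {x : exist x1 in S1, ..., xn in Sn with R(x1, ..., xn, x)}
    return {x for x in X
            if any(tuple(xs) + (x,) in R
                   for xs in product(*complexes))}

# Example: X = {0,1,2,3} with the binary relation R(a, b) meaning
# b = a + 1 mod 4, so the induced unary operator maps a complex S
# to the set of successors of its elements.
X = {0, 1, 2, 3}
R = {(a, (a + 1) % 4) for a in X}
print(complex_operator(R, X, {0, 3}))  # {0, 1} (successors of 0 and 3)
```

A unary relation of arity n + 1 = 2 yields a unary operator, exactly as the arity bookkeeping in the text describes.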
This construction can be generalized to fields of sets on arbitrary algebraic structures having both operators and relations as operators can be viewed as a special case of relations. If
If 𝓕 is the whole power set of X then 𝒞(𝐗) is called a full complex algebra or power algebra.
Every (normal) Boolean algebra with operators can be represented as a field of sets on a relational structure in the sense that it is isomorphic to the complex algebra corresponding to the field.
(Historically the term complex was first used in the case where the algebraic structure was a group and has its origins in 19th century group theory where a subset of a group was called a complex.)
== See also ==
== Notes ==
== References ==
Goldblatt, R., Algebraic Polymodal Logic: A Survey, Logic Journal of the IGPL, Volume 8, Issue 4, p. 393-450, July 2000
Goldblatt, R., Varieties of complex algebras, Annals of Pure and Applied Logic, 44, p. 173-242, 1989
Johnstone, Peter T. (1982). Stone spaces (3rd ed.). Cambridge: Cambridge University Press. ISBN 0-521-33779-8.
Naturman, C.A., Interior Algebras and Topology, Ph.D. thesis, University of Cape Town Department of Mathematics, 1991
Patrick Blackburn, Johan F.A.K. van Benthem, Frank Wolter ed., Handbook of Modal Logic, Volume 3 of Studies in Logic and Practical Reasoning, Elsevier, 2006
== External links ==
"Algebra of sets", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
In universal algebra, a variety of algebras or equational class is the class of all algebraic structures of a given signature satisfying a given set of identities. For example, the groups form a variety of algebras, as do the abelian groups, the rings, the monoids etc. According to Birkhoff's theorem, a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras, and (direct) products. In the context of category theory, a variety of algebras, together with its homomorphisms, forms a category; these are usually called finitary algebraic categories.
A covariety is the class of all coalgebraic structures of a given signature.
== Terminology ==
A variety of algebras should not be confused with an algebraic variety, which means a set of solutions to a system of polynomial equations. They are formally quite distinct and their theories have little in common.
The term "variety of algebras" refers to algebras in the general sense of universal algebra; there is also a more specific sense of algebra, namely as algebra over a field, i.e. a vector space equipped with a bilinear multiplication.
== Definition ==
A signature (in this context) is a set, whose elements are called operations, each of which is assigned a natural number (0, 1, 2, ...) called its arity. Given a signature σ and a set V, whose elements are called variables, a word is a finite rooted tree in which each node is labelled by either a variable or an operation, such that every node labelled by a variable has no branches away from the root and every node labelled by an operation o has as many branches away from the root as the arity of o. An equational law is a pair of such words; the axiom consisting of the words v and w is written as v = w.
A theory consists of a signature, a set of variables, and a set of equational laws. Any theory gives a variety of algebras as follows. Given a theory T, an algebra of T consists of a set A together with, for each operation o of T with arity n, a function oA : An → A such that for each axiom v = w and each assignment of elements of A to the variables in that axiom, the equation holds that is given by applying the operations to the elements of A as indicated by the trees defining v and w. The class of algebras of a given theory T is called a variety of algebras.
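The evaluation of words and the checking of equational laws can be sketched directly for finite algebras. In this illustrative code (the names and the nested-tuple encoding are not from any library), a word is a nested tuple whose leaves are variable names:

```python
# Words represented as nested tuples ('op', subtree, ...) or variable names;
# an equational law v = w holds in a finite algebra when evaluation agrees
# under every assignment of elements to the variables.
from itertools import product

def evaluate(word, ops, env):
    if isinstance(word, str):          # a leaf labelled by a variable
        return env[word]
    op, *args = word                   # a node labelled by an operation
    return ops[op](*(evaluate(a, ops, env) for a in args))

def satisfies(carrier, ops, v, w, variables):
    return all(evaluate(v, ops, env) == evaluate(w, ops, env)
               for vals in product(carrier, repeat=len(variables))
               for env in [dict(zip(variables, vals))])

# Associativity as a pair of words, checked in (Z/4, +):
v = ('mul', ('mul', 'x', 'y'), 'z')
w = ('mul', 'x', ('mul', 'y', 'z'))
ops = {'mul': lambda a, b: (a + b) % 4}
print(satisfies(range(4), ops, v, w, ['x', 'y', 'z']))  # True
```

Swapping in subtraction mod 4 for 'mul' makes the same law fail, since subtraction is not associative.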
Given two algebras of a theory T, say A and B, a homomorphism is a function f : A → B such that
f(oA(a₁, …, aₙ)) = oB(f(a₁), …, f(aₙ))
for every operation o of arity n. Any theory gives a category where the objects are algebras of that theory and the morphisms are homomorphisms.
== Examples ==
The class of all semigroups forms a variety of algebras of signature (2), meaning that a semigroup has a single binary operation. A sufficient defining equation is the associative law:
x(yz) = (xy)z.
The class of groups forms a variety of algebras of signature (2,0,1), the three operations being respectively multiplication (binary), identity (nullary, a constant) and inversion (unary). The familiar axioms of associativity, identity and inverse form one suitable set of identities:
x(yz) = (xy)z

1x = x1 = x

xx⁻¹ = x⁻¹x = 1.
The class of rings also forms a variety of algebras. The signature here is (2,2,0,0,1) (two binary operations, two constants, and one unary operation).
If we fix a specific ring R, we can consider the class of left R-modules. To express the scalar multiplication with elements from R, we need one unary operation for each element of R. If the ring is infinite, we will thus have infinitely many operations, which is allowed by the definition of an algebraic structure in universal algebra. We will then also need infinitely many identities to express the module axioms, which is allowed by the definition of a variety of algebras. So the left R-modules do form a variety of algebras.
The fields do not form a variety of algebras; the requirement that all non-zero elements be invertible cannot be expressed as a universally satisfied identity (see below).
The cancellative semigroups also do not form a variety of algebras, because the cancellation property is not an equation but an implication that is not equivalent to any set of equations. However, they do form a quasivariety, as the implication defining the cancellation property is an example of a quasi-identity.
== Birkhoff's variety theorem ==
Given a class of algebraic structures of the same signature, we can define the notions of homomorphism, subalgebra, and product. Garrett Birkhoff proved that a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras and arbitrary products. This is a result of fundamental importance to universal algebra and known as Birkhoff's variety theorem or as the HSP theorem. H, S, and P stand, respectively, for the operations of homomorphism, subalgebra, and product.
One direction of the equivalence mentioned above, namely that a class of algebras satisfying some set of identities must be closed under the HSP operations, follows immediately from the definitions. Proving the converse—classes of algebras closed under the HSP operations must be equational—is more difficult.
Using the easy direction of Birkhoff's theorem, we can for example verify the claim made above, that the field axioms are not expressible by any possible set of identities: the product of fields is not a field, so fields do not form a variety.
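This can be made completely concrete for the smallest case. The sketch below (plain Python, illustrative names) checks that in the direct product GF(2) × GF(2) the nonzero element (1, 0) has no multiplicative inverse:

```python
# In the componentwise product of two copies of GF(2), the element (1, 0)
# is nonzero yet has no multiplicative inverse, so the product of fields
# is not a field -- illustrating why fields are not a variety.
F2 = [0, 1]
pairs = [(a, b) for a in F2 for b in F2]
mul = lambda p, q: ((p[0] * q[0]) % 2, (p[1] * q[1]) % 2)
one = (1, 1)
has_inverse = any(mul((1, 0), q) == one for q in pairs)
print(has_inverse)  # False
```

Since varieties are closed under products, this single counterexample already shows no set of identities can axiomatize fields.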
== Subvarieties ==
A subvariety of a variety of algebras V is a subclass of V that has the same signature as V and is itself a variety, i.e., is defined by a set of identities.
Notice that although every group becomes a semigroup when the identity as a constant is omitted (and/or the inverse operation is omitted), the class of groups does not form a subvariety of the variety of semigroups because the signatures are different.
Similarly, the class of semigroups that are groups is not a subvariety of the variety of semigroups. The class of monoids that are groups contains
⟨ℤ, +⟩ and does not contain its subalgebra (more precisely, submonoid) ⟨ℕ, +⟩.
However, the class of abelian groups is a subvariety of the variety of groups because it consists of those groups satisfying xy = yx, with no change of signature. The finitely generated abelian groups do not form a subvariety, since by Birkhoff's theorem they don't form a variety, as an arbitrary product of finitely generated abelian groups is not finitely generated.
Viewing a variety V and its homomorphisms as a category, a subvariety U of V is a full subcategory of V, meaning that for any objects a, b in U, the homomorphisms from a to b in U are exactly those from a to b in V.
== Free objects ==
Suppose V is a non-trivial variety of algebras, i.e. V contains algebras with more than one element. One can show that for every set S, the variety V contains a free algebra FS on S. This means that there is an injective set map i : S → FS that satisfies the following universal property: given any algebra A in V and any map k : S → A, there exists a unique V-homomorphism f : FS → A such that f ∘ i = k.
This generalizes the notions of free group, free abelian group, free algebra, free module etc. It has the consequence that every algebra in a variety is a homomorphic image of a free algebra.
== Category theory ==
Besides varieties, category theorists use two other frameworks that are equivalent in terms of the kinds of algebras they describe: finitary monads and Lawvere theories. We may go from a variety to a finitary monad as follows. A category with some variety of algebras as objects and homomorphisms as morphisms is called a finitary algebraic category. For any finitary algebraic category V, the forgetful functor G : V → Set has a left adjoint F : Set → V, namely the functor that assigns to each set the free algebra on that set. This adjunction is monadic, meaning that the category V is equivalent to the Eilenberg–Moore category SetT for the monad T = GF. Moreover the monad T is finitary, meaning it commutes with filtered colimits.
The monad T : Set → Set is thus enough to recover the finitary algebraic category. Indeed, finitary algebraic categories are precisely those categories equivalent to the Eilenberg-Moore categories of finitary monads. Both these, in turn, are equivalent to categories of algebras of Lawvere theories.
Working with monads permits the following generalization. One says a category is an algebraic category if it is monadic over Set. This is a more general notion than "finitary algebraic category" because it admits such categories as CABA (complete atomic Boolean algebras) and CSLat (complete semilattices) whose signatures include infinitary operations. In those two cases the signature is large, meaning that it forms not a set but a proper class, because its operations are of unbounded arity. The algebraic category of sigma algebras also has infinitary operations, but their arity is countable whence its signature is small (forms a set).
Every finitary algebraic category is a locally presentable category.
== Pseudovariety of finite algebras ==
Since varieties are closed under arbitrary direct products, all non-trivial varieties contain infinite algebras. Attempts have been made to develop a finitary analogue of the theory of varieties. This led, e.g., to the notion of variety of finite semigroups. This kind of variety uses only finitary products. However, it uses a more general kind of identities.
A pseudovariety is usually defined to be a class of algebras of a given signature, closed under the taking of homomorphic images, subalgebras and finitary direct products. Not every author assumes that all algebras of a pseudovariety are finite; if this is the case, one sometimes talks of a variety of finite algebras. For pseudovarieties, there is no general finitary counterpart to Birkhoff's theorem, but in many cases the introduction of a more complex notion of equations allows similar results to be derived. Namely, a class of finite monoids is a variety of finite monoids if and only if it can be defined by a set of profinite identities.
Pseudovarieties are of particular importance in the study of finite semigroups and hence in formal language theory. Eilenberg's theorem, often referred to as the variety theorem, describes a natural correspondence between varieties of regular languages and pseudovarieties of finite semigroups.
== See also ==
Quasivariety
== Notes ==
== External links ==
In mathematics, especially in abstract algebra, a quasigroup is an algebraic structure that resembles a group in the sense that "division" is always possible. Quasigroups differ from groups mainly in that the associative and identity element properties are optional. In fact, a nonempty associative quasigroup is a group.
A quasigroup that has an identity element is called a loop.
== Definitions ==
There are at least two structurally equivalent formal definitions of quasigroup:
One defines a quasigroup as a set with one binary operation.
The other, from universal algebra, defines a quasigroup as having three primitive operations.
The homomorphic image of a quasigroup defined with a single binary operation, however, need not be a quasigroup, in contrast to one defined with three primitive operations. We begin with the first definition.
=== Algebra ===
A quasigroup (Q, ∗) is a non-empty set Q with a binary operation ∗ (that is, a magma, indicating that a quasigroup has to satisfy the closure property), obeying the Latin square property. This states that, for each a and b in Q, there exist unique elements x and y in Q such that both
a ∗ x = b

y ∗ a = b
hold. (In other words: Each element of the set occurs exactly once in each row and exactly once in each column of the quasigroup's multiplication table, or Cayley table. This property ensures that the Cayley table of a finite quasigroup, and, in particular, a finite group, is a Latin square.) The requirement that x and y be unique can be replaced by the requirement that the magma be cancellative.
The unique solutions to these equations are written x = a \ b and y = b / a. The operations '\' and '/' are called, respectively, left division and right division. With regard to the Cayley table, the first equation (left division) means that the b entry in the a row is in the x column while the second equation (right division) means that the b entry in the a column is in the y row.
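For a finite quasigroup given by its operation, both divisions can be extracted by searching the corresponding row or column, relying on the uniqueness of solutions. A minimal sketch with illustrative names, using subtraction mod 5 as the quasigroup:

```python
# Deriving the left division (\) and right division (/) of a finite
# quasigroup from its multiplication, using uniqueness of solutions.
def make_divisions(elements, star):
    def ldiv(a, b):   # a \ b : the unique x with a * x = b
        (x,) = [x for x in elements if star(a, x) == b]
        return x
    def rdiv(b, a):   # b / a : the unique y with y * a = b
        (y,) = [y for y in elements if star(y, a) == b]
        return y
    return ldiv, rdiv

# Subtraction mod 5 is a quasigroup (no identity element, not associative).
els = range(5)
star = lambda a, b: (a - b) % 5
ldiv, rdiv = make_divisions(els, star)

# The four identities of the universal-algebra definition hold on all pairs:
assert all(star(x, ldiv(x, y)) == y and ldiv(x, star(x, y)) == y and
           star(rdiv(y, x), x) == y and rdiv(star(y, x), x) == y
           for x in els for y in els)
print("quasigroup identities hold")
```

Here a \ b works out to (a − b) mod 5 and b / a to (b + a) mod 5, matching the "row" and "column" reading of the Cayley table described above.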
The empty set equipped with the empty binary operation satisfies this definition of a quasigroup. Some authors accept the empty quasigroup but others explicitly exclude it.
=== Universal algebra ===
Given some algebraic structure, an identity is an equation in which all variables are tacitly universally quantified, and in which all operations are among the primitive operations proper to the structure. Algebraic structures that satisfy axioms that are given solely by identities are called a variety. Many standard results in universal algebra hold only for varieties. Quasigroups form a variety if left and right division are taken as primitive.
A right-quasigroup (Q, ∗, /) is a type (2, 2) algebra satisfying both identities:

y = (y / x) ∗ x

y = (y ∗ x) / x
A left-quasigroup (Q, ∗, \) is a type (2, 2) algebra satisfying both identities:

y = x ∗ (x \ y)

y = x \ (x ∗ y)
A quasigroup (Q, ∗, \, /) is a type (2, 2, 2) algebra (i.e., equipped with three binary operations) satisfying the identities:

y = (y / x) ∗ x

y = (y ∗ x) / x

y = x ∗ (x \ y)

y = x \ (x ∗ y)
In other words: Multiplication and division in either order, one after the other, on the same side by the same element, have no net effect.
Hence if (Q, ∗) is a quasigroup according to the definition of the previous section, then (Q, ∗, \, /) is the same quasigroup in the sense of universal algebra. And vice versa: if (Q, ∗, \, /) is a quasigroup according to the sense of universal algebra, then (Q, ∗) is a quasigroup according to the first definition.
== Loops ==
A loop is a quasigroup with an identity element; that is, an element, e, such that
x ∗ e = x and e ∗ x = x for all x in Q.
It follows that the identity element, e, is unique, and that every element of Q has unique left and right inverses (which need not be the same).
A quasigroup with an idempotent element is called a pique ("pointed idempotent quasigroup"); this is a weaker notion than a loop but common nonetheless because, for example, given an abelian group, (A, +), taking its subtraction operation as quasigroup multiplication yields a pique (A, −) with the group identity (zero) turned into a "pointed idempotent". (That is, there is a principal isotopy (x, y, z) ↦ (x, −y, z).)
A loop that is associative is a group. A group can have a strictly nonassociative pique isotope, but it cannot have a strictly nonassociative loop isotope.
There are weaker associativity properties that have been given special names.
For instance, a Bol loop is a loop that satisfies either:
x ∗ (y ∗ (x ∗ z)) = (x ∗ (y ∗ x)) ∗ z for each x, y and z in Q (a left Bol loop),
or else
((z ∗ x) ∗ y) ∗ x = z ∗ ((x ∗ y) ∗ x) for each x, y and z in Q (a right Bol loop).
A loop that is both a left and right Bol loop is a Moufang loop. This is equivalent to any one of the following single Moufang identities holding for all x, y, z:
x ∗ (y ∗ (x ∗ z)) = ((x ∗ y) ∗ x) ∗ z
z ∗ (x ∗ (y ∗ x)) = ((z ∗ x) ∗ y) ∗ x
(x ∗ y) ∗ (z ∗ x) = x ∗ ((y ∗ z) ∗ x)
(x ∗ y) ∗ (z ∗ x) = (x ∗ (y ∗ z)) ∗ x.
According to Jonathan D. H. Smith, "loops" were named after the Chicago Loop, as their originators were studying quasigroups in Chicago at the time.
== Symmetries ==
Smith (2007) names the following important properties and subclasses:
=== Semisymmetry ===
A quasigroup is semisymmetric if any of the following equivalent identities hold for all x, y:
x ∗ y = y / x
y ∗ x = x \ y
x = (y ∗ x) ∗ y
x = y ∗ (x ∗ y).
Although this class may seem special, every quasigroup Q induces a semisymmetric quasigroup QΔ on the direct product cube Q3 via the following operation:
(x1, x2, x3) ⋅ (y1, y2, y3) = (y3 / x2, y1 \ x3, x1 ∗ y2) = (x2 // y3, x3 \\ y1, x1 ∗ y2),
where "//" and "\\" are the conjugate division operations given by y // x = x / y and y \\ x = x \ y.
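This construction is easy to check by machine for a small base quasigroup. The sketch below (illustrative names) builds QΔ for Q = (Z/5, −), where a \ b = a − b and b / a = b + a (all mod 5), and verifies the semisymmetric identity x = (y ∗ x) ∗ y on all of Q3:

```python
# The induced semisymmetric quasigroup on Q^3 for Q = (Z/5, -):
# (x1,x2,x3) . (y1,y2,y3) = (y3 / x2, y1 \ x3, x1 * y2).
from itertools import product

def dot(x, y):
    (x1, x2, x3), (y1, y2, y3) = x, y
    return ((y3 + x2) % 5,        # y3 / x2
            (y1 - x3) % 5,        # y1 \ x3
            (x1 - y2) % 5)        # x1 * y2

Q3 = list(product(range(5), repeat=3))
# Semisymmetry: x = (y . x) . y for all x, y in Q^3.
assert all(dot(dot(y, x), y) == x for x in Q3 for y in Q3)
print("Q^3 is semisymmetric")
```

The base quasigroup (Z/5, −) is neither commutative nor semisymmetric, yet the induced operation on the cube passes the semisymmetry check, as the general statement promises.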
=== Triality ===
A quasigroup may exhibit semisymmetric triality.
=== Total symmetry ===
A narrower class is a totally symmetric quasigroup (sometimes abbreviated TS-quasigroup) in which all conjugates coincide as one operation: x ∗ y = x / y = x \ y. Another way to define (the same notion of) totally symmetric quasigroup is as a semisymmetric quasigroup that is commutative, i.e. x ∗ y = y ∗ x.
Idempotent total symmetric quasigroups are precisely (i.e. in a bijection with) Steiner triples, so such a quasigroup is also called a Steiner quasigroup, and sometimes the latter is even abbreviated as squag. The term sloop refers to an analogue for loops, namely, totally symmetric loops that satisfy x ∗ x = 1 instead of x ∗ x = x. Without idempotency, total symmetric quasigroups correspond to the geometric notion of extended Steiner triple, also called Generalized Elliptic Cubic Curve (GECC).
=== Total antisymmetry ===
A quasigroup (Q, ∗) is called weakly totally anti-symmetric if for all c, x, y ∈ Q, the following implication holds.
(c ∗ x) ∗ y = (c ∗ y) ∗ x implies that x = y.
A quasigroup (Q, ∗) is called totally anti-symmetric if, in addition, for all x, y ∈ Q, the following implication holds:
x ∗ y = y ∗ x implies that x = y.
This property is required, for example, in the Damm algorithm.
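Weak total anti-symmetry is a finite condition that can be tested exhaustively. The following sketch (illustrative names) enumerates all 12 Latin squares of order 3, treats each as a quasigroup Cayley table, and confirms that weakly totally anti-symmetric quasigroups of that order exist:

```python
# Brute-force check of weak total anti-symmetry over all order-3 quasigroups,
# each given by its Cayley table (a Latin square with rows/columns 0, 1, 2).
from itertools import permutations

def is_latin(rows):
    # rows are permutations already; also require each column to be one.
    return all(len({r[j] for r in rows}) == len(rows) for j in range(len(rows)))

def weakly_ta(t):
    # (c*x)*y = (c*y)*x must force x = y.
    n = len(t)
    return all(x == y
               for c in range(n) for x in range(n) for y in range(n)
               if t[t[c][x]][y] == t[t[c][y]][x])

squares = [rows for rows in permutations(permutations(range(3)), 3)
           if is_latin(rows)]
print(len(squares))                        # 12 Latin squares of order 3
print(any(weakly_ta(t) for t in squares))  # True: some order-3 table is weakly TA
```

For instance the table of (a, b) ↦ 2a + b mod 3 passes the check, while any abelian group table fails it, since there (c ∗ x) ∗ y = (c ∗ y) ∗ x holds for all x, y.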
== Examples ==
Every group is a loop, because a ∗ x = b if and only if x = a−1 ∗ b, and y ∗ a = b if and only if y = b ∗ a−1.
The integers Z (or the rationals Q or the reals R) with subtraction (−) form a quasigroup. These quasigroups are not loops because there is no identity element (0 is a right identity because a − 0 = a, but not a left identity because, in general, 0 − a ≠ a).
The nonzero rationals Q× (or the nonzero reals R×) with division (÷) form a quasigroup.
Any vector space over a field of characteristic not equal to 2 forms an idempotent, commutative quasigroup under the operation x ∗ y = (x + y) / 2.
Every Steiner triple system defines an idempotent, commutative quasigroup: a ∗ b is the third element of the triple containing a and b. These quasigroups also satisfy (x ∗ y) ∗ y = x for all x and y in the quasigroup. These quasigroups are known as Steiner quasigroups.
The set {±1, ±i, ±j, ±k} where ii = jj = kk = +1 and with all other products as in the quaternion group forms a nonassociative loop of order 8. See hyperbolic quaternions for its application. (The hyperbolic quaternions themselves do not form a loop or quasigroup.)
The nonzero octonions form a nonassociative loop under multiplication. The octonions are a special type of loop known as a Moufang loop.
An associative quasigroup is either empty or is a group, since if there is at least one element, the invertibility of the quasigroup binary operation combined with associativity implies the existence of an identity element, which then implies the existence of inverse elements, thus satisfying all three requirements of a group.
The following construction is due to Hans Zassenhaus. On the underlying set of the four-dimensional vector space F4 over the 3-element Galois field F = Z/3Z define
(x1, x2, x3, x4) ∗ (y1, y2, y3, y4) = (x1, x2, x3, x4) + (y1, y2, y3, y4) + (0, 0, 0, (x3 − y3)(x1y2 − x2y1)).
Then, (F4, ∗) is a commutative Moufang loop that is not a group.
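The Zassenhaus example can be verified mechanically: the sketch below (illustrative names) implements the operation over Z/3, spot-checks commutativity, and exhibits a failure of associativity on standard basis vectors:

```python
# Zassenhaus's commutative Moufang loop on F^4, F = Z/3: vector addition
# plus a twist in the last coordinate.
from itertools import product

def star(x, y):
    s = tuple((a + b) % 3 for a, b in zip(x, y))
    twist = ((x[2] - y[2]) * (x[0] * y[1] - x[1] * y[0])) % 3
    return s[:3] + ((s[3] + twist) % 3,)

F4 = list(product(range(3), repeat=4))
sample = F4[:30]
# Commutativity (checked on a sample; it holds identically, since the
# twist is invariant under swapping x and y).
assert all(star(x, y) == star(y, x) for x in sample for y in sample)

# Associativity fails already on standard basis vectors, so this is not a group:
e1, e2, e3 = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)
print(star(star(e1, e2), e3) == star(e1, star(e2, e3)))  # False
```

The products differ only in the twisted fourth coordinate, which is exactly where the nonassociativity of this Moufang loop lives.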
More generally, the nonzero elements of any division algebra form a quasigroup with the operation of multiplication in the algebra.
== Properties ==
In the remainder of the article we shall denote quasigroup multiplication simply by juxtaposition.
Quasigroups have the cancellation property: if ab = ac, then b = c. This follows from the uniqueness of left division of ab or ac by a. Similarly, if ba = ca, then b = c.
The Latin square property of quasigroups implies that, given any two of the three variables in xy = z, the third variable is uniquely determined.
=== Multiplication operators ===
The definition of a quasigroup can be treated as conditions on the left and right multiplication operators Lx, Rx : Q → Q, defined by
Lx(y) = xy
Rx(y) = yx
The definition says that both mappings are bijections from Q to itself. A magma Q is a quasigroup precisely when all these operators, for every x in Q, are bijective. The inverse mappings are left and right division, that is,
L−1x(y) = x \ y
R−1x(y) = y / x
In this notation the identities among the quasigroup's multiplication and division operations (stated in the section on universal algebra) are
LxL−1x = id corresponding to x(x \ y) = y
L−1xLx = id corresponding to x \ (xy) = y
RxR−1x = id corresponding to (y / x)x = y
R−1xRx = id corresponding to (yx) / x = y
where id denotes the identity mapping on Q.
=== Latin squares ===
The multiplication table of a finite quasigroup is a Latin square: an n × n table filled with n different symbols in such a way that each symbol occurs exactly once in each row and exactly once in each column.
Conversely, every Latin square can be taken as the multiplication table of a quasigroup in many ways: the border row (containing the column headers) and the border column (containing the row headers) can each be any permutation of the elements. See Small Latin squares and quasigroups.
==== Infinite quasigroups ====
For a countably infinite quasigroup Q, it is possible to imagine an infinite array in which every row and every column corresponds to some element q of Q, and where the element a ∗ b is in the row corresponding to a and the column corresponding to b. In this situation too, the Latin square property says that each row and each column of the infinite array will contain every possible value precisely once.
For an uncountably infinite quasigroup, such as the group of non-zero real numbers under multiplication, the Latin square property still holds, although the name is somewhat unsatisfactory, as it is not possible to produce the array of combinations to which the above idea of an infinite array extends since the real numbers cannot all be written in a sequence. (This is somewhat misleading however, as the reals can be written in a sequence of length
𝔠, assuming the well-ordering theorem.)
=== Inverse properties ===
The binary operation of a quasigroup is invertible in the sense that both Lx and Rx, the left and right multiplication operators, are bijective, and hence invertible.
Every loop element has a unique left and right inverse given by
xλ = e / x (so that xλx = e)

xρ = x \ e (so that xxρ = e)
A loop is said to have (two-sided) inverses if xλ = xρ for all x. In this case the inverse element is usually denoted by x−1.
There are some stronger notions of inverses in loops that are often useful:
A loop has the left inverse property if xλ(xy) = y for all x and y. Equivalently, L−1x = Lxλ or x \ y = xλy.
A loop has the right inverse property if (yx)xρ = y for all x and y. Equivalently, R−1x = Rxρ or y / x = yxρ.
A loop has the antiautomorphic inverse property if (xy)λ = yλxλ or, equivalently, if (xy)ρ = yρxρ.
A loop has the weak inverse property when (xy)z = e if and only if x(yz) = e. This may be stated in terms of inverses via (xy)λx = yλ or equivalently x(yx)ρ = yρ.
A loop has the inverse property if it has both the left and right inverse properties. Inverse property loops also have the antiautomorphic and weak inverse properties. In fact, any loop that satisfies any two of the above four identities has the inverse property and therefore satisfies all four.
Any loop that satisfies the left, right, or antiautomorphic inverse properties automatically has two-sided inverses.
== Morphisms ==
A quasigroup or loop homomorphism is a map f : Q → P between two quasigroups such that f(xy) = f(x)f(y). Quasigroup homomorphisms necessarily preserve left and right division, as well as identity elements (if they exist).
=== Homotopy and isotopy ===
Let Q and P be quasigroups. A quasigroup homotopy from Q to P is a triple (α, β, γ) of maps from Q to P such that
α(x)β(y) = γ(xy)
for all x, y in Q. A quasigroup homomorphism is just a homotopy for which the three maps are equal.
An isotopy is a homotopy for which each of the three maps (α, β, γ) is a bijection. Two quasigroups are isotopic if there is an isotopy between them. In terms of Latin squares, an isotopy (α, β, γ) is given by a permutation of rows α, a permutation of columns β, and a permutation on the underlying element set γ.
An autotopy is an isotopy from a quasigroup to itself. The set of all autotopies of a quasigroup forms a group with the automorphism group as a subgroup.
Every quasigroup is isotopic to a loop. If a loop is isotopic to a group, then it is isomorphic to that group and thus is itself a group. However, a quasigroup that is isotopic to a group need not be a group. For example, the quasigroup on R with multiplication given by (x, y) ↦ (x + y)/2 is isotopic to the additive group (R, +), but is not itself a group as it has no identity element. Every medial quasigroup is isotopic to an abelian group by the Bruck–Toyoda theorem.
=== Conjugation (parastrophe) ===
Left and right division are examples of forming a quasigroup by permuting the variables in the defining equation. From the original operation ∗ (i.e., x ∗ y = z) we can form five new operations: x ∘ y := y ∗ x (the opposite operation), / and \, and their opposites. That makes a total of six quasigroup operations, which are called the conjugates or parastrophes of ∗. Any two of these operations are said to be "conjugate" or "parastrophic" to each other (and to themselves).
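The six parastrophes can be computed mechanically from the defining equation. A sketch using the quasigroup x ∗ y = (x − y) mod 5 (our own example; the helper names are ours, not standard notation):

```python
# Build the six parastrophes of a quasigroup operation on Z_5.
# star(x, y) = (x - y) mod 5 is a quasigroup operation: each row and
# column of its Cayley table is a permutation of Z_5.
n = 5
star = lambda x, y: (x - y) % n

def solve(pred):
    """Return the unique z in Z_n satisfying pred; uniqueness is the quasigroup law."""
    (z,) = [z for z in range(n) if pred(z)]
    return z

ldiv = lambda x, z: solve(lambda y: star(x, y) == z)   # x \ z
rdiv = lambda z, y: solve(lambda x: star(x, y) == z)   # z / y
opp  = lambda x, y: star(y, x)                          # opposite of *
ldiv_op = lambda x, y: ldiv(y, x)                       # opposite of \
rdiv_op = lambda x, y: rdiv(y, x)                       # opposite of /

# Defining identities of the two divisions:
assert all(star(x, ldiv(x, z)) == z for x in range(n) for z in range(n))
assert all(star(rdiv(z, y), y) == z for y in range(n) for z in range(n))
print("six conjugates at (3, 1):",
      star(3, 1), opp(3, 1), ldiv(3, 1), rdiv(3, 1), ldiv_op(3, 1), rdiv_op(3, 1))
```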
=== Isostrophe (paratopy) ===
If the set Q has two quasigroup operations, ∗ and ·, and one of them is isotopic to a conjugate of the other, the operations are said to be isostrophic to each other. There are also many other names for this relation of "isostrophe", e.g., paratopy.
== Generalizations ==
=== Polyadic or multiary quasigroups ===
An n-ary quasigroup is a set with an n-ary operation, (Q, f) with f : Qn → Q, such that the equation f(x1, ..., xn) = y has a unique solution for any one variable if all the other n variables are specified arbitrarily. Polyadic or multiary means n-ary for some nonnegative integer n.
A 0-ary, or nullary, quasigroup is just a constant element of Q. A 1-ary, or unary, quasigroup is a bijection of Q to itself. A binary, or 2-ary, quasigroup is an ordinary quasigroup.
An example of a multiary quasigroup is an iterated group operation, y = x1 · x2 · ··· · xn; it is not necessary to use parentheses to specify the order of operations because the group is associative. One can also form a multiary quasigroup by carrying out any sequence of the same or different group or quasigroup operations, if the order of operations is specified.
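The iterated-group-operation example can be checked by brute force: fixing any n − 1 arguments and the result leaves exactly one solution for the remaining variable. A sketch with the ternary quasigroup f(x1, x2, x3) = (x1 + x2 + x3) mod 7 (our own choice of group and arity):

```python
# A 3-ary quasigroup from an iterated group operation on Z_7.
import itertools

n, arity = 7, 3
f = lambda xs: sum(xs) % n

ok = True
for fixed in itertools.product(range(n), repeat=arity - 1):
    for y in range(n):
        for pos in range(arity):  # which argument is left unknown
            sols = [v for v in range(n)
                    if f(fixed[:pos] + (v,) + fixed[pos:]) == y]
            ok = ok and len(sols) == 1  # unique solvability in each slot
print(ok)
```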
There exist multiary quasigroups that cannot be represented in any of these ways. An n-ary quasigroup is irreducible if its operation cannot be factored into the composition of two operations in the following way:
f(x1, ..., xn) = g(x1, ..., xi−1, h(xi, ..., xj), xj+1, ..., xn),
where 1 ≤ i < j ≤ n and (i, j) ≠ (1, n). Finite irreducible n-ary quasigroups exist for all n > 2; see Akivis & Goldberg (2001) for details.
An n-ary quasigroup with an n-ary version of associativity is called an n-ary group.
== Number of small quasigroups and loops ==
The number of isomorphism classes of small quasigroups (sequence A057991 in the OEIS) and loops (sequence A057771 in the OEIS) is given here:
== See also ==
Division ring – a ring in which every non-zero element has a multiplicative inverse
Semigroup – an algebraic structure consisting of a set together with an associative binary operation
Monoid – a semigroup with an identity element
Planar ternary ring – has an additive and multiplicative loop structure
Problems in loop theory and quasigroup theory
Mathematics of Sudoku
== Notes ==
== References ==
=== Citations ===
=== Sources ===
== External links ==
quasigroups
"Quasi-group", Encyclopedia of Mathematics, EMS Press, 2001 [1994] | Wikipedia/Loop_(algebra) |
Magma is a computer algebra system designed to solve problems in algebra, number theory, geometry and combinatorics. It is named after the algebraic structure magma. It runs on Unix-like operating systems, as well as Windows.
== Introduction ==
Magma is produced and distributed by the Computational Algebra Group within the Sydney School of Mathematics and Statistics at the University of Sydney.
In late 2006, the book Discovering Mathematics with Magma was published by Springer as volume 19 of the Algorithms and Computations in Mathematics series.
The Magma system is used extensively within pure mathematics. The Computational Algebra Group maintain a list of publications that cite Magma, and as of 2010 there are about 2600 citations, mostly in pure mathematics, but also including papers from areas as diverse as economics and geophysics.
== History ==
The predecessor of the Magma system was named Cayley (1982–1993), after Arthur Cayley.
Magma was officially released in August 1993 (version 1.0). Version 2.0 of Magma was released in June 1996 and subsequent versions of 2.X have been released approximately once per year.
In 2013, the Computational Algebra Group finalized an agreement with the Simons Foundation, whereby the Simons Foundation will underwrite all costs of providing Magma to all U.S. nonprofit, non-governmental scientific research or educational institutions. All students, researchers and faculty associated with a participating institution will be able to access Magma for free, through that institution.
== Mathematical areas covered by the system ==
Group theory
Magma includes permutation, matrix, finitely presented, soluble, abelian (finite or infinite), polycyclic, braid and straight-line program groups. Several databases of groups are also included.
Number theory
Magma contains asymptotically fast algorithms for all fundamental integer and polynomial operations, such as the Schönhage–Strassen algorithm for fast multiplication of integers and polynomials. Integer factorization algorithms include the Elliptic Curve Method, the Quadratic sieve and the Number field sieve.
Algebraic number theory
Magma includes the KANT computer algebra system for comprehensive computations in algebraic number fields. A special type also allows one to compute in the algebraic closure of a field.
Module theory and linear algebra
Magma contains asymptotically fast algorithms for all fundamental dense matrix operations, such as Strassen multiplication.
Sparse matrices
Magma contains the structured Gaussian elimination and Lanczos algorithms for reducing sparse systems which arise in index calculus methods, while Magma uses Markowitz pivoting for several other sparse linear algebra problems.
Lattices and the LLL algorithm
Magma has a provable implementation of fpLLL, which is an LLL algorithm for integer matrices which uses floating point numbers for the Gram–Schmidt coefficients, but such that the result is rigorously proven to be LLL-reduced.
Commutative algebra and Gröbner bases
Magma has an efficient implementation of the Faugère F4 algorithm for computing Gröbner bases.
Representation theory
Magma has extensive tools for computing in representation theory, including the computation of character tables of finite groups and the Meataxe algorithm.
Invariant theory
Magma has a type for invariant rings of finite groups, for which one can compute primary, secondary and fundamental invariants, and compute with the module structure.
Lie theory
Algebraic geometry
Arithmetic geometry
Finite incidence structures
Cryptography
Coding theory
Optimization
== See also ==
Comparison of computer algebra systems
== References ==
== External links ==
Official website
Magma Free Online Calculator
Magma's High Performance for computing Gröbner Bases (2004)
Magma's High Performance for computing Hermite Normal Forms of integer matrices
Magma V2.12 is apparently "Overall Best in the World at Polynomial GCD" :-)
Magma example code
List of publications citing Magma
In mathematics, a partial function f from a set X to a set Y is a function from a subset S of X (possibly the whole X itself) to Y. The subset S, that is, the domain of f viewed as a function, is called the domain of definition or natural domain of f. If S equals X, that is, if f is defined on every element in X, then f is said to be a total function.
In other words, a partial function is a binary relation over two sets that associates to every element of the first set at most one element of the second set; it is thus a univalent relation. This generalizes the concept of a (total) function by not requiring every element of the first set to be associated to an element of the second set.
A partial function is often used when its exact domain of definition is not known, or is difficult to specify. However, even when the exact domain of definition is known, partial functions are often used for simplicity or brevity. This is the case in calculus, where, for example, the quotient of two functions is a partial function whose domain of definition cannot contain the zeros of the denominator; in this context, a partial function is generally simply called a function.
In computability theory, a general recursive function is a partial function from the integers to the integers; no algorithm can exist for deciding whether an arbitrary such function is in fact total.
When arrow notation is used for functions, a partial function f from X to Y is sometimes written as f : X ⇀ Y, f : X ↛ Y, or f : X ↪ Y. However, there is no general convention, and the latter notation is more commonly used for inclusion maps or embeddings.
Specifically, for a partial function f : X ⇀ Y and any x ∈ X, one has either f(x) = y ∈ Y (a single element in Y), or f(x) is undefined.
For example, if f is the square root function restricted to the integers, f : Z → N, defined by f(n) = m if and only if m² = n (with m ∈ N, n ∈ Z), then f(n) is only defined if n is a perfect square (that is, 0, 1, 4, 9, 16, …). So f(25) = 5 but f(26) is undefined.
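The example above can be modeled directly in code. One common programming convention (an assumption here, not the only choice) represents "undefined" by returning None:

```python
# The integer square root f : Z ⇀ N from the text, as a Python function
# that returns None exactly where f is undefined.
def f(n: int):
    if n < 0:
        return None
    m = int(n ** 0.5)
    # guard against floating-point rounding near perfect squares
    for cand in (m - 1, m, m + 1):
        if cand >= 0 and cand * cand == n:
            return cand
    return None

assert f(25) == 5
assert f(26) is None
assert f(-4) is None
```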
== Basic concepts ==
A partial function arises from the consideration of maps between two sets X and Y that may not be defined on the entire set X. A common example is the square root operation on the real numbers R: because negative real numbers do not have real square roots, the operation can be viewed as a partial function from R to R. The domain of definition of a partial function is the subset S of X on which the partial function is defined; in this case, the partial function may also be viewed as a function from S to Y. In the example of the square root operation, the set S consists of the nonnegative real numbers [0, +∞).
The notion of partial function is particularly convenient when the exact domain of definition is unknown or even unknowable. For a computer-science example of the latter, see Halting problem.
In case the domain of definition S is equal to the whole set X, the partial function is said to be total. Thus, total partial functions from X to Y coincide with functions from X to Y.
Many properties of functions can be extended, in an appropriate sense, to partial functions. A partial function is said to be injective, surjective, or bijective when the function given by the restriction of the partial function to its domain of definition is injective, surjective, or bijective, respectively.
Because a function is trivially surjective when restricted to its image, the term partial bijection denotes a partial function which is injective.
An injective partial function may be inverted to an injective partial function, and a partial function which is both injective and surjective has an injective function as inverse. Furthermore, a function which is injective may be inverted to a bijective partial function.
The notion of transformation can be generalized to partial functions as well. A partial transformation is a function f : A ⇀ B, where both A and B are subsets of some set X.
== Function spaces ==
For convenience, denote the set of all partial functions
f
:
X
⇀
Y
{\displaystyle f:X\rightharpoonup Y}
from a set
X
{\displaystyle X}
to a set
Y
{\displaystyle Y}
by
[
X
⇀
Y
]
.
{\displaystyle [X\rightharpoonup Y].}
This set is the union of the sets of functions defined on subsets of
X
{\displaystyle X}
with same codomain
Y
{\displaystyle Y}
:
[
X
⇀
Y
]
=
⋃
D
⊆
X
[
D
→
Y
]
,
{\displaystyle [X\rightharpoonup Y]=\bigcup _{D\subseteq X}[D\to Y],}
the latter also written as
⋃
D
⊆
X
Y
D
.
{\textstyle \bigcup _{D\subseteq {X}}Y^{D}.}
In finite case, its cardinality is
|
[
X
⇀
Y
]
|
=
(
|
Y
|
+
1
)
|
X
|
,
{\displaystyle |[X\rightharpoonup Y]|=(|Y|+1)^{|X|},}
because any partial function can be extended to a function by any fixed value
c
{\displaystyle c}
not contained in
Y
,
{\displaystyle Y,}
so that the codomain is
Y
∪
{
c
}
,
{\displaystyle Y\cup \{c\},}
an operation which is injective (unique and invertible by restriction).
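The counting argument can be verified by brute force for small sets; the sentinel value below plays the role of the fixed value c ∉ Y:

```python
# Count partial functions X ⇀ Y by enumerating total functions
# X -> Y ∪ {c}, and compare with the closed form (|Y| + 1) ** |X|.
import itertools

X = ["a", "b", "c"]
Y = [0, 1]
SENTINEL = None  # plays the role of c ∉ Y ("undefined here")

# Each tuple assigns to every element of X either a value in Y or the sentinel,
# i.e. it encodes exactly one partial function from X to Y.
total_maps = list(itertools.product(Y + [SENTINEL], repeat=len(X)))
assert len(total_maps) == (len(Y) + 1) ** len(X)
print(len(total_maps))  # 27
```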
== Discussion and examples ==
The first diagram at the top of the article represents a partial function that is not a function, since the element 1 in the left-hand set is not associated with anything in the right-hand set, whereas the second diagram represents a function, since every element of the left-hand set is associated with exactly one element of the right-hand set.
=== Natural logarithm ===
Consider the natural logarithm function mapping the real numbers to themselves. The logarithm of a non-positive real is not a real number, so the natural logarithm function doesn't associate any real number in the codomain with any non-positive real number in the domain. Therefore, the natural logarithm function is not a function when viewed as a function from the reals to themselves, but it is a partial function. If the domain is restricted to only include the positive reals (that is, if the natural logarithm function is viewed as a function from the positive reals to the reals), then the natural logarithm is a function.
=== Subtraction of natural numbers ===
Subtraction of natural numbers (where N is the set of non-negative integers) is a partial function:

f : N × N ⇀ N, f(x, y) = x − y.

It is defined only when x ≥ y.
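A minimal sketch, again using None for "undefined":

```python
# Partial subtraction on the natural numbers: defined only when x >= y.
def nat_sub(x: int, y: int):
    if x < 0 or y < 0:
        raise ValueError("arguments must be natural numbers")
    return x - y if x >= y else None  # None marks "undefined"

assert nat_sub(7, 3) == 4
assert nat_sub(3, 7) is None
```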
=== Bottom element ===
In denotational semantics a partial function is considered as returning the bottom element when it is undefined.
In computer science a partial function corresponds to a subroutine that raises an exception or loops forever. The IEEE floating point standard defines a not-a-number value which is returned when a floating point operation is undefined and exceptions are suppressed, e.g. when the square root of a negative number is requested.
In a programming language where function parameters are statically typed, a function may be defined as a partial function because the language's type system cannot express the exact domain of the function, so the programmer instead gives it the smallest domain which is expressible as a type and contains the domain of definition of the function.
=== In category theory ===
In category theory, when considering the operation of morphism composition in concrete categories, the composition operation ∘ : hom(C) × hom(C) → hom(C) is a total function if and only if ob(C) has one element. The reason for this is that two morphisms f : X → Y and g : U → V can only be composed as g ∘ f if Y = U, that is, the codomain of f must equal the domain of g.
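The partiality of composition can be sketched with morphisms represented as (domain, codomain, function) triples; the representation is our own illustration, not a standard API:

```python
# Morphism composition as a partial binary operation: g ∘ f is defined
# only when cod(f) == dom(g).
def compose(g, f):
    (df, cf, ff), (dg, cg, fg) = f, g
    if cf != dg:
        return None  # composition undefined
    return (df, cg, lambda x: fg(ff(x)))

f = ("X", "Y", lambda x: x + 1)
g = ("Y", "Z", lambda x: 2 * x)
h = ("U", "V", lambda x: x)

gf = compose(g, f)
assert gf is not None and gf[2](3) == 8   # (g ∘ f)(3) = 2 * (3 + 1)
assert compose(f, g) is None              # cod(g) = Z ≠ X = dom(f)
assert compose(h, f) is None              # cod(f) = Y ≠ U = dom(h)
```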
The category of sets and partial functions is equivalent to but not isomorphic with the category of pointed sets and point-preserving maps. One textbook notes that "This formal completion of sets and partial maps by adding “improper,” “infinite” elements was reinvented many times, in particular, in topology (one-point compactification) and in theoretical computer science."
The category of sets and partial bijections is equivalent to its dual. It is the prototypical inverse category.
=== In abstract algebra ===
Partial algebra generalizes the notion of universal algebra to partial operations. An example would be a field, in which the multiplicative inversion is the only proper partial operation (because division by zero is not defined).
The set of all partial functions (partial transformations) on a given base set X forms a regular semigroup called the semigroup of all partial transformations (or the partial transformation semigroup on X), typically denoted by 𝒫𝒯_X. The set of all partial bijections on X forms the symmetric inverse semigroup.
=== Charts and atlases for manifolds and fiber bundles ===
Charts in the atlases which specify the structure of manifolds and fiber bundles are partial functions. In the case of manifolds, the domain is the point set of the manifold. In the case of fiber bundles, the domain is the space of the fiber bundle. In these applications, the most important construction is the transition map, which is the composite of one chart with the inverse of another. The initial classification of manifolds and fiber bundles is largely expressed in terms of constraints on these transition maps.
The reason for the use of partial functions instead of functions is to permit general global topologies to be represented by stitching together local patches to describe the global structure. The "patches" are the domains where the charts are defined.
== See also ==
Analytic continuation – Extension of the domain of an analytic function (mathematics)
Multivalued function – Generalized mathematical function
Densely defined operator – Function that is defined almost everywhere (mathematics)
== References ==
Martin Davis (1958), Computability and Unsolvability, McGraw–Hill Book Company, Inc, New York. Republished by Dover in 1982. ISBN 0-486-61471-9.
Stephen Kleene (1952), Introduction to Meta-Mathematics, North-Holland Publishing Company, Amsterdam, Netherlands, 10th printing with corrections added on 7th printing (1974). ISBN 0-7204-2103-9.
Harold S. Stone (1972), Introduction to Computer Organization and Data Structures, McGraw–Hill Book Company, New York.
=== Notes ===
In mathematics, the concept of groupoid algebra generalizes the notion of group algebra.
== Definition ==
Given a groupoid (G, ⋅) (in the sense of a category with all morphisms invertible) and a field K, it is possible to define the groupoid algebra KG as the algebra over K formed by the vector space having the elements of (the morphisms of) G as generators and having the multiplication of these elements defined by g ∗ h = g ⋅ h whenever this product is defined, and g ∗ h = 0 otherwise. The product is then extended by linearity.
== Examples ==
Some examples of groupoid algebras are the following:
Group rings
Matrix algebras
Algebras of functions
== Properties ==
When a groupoid has a finite number of objects and a finite number of morphisms, the groupoid algebra is a direct sum of tensor products of group algebras and matrix algebras.
== See also ==
Hopf algebra
Partial group algebra
== Notes ==
== References ==
Khalkhali, Masoud (2009). Basic Noncommutative Geometry. EMS Series of Lectures in Mathematics. European Mathematical Society. ISBN 978-3-03719-061-6.
da Silva, Ana Cannas; Weinstein, Alan (1999). Geometric models for noncommutative algebras. Berkeley mathematics lecture notes. Vol. 10 (2 ed.). AMS Bookstore. ISBN 978-0-8218-0952-5.
Dokuchaev, M.; Exel, R.; Piccione, P. (2000). "Partial Representations and Partial Group Algebras". Journal of Algebra. 226. Elsevier: 505–532. arXiv:math/9903129. doi:10.1006/jabr.1999.8204. ISSN 0021-8693. S2CID 14622598.
Khalkhali, Masoud; Marcolli, Matilde (2008). An invitation to noncommutative geometry. World Scientific. ISBN 978-981-270-616-4.
In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed, or it may be fixed (after creation). A string is often implemented as an array data structure of bytes (or words) that stores a sequence of elements, typically characters, using some character encoding. More generally, a string may also denote a sequence (or list) of data other than just characters.
Depending on the programming language and precise data type used, a variable declared to be a string may either cause storage in memory to be statically allocated for a predetermined maximum length or employ dynamic allocation to allow it to hold a variable number of elements.
When a string appears literally in source code, it is known as a string literal or an anonymous string.
In formal languages, which are used in mathematical logic and theoretical computer science, a string is a finite sequence of symbols that are chosen from a set called an alphabet.
== Purpose ==
A primary purpose of strings is to store human-readable text, like words and sentences. Strings are used to communicate information from a computer program to the user of the program. A program may also accept string input from its user. Further, strings may store data expressed as characters yet not intended for human reading.
Example strings and their purposes:
A message like "file upload complete" is a string that software shows to end users. In the program's source code, this message would likely appear as a string literal.
User-entered text, like "I got a new job today" as a status update on a social media service. Instead of a string literal, the software would likely store this string in a database.
Alphabetical data, like "AGATGCCGT" representing nucleic acid sequences of DNA.
Computer settings or parameters, like "?action=edit" as a URL query string. Often these are intended to be somewhat human-readable, though their primary purpose is to communicate to computers.
The term string may also designate a sequence of data or computer records other than characters — like a "string of bits" — but when used without qualification it refers to strings of characters.
== History ==
Use of the word "string" to mean any items arranged in a line, series or succession dates back centuries. In 19th-century typesetting, compositors used the term "string" to denote a length of type printed on paper; the string would be measured to determine the compositor's pay.
Use of the word "string" to mean "a sequence of symbols or linguistic elements in a definite order" emerged from mathematics, symbolic logic, and linguistic theory to speak about the formal behavior of symbolic systems, setting aside the symbols' meaning.
For example, logician C. I. Lewis wrote in 1918:
A mathematical system is any set of strings of recognisable marks in which some of the strings are taken initially and the remainder derived from these by operations performed according to rules which are independent of any meaning assigned to the marks. That a system should consist of 'marks' instead of sounds or odours is immaterial.
According to Jean E. Sammet, "the first realistic string handling and pattern matching language" for computers was COMIT in the 1950s, followed by the SNOBOL language of the early 1960s.
== String datatypes ==
A string datatype is a datatype modeled on the idea of a formal string. Strings are such an important and useful datatype that they are implemented in nearly every programming language. In some languages they are available as primitive types and in others as composite types. The syntax of most high-level programming languages allows for a string, usually quoted in some way, to represent an instance of a string datatype; such a meta-string is called a literal or string literal.
=== String length ===
Although formal strings can have an arbitrary finite length, the length of strings in real languages is often constrained to an artificial maximum. In general, there are two types of string datatypes: fixed-length strings, which have a fixed maximum length to be determined at compile time and which use the same amount of memory whether this maximum is needed or not, and variable-length strings, whose length is not arbitrarily fixed and which can use varying amounts of memory depending on the actual requirements at run time (see Memory management). Most strings in modern programming languages are variable-length strings. Of course, even variable-length strings are limited in length by the amount of available memory. The string length can be stored as a separate integer (which may put another artificial limit on the length) or implicitly through a termination character, usually a character value with all bits zero such as in C programming language. See also "Null-terminated" below.
=== Character encoding ===
String datatypes have historically allocated one byte per character, and, although the exact character set varied by region, character encodings were similar enough that programmers could often get away with ignoring this, since characters a program treated specially (such as period and space and comma) were in the same place in all the encodings a program would encounter. These character sets were typically based on ASCII or EBCDIC. If text in one encoding was displayed on a system using a different encoding, text was often mangled, though often somewhat readable and some computer users learned to read the mangled text.
Logographic languages such as Chinese, Japanese, and Korean (known collectively as CJK) need far more than 256 characters (the limit of a one 8-bit byte per-character encoding) for reasonable representation. The normal solutions involved keeping single-byte representations for ASCII and using two-byte representations for CJK ideographs. Use of these with existing code led to problems with matching and cutting of strings, the severity of which depended on how the character encoding was designed. Some encodings such as the EUC family guarantee that a byte value in the ASCII range will represent only that ASCII character, making the encoding safe for systems that use those characters as field separators. Other encodings such as ISO-2022 and Shift-JIS do not make such guarantees, making matching on byte codes unsafe. These encodings also were not "self-synchronizing", so that locating character boundaries required backing up to the start of a string, and pasting two strings together could result in corruption of the second string.
Unicode has simplified the picture somewhat. Most programming languages now have a datatype for Unicode strings. Unicode's preferred byte stream format UTF-8 is designed not to have the problems described above for older multibyte encodings. UTF-8, UTF-16 and UTF-32 require the programmer to know that the fixed-size code units are different from "characters"; the main difficulty currently is incorrectly designed APIs that attempt to hide this difference (UTF-32 does make code points fixed-sized, but these are not "characters" due to composing codes).
=== Implementations ===
Some languages, such as C++, Perl and Ruby, normally allow the contents of a string to be changed after it has been created; these are termed mutable strings. In other languages, such as Java, JavaScript, Lua, Python, and Go, the value is fixed and a new string must be created if any alteration is to be made; these are termed immutable strings. Some of these languages with immutable strings also provide another type that is mutable, such as Java and .NET's StringBuilder, the thread-safe Java StringBuffer, and the Cocoa NSMutableString. There are both advantages and disadvantages to immutability: although immutable strings may require inefficiently creating many copies, they are simpler and completely thread-safe.
Strings are typically implemented as arrays of bytes, characters, or code units, in order to allow fast access to individual units or substrings—including characters when they have a fixed length. A few languages such as Haskell implement them as linked lists instead.
Many high-level languages provide strings as a primitive data type, such as JavaScript and PHP, while most others provide them as a composite data type, some with special language support for writing literals, for example, Java and C#.
Some languages, such as C, Prolog and Erlang, avoid implementing a dedicated string datatype at all, instead adopting the convention of representing strings as lists of character codes. Even in programming languages having a dedicated string type, a string can usually be iterated over as a sequence of character codes, like lists of integers or other values.
=== Representations ===
Representations of strings depend heavily on the choice of character repertoire and the method of character encoding. Older string implementations were designed to work with repertoire and encoding defined by ASCII, or more recent extensions like the ISO 8859 series. Modern implementations often use the extensive repertoire defined by Unicode along with a variety of complex encodings such as UTF-8 and UTF-16.
The term byte string usually indicates a general-purpose string of bytes, rather than strings of only (readable) characters, strings of bits, or such. Byte strings often imply that bytes can take any value and any data can be stored as-is, meaning that there should be no value interpreted as a termination value.
Most string implementations are very similar to variable-length arrays with the entries storing the character codes of corresponding characters. The principal difference is that, with certain encodings, a single logical character may take up more than one entry in the array. This happens for example with UTF-8, where single codes (UCS code points) can take anywhere from one to four bytes, and single characters can take an arbitrary number of codes. In these cases, the logical length of the string (number of characters) differs from the physical length of the array (number of bytes in use). UTF-32 avoids the first part of the problem.
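The gap between logical length (characters, or more precisely code points) and physical length (bytes) is easy to observe in any Unicode-aware language; a Python sketch:

```python
# Logical length (code points) vs physical length (bytes) under UTF-8.
s = "héllo"  # 'é' (U+00E9) takes two bytes in UTF-8

logical = len(s)                    # number of code points
physical = len(s.encode("utf-8"))   # number of bytes in the encoded form

assert logical == 5
assert physical == 6
print(logical, physical)
```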
==== Null-terminated ====
The length of a string can be stored implicitly by using a special terminating character; often this is the null character (NUL), which has all bits zero, a convention used and perpetuated by the popular C programming language. Hence, this representation is commonly referred to as a C string. This representation of an n-character string takes n + 1 space (1 for the terminator), and is thus an implicit data structure.
In terminated strings, the terminating code is not an allowable character in any string. Strings with length field do not have this limitation and can also store arbitrary binary data.
An example of a null-terminated string stored in a 10-byte buffer, along with its ASCII (or more modern UTF-8) representation as 8-bit hexadecimal numbers is:
The length of the string in the above example, "FRANK", is 5 characters, but it occupies 6 bytes. Characters after the terminator do not form part of the representation; they may be either part of other data or just garbage. (Strings of this form are sometimes called ASCIZ strings, after the original assembly language directive used to declare them.)
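The layout described above (the illustration itself is missing from this copy) can be reconstructed from the text: "FRANK" occupies bytes 0 to 4, byte 5 is the NUL terminator, and the remaining buffer bytes are ignored. A sketch, with arbitrary garbage bytes after the terminator:

```python
# "FRANK" as a null-terminated string in a 10-byte buffer.
buf = bytearray(10)
buf[0:6] = b"FRANK\x00"          # 5 characters + NUL terminator
buf[6:10] = b"\xde\xad\xbe\xef"  # bytes after the terminator are not part of the string

def c_strlen(buffer: bytes) -> int:
    """Scan for the terminating NUL, the way C's strlen does."""
    n = 0
    while buffer[n] != 0:
        n += 1
    return n

assert c_strlen(buf) == 5            # logical length: 5 characters
assert bytes(buf[:c_strlen(buf)]) == b"FRANK"
```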
==== Byte- and bit-terminated ====
Using a special byte other than null for terminating strings has historically appeared in both hardware and software, though sometimes with a value that was also a printing character. $ was used by many assembler systems, : used by CDC systems (this character had a value of zero), and the ZX80 used " since this was the string delimiter in its BASIC language.
Somewhat similar, "data processing" machines like the IBM 1401 used a special word mark bit to delimit strings at the left, where the operation would start at the right. This bit had to be clear in all other parts of the string. This meant that, while the IBM 1401 had a seven-bit word, almost no-one ever thought to use this as a feature, and override the assignment of the seventh bit to (for example) handle ASCII codes.
Early microcomputer software relied upon the fact that ASCII codes do not use the high-order bit, and set it to indicate the end of a string. It must be reset to 0 prior to output.
==== Length-prefixed ====
The length of a string can also be stored explicitly, for example by prefixing the string with the length as a byte value. This convention is used in many Pascal dialects; as a consequence, some people call such a string a Pascal string or P-string. Storing the string length as byte limits the maximum string length to 255. To avoid such limitations, improved implementations of P-strings use 16-, 32-, or 64-bit words to store the string length. When the length field covers the address space, strings are limited only by the available memory.
If the length is bounded, then it can be encoded in constant space, typically a machine word, thus leading to an implicit data structure, taking n + k space, where k is the number of characters in a word (8 for 8-bit ASCII on a 64-bit machine, 1 for 32-bit UTF-32/UCS-4 on a 32-bit machine, etc.).
If the length is not bounded, encoding a length n takes log(n) space (see fixed-length code), so length-prefixed strings are a succinct data structure, encoding a string of length n in log(n) + n space.
In the latter case, the length-prefix field itself does not have fixed length, therefore the actual string data needs to be moved when the string grows such that the length field needs to be increased.
Here is a Pascal string stored in a 10-byte buffer, along with its ASCII / UTF-8 representation:
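A minimal Python sketch of what such a buffer looks like, assuming a one-byte length prefix and arbitrary junk in the unused trailing bytes (the junk values are illustrative):

```python
# A length-prefixed (Pascal-style) string "FRANK" in a 10-byte buffer:
# the first byte stores the length, the characters follow, and the
# remaining bytes are unused.
buffer = bytes([0x05, 0x46, 0x52, 0x41, 0x4E, 0x4B, 0x6B, 0x65, 0x66, 0x77])

length = buffer[0]                       # O(1) length lookup -- no scan
text = buffer[1:1 + length].decode("ascii")

assert length == 5
assert text == "FRANK"
# A one-byte prefix caps the length at 255; 16-, 32-, or 64-bit prefixes
# lift that limit at the cost of more overhead per string.
```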
==== Strings as records ====
Many languages, including object-oriented ones, implement strings as records with an internal structure like:
However, since the implementation is usually hidden, the string must be accessed and modified through member functions. text is a pointer to a dynamically allocated memory area, which might be expanded as needed. See also string (C++).
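A sketch of such a record in Python, with a `length` field and a growable buffer standing in for the dynamically allocated `text` area; the field names and methods are illustrative, not taken from any particular language's implementation:

```python
# A record-style string: a length field plus a separately allocated
# character buffer (a bytearray stands in for heap-allocated memory).
class RecordString:
    def __init__(self, initial=b""):
        self.length = len(initial)
        self.text = bytearray(initial)   # may be expanded as needed

    def append(self, more: bytes):
        self.text.extend(more)           # the buffer grows on demand
        self.length += len(more)

    def value(self) -> bytes:
        return bytes(self.text[:self.length])

# The internal structure is hidden; access goes through member functions.
s = RecordString(b"FRANK")
s.append(b"!")
assert s.length == 6
assert s.value() == b"FRANK!"
```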
==== Other representations ====
Both character termination and length codes limit strings: for example, C character arrays that contain null (NUL) characters cannot be handled directly by C string library functions, and strings using a length code are limited to the maximum value of the length code.
Both of these limitations can be overcome by clever programming.
It is possible to create data structures, and functions that manipulate them, that do not have the problems associated with character termination and can in principle overcome length-code bounds. It is also possible to optimize the string representation using techniques from run-length encoding (replacing repeated characters by the character value and a length) and Hamming encoding.
While these representations are common, others are possible. Using ropes makes certain string operations, such as insertions, deletions, and concatenations more efficient.
The core data structure in a text editor is the one that manages the string (sequence of characters) that represents the current state of the file being edited.
While that state could be stored in a single long consecutive array of characters, a typical text editor instead uses an alternative representation as its sequence data structure—a gap buffer, a linked list of lines, a piece table, or a rope—which makes certain string operations, such as insertions, deletions, and undoing previous edits, more efficient.
=== Security concerns ===
The differing memory layout and storage requirements of strings can affect the security of the program accessing the string data. String representations requiring a terminating character are commonly susceptible to buffer overflow problems if the terminating character is not present, caused by a coding error or an attacker deliberately altering the data. String representations adopting a separate length field are also susceptible if the length can be manipulated. In such cases, program code accessing the string data requires bounds checking to ensure that it does not inadvertently access or change data outside of the string memory limits.
String data is frequently obtained from user input to a program. As such, it is the responsibility of the program to validate the string to ensure that it represents the expected format. Performing limited or no validation of user input can cause a program to be vulnerable to code injection attacks.
== Literal strings ==
Sometimes, strings need to be embedded inside a text file that is both human-readable and intended for consumption by a machine. This is needed in, for example, source code of programming languages, or in configuration files. In this case, the NUL character does not work well as a terminator since it is normally invisible (non-printable) and is difficult to input via a keyboard. Storing the string length would also be inconvenient as manual computation and tracking of the length is tedious and error-prone.
Two common representations are:
Surrounded by quotation marks (ASCII 0x22 double quote "str" or ASCII 0x27 single quote 'str'), used by most programming languages. To be able to include special characters such as the quotation mark itself, newline characters, or non-printable characters, escape sequences are often available, usually prefixed with the backslash character (ASCII 0x5C).
Terminated by a newline sequence, for example in Windows INI files.
== Non-text strings ==
While character strings are very common uses of strings, a string in computer science may refer generically to any sequence of homogeneously typed data. A bit string or byte string, for example, may be used to represent non-textual binary data retrieved from a communications medium. This data may or may not be represented by a string-specific datatype, depending on the needs of the application, the desire of the programmer, and the capabilities of the programming language being used. If the programming language's string implementation is not 8-bit clean, data corruption may ensue.
C programmers draw a sharp distinction between a "string", a.k.a. a "string of characters", which by definition is always null-terminated, and an "array of characters", which may be stored in the same kind of array but is often not null-terminated.
Using C string handling functions on such an array of characters often seems to work, but later leads to security problems.
== String processing algorithms ==
There are many algorithms for processing strings, each with various trade-offs. Competing algorithms can be analyzed with respect to run time, storage requirements, and so forth. The name stringology was coined in 1984 by computer scientist Zvi Galil for the theory of algorithms and data structures used for string processing.
Some categories of algorithms include:
String searching algorithms for finding a given substring or pattern
String manipulation algorithms
Sorting algorithms
Regular expression algorithms
Parsing a string
Sequence mining
Advanced string algorithms often employ complex mechanisms and data structures, among them suffix trees and finite-state machines.
== Character string-oriented languages and utilities ==
Character strings are such a useful datatype that several languages have been designed in order to make string processing applications easy to write. Examples include the following languages:
AWK
Icon
MUMPS
Perl
Rexx
Ruby
sed
SNOBOL
Tcl
TTM
Many Unix utilities perform simple string manipulations and can be used to easily program some powerful string processing algorithms. Files and finite streams may be viewed as strings.
Some APIs like Multimedia Control Interface, embedded SQL or printf use strings to hold commands that will be interpreted.
Many scripting programming languages, including Perl, Python, Ruby, and Tcl employ regular expressions to facilitate text operations. Perl is particularly noted for its regular expression use, and many other languages and applications implement Perl compatible regular expressions.
Some languages such as Perl and Ruby support string interpolation, which permits arbitrary expressions to be evaluated and included in string literals.
== Character string functions ==
String functions are used to create strings or change the contents of a mutable string. They also are used to query information about a string. The set of functions and their names varies depending on the computer programming language.
The most basic example of a string function is the string length function – the function that returns the length of a string (not counting any terminator characters or any of the string's internal structural information) and does not modify the string. This function is often named length or len. For example, length("hello world") would return 11. Another common function is concatenation, where a new string is created by appending two strings, often this is the + addition operator.
Some microprocessors' instruction set architectures contain direct support for string operations, such as block copy (e.g., REP MOVSB in Intel x86).
== Formal theory ==
Let Σ be a finite set of distinct, unambiguous symbols (alternatively called characters), called the alphabet. A string (or word or expression) over Σ is any finite sequence of symbols from Σ. For example, if Σ = {0, 1}, then 01011 is a string over Σ.
The length of a string s is the number of symbols in s (the length of the sequence) and can be any non-negative integer; it is often denoted as |s|. The empty string is the unique string over Σ of length 0, and is denoted ε or λ.
The set of all strings over Σ of length n is denoted Σn. For example, if Σ = {0, 1}, then Σ2 = {00, 01, 10, 11}. We have Σ0 = {ε} for every alphabet Σ.
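The sets Σn can be enumerated directly; a small Python sketch over the alphabet {0, 1}:

```python
from itertools import product

# Enumerate Sigma^n for the alphabet {0, 1}: all strings of length n.
sigma = ["0", "1"]

def sigma_n(n):
    return {"".join(p) for p in product(sigma, repeat=n)}

assert sigma_n(2) == {"00", "01", "10", "11"}
assert sigma_n(0) == {""}              # Sigma^0 contains only the empty string
assert len(sigma_n(3)) == len(sigma) ** 3   # |Sigma^n| = |Sigma|^n
```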
The set of all strings over Σ of any length is the Kleene closure of Σ and is denoted Σ*. In terms of Σn,
Σ* = ⋃_{n ∈ ℕ ∪ {0}} Σn
For example, if Σ = {0, 1}, then Σ* = {ε, 0, 1, 00, 01, 10, 11, 000, 001, 010, 011, ...}. Although the set Σ* itself is countably infinite, each element of Σ* is a string of finite length.
A set of strings over Σ (i.e. any subset of Σ*) is called a formal language over Σ. For example, if Σ = {0, 1}, the set of strings with an even number of zeros, {ε, 1, 00, 11, 001, 010, 100, 111, 0000, 0011, 0101, 0110, 1001, 1010, 1100, 1111, ...}, is a formal language over Σ.
=== Concatenation and substrings ===
Concatenation is an important binary operation on Σ*. For any two strings s and t in Σ*, their concatenation is defined as the sequence of symbols in s followed by the sequence of characters in t, and is denoted st. For example, if Σ = {a, b, ..., z}, s = bear, and t = hug, then st = bearhug and ts = hugbear.
String concatenation is an associative, but non-commutative operation. The empty string ε serves as the identity element; for any string s, εs = sε = s. Therefore, the set Σ* and the concatenation operation form a monoid, the free monoid generated by Σ. In addition, the length function defines a monoid homomorphism from Σ* to the non-negative integers (that is, a function
L : Σ* → ℕ ∪ {0}, such that L(st) = L(s) + L(t) for all s, t ∈ Σ*).
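These monoid properties can be checked concretely in Python, where `+` is string concatenation and `len` plays the role of L:

```python
# Concatenation makes Sigma* a monoid with the empty string as identity,
# and len is a monoid homomorphism to the non-negative integers.
s, t = "bear", "hug"

assert s + t == "bearhug" and t + s == "hugbear"   # non-commutative
assert ("a" + "b") + "c" == "a" + ("b" + "c")      # associative
assert "" + s == s + "" == s                       # identity element
assert len(s + t) == len(s) + len(t)               # L(st) = L(s) + L(t)
```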
A string s is said to be a substring or factor of t if there exist (possibly empty) strings u and v such that t = usv. The relation "is a substring of" defines a partial order on Σ*, the least element of which is the empty string.
=== Prefixes and suffixes ===
A string s is said to be a prefix of t if there exists a string u such that t = su. If u is nonempty, s is said to be a proper prefix of t. Symmetrically, a string s is said to be a suffix of t if there exists a string u such that t = us. If u is nonempty, s is said to be a proper suffix of t. Suffixes and prefixes are substrings of t. Both the relations "is a prefix of" and "is a suffix of" are prefix orders.
=== Reversal ===
The reverse of a string is a string with the same symbols but in reverse order. For example, if s = abc (where a, b, and c are symbols of the alphabet), then the reverse of s is cba. A string that is the reverse of itself (e.g., s = madam) is called a palindrome, which also includes the empty string and all strings of length 1.
=== Rotations ===
A string s = uv is said to be a rotation of t if t = vu. For example, if Σ = {0, 1} the string 0011001 is a rotation of 0100110, where u = 00110 and v = 01. As another example, the string abc has three different rotations, viz. abc itself (with u=abc, v=ε), bca (with u=bc, v=a), and cab (with u=c, v=ab).
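The rotations of a string, and the classic membership test (t is a rotation of s exactly when t has the same length and occurs inside s concatenated with itself), can be sketched as:

```python
# All rotations of s: split s = uv at every position and emit vu.
def rotations(s):
    return {s[i:] + s[:i] for i in range(max(len(s), 1))}

# t is a rotation of s iff len(s) == len(t) and t occurs inside s + s.
def is_rotation(s, t):
    return len(s) == len(t) and t in s + s

assert rotations("abc") == {"abc", "bca", "cab"}
assert is_rotation("0100110", "0011001")   # u = "00110", v = "01"
assert not is_rotation("abc", "acb")
```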
=== Lexicographical ordering ===
It is often useful to define an ordering on a set of strings. If the alphabet Σ has a total order (cf. alphabetical order) one can define a total order on Σ* called lexicographical order. The lexicographical order is total if the alphabetical order is, but is not well-founded for any nontrivial alphabet, even if the alphabetical order is. For example, if Σ = {0, 1} and 0 < 1, then the lexicographical order on Σ* includes the relationships ε < 0 < 00 < 000 < ... < 0001 < ... < 001 < ... < 01 < 010 < ... < 011 < 0110 < ... < 01111 < ... < 1 < 10 < 100 < ... < 101 < ... < 111 < ... < 1111 < ... < 11111 ... With respect to this ordering, e.g. the infinite set { 1, 01, 001, 0001, 00001, 000001, ... } has no minimal element.
See Shortlex for an alternative string ordering that preserves well-foundedness.
For the example alphabet, the shortlex order is ε < 0 < 1 < 00 < 01 < 10 < 11 < 000 < 001 < 010 < 011 < 100 < 101 < 110 < 111 < 0000 < 0001 < 0010 < 0011 < ... < 1111 < 00000 < 00001 ...
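The difference between the two orders is easy to demonstrate: under plain lexicographic order the set {1, 01, 001, 0001, ...} has ever-smaller elements, while shortlex compares lengths first. A Python sketch:

```python
# Lexicographic order vs. shortlex (length first, then lexicographic).
# Under pure lexicographic order, prepending "0" always yields a smaller
# string, so {"1", "01", "001", ...} has no minimal element.
def shortlex_key(s):
    return (len(s), s)

words = ["1", "01", "001", "0001"]
assert sorted(words) == ["0001", "001", "01", "1"]              # lexicographic
assert sorted(words, key=shortlex_key) == ["1", "01", "001", "0001"]  # shortlex
```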
=== String operations ===
A number of additional operations on strings commonly occur in the formal theory. These are given in the article on string operations.
=== Topology ===
Strings admit the following interpretation as nodes on a graph, where k is the number of symbols in Σ:
Fixed-length strings of length n can be viewed as the integer locations in an n-dimensional hypercube with sides of length k-1.
Variable-length strings (of finite length) can be viewed as nodes on a perfect k-ary tree.
Infinite strings (otherwise not considered here) can be viewed as infinite paths on a k-node complete graph.
The natural topology on the set of fixed-length strings or variable-length strings is the discrete topology, but the natural topology on the set of infinite strings is the limit topology, viewing the set of infinite strings as the inverse limit of the sets of finite strings. This is the construction used for the p-adic numbers and some constructions of the Cantor set, and yields the same topology.
Isomorphisms between string representations of topologies can be found by normalizing according to the lexicographically minimal string rotation.
== See also ==
Binary-safe — a property of string-manipulating functions treating their input as a raw data stream
Bit array — a string of binary digits
C string handling — overview of C string handling
C++ string handling — overview of C++ string handling
Comparison of programming languages (string functions)
Connection string — passed to a driver to initiate a connection (e.g., to a database)
Empty string — its properties and representation in programming languages
Incompressible string — a string that cannot be compressed by any algorithm
Rope (data structure) — a data structure for efficiently manipulating long strings
String metric — notions of similarity between strings
== References ==
In topology, constructible sets are a class of subsets of a topological space that have a relatively "simple" structure.
They are used particularly in algebraic geometry and related fields. A key result known as Chevalley's theorem
in algebraic geometry shows that the image of a constructible set is constructible for an important class of mappings
(more specifically morphisms) of algebraic varieties (or more generally schemes).
In addition, a large number of "local" geometric properties of schemes, morphisms and sheaves are (locally) constructible.
Constructible sets also feature in the definition of various types of constructible sheaves in algebraic geometry
and intersection cohomology.
== Definitions ==
A simple definition, adequate in many situations, is that a constructible set is a finite union of locally closed sets. (A set is locally closed if it is the intersection of an open set and closed set.)
However, a modification and another slightly weaker definition are needed to have definitions that behave better with "large" spaces:
Definitions: A subset Z of a topological space X is called retrocompact if Z ∩ U is compact for every compact open subset U ⊂ X. A subset of X is constructible if it is a finite union of subsets of the form U ∩ (X − V) where both U and V are open and retrocompact subsets of X.
A subset Z ⊂ X is locally constructible if there is a cover (Ui)i∈I of X consisting of open subsets with the property that each Z ∩ Ui is a constructible subset of Ui.
Equivalently, the constructible subsets of a topological space X are the smallest collection 𝔠 of subsets of X that (i) contains all open retrocompact subsets and (ii) is closed under complements and finite unions (and hence also finite intersections). In other words, the constructible sets are precisely the Boolean algebra generated by the retrocompact open subsets.
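On a finite space every subset is retrocompact, so the constructible sets are simply the Boolean algebra generated by the open sets. A small Python sketch computes this closure by brute force (the topology chosen is an arbitrary illustration):

```python
from itertools import combinations

# Close a family of open sets under complement and finite union on a
# finite space; closure under finite intersection then comes for free.
space = frozenset(range(4))
opens = {frozenset(), frozenset({0}), frozenset({0, 1}), space}  # a topology

family = set(opens)
changed = True
while changed:
    changed = False
    for a in list(family):
        if space - a not in family:
            family.add(space - a); changed = True
    for a, b in combinations(list(family), 2):
        if a | b not in family:
            family.add(a | b); changed = True

# Closure under complement and union implies closure under intersection.
assert all(a & b in family for a in family for b in family)
# {1} = {0,1} minus {0} is locally closed, hence constructible.
assert frozenset({1}) in family
```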
In a locally Noetherian topological space, all subsets are retrocompact, and so for such spaces the simplified definition given first above is equivalent to the more elaborate one. Most of the commonly met schemes in algebraic geometry (including all algebraic varieties) are locally Noetherian, but there are important constructions that lead to more general schemes.
In any (not necessarily Noetherian) topological space, every constructible set contains a dense open subset of its closure.
Terminology: The definition given here is the one used by the first edition of EGA and the Stacks Project. In the second edition of EGA constructible sets (according to the definition above) are called "globally constructible" while the word "constructible" is reserved for what are called locally constructible above.
== Chevalley's theorem ==
A major reason for the importance of constructible sets in algebraic geometry is that the image of a (locally) constructible set is also (locally) constructible for a large class of maps (or "morphisms"). The key result is:
Chevalley's theorem. If f : X → Y is a finitely presented morphism of schemes and Z ⊂ X is a locally constructible subset, then f(Z) is also locally constructible in Y.
In particular, the image of an algebraic variety need not be a variety, but is (under the assumptions) always a constructible set. For example, the map A² → A² that sends (x, y) to (x, xy) has image the set {x ≠ 0} ∪ {x = y = 0}, which is not a variety, but is constructible.
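The shape of this image can be checked concretely by sampling; the sketch below verifies on a small rational grid (a finite check, not a proof) that every point actually produced satisfies the constructible-set formula and that the punctured y-axis is missed:

```python
from fractions import Fraction

# The image of f(x, y) = (x, xy) is {a != 0} union {(0, 0)}.
grid = [Fraction(n) for n in range(-3, 4)]
image = {(x, x * y) for x in grid for y in grid}

constructible = lambda a, b: a != 0 or b == 0   # the claimed image set

assert all(constructible(a, b) for (a, b) in image)
assert (0, 0) in image and (1, 2) in image      # (1, 2) = f(1, 2)
assert (0, 1) not in image                      # {x = 0, y != 0} is missed
```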
Chevalley's theorem in the generality stated above would fail if the simplified definition of constructible sets (without restricting to retrocompact open sets in the definition) were used.
== Constructible properties ==
A large number of "local" properties of morphisms of schemes and quasicoherent sheaves on schemes hold true over a locally constructible subset. EGA IV § 9 covers a large number of such properties. Below are some examples (where all references point to EGA IV):
If f : X → S is a finitely presented morphism of schemes and ℱ′ → ℱ → ℱ″ is a sequence of finitely presented quasi-coherent 𝒪X-modules, then the set of s ∈ S for which ℱ′s → ℱs → ℱ″s is exact is locally constructible. (Proposition (9.4.4))
If f : X → S is a finitely presented morphism of schemes and ℱ is a finitely presented quasi-coherent 𝒪X-module, then the set of s ∈ S for which ℱs is locally free is locally constructible. (Proposition (9.4.7))
If f : X → S is a finitely presented morphism of schemes and Z ⊂ X is a locally constructible subset, then the set of s ∈ S for which f⁻¹(s) ∩ Z is closed (or open) in f⁻¹(s) is locally constructible. (Corollary (9.5.4))
Let S be a scheme and f : X → Y a morphism of S-schemes. Consider the set P ⊂ S of s ∈ S for which the induced morphism fs : Xs → Ys of fibres over s has some property 𝐏. Then P is locally constructible if 𝐏 is any of the following properties: surjective, proper, finite, immersion, closed immersion, open immersion, isomorphism. (Proposition (9.6.1))
Let f : X → S be a finitely presented morphism of schemes and consider the set P ⊂ S of s ∈ S for which the fibre f⁻¹(s) has a property 𝐏. Then P is locally constructible if 𝐏 is any of the following properties: geometrically irreducible, geometrically connected, geometrically reduced. (Theorem (9.7.7))
Let f : X → S be a locally finitely presented morphism of schemes and consider the set P ⊂ X of x ∈ X for which the fibre f⁻¹(f(x)) has a property 𝐏. Then P is locally constructible if 𝐏 is any of the following properties: geometrically regular, geometrically normal, geometrically reduced. (Proposition (9.9.4))
One important role of these constructibility results is that, in most cases, if the morphisms in question are additionally assumed to be flat, it follows that the properties in question in fact hold on an open subset. A substantial number of such results are included in EGA IV § 12.
== See also ==
Constructible topology
Constructible sheaf
== Notes ==
== References ==
Allouche, Jean Paul. Note on the constructible sets of a topological space.
Andradas, Carlos; Bröcker, Ludwig; Ruiz, Jesús M. (1996). Constructible sets in real geometry. Ergebnisse der Mathematik und ihrer Grenzgebiete (3) --- Results in Mathematics and Related Areas (3). Vol. 33. Berlin: Springer-Verlag. pp. x+270. ISBN 3-540-60451-0. MR 1393194.
Borel, Armand. Linear algebraic groups.
Grothendieck, Alexandre; Dieudonné, Jean (1961). "Eléments de géométrie algébrique: III. Étude cohomologique des faisceaux cohérents, Première partie". Publications Mathématiques de l'IHÉS. 11. doi:10.1007/bf02684274. MR 0217085.
Grothendieck, Alexandre; Dieudonné, Jean (1964). "Éléments de géométrie algébrique: IV. Étude locale des schémas et des morphismes de schémas, Première partie". Publications Mathématiques de l'IHÉS. 20. doi:10.1007/bf02684747. MR 0173675.
Grothendieck, Alexandre; Dieudonné, Jean (1966). "Éléments de géométrie algébrique: IV. Étude locale des schémas et des morphismes de schémas, Troisième partie". Publications Mathématiques de l'IHÉS. 28. doi:10.1007/bf02684343. MR 0217086.
Grothendieck, Alexandre; Dieudonné, Jean (1971). Éléments de géométrie algébrique: I. Le langage des schémas. Grundlehren der Mathematischen Wissenschaften (in French). Vol. 166 (2nd ed.). Berlin; New York: Springer-Verlag. ISBN 978-3-540-05113-8.
Mostowski, A. (1969). Constructible sets with applications. Studies in Logic and the Foundations of Mathematics. Amsterdam --- Warsaw: North-Holland Publishing Co. ---- PWN-Polish Scientific Publishers. pp. ix+269. MR 0255390.
== External links ==
https://stacks.math.columbia.edu/tag/04ZC Topological definition of (local) constructibility
https://stacks.math.columbia.edu/tag/054H Constructibility properties of morphisms of schemes (incl. Chevalley's theorem)
In mathematics, an algebraic manifold is an algebraic variety which is also a manifold. As such, algebraic manifolds are a generalisation of the concept of smooth curves and surfaces defined by polynomials. An example is the sphere, which can be defined as the zero set of the polynomial x2 + y2 + z2 – 1, and hence is an algebraic variety.
For an algebraic manifold, the ground field will be the real numbers or complex numbers; in the case of the real numbers, the manifold of real points is sometimes called a Nash manifold.
Every sufficiently small local patch of an algebraic manifold is isomorphic to km where k is the ground field. Equivalently the variety is smooth (free from singular points). The Riemann sphere is one example of a complex algebraic manifold, since it is the complex projective line.
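The smoothness of the sphere can be seen from the defining polynomial: the gradient of p = x² + y² + z² − 1 is (2x, 2y, 2z), which vanishes only at the origin, a point not on the sphere. A numerical sketch of this check at a few sample points:

```python
import math

# The sphere as the zero set of p = x^2 + y^2 + z^2 - 1.
def p(x, y, z):
    return x * x + y * y + z * z - 1.0

def grad_p(x, y, z):
    return (2 * x, 2 * y, 2 * z)

# A few points on the sphere (unit vectors).
points = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1 / math.sqrt(3),) * 3]

for pt in points:
    assert abs(p(*pt)) < 1e-12                    # lies on the zero set
    assert any(abs(c) > 0 for c in grad_p(*pt))   # gradient nonzero: no singularity
```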
== Examples ==
Elliptic curves
Grassmannian
== See also ==
Algebraic geometry and analytic geometry
== References ==
Nash, John Forbes (1952). "Real algebraic manifolds". Annals of Mathematics. 56 (3): 405–21. doi:10.2307/1969649. JSTOR 1969649. MR 0050928. (See also Proc. Internat. Congr. Math., 1950, (AMS, 1952), pp. 516–517.)
== External links ==
K-Algebraic manifold at PlanetMath
Algebraic manifold at Mathworld
Lecture notes on algebraic manifolds
In algebraic geometry, a morphism between algebraic varieties is a function between the varieties that is given locally by polynomials. It is also called a regular map. A morphism from an algebraic variety to the affine line is also called a regular function.
A regular map whose inverse is also regular is called biregular, and the biregular maps are the isomorphisms of algebraic varieties. Because regular and biregular are very restrictive conditions – there are no non-constant regular functions on projective varieties – the concepts of rational and birational maps are widely used as well; they are partial functions that are defined locally by rational fractions instead of polynomials.
An algebraic variety has naturally the structure of a locally ringed space; a morphism between algebraic varieties is precisely a morphism of the underlying locally ringed spaces.
== Definition ==
If X and Y are closed subvarieties of Aⁿ and Aᵐ (so they are affine varieties), then a regular map f : X → Y is the restriction of a polynomial map Aⁿ → Aᵐ. Explicitly, it has the form:
f = (f1, ..., fm)
where the fi are in the coordinate ring of X:
k[X] = k[x1, ..., xn]/I,
where I is the ideal defining X (note: two polynomials f and g define the same function on X if and only if f − g is in I). The image f(X) lies in Y, and hence satisfies the defining equations of Y. That is, a regular map f : X → Y is the same as the restriction of a polynomial map whose components satisfy the defining equations of Y.
More generally, a map f : X→Y between two varieties is regular at a point x if there is a neighbourhood U of x and a neighbourhood V of f(x) such that f(U) ⊂ V and the restricted function f : U→V is regular as a function on some affine charts of U and V. Then f is called regular, if it is regular at all points of X.
Note: It is not immediately obvious that the two definitions coincide: if X and Y are affine varieties, then a map f : X→Y is regular in the first sense if and only if it is so in the second sense. Also, it is not immediately clear whether regularity depends on a choice of affine charts (it does not.) This kind of a consistency issue, however, disappears if one adopts the formal definition. Formally, an (abstract) algebraic variety is defined to be a particular kind of a locally ringed space. When this definition is used, a morphism of varieties is just a morphism of locally ringed spaces.
The composition of regular maps is again regular; thus, algebraic varieties form the category of algebraic varieties where the morphisms are the regular maps.
Regular maps between affine varieties correspond contravariantly and one-to-one to algebra homomorphisms between the coordinate rings: if f : X → Y is a morphism of affine varieties, then it defines the algebra homomorphism
f# : k[Y] → k[X],  g ↦ g ∘ f,
where k[X] and k[Y] are the coordinate rings of X and Y; it is well-defined since g ∘ f = g(f1, ..., fm) is a polynomial in elements of k[X]. Conversely, if φ : k[Y] → k[X] is an algebra homomorphism, then it induces the morphism
φᵃ : X → Y
given by: writing k[Y] = k[y1, ..., ym]/J,
φᵃ = (φ(ȳ1), ..., φ(ȳm)),
where the ȳi are the images of the yi. Note that (φᵃ)# = φ as well as (f#)ᵃ = f.
In particular, f is an isomorphism of affine varieties if and only if f# is an isomorphism of the coordinate rings.
For example, if X is a closed subvariety of an affine variety Y and f is the inclusion, then f# is the restriction of regular functions on Y to X. See #Examples below for more examples.
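The pullback g ↦ g ∘ f can be sketched numerically in Python. The example map t ↦ (t², t³) (a standard parametrization of the cuspidal cubic y² = x³, chosen here for illustration) lands in the curve precisely because the defining equation pulls back to the zero function, and the pullback respects products, as an algebra homomorphism must:

```python
# The regular map f(t) = (t^2, t^3) from the affine line to the plane,
# and its pullback f# sending a function g on the plane to g o f.
def f(t):
    return (t * t, t * t * t)

def pullback(g):
    return lambda t: g(*f(t))

defining = lambda x, y: y * y - x * x * x   # vanishes on the cuspidal cubic

# f lands in the curve: the defining equation pulls back to zero.
assert all(pullback(defining)(t) == 0 for t in range(-5, 6))

# The pullback is an algebra homomorphism: it respects products.
g1 = lambda x, y: x + y
g2 = lambda x, y: x * y
prod = lambda x, y: g1(x, y) * g2(x, y)
assert all(pullback(prod)(t) == pullback(g1)(t) * pullback(g2)(t)
           for t in range(-3, 4))
```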
== Regular functions ==
In the particular case that Y equals A¹, the regular maps f : X → A¹ are called regular functions, and are algebraic analogs of smooth functions studied in differential geometry. The ring of regular functions (that is, the coordinate ring or, more abstractly, the ring of global sections of the structure sheaf) is a fundamental object in affine algebraic geometry. The only regular function on a projective variety is constant (this can be viewed as an algebraic analogue of Liouville's theorem in complex analysis).
A scalar function f : X → A¹ is regular at a point x if, in some open affine neighborhood of x, it is a rational function that is regular at x; i.e., there are regular functions g, h near x such that f = g/h and h does not vanish at x. Caution: the condition is for some pair (g, h), not for all pairs (g, h); see Examples.
If X is a quasi-projective variety, i.e., an open subvariety of a projective variety, then the function field k(X) is the same as that of the closure $\overline{X}$ of X, and thus a rational function on X is of the form g/h for some homogeneous elements g, h of the same degree in the homogeneous coordinate ring $k[\overline{X}]$ of $\overline{X}$ (cf. Projective variety#Variety structure). Then a rational function f on X is regular at a point x if and only if there are some homogeneous elements g, h of the same degree in $k[\overline{X}]$ such that f = g/h and h does not vanish at x. This characterization is sometimes taken as the definition of a regular function.
== Comparison with a morphism of schemes ==
If $X = \operatorname{Spec} A$ and $Y = \operatorname{Spec} B$ are affine schemes, then each ring homomorphism $\phi : B \to A$ determines a morphism $\phi^a : X \to Y,\ \mathfrak{p} \mapsto \phi^{-1}(\mathfrak{p})$ by taking the pre-images of prime ideals. All morphisms between affine schemes are of this type, and gluing such morphisms gives a morphism of schemes in general.
Now, if X, Y are affine varieties, i.e., A, B are integral domains that are finitely generated algebras over an algebraically closed field k, then, working with only the closed points, the above coincides with the definition given at #Definition. (Proof: If f : X → Y is a morphism, then writing $\phi = f^{\#}$, we need to show $\mathfrak{m}_{f(x)} = \phi^{-1}(\mathfrak{m}_x)$, where $\mathfrak{m}_x, \mathfrak{m}_{f(x)}$ are the maximal ideals corresponding to the points x and f(x); i.e., $\mathfrak{m}_x = \{ g \in k[X] \mid g(x) = 0 \}$. This is immediate.)
This fact means that the category of affine varieties can be identified with a full subcategory of affine schemes over k. Since morphisms of varieties are obtained by gluing morphisms of affine varieties in the same way morphisms of schemes are obtained by gluing morphisms of affine schemes, it follows that the category of varieties is a full subcategory of the category of schemes over k.
For more details, see [1].
== Examples ==
The regular functions on $\mathbb{A}^n$ are exactly the polynomials in $n$ variables, and the regular functions on $\mathbb{P}^n$ are exactly the constants.
Let $X$ be the affine curve $y = x^2$. Then $f : X \to \mathbf{A}^1,\ (x, y) \mapsto x$ is a morphism; it is bijective with inverse $g(x) = (x, x^2)$. Since $g$ is also a morphism, $f$ is an isomorphism of varieties.
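This example can be checked computationally by composing the two maps on sample points of the curve (a plain-Python illustration; sampling is not a proof, and the point set is our own choice):

```python
# The affine curve X : y = x^2, with the morphisms from the text:
#   f : X -> A^1, (x, y) |-> x      (projection)
#   g : A^1 -> X, x |-> (x, x^2)    (its inverse)
def f(p):
    x, y = p
    return x

def g(x):
    return (x, x * x)

# g(f(p)) = p on curve points and f(g(t)) = t, so f is bijective with
# morphism inverse g, hence an isomorphism.
curve_points = [(t, t * t) for t in range(-5, 6)]
assert all(g(f(p)) == p for p in curve_points)
assert all(f(g(t)) == t for t in range(-5, 6))
```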
Let $X$ be the affine curve $y^2 = x^3 + x^2$. Then $f : \mathbf{A}^1 \to X,\ t \mapsto (t^2 - 1, t^3 - t)$ is a morphism. It corresponds to the ring homomorphism $f^{\#} : k[X] \to k[t],\ g \mapsto g(t^2 - 1, t^3 - t),$ which is seen to be injective (since f is surjective).
Continuing the preceding example, let U = A1 − {1}. Since U is the complement of the hyperplane t = 1, U is affine. The restriction $f : U \to X$ is bijective. But the corresponding ring homomorphism is the inclusion $k[X] = k[t^2 - 1, t^3 - t] \hookrightarrow k[t, (t-1)^{-1}]$, which is not an isomorphism, and so the restriction f |U is not an isomorphism.
Let X be the affine curve x2 + y2 = 1 and let $f(x, y) = \frac{1 - y}{x}.$ Then f is a rational function on X. It is regular at (0, 1) despite the expression, since, as a rational function on X, f can also be written as $f(x, y) = \frac{x}{1 + y}$.
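That the two expressions define the same rational function on X can be verified with exact rational arithmetic, using the classical rational parametrization of the circle (a sketch; the parametrization is a standard fact, not stated in the text):

```python
from fractions import Fraction

def circle_point(t):
    """Rational point on x^2 + y^2 = 1 via the standard parametrization."""
    t = Fraction(t)
    return (2 * t / (1 + t**2), (1 - t**2) / (1 + t**2))

for t in (1, 2, Fraction(3, 5), -7):
    x, y = circle_point(t)
    assert x**2 + y**2 == 1
    # The two expressions for f agree wherever both are defined:
    assert (1 - y) / x == x / (1 + y)

# At (0, 1) (t = 0) the first expression reads 0/0, but the second one
# exhibits the value of f there: x / (1 + y) = 0 / 2 = 0.
x, y = circle_point(0)
assert (x, y) == (0, 1)
assert x / (1 + y) == 0
```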
Let X = A2 − (0, 0). Then X is an algebraic variety since it is an open subset of a variety. If f is a regular function on X, then f is regular on $D_{\mathbf{A}^2}(x) = \mathbf{A}^2 - \{ x = 0 \}$ and so is in $k[D_{\mathbf{A}^2}(x)] = k[\mathbf{A}^2][x^{-1}] = k[x, x^{-1}, y]$. Similarly, it is in $k[x, y, y^{-1}]$. Thus, we can write $f = \frac{g}{x^n} = \frac{h}{y^m},$ where g, h are polynomials in k[x, y]. But this implies g is divisible by $x^n$ and so f is in fact a polynomial. Hence, the ring of regular functions on X is just k[x, y]. (This also shows that X cannot be affine, since if it were, X would be determined by its coordinate ring and thus X = A2.)
Suppose $\mathbf{P}^1 = \mathbf{A}^1 \cup \{ \infty \}$ by identifying the points (x : 1) with the points x on A1 and ∞ = (1 : 0). There is an automorphism σ of P1 given by σ(x : y) = (y : x); in particular, σ exchanges 0 and ∞. If f is a rational function on P1, then $\sigma^{\#}(f) = f(1/z)$, and f is regular at ∞ if and only if f(1/z) is regular at zero.
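A concrete instance with exact arithmetic (the rational function h below is our own hypothetical example, not from the text):

```python
from fractions import Fraction

def h(z):
    """A rational function on P^1, written in the affine coordinate z."""
    return z / (z + 1)

# sigma pulls h back to z |-> h(1/z), which simplifies to 1/(1 + z):
for z in (Fraction(1), Fraction(1, 2), Fraction(-3), Fraction(7, 4)):
    assert h(1 / z) == 1 / (1 + z)

# 1/(1 + z) is regular at z = 0 with value 1, so h is regular at infinity
# with value 1 (consistent with h(z) = z/(z+1) -> 1 as z grows).
assert 1 / (1 + Fraction(0)) == 1
```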
Taking the function field k(V) of an irreducible algebraic curve V, the functions F in the function field may all be realised as morphisms from V to the projective line over k. (cf. #Properties) The image will either be a single point, or the whole projective line (this is a consequence of the completeness of projective varieties). That is, unless F is actually constant, we have to attribute to F the value ∞ at some points of V.
For any algebraic varieties X, Y, the projection $p : X \times Y \to X,\ (x, y) \mapsto x$ is a morphism of varieties. If X and Y are affine, then the corresponding ring homomorphism is $p^{\#} : k[X] \to k[X \times Y] = k[X] \otimes_k k[Y],\ f \mapsto f \otimes 1,$ where $(f \otimes 1)(x, y) = f(p(x, y)) = f(x)$.
== Properties ==
A morphism between varieties is continuous with respect to Zariski topologies on the source and the target.
The image of a morphism of varieties need not be open nor closed (for example, the image of $\mathbf{A}^2 \to \mathbf{A}^2,\ (x, y) \mapsto (x, xy)$ is neither open nor closed). However, one can still say: if f is a morphism between varieties, then the image of f contains an open dense subset of its closure (cf. constructible set).
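The image in this example can be described explicitly: a point (a, b) is hit exactly when a ≠ 0 (take y = b/a) or (a, b) = (0, 0). A small check of this description (plain Python; the membership predicate is our own paraphrase of that computation):

```python
from fractions import Fraction

def in_image(a, b):
    """Is (a, b) in the image of (x, y) |-> (x, x*y) on A^2?"""
    # If a != 0, the unique preimage is (a, b/a); if a == 0, then x = 0
    # forces x*y = 0, so on the line a = 0 only (0, 0) is hit.
    return a != 0 or b == 0

assert in_image(Fraction(2), Fraction(3))    # preimage (2, 3/2)
assert in_image(0, 0)                        # preimage (0, anything)
assert not in_image(0, 1)                    # not in the image
# The image is thus A^2 minus the punctured line {x = 0, y != 0}:
# not open (it contains the boundary point (0, 0)) and not closed
# ((0, 1) is a limit of the image points (t, 1) as t -> 0).
```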
A morphism f : X → Y of algebraic varieties is said to be dominant if it has dense image. For such an f, if V is a nonempty open affine subset of Y, then there is a nonempty open affine subset U of X such that f(U) ⊂ V, and then $f^{\#} : k[V] \to k[U]$ is injective. Thus, the dominant map f induces an injection on the level of function fields:
$k(Y) = \varinjlim k[V] \hookrightarrow k(X),\ g \mapsto g \circ f,$
where the direct limit runs over all nonempty open affine subsets of Y. (More abstractly, this is the induced map from the residue field of the generic point of Y to that of X.) Conversely, every inclusion of fields $k(Y) \hookrightarrow k(X)$ is induced by a dominant rational map from X to Y. Hence, the above construction determines a contravariant equivalence between the category of algebraic varieties over a field k with dominant rational maps between them and the category of finitely generated field extensions of k.
If X is a smooth complete curve (for example, P1) and if f is a rational map from X to a projective space Pm, then f is a regular map X → Pm. In particular, when X is a smooth complete curve, any rational function on X may be viewed as a morphism X → P1 and, conversely, such a morphism as a rational function on X.
On a normal variety (in particular, a smooth variety), a rational function is regular if and only if it has no poles of codimension one. This is an algebraic analog of Hartogs' extension theorem. There is also a relative version of this fact; see [2].
A morphism between algebraic varieties that is a homeomorphism between the underlying topological spaces need not be an isomorphism (a counterexample is given by a Frobenius morphism $t \mapsto t^p$). On the other hand, if f is bijective birational and the target space of f is a normal variety, then f is biregular (cf. Zariski's main theorem).
A regular map between complex algebraic varieties is a holomorphic map. (There is actually a slight technical difference: a regular map is a meromorphic map whose singular points are removable, but the distinction is usually ignored in practice.) In particular, a regular map into the complex numbers is just a usual holomorphic function (complex-analytic function).
== Morphisms to a projective space ==
Let $f : X \to \mathbf{P}^m$ be a morphism from a projective variety to a projective space. Let x be a point of X. Then some i-th homogeneous coordinate of f(x) is nonzero; say, i = 0 for simplicity. Then, by continuity, there is an open affine neighborhood U of x such that $f : U \to \mathbf{P}^m - \{ y_0 = 0 \}$ is a morphism, where the $y_i$ are the homogeneous coordinates. Note that the target space is the affine space $\mathbf{A}^m$ through the identification $(a_0 : \dots : a_m) = (1 : a_1/a_0 : \dots : a_m/a_0) \sim (a_1/a_0, \dots, a_m/a_0)$. Thus, by definition, the restriction f |U is given by $f|_U(x) = (g_1(x), \dots, g_m(x)),$ where the $g_i$ are regular functions on U. Since X is projective, each $g_i$ is a fraction of homogeneous elements of the same degree in the homogeneous coordinate ring k[X] of X. We can arrange the fractions so that they all have the same homogeneous denominator, say $f_0$. Then we can write $g_i = f_i/f_0$ for some homogeneous elements $f_i$ in k[X]. Hence, going back to the homogeneous coordinates,
$f(x) = (f_0(x) : f_1(x) : \dots : f_m(x))$ for all x in U, and by continuity for all x in X as long as the $f_i$ do not vanish at x simultaneously. If they do vanish simultaneously at a point x of X, then, by the above procedure, one can pick a different set of $f_i$ that do not vanish at x simultaneously (see Note at the end of the section).
In fact, the above description is valid for any quasi-projective variety X, an open subvariety of a projective variety $\overline{X}$; the difference being that the $f_i$ are in the homogeneous coordinate ring of $\overline{X}$.
Note: The above does not say a morphism from a projective variety to a projective space is given by a single set of polynomials (unlike the affine case). For example, let X be the conic $y^2 = xz$ in P2. Then the two maps $(x : y : z) \mapsto (x : y)$ and $(x : y : z) \mapsto (y : z)$ agree on the open subset $\{ (x : y : z) \in X \mid x \neq 0, z \neq 0 \}$ of X (since $(x : y) = (xy : y^2) = (xy : xz) = (y : z)$), and so together they define a morphism $f : X \to \mathbf{P}^1$.
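The agreement of the two charts can be checked on the standard parametrization (x : y : z) = (s² : st : t²) of the conic (a plain-Python sketch; the parametrization is a classical fact, not stated in the text):

```python
from fractions import Fraction

def conic_point(s, t):
    """Point (x : y : z) = (s^2 : s t : t^2) on the conic y^2 = xz."""
    return (s * s, s * t, t * t)

for s, t in [(1, 2), (3, 5), (-2, 7)]:
    x, y, z = conic_point(Fraction(s), Fraction(t))
    assert y * y == x * z                  # the point lies on the conic
    # Where both charts are defined (x != 0 and z != 0), the maps
    # (x:y:z) |-> (x:y) and (x:y:z) |-> (y:z) give the same point of P^1,
    # since both affine ratios equal t/s:
    assert y / x == z / y
```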
== Fibers of a morphism ==
The important fact is:
In Mumford's red book, the theorem is proved by means of Noether's normalization lemma. For an algebraic approach where the generic freeness plays a main role and the notion of "universally catenary ring" is a key in the proof, see Eisenbud, Ch. 14 of "Commutative algebra with a view toward algebraic geometry." In fact, the proof there shows that if f is flat, then the dimension equality in 2. of the theorem holds in general (not just generically).
== Degree of a finite morphism ==
Let f: X → Y be a finite surjective morphism between algebraic varieties over a field k. Then, by definition, the degree of f is the degree of the finite field extension of the function field k(X) over f*k(Y). By generic freeness, there is some nonempty open subset U in Y such that the restriction of the structure sheaf OX to f−1(U) is free as OY|U-module. The degree of f is then also the rank of this free module.
If f is étale and if X, Y are complete, then for any coherent sheaf F on Y, writing χ for the Euler characteristic,
$\chi(f^*F) = \deg(f)\,\chi(F).$
(The Riemann–Hurwitz formula for a ramified covering shows the "étale" here cannot be omitted.)
In general, if f is a finite surjective morphism, if X, Y are complete and F is a coherent sheaf on Y, then from the Leray spectral sequence $\operatorname{H}^p(Y, R^q f_* f^* F) \Rightarrow \operatorname{H}^{p+q}(X, f^* F)$, one gets:
$\chi(f^*F) = \sum_{q=0}^{\infty} (-1)^q \chi(R^q f_* f^* F).$
In particular, if F is a tensor power $L^{\otimes n}$ of a line bundle, then $R^q f_*(f^* F) = R^q f_* \mathcal{O}_X \otimes L^{\otimes n}$, and since the support of $R^q f_* \mathcal{O}_X$ has positive codimension if q is positive, comparing the leading terms one has:
$\deg(f^* L) = \deg(f)\deg(L)$
(since the generic rank of $f_* \mathcal{O}_X$ is the degree of f.)
If f is étale and k is algebraically closed, then each geometric fiber f−1(y) consists exactly of deg(f) points.
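As an illustration of the last statement, consider the degree-2 map t ↦ t² on A¹ over the complex numbers (our own hypothetical example; in characteristic ≠ 2 it is étale away from the origin). Every fiber over a nonzero point has exactly deg(f) = 2 points, while the fiber over the branch point 0 degenerates:

```python
import cmath

def fiber(y):
    """Points of the fiber of f(t) = t^2 over y, over the complex numbers."""
    if y == 0:
        return {0}
    r = cmath.sqrt(y)
    return {r, -r}

assert len(fiber(1)) == 2       # {1, -1}
assert len(fiber(-1)) == 2      # {i, -i}
assert len(fiber(0)) == 1       # the branch point: f is not etale there
```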
== See also ==
Algebraic function
Smooth morphism
Étale morphisms – The algebraic analogue of local diffeomorphisms.
Resolution of singularities
contraction morphism
== Notes ==
== Citations ==
== References == | Wikipedia/Regular_map_(algebraic_geometry) |
In graph theory, an isomorphism of graphs G and H is a bijection between the vertex sets of G and H, $f \colon V(G) \to V(H)$, such that any two vertices u and v of G are adjacent in G if and only if $f(u)$ and $f(v)$ are adjacent in H. This kind of bijection is commonly described as an "edge-preserving bijection", in accordance with the general notion of isomorphism being a structure-preserving bijection.
If an isomorphism exists between two graphs, then the graphs are called isomorphic, denoted $G \simeq H$. In the case when the isomorphism is a mapping of a graph onto itself, i.e., when G and H are one and the same graph, the isomorphism is called an automorphism of G.
Graph isomorphism is an equivalence relation on graphs and as such it partitions the class of all graphs into equivalence classes. A set of graphs isomorphic to each other is called an isomorphism class of graphs. The question of whether graph isomorphism can be determined in polynomial time is a major unsolved problem in computer science, known as the graph isomorphism problem.
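The definition translates directly into a (factorially slow) brute-force test: try every vertex bijection and check that it preserves adjacency in both directions. A minimal sketch in Python:

```python
from itertools import permutations

def are_isomorphic(n, edges_g, edges_h):
    """Brute-force isomorphism test for undirected graphs on vertices
    0..n-1, given as sets of frozenset edges. O(n!) time."""
    if len(edges_g) != len(edges_h):
        return False
    for perm in permutations(range(n)):
        mapped = {frozenset({perm[u], perm[v]}) for u, v in edges_g}
        if mapped == edges_h:      # edge-preserving in both directions
            return True
    return False

# Two drawings of a 4-cycle with different vertex orders:
g = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
h = {frozenset(e) for e in [(0, 2), (2, 1), (1, 3), (3, 0)]}
assert are_isomorphic(4, g, h)

# A triangle plus an isolated vertex is not isomorphic to a path,
# even though both have 4 vertices and 3 edges:
k3 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 0)]}
p3 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}
assert not are_isomorphic(4, k3, p3)
```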
The two graphs shown below are isomorphic, despite their different-looking drawings.
== Variations ==
In the above definition, graphs are understood to be undirected non-labeled non-weighted graphs. However, the notion of isomorphism may be applied to all other variants of the notion of graph, by adding the requirements to preserve the corresponding additional elements of structure: arc directions, edge weights, etc., with the following exception.
=== Isomorphism of labeled graphs ===
For labeled graphs, two definitions of isomorphism are in use.
Under one definition, an isomorphism is a vertex bijection which is both edge-preserving and label-preserving.
Under another definition, an isomorphism is an edge-preserving vertex bijection which preserves equivalence classes of labels, i.e., vertices with equivalent (e.g., the same) labels are mapped onto the vertices with equivalent labels and vice versa; same with edge labels.
For example, the graph $K_2$ with the two vertices labelled 1 and 2 has a single automorphism under the first definition, but under the second definition there are two automorphisms.
The second definition is assumed in certain situations when graphs are endowed with unique labels commonly taken from the integer range 1,...,n, where n is the number of the vertices of the graph, used only to uniquely identify the vertices. In such cases two labeled graphs are sometimes said to be isomorphic if the corresponding underlying unlabeled graphs are isomorphic (otherwise the definition of isomorphism would be trivial).
== Motivation ==
The formal notion of "isomorphism", e.g., of "graph isomorphism", captures the informal notion that some objects have "the same structure" if one ignores individual distinctions of "atomic" components of objects in question. Whenever individuality of "atomic" components (vertices and edges, for graphs) is important for correct representation of whatever is modeled by graphs, the model is refined by imposing additional restrictions on the structure, and other mathematical objects are used: digraphs, labeled graphs, colored graphs, rooted trees and so on. The isomorphism relation may also be defined for all these generalizations of graphs: the isomorphism bijection must preserve the elements of structure which define the object type in question: arcs, labels, vertex/edge colors, the root of the rooted tree, etc.
The notion of "graph isomorphism" allows us to distinguish graph properties inherent to the structures of graphs themselves from properties associated with graph representations: graph drawings, data structures for graphs, graph labelings, etc. For example, if a graph has exactly one cycle, then all graphs in its isomorphism class also have exactly one cycle. On the other hand, in the common case when the vertices of a graph are (represented by) the integers 1, 2,... N, then the expression
$\sum_{v \in V(G)} v \cdot \deg v$ may be different for two isomorphic graphs.
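For instance, this weighted degree sum changes under relabeling (a plain-Python check; the two three-vertex paths are our own example):

```python
def weighted_degree_sum(n, edges):
    """Compute sum over v of v * deg(v) for a graph on vertices 1..n."""
    deg = {v: 0 for v in range(1, n + 1)}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(v * d for v, d in deg.items())

# The path 1 - 2 - 3 versus the isomorphic path 2 - 1 - 3:
assert weighted_degree_sum(3, [(1, 2), (2, 3)]) == 1*1 + 2*2 + 3*1  # = 8
assert weighted_degree_sum(3, [(2, 1), (1, 3)]) == 1*2 + 2*1 + 3*1  # = 7
```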
== Whitney theorem ==
The Whitney graph isomorphism theorem, shown by Hassler Whitney, states that two connected graphs are isomorphic if and only if their line graphs are isomorphic, with a single exception: K3, the complete graph on three vertices, and the complete bipartite graph K1,3, which are not isomorphic but both have K3 as their line graph. The Whitney graph theorem can be extended to hypergraphs.
== Recognition of graph isomorphism ==
While graph isomorphism may be studied in a classical mathematical way, as exemplified by the Whitney theorem, it is recognized that it is a problem to be tackled with an algorithmic approach. The computational problem of determining whether two finite graphs are isomorphic is called the graph isomorphism problem.
Its practical applications include primarily cheminformatics, mathematical chemistry (identification of chemical compounds), and electronic design automation (verification of equivalence of various representations of the design of an electronic circuit).
The graph isomorphism problem is one of a few standard problems in computational complexity theory belonging to NP, but not known to belong to either of its well-known (and, if P ≠ NP, disjoint) subsets: P and NP-complete. It is one of only two, out of 12 total, problems listed in Garey & Johnson (1979) whose complexity remains unresolved, the other being integer factorization. It is however known that if the problem is NP-complete then the polynomial hierarchy collapses to a finite level.
In November 2015, László Babai, a mathematician and computer scientist at the University of Chicago, claimed to have proven that the graph isomorphism problem is solvable in quasi-polynomial time. He published preliminary versions of these results in the proceedings of the 2016 Symposium on Theory of Computing, and of the 2018 International Congress of Mathematicians. In January 2017, Babai briefly retracted the quasi-polynomiality claim and stated a sub-exponential time complexity bound instead. He restored the original claim five days later. As of 2024, the full journal version of Babai's paper has not yet been published.
Its generalization, the subgraph isomorphism problem, is known to be NP-complete.
The main areas of research for the problem are design of fast algorithms and theoretical investigations of its computational complexity, both for the general problem and for special classes of graphs.
The Weisfeiler–Leman graph isomorphism test can be used to heuristically test for graph isomorphism. If the test fails, the two input graphs are guaranteed to be non-isomorphic. If the test succeeds, the graphs may or may not be isomorphic. There are generalizations of the test algorithm that are guaranteed to detect isomorphisms, but their running time is exponential.
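A minimal sketch of the 1-dimensional Weisfeiler–Leman test (color refinement) in Python, run on the disjoint union of the two graphs so that colors are comparable. Distinct color histograms certify non-isomorphism; equal histograms are inconclusive (for example, any two connected 2-regular graphs of the same size look alike to this test):

```python
def wl_test(adj_g, adj_h):
    """1-WL color refinement on the disjoint union of two graphs, each
    given as a dict vertex -> set of neighbors on vertices 0..n-1.
    Returns False if certified non-isomorphic, True if inconclusive."""
    n = len(adj_g)
    adj = {v: set(nbrs) for v, nbrs in adj_g.items()}
    adj.update({v + n: {u + n for u in nbrs} for v, nbrs in adj_h.items()})
    color = {v: 0 for v in adj}                 # start monochromatic
    for _ in range(len(adj)):                   # stabilizes within n rounds
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
               for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        color = {v: relabel[sig[v]] for v in adj}
    hist_g = sorted(color[v] for v in adj_g)
    hist_h = sorted(color[v + n] for v in adj_h)
    return hist_g == hist_h

def make_adj(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

path = make_adj(4, [(0, 1), (1, 2), (2, 3)])
star = make_adj(4, [(0, 1), (0, 2), (0, 3)])
assert not wl_test(path, star)     # degree histograms differ: non-isomorphic

c6 = make_adj(6, [(i, (i + 1) % 6) for i in range(6)])
two_c3 = make_adj(6, [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)])
assert wl_test(c6, two_c3)         # inconclusive: a known 1-WL blind spot
```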
Another well-known algorithm for graph isomorphism is the VF2 algorithm, developed by Cordella et al. in 2001. VF2 is a depth-first search algorithm that tries to build an isomorphism between two graphs incrementally. It uses a set of feasibility rules to prune the search space, allowing it to handle graphs with thousands of nodes efficiently. The VF2 algorithm has been widely used in applications such as pattern recognition, computer vision, and bioinformatics. While it has worst-case exponential time complexity, it performs well in practice for many types of graphs.
== See also ==
Graph homomorphism
Graph automorphism
Graph isomorphism problem
Graph canonization
Fractional graph isomorphism
== Notes ==
== References ==
Garey, Michael R.; Johnson, David S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. Series of Books in the Mathematical Sciences (1st ed.). New York: W. H. Freeman and Company. ISBN 9780716710455. MR 0519066. OCLC 247570676. | Wikipedia/Graph_isomorphism |
In the mathematical field of algebraic geometry, a singular point of an algebraic variety V is a point P that is 'special' (so, singular), in the geometric sense that at this point the tangent space to the variety may not be regularly defined. In the case of varieties defined over the reals, this notion generalizes the notion of local non-flatness. A point of an algebraic variety that is not singular is said to be regular. An algebraic variety that has no singular point is said to be non-singular or smooth. The concept is generalized to smooth schemes in the modern language of scheme theory.
== Definition ==
A plane curve defined by an implicit equation $F(x, y) = 0$, where F is a smooth function, is said to be singular at a point if the Taylor series of F has order at least 2 at this point.
The reason for this is that, in differential calculus, the tangent at the point (x0, y0) of such a curve is defined by the equation
$(x - x_0) F'_x(x_0, y_0) + (y - y_0) F'_y(x_0, y_0) = 0,$
whose left-hand side is the degree-one term of the Taylor expansion. Thus, if this term is zero, the tangent may not be defined in the standard way, either because it does not exist or because a special definition must be provided.
In general, for a hypersurface $F(x, y, z, \ldots) = 0$, the singular points are those at which all the partial derivatives simultaneously vanish. A general algebraic variety V being defined as the common zeros of several polynomials, the condition for a point P of V to be a singular point is that the Jacobian matrix of the first-order partial derivatives of the polynomials has a rank at P that is lower than the rank at other points of the variety.
Points of V that are not singular are called non-singular or regular. It is always true that almost all points are non-singular, in the sense that the non-singular points form a set that is both open and dense in the variety (for the Zariski topology, as well as for the usual topology, in the case of varieties defined over the complex numbers).
In the case of a real variety (that is, the set of the points with real coordinates of a variety defined by polynomials with real coefficients), the variety is a manifold near every regular point. But it is important to note that a real variety may be a manifold while having singular points. For example, the equation y3 + 2x2y − x4 = 0 defines a real analytic manifold but has a singular point at the origin. This may be explained by saying that the curve has two complex conjugate branches that cut the real branch at the origin.
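The definition can be applied mechanically: a point of a plane curve is singular when F and both first-order partial derivatives vanish there. A plain-Python check with hand-computed derivatives, using the curve above and, for contrast, the classical nodal cubic (the latter is our own added example):

```python
def F(x, y):        # the curve y^3 + 2 x^2 y - x^4 = 0 from this section
    return y**3 + 2*x**2*y - x**4

def F_x(x, y):      # dF/dx, computed by hand
    return 4*x*y - 4*x**3

def F_y(x, y):      # dF/dy, computed by hand
    return 3*y**2 + 2*x**2

def singular_on_F(x, y):
    return F(x, y) == 0 and F_x(x, y) == 0 and F_y(x, y) == 0

# The origin is singular, even though the real locus is a smooth
# 1-manifold there (the other two branches are complex conjugate):
assert singular_on_F(0, 0)

def G(x, y):        # the nodal cubic y^2 - x^3 - x^2
    return y**2 - x**3 - x**2

def singular_on_G(x, y):
    return G(x, y) == 0 and -3*x**2 - 2*x == 0 and 2*y == 0

assert singular_on_G(0, 0)                        # the node at the origin
assert G(3, 6) == 0 and not singular_on_G(3, 6)   # (3, 6) is a smooth point
```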
== Singular points of smooth mappings ==
As the notion of singular points is a purely local property, the above definition can be extended to cover the wider class of smooth mappings (functions from M to Rn where all derivatives exist). Analysis of these singular points can be reduced to the algebraic variety case by considering the jets of the mapping. The kth jet is the Taylor series of the mapping truncated at degree k and deleting the constant term.
== Nodes ==
In classical algebraic geometry, certain special singular points were also called nodes. A node is a singular point where the Hessian matrix is non-singular; this implies that the singular point has multiplicity two and the tangent cone is not singular outside its vertex.
== See also ==
Milnor map
Resolution of singularities
Singular point of a curve
Singularity theory
Smooth scheme
Zariski tangent space
== References == | Wikipedia/Singular_point_of_an_algebraic_variety |
In algebraic geometry, the theory of motives (or sometimes motifs, following French usage) was proposed by Alexander Grothendieck in the 1960s to unify the vast array of similarly behaved cohomology theories such as singular cohomology, de Rham cohomology, étale cohomology, and crystalline cohomology. Philosophically, a "motif" is the "cohomology essence" of a variety.
In the formulation of Grothendieck for smooth projective varieties, a motive is a triple $(X, p, m)$, where $X$ is a smooth projective variety, $p : X \vdash X$ is an idempotent correspondence, and m is an integer; however, such a triple contains almost no information outside the context of Grothendieck's category of pure motives, where a morphism from $(X, p, m)$ to $(Y, q, n)$ is given by a correspondence of degree $n - m$. A more object-focused approach is taken by Pierre Deligne in Le Groupe Fondamental de la Droite Projective Moins Trois Points. In that article, a motive is a "system of realisations" – that is, a tuple
$\left(M_B, M_{\mathrm{DR}}, M_{\mathbb{A}^f}, M_{\operatorname{cris},p}, \operatorname{comp}_{\mathrm{DR},B}, \operatorname{comp}_{\mathbb{A}^f,B}, \operatorname{comp}_{\operatorname{cris} p,\mathrm{DR}}, W, F_\infty, F, \phi, \phi_p\right)$
consisting of modules $M_B, M_{\mathrm{DR}}, M_{\mathbb{A}^f}, M_{\operatorname{cris},p}$ over the rings $\mathbb{Q}, \mathbb{Q}, \mathbb{A}^f, \mathbb{Q}_p$, respectively, various comparison isomorphisms $\operatorname{comp}_{\mathrm{DR},B}, \operatorname{comp}_{\mathbb{A}^f,B}, \operatorname{comp}_{\operatorname{cris} p,\mathrm{DR}}$ between the obvious base changes of these modules, filtrations $W, F$, an action $\phi$ of the absolute Galois group $\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ on $M_{\mathbb{A}^f}$, and a "Frobenius" automorphism $\phi_p$ of $M_{\operatorname{cris},p}$. This data is modeled on the cohomologies of a smooth projective $\mathbb{Q}$-variety and the structures and compatibilities they admit, and gives an idea about what kind of information is contained in a motive.
== Introduction ==
The theory of motives was originally conjectured as an attempt to unify a rapidly multiplying array of cohomology theories, including Betti cohomology, de Rham cohomology, l-adic cohomology, and crystalline cohomology. The general hope is that equations like
[projective line] = [line] + [point]
[projective plane] = [plane] + [line] + [point]
can be put on increasingly solid mathematical footing with a deep meaning. Of course, the above equations are already known to be true in many senses, such as in the sense of CW-complex where "+" corresponds to attaching cells, and in the sense of various cohomology theories, where "+" corresponds to the direct sum.
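One concrete shadow of these equations is point counting over a finite field F_q, where the decompositions become |P¹(F_q)| = q + 1 and |P²(F_q)| = q² + q + 1 (a plain-Python check, counting projective points as nonzero coordinate vectors up to the q − 1 nonzero scalars; we take q prime for simplicity):

```python
def count_projective_points(n, q):
    """|P^n(F_q)|: nonzero vectors in F_q^{n+1} up to the q-1 scalars."""
    return (q ** (n + 1) - 1) // (q - 1)

for q in (2, 3, 5, 7):
    # [projective line] = [line] + [point]:            q + 1 points
    assert count_projective_points(1, q) == q + 1
    # [projective plane] = [plane] + [line] + [point]: q^2 + q + 1 points
    assert count_projective_points(2, q) == q**2 + q + 1
```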
From another viewpoint, motives continue the sequence of generalizations from rational functions on varieties to divisors on varieties to Chow groups of varieties. The generalization happens in more than one direction, since motives can be considered with respect to more types of equivalence than rational equivalence. The admissible equivalences are given by the definition of an adequate equivalence relation.
== Definition of pure motives ==
The construction of the category of pure motives often proceeds in three steps. Below we describe the case of Chow motives $\operatorname{Chow}(k)$, where k is any field.
=== First step: category of (degree 0) correspondences, Corr(k) ===
The objects of $\operatorname{Corr}(k)$ are simply smooth projective varieties over k. The morphisms are correspondences. They generalize morphisms of varieties $X \to Y$, which can be associated with their graphs in $X \times Y$, to fixed-dimensional Chow cycles on $X \times Y$.
It will be useful to describe correspondences of arbitrary degree, although morphisms in $\operatorname{Corr}(k)$ are correspondences of degree 0. In detail, let X and Y be smooth projective varieties and consider a decomposition of X into connected components:
$X = \coprod_i X_i, \qquad d_i := \dim X_i.$
If $r \in \mathbb{Z}$, then the correspondences of degree r from X to Y are
$\operatorname{Corr}^r(k)(X, Y) := \bigoplus_i A^{d_i + r}(X_i \times Y),$
where $A^k(X)$ denotes the Chow cycles of codimension k. Correspondences are often denoted using the "⊢"-notation, e.g., $\alpha : X \vdash Y$. For any $\alpha \in \operatorname{Corr}^r(X, Y)$ and $\beta \in \operatorname{Corr}^s(Y, Z)$, their composition is defined by
$\beta \circ \alpha := \pi_{XZ*}\left(\pi_{XY}^*(\alpha) \cdot \pi_{YZ}^*(\beta)\right) \in \operatorname{Corr}^{r+s}(X, Z),$
where the dot denotes the product in the Chow ring (i.e., intersection).
Returning to the construction of the category $\operatorname {Corr} (k)$, notice that the composition of degree 0 correspondences is again of degree 0. Hence we define the morphisms of $\operatorname {Corr} (k)$ to be degree 0 correspondences.
The following association is a functor (here $\Gamma _{f}\subseteq X\times Y$ denotes the graph of $f:X\to Y$):
$$F:{\begin{cases}\operatorname {SmProj} (k)\longrightarrow \operatorname {Corr} (k)\\X\longmapsto X\\f\longmapsto \Gamma _{f}\end{cases}}$$
Just like $\operatorname {SmProj} (k)$, the category $\operatorname {Corr} (k)$ has direct sums ($X\oplus Y:=X\coprod Y$) and tensor products ($X\otimes Y:=X\times Y$). It is a preadditive category. The sum of morphisms is defined by
$$\alpha +\beta :=(\alpha ,\beta )\in A^{*}(X\times X)\oplus A^{*}(Y\times Y)\hookrightarrow A^{*}\left(\left(X\coprod Y\right)\times \left(X\coprod Y\right)\right).$$
=== Second step: category of pure effective Chow motives, Choweff(k) ===
The transition to motives is made by taking the pseudo-abelian envelope of $\operatorname {Corr} (k)$:
$$\operatorname {Chow} ^{\operatorname {eff} }(k):=\operatorname{Split}(\operatorname {Corr} (k)).$$
In other words, effective Chow motives are pairs of smooth projective varieties X and idempotent correspondences α: X ⊢ X, and morphisms are of a certain type of correspondence:
$$\operatorname {Ob} \left(\operatorname {Chow} ^{\operatorname {eff} }(k)\right):=\{(X,\alpha )\mid (\alpha :X\vdash X)\in \operatorname {Corr} (k){\text{ such that }}\alpha \circ \alpha =\alpha \},$$
$$\operatorname {Mor} ((X,\alpha ),(Y,\beta )):=\{f:X\vdash Y\mid f\circ \alpha =f=\beta \circ f\}.$$
Composition is the above defined composition of correspondences, and the identity morphism of (X, α) is defined to be α : X ⊢ X.
The association
$$h:{\begin{cases}\operatorname {SmProj} (k)&\longrightarrow \operatorname {Chow} ^{\operatorname {eff} }(k)\\X&\longmapsto [X]:=(X,\Delta _{X})\\f&\longmapsto [f]:=\Gamma _{f}\subset X\times Y\end{cases}}$$
where $\Delta _{X}:=[\operatorname{id} _{X}]$ denotes the diagonal of $X\times X$, is a functor. The motive $[X]$ is often called the motive associated to the variety $X$.
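To illustrate how idempotent correspondences split motives (a classical example, stated here with the conventions of this article; $\mathbf{1}$ and the Lefschetz motive $L$ are defined in the next subsection), let $C$ be a smooth projective curve with a rational point $e$. The divisor classes $[C\times e]$ and $[e\times C]$ in $\operatorname{Corr}(k)(C,C)$ are orthogonal idempotents, and together with the diagonal they decompose the motive of $C$:

```latex
% Chow--Kuenneth decomposition of a pointed smooth projective curve:
h(C) = \mathbf{1}\oplus h^{1}(C)\oplus L, \qquad
\mathbf{1}\cong (C,[C\times e]), \quad
L\cong (C,[e\times C]), \quad
h^{1}(C):=\bigl(C,\;[\Delta_C]-[C\times e]-[e\times C]\bigr).
```

For $C=\mathbb{P}^1$ the middle piece $h^{1}(C)$ vanishes, recovering the decomposition $[\mathbb{P}^1]=\mathbf{1}\oplus L$ that appears below.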
As intended, $\operatorname{Chow}^{\operatorname{eff}}(k)$ is a pseudo-abelian category. The direct sum of effective motives is given by
$$([X],\alpha )\oplus ([Y],\beta ):=\left(\left[X\coprod Y\right],\alpha +\beta \right).$$
The tensor product of effective motives is defined by
$$([X],\alpha )\otimes ([Y],\beta ):=(X\times Y,\pi _{X}^{*}\alpha \cdot \pi _{Y}^{*}\beta ),$$
where
$$\pi _{X}:(X\times Y)\times (X\times Y)\to X\times X\quad {\text{and}}\quad \pi _{Y}:(X\times Y)\times (X\times Y)\to Y\times Y.$$
The tensor product of morphisms may also be defined. Let f1 : (X1, α1) → (Y1, β1) and f2 : (X2, α2) → (Y2, β2) be morphisms of motives. Then let γ1 ∈ A*(X1 × Y1) and γ2 ∈ A*(X2 × Y2) be representatives of f1 and f2. Then
$$f_{1}\otimes f_{2}:(X_{1},\alpha _{1})\otimes (X_{2},\alpha _{2})\vdash (Y_{1},\beta _{1})\otimes (Y_{2},\beta _{2}),\qquad f_{1}\otimes f_{2}:=\pi _{1}^{*}\gamma _{1}\cdot \pi _{2}^{*}\gamma _{2},$$
where $\pi _{i}:X_{1}\times X_{2}\times Y_{1}\times Y_{2}\to X_{i}\times Y_{i}$ are the projections.
=== Third step: category of pure Chow motives, Chow(k) ===
To proceed to motives, we adjoin to Choweff(k) a formal inverse (with respect to the tensor product) of a motive called the Lefschetz motive. The effect is that motives become triples instead of pairs. The Lefschetz motive L is
$$L:=(\mathbb {P} ^{1},\lambda ),\qquad \lambda :=\operatorname{pt}\times \mathbb {P} ^{1}\in A^{1}(\mathbb {P} ^{1}\times \mathbb {P} ^{1}).$$
If we define the motive 1, called the trivial Tate motive, by 1 := h(Spec(k)), then the elegant equation
$$[\mathbb {P} ^{1}]=\mathbf {1} \oplus L$$
holds, since
$$\mathbf {1} \cong \left(\mathbb {P} ^{1},\mathbb {P} ^{1}\times \operatorname {pt} \right).$$
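The same pattern continues in higher dimension; as an illustration (a standard computation, not a claim made in the text above), the cell decomposition of projective space gives

```latex
[\mathbb{P}^{n}] = \mathbf{1}\oplus L\oplus L^{\otimes 2}\oplus\cdots\oplus L^{\otimes n},
```

mirroring the fact that any Weil cohomology of $\mathbb{P}^{n}$ has a single generator in each even degree $0,2,\dots ,2n$.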
The tensor inverse of the Lefschetz motive is known as the Tate motive, T := L−1. Then we define the category of pure Chow motives by
$$\operatorname {Chow} (k):=\operatorname {Chow} ^{\operatorname {eff} }(k)[T].$$
A motive is then a triple
$$(X\in \operatorname {SmProj} (k),\ p:X\vdash X,\ n\in \mathbb {Z} )$$
such that morphisms are given by correspondences
$$f:(X,p,m)\to (Y,q,n),\qquad f\in \operatorname {Corr} ^{n-m}(X,Y){\text{ such that }}f\circ p=f=q\circ f,$$
and the composition of morphisms comes from composition of correspondences.
As intended, $\operatorname {Chow} (k)$ is a rigid pseudo-abelian category.
=== Other types of motives ===
In order to define an intersection product, cycles must be "movable" so we can intersect them in general position. Choosing a suitable equivalence relation on cycles will guarantee that every pair of cycles has an equivalent pair in general position that we can intersect. The Chow groups are defined using rational equivalence, but other equivalences are possible, and each defines a different sort of motive. Examples of equivalences, from strongest to weakest, are
Rational equivalence
Algebraic equivalence
Smash-nilpotence equivalence (sometimes called Voevodsky equivalence)
Homological equivalence (in the sense of Weil cohomology)
Numerical equivalence
The literature occasionally calls every type of pure motive a Chow motive, in which case a motive with respect to algebraic equivalence would be called a Chow motive modulo algebraic equivalence.
== Mixed motives ==
For a fixed base field $k$, the category of mixed motives is a conjectural abelian tensor category $MM(k)$, together with a contravariant functor $\operatorname {Var} (k)\to MM(k)$ taking values on all varieties (not just the smooth projective ones, as was the case with pure motives). This should be such that the motivic cohomology defined by $\operatorname {Ext} _{MM}^{*}(\mathbf{1},?)$ coincides with the one predicted by algebraic K-theory, and contains the category of Chow motives in a suitable sense (among other properties). The existence of such a category was conjectured by Alexander Beilinson.
Instead of constructing such a category, it was proposed by Deligne to first construct a category DM having the properties one expects for the derived category $D^{b}(MM(k))$.
Getting MM back from DM would then be accomplished by a (conjectural) motivic t-structure.
The current state of the theory is that we do have a suitable category DM. Already this category is useful in applications. Vladimir Voevodsky's Fields Medal-winning proof of the Milnor conjecture uses these motives as a key ingredient.
There are different definitions due to Hanamura, Levine and Voevodsky. They are known to be equivalent in most cases and we will give Voevodsky's definition below. The category contains Chow motives as a full subcategory and gives the "right" motivic cohomology. However, Voevodsky also shows that (with integral coefficients) it does not admit a motivic t-structure.
=== Geometric mixed motives ===
We will fix a field $k$ of characteristic 0 and let $A=\mathbb {Q}$ or $\mathbb {Z}$ be our coefficient ring.
==== Smooth varieties with correspondences ====
Given a smooth variety $X$ and a variety $Y$, call an integral closed subscheme $W\subset X\times Y$ that is finite over $X$ and surjective over a component of $X$ a prime correspondence from $X$ to $Y$. Then we can take the set of prime correspondences from $X$ to $Y$ and construct a free $A$-module $C_{A}(X,Y)$. Its elements are called finite correspondences. We can then form an additive category ${\mathcal {SmCor}}$ whose objects are smooth varieties and whose morphisms are given by finite correspondences. The only non-trivial part of this "definition" is the fact that we need to describe compositions. These are given by a push-pull formula from the theory of Chow rings.
Typical examples of prime correspondences come from the graph $\Gamma _{f}\subset X\times Y$ of a morphism of varieties $f:X\to Y$.
==== Localizing the homotopy category ====
From here we can form the homotopy category $K^{b}({\mathcal {SmCor}})$ of bounded complexes of smooth correspondences. Here smooth varieties will be denoted $[X]$. If we localize this category with respect to the smallest thick subcategory (meaning it is closed under extensions) containing the morphisms
$$[X\times \mathbb {A} ^{1}]\to [X]$$
and
$$[U\cap V]\xrightarrow {j_{U}'+j_{V}'} [U]\oplus [V]\xrightarrow {j_{U}-j_{V}} [X],$$
then we can form the triangulated category of effective geometric motives ${\mathcal {DM}}_{\text{gm}}^{\text{eff}}(k,A)$. Note that the first class of morphisms localizes $\mathbb {A} ^{1}$-homotopies of varieties, while the second gives the category of geometric mixed motives the Mayer–Vietoris sequence.
Also, note that this category has a tensor structure given by the product of varieties, so $[X]\otimes [Y]=[X\times Y]$.
==== Inverting the Tate motive ====
Using the triangulated structure we can construct a triangle
$$\mathbb {L} \to [\mathbb {P} ^{1}]\to [\operatorname {Spec} (k)]\xrightarrow {[+1]}$$
from the canonical map $\mathbb {P} ^{1}\to \operatorname {Spec} (k)$. We will set $A(1)=\mathbb {L} [-2]$ and call it the Tate motive. Taking the iterated tensor product lets us construct $A(k)$. If we have an effective geometric motive $M$, we let $M(k)$ denote $M\otimes A(k)$.
Moreover, this behaves functorially and forms a triangulated functor. Finally, we can define the category of geometric mixed motives ${\mathcal {DM}}_{\text{gm}}$ as the category of pairs $(M,n)$ for $M$ an effective geometric mixed motive and $n$ an integer representing the twist by the Tate motive. The hom-groups are then the colimit
$$\operatorname {Hom} _{{\mathcal {DM}}_{\text{gm}}}((A,n),(B,m))=\lim _{k\geq -n,-m}\operatorname {Hom} _{{\mathcal {DM}}_{\text{gm}}^{\operatorname {eff} }}(A(k+n),B(k+m)).$$
== Examples of motives ==
=== Tate motives ===
There are several elementary examples of motives which are readily accessible. One of them is the Tate motives, denoted $\mathbb {Q} (n)$, $\mathbb {Z} (n)$, or $A(n)$, depending on the coefficients used in the construction of the category of motives. These are fundamental building blocks in the category of motives because they form the "other part" besides abelian varieties.
=== Motives of curves ===
The motive of a curve can be explicitly understood with relative ease: its Chow ring is just $\mathbb {Z} \oplus \operatorname{Pic}(C)$ for any smooth projective curve $C$; hence Jacobians embed into the category of motives.
== Explanation for non-specialists ==
A commonly applied technique in mathematics is to study objects carrying a particular structure by introducing a category whose morphisms preserve this structure. Then one may ask when two given objects are isomorphic, and ask for a "particularly nice" representative in each isomorphism class. The classification of algebraic varieties, i.e. application of this idea in the case of algebraic varieties, is very difficult due to the highly non-linear structure of the objects. The relaxed question of studying varieties up to birational isomorphism has led to the field of birational geometry. Another way to handle the question is to attach to a given variety X an object of more linear nature, i.e. an object amenable to the techniques of linear algebra, for example a vector space. This "linearization" goes usually under the name of cohomology.
There are several important cohomology theories, which reflect different structural aspects of varieties. The (partly conjectural) theory of motives is an attempt to find a universal way to linearize algebraic varieties, i.e. motives are supposed to provide a cohomology theory that embodies all these particular cohomologies. For example, the genus of a smooth projective curve C, an interesting invariant of the curve, is an integer that can be read off from the dimension of the first Betti cohomology group of C. So the motive of the curve should contain the genus information. Of course, the genus is a rather coarse invariant, so the motive of C is more than just this number.
== The search for a universal cohomology ==
Each algebraic variety X has a corresponding motive [X], so the simplest examples of motives are:
[point]
[projective line] = [point] + [line]
[projective plane] = [plane] + [line] + [point]
These 'equations' hold in many situations, namely for de Rham cohomology and Betti cohomology, l-adic cohomology, the number of points over any finite field, and in multiplicative notation for local zeta-functions.
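The point-count instance of these 'equations' can be checked directly. Below is a minimal sketch (illustrative Python, not from the source; the helper `projective_point_count` is hypothetical and brute-forces prime fields only) verifying that [projective plane] = [plane] + [line] + [point] predicts the count q² + q + 1:

```python
from itertools import product

def projective_point_count(n, q):
    """Brute-force count of F_q-points of projective n-space (q prime).

    Points are nonzero (n+1)-tuples over F_q up to scaling; we normalize
    each tuple by the inverse of its first nonzero coordinate.
    """
    points = set()
    for v in product(range(q), repeat=n + 1):
        if all(c == 0 for c in v):
            continue
        i = next(k for k, c in enumerate(v) if c != 0)
        inv = pow(v[i], -1, q)  # modular inverse, valid since q is prime
        points.add(tuple((c * inv) % q for c in v))
    return len(points)

# [projective plane] = [plane] + [line] + [point] predicts
# q^2 + q + 1 points over F_q:
for q in (2, 3, 5, 7):
    assert projective_point_count(2, q) == q**2 + q + 1
```

In multiplicative notation, the same decomposition gives the local zeta function of the projective plane as a product of three factors, one per "cell".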
The general idea is that one motive has the same structure in any reasonable cohomology theory with good formal properties; in particular, any Weil cohomology theory will have such properties. There are different Weil cohomology theories, they apply in different situations and have values in different categories, and reflect different structural aspects of the variety in question:
Betti cohomology is defined for varieties over (subfields of) the complex numbers, it has the advantage of being defined over the integers and is a topological invariant
de Rham cohomology (for varieties over $\mathbb {C}$) comes with a mixed Hodge structure, it is a differential-geometric invariant
l-adic cohomology (over any field of characteristic ≠ l) has a canonical Galois group action, i.e. has values in representations of the (absolute) Galois group
crystalline cohomology
All these cohomology theories share common properties, e.g. the existence of Mayer–Vietoris sequences, homotopy invariance ($H^{*}(X)\cong H^{*}(X\times \mathbb {A} ^{1})$, the product of X with the affine line), and others. Moreover, they are linked by comparison isomorphisms; for example, the Betti cohomology $H_{\text{Betti}}^{*}(X,\mathbb {Z} /n)$ of a smooth variety X over $\mathbb {C}$ with finite coefficients is isomorphic to l-adic cohomology with finite coefficients.
The theory of motives is an attempt to find a universal theory which embodies all these particular cohomologies and their structures and provides a framework for "equations" like
[projective line] = [line]+[point].
In particular, calculating the motive of any variety X directly gives all the information about the several Weil cohomology theories H*Betti(X), H*DR(X) etc.
Beginning with Grothendieck, people have tried to precisely define this theory for many years.
=== Motivic cohomology ===
Motivic cohomology itself had been invented before the creation of mixed motives by means of algebraic K-theory. The above category provides a neat way to (re)define it by
$$H^{n}(X,m):=H^{n}(X,\mathbb {Z} (m)):=\operatorname {Hom} _{DM}(X,\mathbb {Z} (m)[n]),$$
where $n$ and $m$ are integers and $\mathbb {Z} (m)$ is the $m$-th tensor power of the Tate object $\mathbb {Z} (1)$, which in Voevodsky's setting is the complex $\mathbb {P} ^{1}\to \operatorname {pt}$ shifted by $-2$, and $[n]$ means the usual shift in the triangulated category.
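One concrete consequence, included as an illustration (this is a theorem of Voevodsky, not stated above): in bidegree $(2m,m)$, motivic cohomology of a smooth variety recovers its Chow groups,

```latex
H^{2m}(X,\mathbb{Z}(m)) \;\cong\; \operatorname{CH}^{m}(X),
```

so these Hom-groups in DM really do interpolate between algebraic cycles and the finer invariants predicted by algebraic K-theory.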
== Conjectures related to motives ==
The standard conjectures were first formulated in terms of the interplay of algebraic cycles and Weil cohomology theories. The category of pure motives provides a categorical framework for these conjectures.
The standard conjectures are commonly considered to be very hard and are open in the general case. Grothendieck, with Bombieri, showed the depth of the motivic approach by producing a conditional (very short and elegant) proof of the Weil conjectures (which are proven by different means by Deligne), assuming the standard conjectures to hold.
For example, the Künneth standard conjecture, which states the existence of algebraic cycles πi ⊂ X × X inducing the canonical projectors H*(X) → Hi(X) ↣ H*(X) (for any Weil cohomology H), implies that every pure motive M decomposes into graded pieces of weight n: M = ⨁GrnM. The terminology "weights" comes from a similar decomposition of, say, the de Rham cohomology of smooth projective varieties; see Hodge theory.
Conjecture D, stating the concordance of numerical and homological equivalence, implies the equivalence of pure motives with respect to homological and numerical equivalence. (In particular the former category of motives would not depend on the choice of the Weil cohomology theory). Jannsen (1992) proved the following unconditional result: the category of (pure) motives over a field is abelian and semisimple if and only if the chosen equivalence relation is numerical equivalence.
The Hodge conjecture may be neatly reformulated using motives: it holds if and only if the Hodge realization, mapping any pure motive with rational coefficients (over a subfield $k$ of $\mathbb {C}$) to its Hodge structure, is a full functor
$$H:M(k)_{\mathbb {Q} }\to HS_{\mathbb {Q} }$$
(rational Hodge structures). Here pure motive means pure motive with respect to homological equivalence.
Similarly, the Tate conjecture is equivalent to: the so-called Tate realization, i.e. ℓ-adic cohomology, is a full functor
$$H:M(k)_{\mathbb {Q} _{\ell }}\to \operatorname {Rep} _{\ell }(\operatorname {Gal} (k))$$
(pure motives up to homological equivalence, continuous representations of the absolute Galois group of the base field $k$), which takes values in semi-simple representations. (The latter part is automatic in the case of the Hodge analogue.)
== Tannakian formalism and motivic Galois group ==
To motivate the (conjectural) motivic Galois group, fix a field k and consider the functor
finite separable extensions K of k → non-empty finite sets with a (continuous) transitive action of the absolute Galois group of k
which maps K to the (finite) set of embeddings of K into an algebraic closure of k. In Galois theory this functor is shown to be an equivalence of categories. Notice that fields are 0-dimensional. Motives of this kind are called Artin motives. By $\mathbb {Q}$-linearizing the above objects, another way of expressing the above is to say that Artin motives are equivalent to finite-dimensional $\mathbb {Q}$-vector spaces together with an action of the Galois group.
The objective of the motivic Galois group is to extend the above equivalence to higher-dimensional varieties. In order to do this, the technical machinery of Tannakian category theory (going back to Tannaka–Krein duality, but a purely algebraic theory) is used. Its purpose is to shed light on both the Hodge conjecture and the Tate conjecture, the outstanding questions in algebraic cycle theory. Fix a Weil cohomology theory $H$. It gives a functor from $M_{\text{num}}$ (pure motives using numerical equivalence) to finite-dimensional $\mathbb {Q}$-vector spaces. It can be shown that the former category is a Tannakian category. Assuming the equivalence of homological and numerical equivalence, i.e. the above standard conjecture D, the functor $H$ is an exact faithful tensor-functor. Applying the Tannakian formalism, one concludes that $M_{\text{num}}$ is equivalent to the category of representations of an algebraic group $G$, known as the motivic Galois group.
The motivic Galois group is to the theory of motives what the Mumford–Tate group is to Hodge theory. Again speaking in rough terms, the Hodge and Tate conjectures are types of invariant theory (the spaces that are morally the algebraic cycles are picked out by invariance under a group, if one sets up the correct definitions). The motivic Galois group has the surrounding representation theory. (What it is not, is a Galois group; however in terms of the Tate conjecture and Galois representations on étale cohomology, it predicts the image of the Galois group, or, more accurately, its Lie algebra.)
== See also ==
Ring of periods
Motivic cohomology
Presheaf with transfers
Mixed Hodge module
Motivic L-function
Nori motive
Motivic sheaf
== References ==
=== Survey Articles ===
Beilinson, Alexander; Vologodsky, Vadim (2007), A DG guide to Voevodsky's motives, p. 4004, arXiv:math/0604004, Bibcode:2006math......4004B (technical introduction with comparatively short proofs)
Motives over Finite Fields - J.S. Milne
Mazur, Barry (2004), "What is ... a motive?" (PDF), Notices of the American Mathematical Society, 51 (10): 1214–1216, ISSN 0002-9920, MR 2104916 (motives-for-dummies text).
Serre, Jean-Pierre (1991), "Motifs" (PDF), Astérisque (in French) (198): 11, 333–349 (1992), ISSN 0303-1179, MR 1144336, archived from the original (PDF) on 2022-01-10 (high-level introduction to motives in French).
Tabuada, Gonçalo (2011), "A guided tour through the garden of noncommutative motives", Journal of K-theory, arXiv:1108.3787
=== Books ===
André, Yves (2004), Une introduction aux motifs (motifs purs, motifs mixtes, périodes), Panoramas et Synthèses, vol. 17, Paris: Société Mathématique de France, ISBN 978-2-85629-164-1, MR 2115000
Jannsen, Uwe; Kleiman, Steven; Serre, Jean-Pierre, eds. (1994), Motives, Proceedings of Symposia in Pure Mathematics, vol. 55, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-1636-3, MR 1265518
L. Breen: Tannakian categories.
S. Kleiman: The standard conjectures.
A. Scholl: Classical motives. (detailed exposition of Chow motives)
Huber, Annette; Müller-Stach, Stefan (2017-03-20), Periods and Nori Motives, Springer, ISBN 978-3-319-50925-9
Mazza, Carlo; Voevodsky, Vladimir; Weibel, Charles (2006), Lecture notes on motivic cohomology, Clay Mathematics Monographs, vol. 2, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-3847-1, MR 2242284
Levine, Marc (1998). Mixed Motives. Mathematical surveys and monographs, 57. American Mathematical Society. ISBN 978-0-8218-0785-9.
Friedlander, Eric M.; Grayson, Daniel R. (2005). Handbook of K-Theory. Springer. ISBN 978-3-540-23019-9.
=== Reference Literature ===
Jannsen, Uwe (1992), "Motives, numerical equivalence and semi-simplicity" (PDF), Inventiones Math., 107: 447–452, Bibcode:1992InMat.107..447J, doi:10.1007/BF01231898, S2CID 120799359
Kleiman, Steven L. (1972), "Motives", in Oort, F. (ed.), Algebraic geometry, Oslo 1970 (Proc. Fifth Nordic Summer-School in Math., Oslo, 1970), Groningen: Wolters-Noordhoff, pp. 53–82 (adequate equivalence relations on cycles).
Milne, James S. Motives — Grothendieck’s Dream
Voevodsky, Vladimir; Suslin, Andrei; Friedlander, Eric M. (2000), Cycles, transfers, and motivic homology theories, Annals of Mathematics Studies, Princeton, New Jersey: Princeton University Press, ISBN 978-0-691-04814-7 (Voevodsky's definition of mixed motives. Highly technical).
Huber, Annette (2000). "Realization of Voevodsky's motives" (PDF). Journal of Algebraic Geometry. 9: 755–799. S2CID 17160833. Archived from the original (PDF) on 2017-09-26.
=== Future directions ===
Musings on $\mathbb {Q} (1/4)$: Arithmetic spin structures on elliptic curves
What are "Fractional Motives"?
== External links ==
Quotations related to Motive (algebraic geometry) at Wikiquote
In algebraic geometry, a moduli space of (algebraic) curves is a geometric space (typically a scheme or an algebraic stack) whose points represent isomorphism classes of algebraic curves. It is thus a special case of a moduli space. Depending on the restrictions applied to the classes of algebraic curves considered, the corresponding moduli problem and moduli space are different. One also distinguishes between fine and coarse moduli spaces for the same moduli problem.
The most basic problem is that of moduli of smooth complete curves of a fixed genus. Over the field of complex numbers these correspond precisely to compact Riemann surfaces of the given genus, for which Bernhard Riemann proved the first results about moduli spaces, in particular their dimensions ("number of parameters on which the complex structure depends").
== Moduli stacks of stable curves ==
The moduli stack ${\mathcal {M}}_{g}$ classifies families of smooth projective curves, together with their isomorphisms. When $g>1$, this stack may be compactified by adding new "boundary" points which correspond to stable nodal curves (together with their isomorphisms). A curve is stable if it is complete, connected, has no singularities other than double points, and has only a finite group of automorphisms. The resulting stack is denoted ${\overline {\mathcal {M}}}_{g}$. Both moduli stacks carry universal families of curves.
Both stacks above have dimension $3g-3$; hence a stable nodal curve can be completely specified by choosing the values of $3g-3$ parameters when $g>1$. In lower genus, one must account for the presence of smooth families of automorphisms by subtracting their number. There is exactly one complex curve of genus zero, the Riemann sphere, and its group of isomorphisms is PGL(2). Hence the dimension of ${\mathcal {M}}_{0}$ is equal to
$$\dim({\text{space of genus 0 curves}})-\dim({\text{group of automorphisms}})=0-\dim(\mathrm {PGL} (2))=-3.$$
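The dimension count for $g\ge 2$ can be sketched the same way (standard deformation theory, added here as a check against the $3g-3$ stated above): first-order deformations of a curve $C$ are classified by $H^{1}(C,T_{C})$, and Serre duality plus Riemann–Roch give

```latex
\dim H^{1}(C,T_{C}) = h^{0}(C,\omega_{C}^{\otimes 2})
  = \deg(\omega_{C}^{\otimes 2}) - g + 1
  = (4g-4) - g + 1 = 3g-3,
```

where the Riemann–Roch step uses $\deg(\omega_C^{\otimes 2})=4g-4>2g-2$, so the correction term vanishes; and since $T_C$ has negative degree for $g\ge 2$, there are no infinitesimal automorphisms to subtract.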
Likewise, in genus 1, there is a one-dimensional space of curves, but every such curve has a one-dimensional group of automorphisms. Hence, the stack ${\mathcal {M}}_{1}$ has dimension 0.
=== Construction and irreducibility ===
It is a non-trivial theorem, proved by Pierre Deligne and David Mumford, that the moduli stack ${\mathcal {M}}_{g}$ is irreducible, meaning it cannot be expressed as the union of two proper substacks. They prove this by analyzing the locus $H_{g}$ of stable curves in the Hilbert scheme $\mathrm {Hilb} _{\mathbb {P} ^{5g-5-1}}^{P_{g}(n)}$ of tri-canonically embedded curves (from the embedding of the very ample $\omega _{C}^{\otimes 3}$ for every curve) which have Hilbert polynomial $P_{g}(n)=(6n-1)(g-1)$. Then, the stack $[H_{g}/\mathrm {PGL} (5g-6)]$ is a construction of the moduli space ${\mathcal {M}}_{g}$. Using deformation theory, Deligne and Mumford show this stack is smooth, and they use the stack of isomorphisms between stable curves $\mathrm {Isom} _{S}(C,C')$ to show that ${\mathcal {M}}_{g}$ has finite stabilizers, hence it is a Deligne–Mumford stack. Moreover, they find a stratification of $H_{g}$ as
$$H_{g}^{o}\coprod H_{g,1}\coprod \cdots \coprod H_{g,n},$$
where $H_{g}^{o}$ is the subscheme of smooth stable curves and $H_{g,i}$ is an irreducible component of $S^{*}=H_{g}\setminus H_{g}^{o}$. They analyze the components of ${\mathcal {M}}_{g}^{o}=H_{g}^{o}/\mathrm {PGL} (5g-6)$ (as a GIT quotient). If there existed multiple components of $H_{g}^{o}$, none of them would be complete. Also, any component of $H_{g}$ must contain non-singular curves. Consequently, the singular locus $S^{*}$ is connected, hence it is contained in a single component of $H_{g}$. Furthermore, because every component intersects $S^{*}$, all components must be contained in a single component, hence the coarse space $H_{g}$ is irreducible. From the general theory of algebraic stacks, this implies the stack quotient ${\mathcal {M}}_{g}$ is irreducible.
=== Properness ===
Properness, or compactness for orbifolds, follows from a theorem on stable reduction of curves. This can be shown using a theorem of Grothendieck regarding the stable reduction of abelian varieties, and showing its equivalence to the stable reduction of curves.
=== Coarse moduli spaces ===
One can also consider the coarse moduli spaces representing isomorphism classes of smooth or stable curves. These coarse moduli spaces were actually studied before the notion of moduli stack was introduced. In fact, the idea of a moduli stack was introduced by Deligne and Mumford in an attempt to prove the projectivity of the coarse moduli spaces. In recent years, it has become apparent that the stack of curves is actually the more fundamental object.
The coarse moduli spaces have the same dimension as the stacks when $g>1$; however, in genus zero the coarse moduli space has dimension zero, and in genus one, it has dimension one.
== Examples of low genus moduli spaces ==
=== Genus 0 ===
The geometry of the moduli space of genus $0$ curves can be established using deformation theory. The number of moduli for a genus $0$ curve, e.g. $\mathbb {P} ^{1}$, is given by the cohomology group
$$H^{1}(C,T_{C}).$$
With Serre duality this cohomology group is isomorphic to
$$H^{1}(C,T_{C})\cong H^{0}(C,\omega _{C}\otimes T_{C}^{\vee })\cong H^{0}(C,\omega _{C}^{\otimes 2})$$
for the dualizing sheaf
ω
C
{\displaystyle \omega _{C}}
Using Riemann–Roch, the degree of the canonical bundle is {\displaystyle -2}, so the degree of {\displaystyle \omega _{C}^{\otimes 2}} is {\displaystyle -4}; hence there are no global sections, meaning {\displaystyle H^{0}(C,\omega _{C}^{\otimes 2})=0}, showing there are no deformations of genus {\displaystyle 0} curves. This proves {\displaystyle {\mathcal {M}}_{0}} is just a single point, and the only genus {\displaystyle 0} curve is {\displaystyle \mathbb {P} ^{1}}. The only technical difficulty is that the automorphism group of {\displaystyle \mathbb {P} ^{1}} is the algebraic group {\displaystyle {\text{PGL}}(2,\mathbb {C} )}, which rigidifies once three points on {\displaystyle \mathbb {P} ^{1}} are fixed, so most authors take {\displaystyle {\mathcal {M}}_{0}} to mean {\displaystyle {\mathcal {M}}_{0,3}}.
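The degree bookkeeping behind this argument is easy to check numerically. A minimal sketch, with helper names that are ours rather than standard:

```python
def deg_canonical(g):
    """Degree of the canonical (dualizing) bundle omega_C in genus g."""
    return 2 * g - 2

def h0_omega_sq(g):
    """dim H^0(C, omega_C^{x2}) via Riemann-Roch: 0 in genus 0 (the
    bundle has negative degree -4), 1 in genus 1, and 3g - 3 for g >= 2."""
    if g == 0:
        return 0
    if g == 1:
        return 1
    return 3 * g - 3

# Genus 0: deg(omega_C) = -2, deg(omega_C^{x2}) = -4, no global
# sections, hence no deformations of P^1 and M_0 is a point.
assert deg_canonical(0) == -2
assert 2 * deg_canonical(0) == -4
assert h0_omega_sq(0) == 0

# For g >= 2 the same cohomology group computes dim M_g = 3g - 3.
assert h0_omega_sq(2) == 3
```

The same function also records why genus 0 and 1 fall outside the `3g - 3` pattern noted for the coarse spaces above.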
=== Genus 1 ===
The genus 1 case is one of the first well-understood cases of moduli spaces, at least over the complex numbers, because isomorphism classes of elliptic curves are classified by the j-invariant {\displaystyle j:{\mathcal {M}}_{1,1}|_{\mathbb {C} }\to \mathbb {A} _{\mathbb {C} }^{1}} where {\displaystyle {\mathcal {M}}_{1,1}|_{\mathbb {C} }={\mathcal {M}}_{1,1}\times _{{\text{Spec}}(\mathbb {Z} )}{\text{Spec}}(\mathbb {C} )}.
Topologically, {\displaystyle {\mathcal {M}}_{1,1}|_{\mathbb {C} }} is just the affine line, but it can be compactified to a stack with underlying topological space {\displaystyle \mathbb {P} _{\mathbb {C} }^{1}} by adding a stable curve at infinity: the nodal cubic, a degenerate elliptic curve with a single node. The construction of the general case over {\displaystyle {\text{Spec}}(\mathbb {Z} )} was originally completed by Deligne and Rapoport.
Note that most authors consider the case of genus one curves with one marked point serving as the origin of the group, since otherwise a hypothetical moduli space {\displaystyle {\mathcal {M}}_{1}} would have, at each point {\displaystyle [C]\in {\mathcal {M}}_{1}}, a stabilizer group given by the curve itself, since elliptic curves have an abelian group structure. This adds unneeded technical complexity to this hypothetical moduli space. On the other hand, {\displaystyle {\mathcal {M}}_{1,1}} is a smooth Deligne–Mumford stack.
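Over {\displaystyle \mathbb {C} } the classifying map can be written down explicitly: a curve in short Weierstrass form y² = x³ + ax + b has j-invariant 1728·4a³/(4a³ + 27b²). A small sketch of this standard formula (characteristic 0, smooth curves only):

```python
from fractions import Fraction

def j_invariant(a, b):
    """j-invariant of the elliptic curve y^2 = x^3 + a*x + b over Q."""
    a, b = Fraction(a), Fraction(b)
    disc = 4 * a**3 + 27 * b**2   # curve is smooth iff disc != 0
    if disc == 0:
        raise ValueError("singular curve: not a point of M_{1,1}")
    return 1728 * 4 * a**3 / disc

# y^2 = x^3 + x has j = 1728; y^2 = x^3 + 1 has j = 0.
assert j_invariant(1, 0) == 1728
assert j_invariant(0, 1) == 0

# Non-isomorphic Weierstrass equations can share a j-invariant; over C
# equal j means isomorphic curves, so j parameterises the coarse space.
assert j_invariant(4, 0) == j_invariant(1, 0)
```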
=== Genus 2 ===
==== Affine parameter space ====
In genus 2 it is a classical result that all such curves are hyperelliptic (pg. 298), so the moduli space can be determined completely from the branch locus of the curve using the Riemann–Hurwitz formula. Since an arbitrary genus 2 curve is given by a polynomial of the form
{\displaystyle y^{2}-x(x-1)(x-a)(x-b)(x-c)} for some uniquely defined {\displaystyle a,b,c\in \mathbb {A} ^{1}}, the parameter space for such curves is given by
{\displaystyle \mathbb {A} ^{3}\setminus (\Delta _{a,b}\cup \Delta _{a,c}\cup \Delta _{b,c}),} where {\displaystyle \Delta _{i,j}} is the locus where the two corresponding branch points coincide ({\displaystyle i=j}).
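The count of three free parameters matches the expected dimension: by Riemann–Hurwitz a genus-2 double cover of the projective line has 2g + 2 = 6 branch points, and PGL(2) normalises three of them to 0, 1, ∞. A quick check, with helper names that are ours:

```python
def branch_points_hyperelliptic(g):
    """Riemann-Hurwitz for a degree-2 cover C -> P^1:
    2g - 2 = 2 * (-2) + B, so B = 2g + 2 branch points."""
    return 2 * g + 2

def dim_moduli(g):
    """dim M_g = 3g - 3 for g >= 2."""
    return 3 * g - 3

# Genus 2: six branch points; three are normalised away by PGL(2),
# leaving exactly the parameters a, b, c above.
g = 2
assert branch_points_hyperelliptic(g) == 6
assert branch_points_hyperelliptic(g) - 3 == dim_moduli(g) == 3
```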
==== Weighted projective space ====
Using a weighted projective space and the Riemann–Hurwitz formula, a hyperelliptic curve can be described as a polynomial of the form {\displaystyle z^{2}=ax^{6}+bx^{5}y+cx^{4}y^{2}+dx^{3}y^{3}+ex^{2}y^{4}+fxy^{5}+gy^{6},}
where {\displaystyle a,\ldots ,g} are parameters for sections of {\displaystyle \Gamma (\mathbb {P} (3,1),{\mathcal {O}}(g))}. Then, the locus of sections which contain no triple root contains every curve {\displaystyle C} represented by a point {\displaystyle [C]\in {\mathcal {M}}_{2}}.
=== Genus 3 ===
This is the first moduli space of curves which has both a hyperelliptic locus and a non-hyperelliptic locus. The non-hyperelliptic curves are all given by plane curves of degree 4 (using the genus–degree formula), which are parameterized by the smooth locus in the Hilbert scheme of hypersurfaces {\displaystyle \operatorname {Hilb} _{\mathbb {P} ^{2}}^{8t-4}\cong \mathbb {P} ^{{\binom {6}{4}}-1}}.
Then, the moduli space is stratified by the substacks {\displaystyle {\mathcal {M}}_{3}=[H_{2}/\mathrm {PGL} (3)]\coprod {\mathcal {M}}_{3}^{\mathrm {hyp} }}.
== Birational geometry ==
=== Unirationality conjecture ===
In all of the previous cases, the moduli spaces can be found to be unirational, meaning there exists a dominant rational morphism {\displaystyle \mathbb {P} ^{n}\to {\mathcal {M}}_{g}}, and it was long expected this would be true in all genera. In fact, Severi proved this to be true for genera up to {\displaystyle 10}. However, it turns out that for genus {\displaystyle g\geq 23} all such moduli spaces are of general type, meaning they are not unirational. This was shown by studying the Kodaira dimension of the coarse moduli spaces
{\displaystyle \kappa _{g}=\mathrm {Kod} ({\overline {\mathcal {M}}}_{g}),} and finding {\displaystyle \kappa _{g}>0} for {\displaystyle g\geq 23}. In fact, for {\displaystyle g>23}, {\displaystyle \kappa _{g}=3g-3=\dim({\mathcal {M}}_{g}),} and hence {\displaystyle {\mathcal {M}}_{g}} is of general type.
==== Geometric implication ====
This is significant geometrically because it implies that any linear system on a ruled variety cannot contain the universal curve {\displaystyle {\mathcal {C}}_{g}}.
== Stratification of boundary ==
The moduli space {\displaystyle {\overline {\mathcal {M}}}_{g}} has a natural stratification on the boundary {\displaystyle \partial {\overline {\mathcal {M}}}_{g}}, whose points represent singular genus {\displaystyle g} curves. It decomposes into strata {\displaystyle \partial {\overline {\mathcal {M}}}_{g}=\coprod _{0\leq h\leq (g/2)}\Delta _{h}^{*}},
where {\displaystyle \Delta _{h}^{*}\cong {\overline {\mathcal {M}}}_{h}\times {\overline {\mathcal {M}}}_{g-h}} for {\displaystyle 1\leq h<g/2}.
{\displaystyle \Delta _{0}^{*}\cong {\overline {\mathcal {M}}}_{g-1,2}/(\mathbb {Z} /2)} where the action permutes the two marked points.
{\displaystyle \Delta _{g/2}\cong ({\overline {\mathcal {M}}}_{g/2}\times {\overline {\mathcal {M}}}_{g/2})/(\mathbb {Z} /2)} whenever {\displaystyle g} is even.
The curves lying above these loci correspond to:
A pair of curves {\displaystyle C,C'} connected at a double point.
The normalization of a genus {\displaystyle g} curve at a single double point singularity.
A pair of curves of the same genus connected at a double point up to permutation.
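The decomposition above is simple bookkeeping over h. A sketch that merely enumerates labels for the strata (our own formatting conventions, not a computation on the stack itself):

```python
def boundary_strata(g):
    """Labels for the strata Delta_h^* of the boundary of Mbar_g,
    following the decomposition over 0 <= h <= g/2 in the text."""
    strata = ["Delta_0: Mbar_{%d,2} / (Z/2)" % (g - 1)]
    h = 1
    while h < g / 2:
        strata.append("Delta_%d: Mbar_%d x Mbar_%d" % (h, h, g - h))
        h += 1
    if g % 2 == 0:  # the symmetric middle stratum exists only for even g
        half = g // 2
        strata.append("Delta_%d: (Mbar_%d x Mbar_%d) / (Z/2)" % (half, half, half))
    return strata

# Genus 2 recovers exactly the two strata of the next subsection.
assert boundary_strata(2) == [
    "Delta_0: Mbar_{1,2} / (Z/2)",
    "Delta_1: (Mbar_1 x Mbar_1) / (Z/2)",
]
```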
=== Stratification for genus 2 ===
For the genus {\displaystyle 2} case, there is a stratification given by {\displaystyle {\begin{aligned}\partial {\overline {\mathcal {M}}}_{2}&=\Delta _{0}^{*}\coprod \Delta _{1}^{*}\\&={\overline {\mathcal {M}}}_{1,2}/(\mathbb {Z} /2)\coprod ({\overline {\mathcal {M}}}_{1}\times {\overline {\mathcal {M}}}_{1})/(\mathbb {Z} /2)\end{aligned}}}.
Further analysis of these strata can be used to give the generators of the Chow ring {\displaystyle A^{*}({\overline {\mathcal {M}}}_{2})} (proposition 9.1).
== Moduli of marked curves ==
One can also enrich the problem by considering the moduli stack of genus g nodal curves with n marked points, pairwise distinct and distinct from the nodes. Such marked curves are said to be stable if the subgroup of curve automorphisms which fix the marked points is finite. The resulting moduli stacks of smooth (or stable) genus g curves with n marked points are denoted
{\displaystyle {\mathcal {M}}_{g,n}} (or {\displaystyle {\overline {\mathcal {M}}}_{g,n}}), and have dimension {\displaystyle 3g-3+n}.
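The dimension formula is worth recording together with the stability constraint 2g − 2 + n > 0 (finitely many automorphisms fixing the marked points). A minimal sketch:

```python
def dim_moduli_marked(g, n):
    """Dimension of M_{g,n}; stability requires 2g - 2 + n > 0."""
    if 2 * g - 2 + n <= 0:
        raise ValueError("no stable curves of type (g, n) = (%d, %d)" % (g, n))
    return 3 * g - 3 + n

assert dim_moduli_marked(0, 3) == 0   # M_{0,3} is a point
assert dim_moduli_marked(1, 1) == 1   # the j-line
assert dim_moduli_marked(2, 0) == 3
```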
A case of particular interest is the moduli stack {\displaystyle {\overline {\mathcal {M}}}_{1,1}} of genus 1 curves with one marked point. This is the stack of elliptic curves. Level 1 modular forms are sections of line bundles on this stack, and level N modular forms are sections of line bundles on the stack of elliptic curves with level N structure (roughly a marking of the points of order N).
== Boundary geometry ==
An important property of the compactified moduli spaces {\displaystyle {\overline {\mathcal {M}}}_{g,n}} is that their boundary can be described in terms of moduli spaces {\displaystyle {\overline {\mathcal {M}}}_{g',n'}} for genera {\displaystyle g'<g}. Given a marked, stable, nodal curve one can associate its dual graph, a graph with vertices labelled by nonnegative integers and allowed to have loops, multiple edges and also numbered half-edges. Here the vertices of the graph correspond to irreducible components of the nodal curve, the labelling of a vertex is the arithmetic genus of the corresponding component, edges correspond to nodes of the curve and the half-edges correspond to the markings. The closure of the locus of curves with a given dual graph in {\displaystyle {\overline {\mathcal {M}}}_{g,n}} is isomorphic to the stack quotient of a product {\displaystyle \prod _{v}{\overline {\mathcal {M}}}_{g_{v},n_{v}}} of compactified moduli spaces of curves by a finite group. In the product the factor corresponding to a vertex v has genus {\displaystyle g_{v}} taken from the labelling and number of markings {\displaystyle n_{v}} equal to the number of outgoing edges and half-edges at v. The total genus g is the sum of the {\displaystyle g_{v}} plus the number of closed cycles in the graph.
Stable curves whose dual graph contains a vertex labelled by {\displaystyle g_{v}=g} (hence all other vertices have {\displaystyle g_{v}=0} and the graph is a tree) are said to have "rational tails", and their moduli space is denoted {\displaystyle {\mathcal {M}}_{g,n}^{\mathrm {r.t.} }}. Stable curves whose dual graph is a tree are said to be of "compact type" (because the Jacobian is compact), and their moduli space is denoted {\displaystyle {\mathcal {M}}_{g,n}^{\mathrm {c.} }}.
== See also ==
Witten conjecture
Tautological ring
Grothendieck–Riemann–Roch theorem
== References ==
=== Classic references ===
Grothendieck, Alexander (1960–1961). "Techniques de construction en géométrie analytique. I. Description axiomatique de l'espace de Teichmüller et de ses variantes" (PDF). Séminaire Henri Cartan. 13 (1). Paris. Zbl 0142.33503. Exposés No. 7 and 8.
Mumford, David; Fogarty, John; Kirwan, Frances Clare (1994). Geometric invariant theory (3rd enl. ed.). Berlin: Springer-Verlag. ISBN 3-540-56963-4. MR 1304906. OCLC 29184987.
Deligne, Pierre; Mumford, David (1969). "The irreducibility of the space of curves of given genus" (PDF). Publications Mathématiques de l'IHÉS. 36: 75–109. CiteSeerX 10.1.1.589.288. doi:10.1007/bf02684599. S2CID 16482150.
=== Books on moduli of curves ===
Harris, Joe; Morrison, Ian (1998). Moduli of Curves. Springer Verlag. ISBN 978-0-387-98429-2.
Katz, Nicholas M; Mazur, Barry (1985). Arithmetic Moduli of Elliptic Curves. Princeton University Press. ISBN 978-0-691-08352-0.
Arbarello, Enrico; Cornalba, Maurizio; Griffiths, Phillip A. (2011). Geometry of Algebraic Curves II. Grundlehren der mathematischen Wissenschaften. Vol. 268. doi:10.1007/978-3-540-69392-5. ISBN 978-3-540-42688-2.
=== Cohomology and intersection theory ===
Zvonkine, Dimitri (2012). "An introduction to moduli spaces of curves and their intersection theory". In Papadopoulos, Athanase (ed.). Handbook of Teichmüller Theory, Volume III (PDF). IRMA Lectures in Mathematics and Theoretical Physics. Vol. 17. Zürich, Switzerland: European Mathematical Society Publishing House. pp. 667–716. doi:10.4171/103-1/12. ISBN 978-3-03719-103-3. MR 2952773.
Faber, Carel; Pandharipande, Rahul (2013). "Tautological and non-tautological cohomology of the moduli space of curves" (PDF). In Farkas, Gavril; Morrison, Ian (eds.). Handbook of Moduli, Vol. I. Advanced Lectures in Mathematics (ALM). Vol. 24. Somerville, MA: International Press. pp. 293–330. ISBN 9781571462572. MR 3184167.
== External links ==
"Topology and geometry of the moduli space of curves". aimath.org. American Institute of Mathematics.
"Moduli of Stable Maps, Gromov-Witten Invariants, and Quantum Cohomology"
In universal algebra, a variety of algebras or equational class is the class of all algebraic structures of a given signature satisfying a given set of identities. For example, the groups form a variety of algebras, as do the abelian groups, the rings, the monoids etc. According to Birkhoff's theorem, a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras, and (direct) products. In the context of category theory, a variety of algebras, together with its homomorphisms, forms a category; these are usually called finitary algebraic categories.
A covariety is the class of all coalgebraic structures of a given signature.
== Terminology ==
A variety of algebras should not be confused with an algebraic variety, which means a set of solutions to a system of polynomial equations. They are formally quite distinct and their theories have little in common.
The term "variety of algebras" refers to algebras in the general sense of universal algebra; there is also a more specific sense of algebra, namely as algebra over a field, i.e. a vector space equipped with a bilinear multiplication.
== Definition ==
A signature (in this context) is a set, whose elements are called operations, each of which is assigned a natural number (0, 1, 2, ...) called its arity. Given a signature σ and a set V, whose elements are called variables, a word is a finite rooted tree in which each node is labelled by either a variable or an operation, such that every node labelled by a variable has no branches away from the root and every node labelled by an operation o has as many branches away from the root as the arity of o. An equational law is a pair of such words; the axiom consisting of the words v and w is written as v = w.
A theory consists of a signature, a set of variables, and a set of equational laws. Any theory gives a variety of algebras as follows. Given a theory T, an algebra of T consists of a set A together with, for each operation o of T with arity n, a function oA : An → A such that for each axiom v = w and each assignment of elements of A to the variables in that axiom, the equation holds that is given by applying the operations to the elements of A as indicated by the trees defining v and w. The class of algebras of a given theory T is called a variety of algebras.
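The tree definition of words translates directly into code: a word is evaluated recursively, and an algebra satisfies a law when both words agree under every assignment of elements to variables. A sketch over finite carriers (the encoding of words as nested tuples is our own):

```python
from itertools import product

def evaluate(word, ops, env):
    """Evaluate a word (a rooted labelled tree) in an algebra.

    A word is either a variable name (str) or a tuple
    ("op_name", subword_1, ..., subword_arity); `ops` maps operation
    names to functions and `env` assigns elements to variables.
    """
    if isinstance(word, str):
        return env[word]
    op, *args = word
    return ops[op](*(evaluate(a, ops, env) for a in args))

def variables(word):
    """Set of variables occurring in a word."""
    if isinstance(word, str):
        return {word}
    return set().union(*(variables(a) for a in word[1:]))

def satisfies(law, ops, carrier):
    """Check the equational law v = w under every assignment."""
    v, w = law
    xs = sorted(variables(v) | variables(w))
    return all(
        evaluate(v, ops, dict(zip(xs, vals)))
        == evaluate(w, ops, dict(zip(xs, vals)))
        for vals in product(carrier, repeat=len(xs))
    )

# The associative law x(yz) = (xy)z as a pair of words:
assoc = (("m", "x", ("m", "y", "z")), ("m", ("m", "x", "y"), "z"))

# (Z/4, +) satisfies it, so it lies in the variety of semigroups;
# (Z/4, -) does not.
assert satisfies(assoc, {"m": lambda a, b: (a + b) % 4}, range(4))
assert not satisfies(assoc, {"m": lambda a, b: (a - b) % 4}, range(4))
```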
Given two algebras of a theory T, say A and B, a homomorphism is a function f : A → B such that
{\displaystyle f(o_{A}(a_{1},\dots ,a_{n}))=o_{B}(f(a_{1}),\dots ,f(a_{n}))}
for every operation o of arity n. Any theory gives a category where the objects are algebras of that theory and the morphisms are homomorphisms.
== Examples ==
The class of all semigroups forms a variety of algebras of signature (2), meaning that a semigroup has a single binary operation. A sufficient defining equation is the associative law:
{\displaystyle x(yz)=(xy)z.}
The class of groups forms a variety of algebras of signature (2,0,1), the three operations being respectively multiplication (binary), identity (nullary, a constant) and inversion (unary). The familiar axioms of associativity, identity and inverse form one suitable set of identities:
{\displaystyle x(yz)=(xy)z}
{\displaystyle 1x=x1=x}
{\displaystyle xx^{-1}=x^{-1}x=1.}
The class of rings also forms a variety of algebras. The signature here is (2,2,0,0,1) (two binary operations, two constants, and one unary operation).
If we fix a specific ring R, we can consider the class of left R-modules. To express the scalar multiplication with elements from R, we need one unary operation for each element of R. If the ring is infinite, we will thus have infinitely many operations, which is allowed by the definition of an algebraic structure in universal algebra. We will then also need infinitely many identities to express the module axioms, which is allowed by the definition of a variety of algebras. So the left R-modules do form a variety of algebras.
The fields do not form a variety of algebras; the requirement that all non-zero elements be invertible cannot be expressed as a universally satisfied identity (see below).
The cancellative semigroups also do not form a variety of algebras, since the cancellation property is not an equation; it is an implication that is not equivalent to any set of equations. However, they do form a quasivariety, as the implication defining the cancellation property is an example of a quasi-identity.
== Birkhoff's variety theorem ==
Given a class of algebraic structures of the same signature, we can define the notions of homomorphism, subalgebra, and product. Garrett Birkhoff proved that a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras and arbitrary products. This is a result of fundamental importance to universal algebra and known as Birkhoff's variety theorem or as the HSP theorem. H, S, and P stand, respectively, for the operations of homomorphism, subalgebra, and product.
One direction of the equivalence mentioned above, namely that a class of algebras satisfying some set of identities must be closed under the HSP operations, follows immediately from the definitions. Proving the converse—classes of algebras closed under the HSP operations must be equational—is more difficult.
Using the easy direction of Birkhoff's theorem, we can for example verify the claim made above, that the field axioms are not expressible by any possible set of identities: the product of fields is not a field, so fields do not form a variety.
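The verification is a finite check: in the product ring {\displaystyle \mathbb {F} _{2}\times \mathbb {F} _{2}} the element (1, 0) is nonzero yet has no multiplicative inverse. A sketch:

```python
from itertools import product

F2 = [0, 1]

def mul(a, b):
    """Componentwise multiplication on the product ring F2 x F2."""
    return ((a[0] * b[0]) % 2, (a[1] * b[1]) % 2)

one = (1, 1)

# (1, 0) is nonzero in the product but has no multiplicative inverse,
# so the product of two fields is not a field — fields form no variety.
x = (1, 0)
assert x != (0, 0)
assert all(mul(x, y) != one for y in product(F2, repeat=2))
```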
== Subvarieties ==
A subvariety of a variety of algebras V is a subclass of V that has the same signature as V and is itself a variety, i.e., is defined by a set of identities.
Notice that although every group becomes a semigroup when the identity as a constant is omitted (and/or the inverse operation is omitted), the class of groups does not form a subvariety of the variety of semigroups because the signatures are different.
Similarly, the class of semigroups that are groups is not a subvariety of the variety of semigroups. The class of monoids that are groups contains {\displaystyle \langle \mathbb {Z} ,+\rangle } and does not contain its subalgebra (more precisely, submonoid) {\displaystyle \langle \mathbb {N} ,+\rangle }.
However, the class of abelian groups is a subvariety of the variety of groups because it consists of those groups satisfying xy = yx, with no change of signature. The finitely generated abelian groups do not form a subvariety, since by Birkhoff's theorem they don't form a variety, as an arbitrary product of finitely generated abelian groups is not finitely generated.
Viewing a variety V and its homomorphisms as a category, a subvariety U of V is a full subcategory of V, meaning that for any objects a, b in U, the homomorphisms from a to b in U are exactly those from a to b in V.
== Free objects ==
Suppose V is a non-trivial variety of algebras, i.e. V contains algebras with more than one element. One can show that for every set S, the variety V contains a free algebra FS on S. This means that there is an injective set map i : S → FS that satisfies the following universal property: given any algebra A in V and any map k : S → A, there exists a unique V-homomorphism f : FS → A such that f ∘ i = k.
This generalizes the notions of free group, free abelian group, free algebra, free module etc. It has the consequence that every algebra in a variety is a homomorphic image of a free algebra.
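For a concrete instance of the universal property, take the free monoid on a set S, realised as finite words (lists) under concatenation; any map k : S → A into a monoid extends uniquely to a homomorphism. A sketch with target (Z, +, 0), using our own helper names:

```python
def extend(k, add=lambda a, b: a + b, zero=0):
    """Universal property of the free monoid F_S = (words over S, ++, []).

    Given k: S -> A into a monoid (A, add, zero), return the unique
    monoid homomorphism f: F_S -> A with f([s]) = k(s)."""
    def f(word):
        out = zero
        for s in word:
            out = add(out, k(s))
        return out
    return f

# S = {"a", "b"}, target monoid (Z, +, 0), with k: a -> 2, b -> 3.
k = {"a": 2, "b": 3}.get
f = extend(k)
assert f([]) == 0                                # preserves the identity
assert f(["a", "b", "a"]) == 7                   # determined by k
assert f(["a"] + ["b"]) == f(["a"]) + f(["b"])   # is a homomorphism
```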
== Category theory ==
Besides varieties, category theorists use two other frameworks that are equivalent in terms of the kinds of algebras they describe: finitary monads and Lawvere theories. We may go from a variety to a finitary monad as follows. A category with some variety of algebras as objects and homomorphisms as morphisms is called a finitary algebraic category. For any finitary algebraic category V, the forgetful functor G : V → Set has a left adjoint F : Set → V, namely the functor that assigns to each set the free algebra on that set. This adjunction is monadic, meaning that the category V is equivalent to the Eilenberg–Moore category SetT for the monad T = GF. Moreover the monad T is finitary, meaning it commutes with filtered colimits.
The monad T : Set → Set is thus enough to recover the finitary algebraic category. Indeed, finitary algebraic categories are precisely those categories equivalent to the Eilenberg-Moore categories of finitary monads. Both these, in turn, are equivalent to categories of algebras of Lawvere theories.
Working with monads permits the following generalization. One says a category is an algebraic category if it is monadic over Set. This is a more general notion than "finitary algebraic category" because it admits such categories as CABA (complete atomic Boolean algebras) and CSLat (complete semilattices) whose signatures include infinitary operations. In those two cases the signature is large, meaning that it forms not a set but a proper class, because its operations are of unbounded arity. The algebraic category of sigma algebras also has infinitary operations, but their arity is countable whence its signature is small (forms a set).
Every finitary algebraic category is a locally presentable category.
== Pseudovariety of finite algebras ==
Since varieties are closed under arbitrary direct products, all non-trivial varieties contain infinite algebras. Attempts have been made to develop a finitary analogue of the theory of varieties. This led, e.g., to the notion of variety of finite semigroups. This kind of variety uses only finitary products. However, it uses a more general kind of identities.
A pseudovariety is usually defined to be a class of algebras of a given signature, closed under the taking of homomorphic images, subalgebras and finitary direct products. Not every author assumes that all algebras of a pseudovariety are finite; if this is the case, one sometimes talks of a variety of finite algebras. For pseudovarieties, there is no general finitary counterpart to Birkhoff's theorem, but in many cases the introduction of a more complex notion of equations allows similar results to be derived. Namely, a class of finite monoids is a variety of finite monoids if and only if it can be defined by a set of profinite identities.
Pseudovarieties are of particular importance in the study of finite semigroups and hence in formal language theory. Eilenberg's theorem, often referred to as the variety theorem, describes a natural correspondence between varieties of regular languages and pseudovarieties of finite semigroups.
== See also ==
Quasivariety
== Notes ==
== External links ==
In mathematics, an algebraic torus, where a one-dimensional torus is typically denoted by {\displaystyle \mathbf {G} _{\mathbf {m} }}, {\displaystyle \mathbb {G} _{m}}, or {\displaystyle \mathbb {T} }, is a type of commutative affine algebraic group commonly found in projective algebraic geometry and toric geometry. Higher-dimensional algebraic tori can be modelled as a product of algebraic groups {\displaystyle \mathbf {G} _{\mathbf {m} }}. These groups were named by analogy with the theory of tori in Lie group theory (see Cartan subgroup). For example, over the complex numbers {\displaystyle \mathbb {C} } the algebraic torus {\displaystyle \mathbf {G} _{\mathbf {m} }} is isomorphic to the group scheme {\displaystyle \mathbb {C} ^{*}={\text{Spec}}(\mathbb {C} [t,t^{-1}])}, which is the scheme-theoretic analogue of the Lie group {\displaystyle U(1)\subset \mathbb {C} }. In fact, any {\displaystyle \mathbf {G} _{\mathbf {m} }}-action on a complex vector space can be pulled back to a {\displaystyle U(1)}-action from the inclusion {\displaystyle U(1)\subset \mathbb {C} ^{*}} as real manifolds.
Tori are of fundamental importance in the theory of algebraic groups and Lie groups and in the study of the geometric objects associated to them such as symmetric spaces and buildings.
== Algebraic tori over fields ==
In most places we suppose that the base field is perfect (for example finite or characteristic zero). This hypothesis is required to have a smooth group scheme (pg. 64), since for an algebraic group {\displaystyle G} to be smooth over characteristic {\displaystyle p}, the maps {\displaystyle (\cdot )^{p^{r}}:{\mathcal {O}}(G)\to {\mathcal {O}}(G)} must be geometrically reduced for large enough {\displaystyle r}, meaning the image of the corresponding map on {\displaystyle G} is smooth for large enough {\displaystyle r}.
In general one has to use separable closures instead of algebraic closures.
=== Multiplicative group of a field ===
If {\displaystyle F} is a field then the multiplicative group over {\displaystyle F} is the algebraic group {\displaystyle \mathbf {G} _{\mathbf {m} }} such that for any field extension {\displaystyle E/F} the {\displaystyle E}-points are isomorphic to the group {\displaystyle E^{\times }}. To define it properly as an algebraic group one can take the affine variety defined by the equation {\displaystyle xy=1} in the affine plane over {\displaystyle F} with coordinates {\displaystyle x,y}. The multiplication is then given by restricting the regular rational map {\displaystyle F^{2}\times F^{2}\to F^{2}} defined by {\displaystyle ((x,y),(x',y'))\mapsto (xx',yy')} and the inverse is the restriction of the regular rational map {\displaystyle (x,y)\mapsto (y,x)}.
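The description of the multiplicative group as the variety xy = 1 can be mirrored directly in code, with points over {\displaystyle \mathbb {Q} } stored as pairs and the group law acting coordinatewise. A sketch (helper names are ours):

```python
from fractions import Fraction

def gm_point(x):
    """A point of G_m over Q, recorded as (x, y) with x*y = 1."""
    x = Fraction(x)
    if x == 0:
        raise ValueError("0 is not invertible: not a point of G_m")
    return (x, 1 / x)

def mul(p, q):
    # the group law ((x, y), (x', y')) |-> (x x', y y')
    return (p[0] * q[0], p[1] * q[1])

def inv(p):
    # the inverse (x, y) |-> (y, x)
    return (p[1], p[0])

p, q = gm_point(Fraction(2, 3)), gm_point(5)
assert mul(p, q) == gm_point(Fraction(10, 3))   # product stays on xy = 1
assert mul(p, inv(p)) == gm_point(1)            # group identity
```

Storing the inverse coordinate explicitly is exactly what makes invertibility an algebraic (closed) condition rather than an open one.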
=== Definition ===
Let {\displaystyle F} be a field with algebraic closure {\displaystyle {\overline {F}}}. Then an {\displaystyle F}-torus is an algebraic group defined over {\displaystyle F} which is isomorphic over {\displaystyle {\overline {F}}} to a finite product of copies of the multiplicative group. In other words, if {\displaystyle \mathbf {T} } is an {\displaystyle F}-group, it is a torus if and only if {\displaystyle \mathbf {T} ({\overline {F}})\cong ({\overline {F}}^{\times })^{r}} for some {\displaystyle r\geq 1}. The basic terminology associated to tori is as follows.
The integer {\displaystyle r} is called the rank or absolute rank of the torus {\displaystyle \mathbf {T} }.
The torus is said to be split over a field extension {\displaystyle E/F} if {\displaystyle \mathbf {T} (E)\cong (E^{\times })^{r}}. There is a unique minimal finite extension of {\displaystyle F} over which {\displaystyle \mathbf {T} } is split, which is called the splitting field of {\displaystyle \mathbf {T} }.
The {\displaystyle F}-rank of {\displaystyle \mathbf {T} } is the maximal rank of a split sub-torus of {\displaystyle \mathbf {T} }. A torus is split if and only if its {\displaystyle F}-rank equals its absolute rank.
A torus is said to be anisotropic if its {\displaystyle F}-rank is zero.
=== Isogenies ===
An isogeny between algebraic groups is a surjective morphism with finite kernel; two tori are said to be isogenous if there exists an isogeny from the first to the second. Isogenies between tori are particularly well-behaved: for any isogeny {\displaystyle \phi :\mathbf {T} \to \mathbf {T} '} there exists a "dual" isogeny {\displaystyle \psi :\mathbf {T} '\to \mathbf {T} } such that {\displaystyle \psi \circ \phi } is a power map. In particular, being isogenous is an equivalence relation between tori.
=== Examples ===
==== Over an algebraically closed field ====
Over any algebraically closed field {\displaystyle k={\overline {k}}} there is, up to isomorphism, a unique torus of any given rank. For a rank {\displaystyle n} algebraic torus over {\displaystyle k} this is given by the group scheme {\displaystyle \mathbf {G} _{m}^{n}={\text{Spec}}_{k}(k[t_{1},t_{1}^{-1},\ldots ,t_{n},t_{n}^{-1}])} (pg. 230).
==== Over the real numbers ====
Over the field of real numbers {\displaystyle \mathbb {R} } there are exactly (up to isomorphism) two tori of rank 1:
the split torus {\displaystyle \mathbb {R} ^{\times }}
the compact form, which can be realised as the unitary group {\displaystyle \mathbf {U} (1)} or as the special orthogonal group {\displaystyle \mathrm {SO} (2)}. It is an anisotropic torus. As a Lie group, it is also isomorphic to the 1-torus {\displaystyle \mathbf {T} ^{1}}, which explains the picture of diagonalisable algebraic groups as tori.
Any real torus is isogenous to a finite product of those two; for example the real torus {\displaystyle \mathbb {C} ^{\times }} is doubly covered by (but not isomorphic to) {\displaystyle \mathbb {R} ^{\times }\times \mathbb {T} ^{1}}. This gives an example of isogenous, non-isomorphic tori.
==== Over a finite field ====
Over the finite field {\displaystyle \mathbb {F} _{q}} there are two rank-1 tori: the split one, of cardinality {\displaystyle q-1}, and the anisotropic one, of cardinality {\displaystyle q+1}. The latter can be realised as the matrix group {\displaystyle \left\{{\begin{pmatrix}t&du\\u&t\end{pmatrix}}:t,u\in \mathbb {F} _{q},t^{2}-du^{2}=1\right\}\subset \mathrm {SL} _{2}(\mathbb {F} _{q}),} where {\displaystyle d} is a non-square in {\displaystyle \mathbb {F} _{q}}.
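The two cardinalities can be confirmed by brute force: counting points of t² − du² = 1 over {\displaystyle \mathbb {F} _{q}} gives q + 1 when d is a non-square (the anisotropic, norm-one torus) and q − 1 when d is a square (the conic splits and the torus is {\displaystyle \mathbf {G} _{\mathbf {m} }}). A sketch for prime q:

```python
def norm_one_torus_points(q, d):
    """Solutions of t^2 - d*u^2 = 1 over F_q (q assumed prime here)."""
    return [(t, u) for t in range(q) for u in range(q)
            if (t * t - d * u * u) % q == 1]

q = 7
d = 3                      # 3 is a non-square mod 7 (squares: 1, 2, 4)
pts = norm_one_torus_points(q, d)
assert len(pts) == q + 1   # anisotropic rank-1 torus has q + 1 points

# With d a square (2 = 3^2 mod 7) the conic splits: q - 1 points.
assert len(norm_one_torus_points(q, 2)) == q - 1
```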
More generally, if {\displaystyle E/F} is a finite field extension of degree {\displaystyle d} then the Weil restriction from {\displaystyle E} to {\displaystyle F} of the multiplicative group of {\displaystyle E} is an {\displaystyle F}-torus of rank {\displaystyle d} and {\displaystyle F}-rank 1 (note that restriction of scalars over an inseparable field extension will yield a commutative algebraic group that is not a torus). The kernel of the field norm {\displaystyle N_{E/F}} is also a torus, which is anisotropic and of rank {\displaystyle d-1}. Any {\displaystyle F}-torus of rank one is either split or isomorphic to the kernel of the norm of a quadratic extension. The two examples above are special cases of this: the compact real torus is the kernel of the field norm of {\displaystyle \mathbb {C} /\mathbb {R} } and the anisotropic torus over {\displaystyle \mathbb {F} _{q}} is the kernel of the field norm of {\displaystyle \mathbb {F} _{q^{2}}/\mathbb {F} _{q}}.
== Weights and coweights ==
Over a separably closed field, a torus T admits two primary invariants. The weight lattice X^•(T) is the group of algebraic homomorphisms T → Gm, and the coweight lattice X_•(T) is the group of algebraic homomorphisms Gm → T. These are both free abelian groups whose rank is that of the torus, and they have a canonical nondegenerate pairing
{\displaystyle X^{\bullet }(T)\times X_{\bullet }(T)\to \mathbb {Z} }
given by (f, g) ↦ deg(f ∘ g), where the degree is the number n such that the composition equals the nth power map on the multiplicative group. The functor given by taking weights is an antiequivalence of categories between tori and free abelian groups, and the coweight functor is an equivalence. In particular, maps of tori are characterized by linear transformations on weights or coweights, and the automorphism group of a torus is a general linear group over Z. The quasi-inverse of the weights functor is given by a dualization functor from free abelian groups to tori, defined by its functor of points as:
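For a split torus (Gm)^n this pairing is concrete: a weight is an exponent vector a ∈ Z^n, a coweight is a vector b ∈ Z^n, and the composition f ∘ g is s ↦ s^{a·b}, so the pairing is the usual dot product. A minimal sketch (the helper name `pairing` is our own):

```python
# For T = (Gm)^n: the weight a sends (t1,...,tn) to prod(ti**ai), and the
# coweight b sends s to (s**b1,...,s**bn). Composing gives s -> s**(a.b),
# so deg(f o g) is the dot product of the exponent vectors.
def pairing(weight, coweight):
    return sum(a * b for a, b in zip(weight, coweight))

# Example: f(t1,t2) = t1^2 * t2^(-1) and g(s) = (s^3, s^4) compose to s -> s^2.
print(pairing([2, -1], [3, 4]))  # → 2
```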
{\displaystyle D(M)_{S}(X):=\mathrm {Hom} (M,\mathbb {G} _{m,S}(X)).}
This equivalence can be generalized to pass between groups of multiplicative type (a distinguished class of formal groups) and arbitrary abelian groups, and such a generalization can be convenient if one wants to work in a well-behaved category, since the category of tori doesn't have kernels or filtered colimits.
When a field K is not separably closed, the weight and coweight lattices of a torus over K are defined as the respective lattices over the separable closure. This induces canonical continuous actions of the absolute Galois group of K on the lattices. The weights and coweights that are fixed by this action are precisely the maps that are defined over K. The functor of taking weights is an antiequivalence between the category of tori over K with algebraic homomorphisms and the category of finitely generated torsion free abelian groups with an action of the absolute Galois group of K.
Given a finite separable field extension L/K and a torus T over L, we have a Galois module isomorphism
{\displaystyle X^{\bullet }(\mathrm {Res} _{L/K}T)\cong \mathrm {Ind} _{G_{L}}^{G_{K}}X^{\bullet }(T).}
If T is the multiplicative group, then this gives the restriction of scalars a permutation module structure. Tori whose weight lattices are permutation modules for the Galois group are called quasi-split, and all quasi-split tori are finite products of restrictions of scalars.
== Tori in semisimple groups ==
=== Linear representations of tori ===
As seen in the examples above, tori can be represented as linear groups. An alternative definition for tori is:
A linear algebraic group is a torus if and only if it is diagonalisable over an algebraic closure.
The torus is split over a field if and only if it is diagonalisable over this field.
=== Split rank of a semisimple group ===
If G is a semisimple algebraic group over a field F, then:
its rank (or absolute rank) is the rank of a maximal torus subgroup in G (note that all maximal tori are conjugate over F, so the rank is well-defined);
its F-rank (sometimes called F-split rank) is the maximal rank of a torus subgroup in G which is split over F.
Obviously the rank is greater than or equal to the F-rank; the group is called split if and only if equality holds (that is, there is a maximal torus in G which is split over F). The group is called anisotropic if it contains no split tori (i.e. its F-rank is zero).
=== Classification of semisimple groups ===
In the classical theory of semisimple Lie algebras over the complex field the Cartan subalgebras play a fundamental rôle in the classification via root systems and Dynkin diagrams. This classification is equivalent to that of connected algebraic groups over the complex field, and Cartan subalgebras correspond to maximal tori in these. In fact the classification carries over to the case of an arbitrary base field under the assumption that there exists a split maximal torus (which is automatically satisfied over an algebraically closed field). Without the splitness assumption things become much more complicated and a more detailed theory has to be developed, which is still based in part on the study of adjoint actions of tori.
If T is a maximal torus in a semisimple algebraic group G, then over the algebraic closure it gives rise to a root system Φ in the vector space
{\displaystyle V=X^{*}(\mathbf {T} )\otimes _{\mathbb {Z} }\mathbb {R} }
. On the other hand, if _F T ⊂ T is a maximal F-split torus, its action on the F-Lie algebra of G gives rise to another root system _F Φ. The restriction map
{\displaystyle X^{*}(\mathbf {T} )\to X^{*}(_{F}\mathbf {T} )}
induces a map
{\displaystyle \Phi \to {}_{F}\Phi \cup \{0\}}
and the Tits index is a way to encode the properties of this map and of the action of the Galois group of F̄/F on Φ. The Tits index is a "relative" version of the "absolute" Dynkin diagram associated to Φ; obviously, only finitely many Tits indices can correspond to a given Dynkin diagram.
Another invariant associated to the split torus _F T is the anisotropic kernel: this is the semisimple algebraic group obtained as the derived subgroup of the centraliser of _F T in G (the latter is only a reductive group). As its name indicates it is an anisotropic group, and its absolute type is uniquely determined by _F Φ.
The first step towards a classification is then the following theorem:
Two semisimple F-algebraic groups are isomorphic if and only if they have the same Tits indices and isomorphic anisotropic kernels.
This reduces the classification problem to anisotropic groups, and to determining which Tits indices can occur for a given Dynkin diagram. The latter problem has been solved in Tits (1966). The former is related to the Galois cohomology groups of F. More precisely, to each Tits index there is associated a unique quasi-split group over F; then every F-group with the same index is an inner form of this quasi-split group, and those are classified by the Galois cohomology of F with coefficients in the adjoint group.
== Tori and geometry ==
=== Flat subspaces and rank of symmetric spaces ===
If G is a semisimple Lie group, then its real rank is the R-rank as defined above (for any R-algebraic group whose group of real points is isomorphic to G), in other words the maximal r such that there exists an embedding
{\displaystyle (\mathbb {R} ^{\times })^{r}\to G}
. For example, the real rank of SL_n(R) is equal to n − 1, and the real rank of SO(p, q) is equal to min(p, q).
If X is the symmetric space associated to G and T ⊂ G is a maximal split torus, then there exists a unique orbit of T in X which is a totally geodesic flat subspace of X. It is in fact a maximal flat subspace, and all maximal such subspaces are obtained as orbits of split tori in this way. Thus there is a geometric definition of the real rank, as the maximal dimension of a flat subspace in X.
=== Q-rank of lattices ===
If the Lie group G is obtained as the real points of an algebraic group G over the rational field Q, then the Q-rank of G also has a geometric significance. To get to it one has to introduce an arithmetic group Γ associated to G, which roughly is the group of integer points of G, and the quotient space M = Γ\X, which is a Riemannian orbifold and hence a metric space. Then any asymptotic cone of M is homeomorphic to a finite simplicial complex with top-dimensional simplices of dimension equal to the Q-rank of G. In particular, M is compact if and only if G is anisotropic.
Note that this allows one to define the Q-rank of any lattice in a semisimple Lie group, as the dimension of its asymptotic cone.
=== Buildings ===
If G is a semisimple group over Q_p, the maximal split tori in G correspond to the apartments of the Bruhat–Tits building X associated to G. In particular, the dimension of X is equal to the Q_p-rank of G.
== Algebraic tori over an arbitrary base scheme ==
=== Definition ===
Given a base scheme S, an algebraic torus over S is defined to be a group scheme over S that is fpqc locally isomorphic to a finite product of copies of the multiplicative group scheme Gm/S over S. In other words, there exists a faithfully flat map X → S such that any point in X has a quasi-compact open neighborhood U whose image is an open affine subscheme of S, such that base change to U yields a finite product of copies of GL1,U = Gm/U. One particularly important case is when S is the spectrum of a field K, making a torus over S an algebraic group whose extension to some finite separable extension L is a finite product of copies of Gm/L. In general, the multiplicity of this product (i.e., the dimension of the scheme) is called the rank of the torus, and it is a locally constant function on S.
Most notions defined for tori over fields carry to this more general setting.
==== Examples ====
One common example of an algebraic torus is given by the affine cone Aff(X) ⊂ A^{n+1} of a projective scheme X ⊂ P^n. Then, with the origin removed, the induced projection map
{\displaystyle \pi :({\text{Aff}}(X)-\{0\})\to X}
gives the structure of an algebraic torus over X.
=== Weights ===
For a general base scheme S, weights and coweights are defined as fpqc sheaves of free abelian groups on S. These provide representations of fundamental groupoids of the base with respect the fpqc topology. If the torus is locally trivializable with respect to a weaker topology such as the etale topology, then the sheaves of groups descend to the same topologies and these representations factor through the respective quotient groupoids. In particular, an etale sheaf gives rise to a quasi-isotrivial torus, and if S is locally noetherian and normal (more generally, geometrically unibranched), the torus is isotrivial. As a partial converse, a theorem of Grothendieck asserts that any torus of finite type is quasi-isotrivial, i.e., split by an etale surjection.
Given a rank n torus T over S, a twisted form of T is a torus over S for which there exists an fpqc covering of S over which their base extensions are isomorphic; in particular, it is a torus of the same rank. Isomorphism classes of twisted forms of a split torus are parametrized by nonabelian flat cohomology
{\displaystyle H^{1}(S,GL_{n}(\mathbb {Z} ))}
, where the coefficient group forms a constant sheaf. In particular, twisted forms of a split torus T over a field K are parametrized by elements of the Galois cohomology pointed set
{\displaystyle H^{1}(G_{K},GL_{n}(\mathbb {Z} ))}
with trivial Galois action on the coefficients. In the one-dimensional case, the coefficients form a group of order two, and isomorphism classes of twisted forms of Gm are in natural bijection with separable quadratic extensions of K.
Since taking a weight lattice is an equivalence of categories, short exact sequences of tori correspond to short exact sequences of the corresponding weight lattices. In particular, extensions of tori are classified by Ext^1 sheaves. These are naturally isomorphic to the flat cohomology groups
{\displaystyle H^{1}(S,\mathrm {Hom} _{\mathbb {Z} }(X^{\bullet }(T_{1}),X^{\bullet }(T_{2})))}
. Over a field, the extensions are parametrized by elements of the corresponding Galois cohomology group.
== Arithmetic invariants ==
In his work on Tamagawa numbers, T. Ono introduced a type of functorial invariants of tori over finite separable extensions of a chosen field k. Such an invariant is a collection of positive real-valued functions fK on isomorphism classes of tori over K, as K runs over finite separable extensions of k, satisfying three properties:
Multiplicativity: Given two tori T1 and T2 over K, fK(T1 × T2) = fK(T1) fK(T2)
Restriction: For a finite separable extension L/K, fL evaluated on an L torus is equal to fK evaluated on its restriction of scalars to K.
Projective triviality: If T is a torus over K whose weight lattice is a projective Galois module, then fK(T) = 1.
T. Ono showed that the Tamagawa number of a torus over a number field is such an invariant. Furthermore, he showed that it is a quotient of two cohomological invariants, namely the order of the group
{\displaystyle H^{1}(G_{k},X^{\bullet }(T))\cong Ext^{1}(T,\mathbb {G} _{m})}
(sometimes mistakenly called the Picard group of T, although it doesn't classify Gm torsors over T), and the order of the Tate–Shafarevich group.
The notion of invariant given above generalizes naturally to tori over arbitrary base schemes, with functions taking values in more general rings. While the order of the extension group is a general invariant, the other two invariants above do not seem to have interesting analogues outside the realm of fraction fields of one-dimensional domains and their completions.
== See also ==
Toric geometry
Torus
Torus based cryptography
Hopf algebra
== Notes ==
== References ==
A. Grothendieck, SGA 3 Exp. VIII–X
T. Ono, On Tamagawa Numbers
T. Ono, "On the Tamagawa number of algebraic tori", Annals of Mathematics 78 (1), 1963.
Tits, Jacques (1966). "Classification of algebraic semisimple groups". In Borel, Armand; Mostow, George D. (eds.). Algebraic groups and discontinuous groups. Proceedings of symposia in pure math. Vol. 9. American math. soc. pp. 33–62.
Witte-Morris, Dave (2015). Introduction to Arithmetic Groups. Deductive Press. p. 492. ISBN 978-0-9865716-0-2.
In mathematics, theta functions are special functions of several complex variables. They show up in many topics, including Abelian varieties, moduli spaces, quadratic forms, and solitons. Theta functions are parametrized by points in a tube domain inside a complex Lagrangian Grassmannian, namely the Siegel upper half space.
The most common form of theta function is that occurring in the theory of elliptic functions. With respect to one of the complex variables (conventionally called z), a theta function has a property expressing its behavior with respect to the addition of a period of the associated elliptic functions, making it a quasiperiodic function. In the abstract theory this quasiperiodicity comes from the cohomology class of a line bundle on a complex torus, a condition of descent.
One interpretation of theta functions when dealing with the heat equation is that "a theta function is a special function that describes the evolution of temperature on a segment domain subject to certain boundary conditions".
Throughout this article, (e^{πiτ})^α should be interpreted as e^{απiτ} (in order to resolve issues of choice of branch).
== Jacobi theta function ==
There are several closely related functions called Jacobi theta functions, and many different and incompatible systems of notation for them.
One Jacobi theta function (named after Carl Gustav Jacob Jacobi) is a function defined for two complex variables z and τ, where z can be any complex number and τ is the half-period ratio, confined to the upper half-plane, which means it has a positive imaginary part. It is given by the formula
{\displaystyle {\begin{aligned}\vartheta (z;\tau )&=\sum _{n=-\infty }^{\infty }\exp \left(\pi in^{2}\tau +2\pi inz\right)\\&=1+2\sum _{n=1}^{\infty }q^{n^{2}}\cos(2\pi nz)\\&=\sum _{n=-\infty }^{\infty }q^{n^{2}}\eta ^{n}\end{aligned}}}
where q = exp(πiτ) is the nome and η = exp(2πiz). It is a Jacobi form. The restriction ensures that it is an absolutely convergent series. At fixed τ, this is a Fourier series for a 1-periodic entire function of z. Accordingly, the theta function is 1-periodic in z:
{\displaystyle \vartheta (z+1;\tau )=\vartheta (z;\tau ).}
By completing the square, it is also τ-quasiperiodic in z, with
{\displaystyle \vartheta (z+\tau ;\tau )=\exp {\bigl (}-\pi i(\tau +2z){\bigr )}\vartheta (z;\tau ).}
Thus, in general,
{\displaystyle \vartheta (z+a+b\tau ;\tau )=\exp \left(-\pi ib^{2}\tau -2\pi ibz\right)\vartheta (z;\tau )}
for any integers a and b.
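Both the periodicity and the quasi-periodicity are easy to confirm numerically from the defining series; a minimal sketch (the sample values of z and τ are our own choice):

```python
import cmath

def theta(z, tau, N=40):
    # Truncated defining series; it converges very rapidly since Im(tau) > 0.
    return sum(cmath.exp(cmath.pi * 1j * (n * n * tau + 2 * n * z))
               for n in range(-N, N + 1))

z, tau = 0.3 + 0.2j, 0.5 + 1.0j
# 1-periodicity in z: theta(z + 1) = theta(z)
err_period = abs(theta(z + 1, tau) - theta(z, tau))
# quasi-periodicity: theta(z + tau) = exp(-pi*i*(tau + 2z)) * theta(z)
err_quasi = abs(theta(z + tau, tau)
                - cmath.exp(-cmath.pi * 1j * (tau + 2 * z)) * theta(z, tau))
print(err_period, err_quasi)  # both ~ 0
```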
For any fixed τ, the function is an entire function on the complex plane, so by Liouville's theorem, it cannot be doubly periodic in 1, τ unless it is constant; the best we can do is to make it periodic in 1 and quasi-periodic in τ. Indeed, since
{\displaystyle \left|{\frac {\vartheta (z+a+b\tau ;\tau )}{\vartheta (z;\tau )}}\right|=\exp \left(\pi (b^{2}\Im (\tau )+2b\Im (z))\right)}
and ℑ(τ) > 0, the function ϑ(z; τ) is unbounded, as required by Liouville's theorem.
It is in fact, in a suitable sense, the most general entire function with two quasi-periods.
== Auxiliary functions ==
The Jacobi theta function defined above is sometimes considered along with three auxiliary theta functions, in which case it is written with a double 0 subscript:
{\displaystyle \vartheta _{00}(z;\tau )=\vartheta (z;\tau )}
The auxiliary (or half-period) functions are defined by
{\displaystyle {\begin{aligned}\vartheta _{01}(z;\tau )&=\vartheta \left(z+{\tfrac {1}{2}};\tau \right)\\[3pt]\vartheta _{10}(z;\tau )&=\exp \left({\tfrac {1}{4}}\pi i\tau +\pi iz\right)\vartheta \left(z+{\tfrac {1}{2}}\tau ;\tau \right)\\[3pt]\vartheta _{11}(z;\tau )&=\exp \left({\tfrac {1}{4}}\pi i\tau +\pi i\left(z+{\tfrac {1}{2}}\right)\right)\vartheta \left(z+{\tfrac {1}{2}}\tau +{\tfrac {1}{2}};\tau \right).\end{aligned}}}
This notation follows Riemann and Mumford; Jacobi's original formulation was in terms of the nome q = eiπτ rather than τ. In Jacobi's notation the θ-functions are written:
{\displaystyle {\begin{aligned}\theta _{1}(z;q)&=\theta _{1}(\pi z,q)=-\vartheta _{11}(z;\tau )\\\theta _{2}(z;q)&=\theta _{2}(\pi z,q)=\vartheta _{10}(z;\tau )\\\theta _{3}(z;q)&=\theta _{3}(\pi z,q)=\vartheta _{00}(z;\tau )\\\theta _{4}(z;q)&=\theta _{4}(\pi z,q)=\vartheta _{01}(z;\tau )\end{aligned}}}
The above definitions of the Jacobi theta functions are by no means unique. See Jacobi theta functions (notational variations) for further discussion.
If we set z = 0 in the above theta functions, we obtain four functions of τ only, defined on the upper half-plane. These functions are called theta Nullwert functions, after the German term for "zero value", since the left entry in the theta function expression is set to zero. Alternatively, we obtain four functions of q only, defined on the unit disk |q| < 1. They are sometimes called theta constants:
{\displaystyle {\begin{aligned}\vartheta _{11}(0;\tau )&=-\theta _{1}(q)=-\sum _{n=-\infty }^{\infty }(-1)^{n-1/2}q^{(n+1/2)^{2}}\\\vartheta _{10}(0;\tau )&=\theta _{2}(q)=\sum _{n=-\infty }^{\infty }q^{(n+1/2)^{2}}\\\vartheta _{00}(0;\tau )&=\theta _{3}(q)=\sum _{n=-\infty }^{\infty }q^{n^{2}}\\\vartheta _{01}(0;\tau )&=\theta _{4}(q)=\sum _{n=-\infty }^{\infty }(-1)^{n}q^{n^{2}}\end{aligned}}}
with the nome q = eiπτ.
Observe that θ1(q) = 0.
These can be used to define a variety of modular forms, and to parametrize certain curves; in particular, the Jacobi identity is
{\displaystyle \theta _{2}(q)^{4}+\theta _{4}(q)^{4}=\theta _{3}(q)^{4}}
or equivalently,
{\displaystyle \vartheta _{01}(0;\tau )^{4}+\vartheta _{10}(0;\tau )^{4}=\vartheta _{00}(0;\tau )^{4}}
which is the Fermat curve of degree four.
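The Jacobi identity can be checked numerically from the series for the theta constants; a sketch at the sample value q = 0.3 (our own choice):

```python
# Theta constants as truncated series; terms decay like q^(n^2),
# so the truncation error is negligible.
def theta2(q, N=60):
    # sum over half-integers n + 1/2 with n in Z
    return sum(q ** ((n + 0.5) ** 2) for n in range(-N, N))

def theta3(q, N=60):
    return sum(q ** (n * n) for n in range(-N, N + 1))

def theta4(q, N=60):
    return sum((-1) ** n * q ** (n * n) for n in range(-N, N + 1))

q = 0.3
err = abs(theta2(q) ** 4 + theta4(q) ** 4 - theta3(q) ** 4)
print(err)  # ~ 0, confirming theta2^4 + theta4^4 = theta3^4
```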
== Jacobi identities ==
Jacobi's identities describe how theta functions transform under the modular group, which is generated by τ ↦ τ + 1 and τ ↦ −1/τ. Equations for the first transform are easily found since adding one to τ in the exponent has the same effect as adding 1/2 to z (n ≡ n2 mod 2). For the second, let
{\displaystyle \alpha =(-i\tau )^{\frac {1}{2}}\exp \left({\frac {\pi }{\tau }}iz^{2}\right).}
Then
{\displaystyle {\begin{aligned}\vartheta _{00}\!\left({\frac {z}{\tau }};{\frac {-1}{\tau }}\right)&=\alpha \,\vartheta _{00}(z;\tau )\quad &\vartheta _{01}\!\left({\frac {z}{\tau }};{\frac {-1}{\tau }}\right)&=\alpha \,\vartheta _{10}(z;\tau )\\[3pt]\vartheta _{10}\!\left({\frac {z}{\tau }};{\frac {-1}{\tau }}\right)&=\alpha \,\vartheta _{01}(z;\tau )\quad &\vartheta _{11}\!\left({\frac {z}{\tau }};{\frac {-1}{\tau }}\right)&=-i\alpha \,\vartheta _{11}(z;\tau ).\end{aligned}}}
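The τ ↦ −1/τ transformation of ϑ00 can likewise be verified numerically from the series (sample z and τ are our own choice; the principal branch of the square root is correct here, since −iτ has positive real part when τ lies in the upper half-plane):

```python
import cmath

def theta00(z, tau, N=40):
    # Truncated defining series for the theta function.
    return sum(cmath.exp(cmath.pi * 1j * (n * n * tau + 2 * n * z))
               for n in range(-N, N + 1))

z, tau = 0.2 + 0.1j, 0.3 + 1.2j
# alpha = (-i*tau)^(1/2) * exp(pi*i*z^2/tau), principal square root
alpha = cmath.sqrt(-1j * tau) * cmath.exp(cmath.pi * 1j * z * z / tau)
err = abs(theta00(z / tau, -1 / tau) - alpha * theta00(z, tau))
print(err)  # ~ 0
```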
== Theta functions in terms of the nome ==
Instead of expressing the theta functions in terms of z and τ, we may express them in terms of arguments w and the nome q, where w = eπiz and q = eπiτ. In this form, the functions become
{\displaystyle {\begin{aligned}\vartheta _{00}(w,q)&=\sum _{n=-\infty }^{\infty }\left(w^{2}\right)^{n}q^{n^{2}}\quad &\vartheta _{01}(w,q)&=\sum _{n=-\infty }^{\infty }(-1)^{n}\left(w^{2}\right)^{n}q^{n^{2}}\\[3pt]\vartheta _{10}(w,q)&=\sum _{n=-\infty }^{\infty }\left(w^{2}\right)^{n+{\frac {1}{2}}}q^{\left(n+{\frac {1}{2}}\right)^{2}}\quad &\vartheta _{11}(w,q)&=i\sum _{n=-\infty }^{\infty }(-1)^{n}\left(w^{2}\right)^{n+{\frac {1}{2}}}q^{\left(n+{\frac {1}{2}}\right)^{2}}.\end{aligned}}}
We see that the theta functions can also be defined in terms of w and q, without a direct reference to the exponential function. These formulas can, therefore, be used to define the theta functions over other fields where the exponential function might not be everywhere defined, such as fields of p-adic numbers.
== Product representations ==
The Jacobi triple product (a special case of the Macdonald identities) tells us that for complex numbers w and q with |q| < 1 and w ≠ 0 we have
{\displaystyle \prod _{m=1}^{\infty }\left(1-q^{2m}\right)\left(1+w^{2}q^{2m-1}\right)\left(1+w^{-2}q^{2m-1}\right)=\sum _{n=-\infty }^{\infty }w^{2n}q^{n^{2}}.}
It can be proven by elementary means, as for instance in Hardy and Wright's An Introduction to the Theory of Numbers.
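Truncating both sides also gives a quick numerical confirmation of the triple product; a sketch with the sample values w = 1.3 and q = 0.4 (our own choice):

```python
# Jacobi triple product: compare the truncated product and the truncated
# series; both converge fast for |q| < 1.
def triple_product(w, q, M=60):
    p = 1.0
    for m in range(1, M + 1):
        p *= ((1 - q ** (2 * m))
              * (1 + w ** 2 * q ** (2 * m - 1))
              * (1 + w ** -2 * q ** (2 * m - 1)))
    return p

def theta_series(w, q, N=60):
    return sum(w ** (2 * n) * q ** (n * n) for n in range(-N, N + 1))

w, q = 1.3, 0.4
err = abs(triple_product(w, q) - theta_series(w, q))
print(err)  # ~ 0
```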
If we express the theta function in terms of the nome q = eπiτ (noting some authors instead set q = e2πiτ) and take w = eπiz then
{\displaystyle \vartheta (z;\tau )=\sum _{n=-\infty }^{\infty }\exp(\pi i\tau n^{2})\exp(2\pi izn)=\sum _{n=-\infty }^{\infty }w^{2n}q^{n^{2}}.}
We therefore obtain a product formula for the theta function in the form
{\displaystyle \vartheta (z;\tau )=\prod _{m=1}^{\infty }{\big (}1-\exp(2m\pi i\tau ){\big )}{\Big (}1+\exp {\big (}(2m-1)\pi i\tau +2\pi iz{\big )}{\Big )}{\Big (}1+\exp {\big (}(2m-1)\pi i\tau -2\pi iz{\big )}{\Big )}.}
In terms of w and q:
{\displaystyle {\begin{aligned}\vartheta (z;\tau )&=\prod _{m=1}^{\infty }\left(1-q^{2m}\right)\left(1+q^{2m-1}w^{2}\right)\left(1+{\frac {q^{2m-1}}{w^{2}}}\right)\\&=\left(q^{2};q^{2}\right)_{\infty }\,\left(-w^{2}q;q^{2}\right)_{\infty }\,\left(-{\frac {q}{w^{2}}};q^{2}\right)_{\infty }\\&=\left(q^{2};q^{2}\right)_{\infty }\,\theta \left(-w^{2}q;q^{2}\right)\end{aligned}}}
where ( ; )∞ is the q-Pochhammer symbol and θ( ; ) is the q-theta function. Expanding terms out, the Jacobi triple product can also be written
{\displaystyle \prod _{m=1}^{\infty }\left(1-q^{2m}\right){\Big (}1+\left(w^{2}+w^{-2}\right)q^{2m-1}+q^{4m-2}{\Big )},}
which we may also write as
{\displaystyle \vartheta (z\mid q)=\prod _{m=1}^{\infty }\left(1-q^{2m}\right)\left(1+2\cos(2\pi z)q^{2m-1}+q^{4m-2}\right).}
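This cosine form of the product is convenient for a numerical check against the cosine series for ϑ; a sketch with sample values z = 0.2 and q = 0.35 (our own choice):

```python
import math

def theta_series(z, q, N=50):
    # Cosine form of the defining series, 1 + 2*sum q^(n^2) cos(2*pi*n*z).
    return 1 + 2 * sum(q ** (n * n) * math.cos(2 * math.pi * n * z)
                       for n in range(1, N + 1))

def theta_product(z, q, M=50):
    p, c = 1.0, 2 * math.cos(2 * math.pi * z)
    for m in range(1, M + 1):
        p *= (1 - q ** (2 * m)) * (1 + c * q ** (2 * m - 1) + q ** (4 * m - 2))
    return p

z, q = 0.2, 0.35
err = abs(theta_series(z, q) - theta_product(z, q))
print(err)  # ~ 0
```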
This form is valid in general but clearly is of particular interest when z is real. Similar product formulas for the auxiliary theta functions are
{\displaystyle {\begin{aligned}\vartheta _{01}(z\mid q)&=\prod _{m=1}^{\infty }\left(1-q^{2m}\right)\left(1-2\cos(2\pi z)q^{2m-1}+q^{4m-2}\right),\\[3pt]\vartheta _{10}(z\mid q)&=2q^{\frac {1}{4}}\cos(\pi z)\prod _{m=1}^{\infty }\left(1-q^{2m}\right)\left(1+2\cos(2\pi z)q^{2m}+q^{4m}\right),\\[3pt]\vartheta _{11}(z\mid q)&=-2q^{\frac {1}{4}}\sin(\pi z)\prod _{m=1}^{\infty }\left(1-q^{2m}\right)\left(1-2\cos(2\pi z)q^{2m}+q^{4m}\right).\end{aligned}}}
In particular,
{\displaystyle \lim _{q\to 0}{\frac {\vartheta _{10}(z\mid q)}{2q^{\frac {1}{4}}}}=\cos(\pi z),\quad \lim _{q\to 0}{\frac {-\vartheta _{11}(z\mid q)}{2q^{\frac {1}{4}}}}=\sin(\pi z)}
so we may interpret them as one-parameter deformations of the periodic functions sin and cos, again validating the interpretation of the theta function as the most general entire function with two quasi-periods.
== Integral representations ==
The Jacobi theta functions have the following integral representations:
{\displaystyle {\begin{aligned}\vartheta _{00}(z;\tau )&=-i\int _{i-\infty }^{i+\infty }e^{i\pi \tau u^{2}}{\frac {\cos(2\pi uz+\pi u)}{\sin(\pi u)}}\mathrm {d} u;\\[6pt]\vartheta _{01}(z;\tau )&=-i\int _{i-\infty }^{i+\infty }e^{i\pi \tau u^{2}}{\frac {\cos(2\pi uz)}{\sin(\pi u)}}\mathrm {d} u;\\[6pt]\vartheta _{10}(z;\tau )&=-ie^{i\pi z+{\frac {1}{4}}i\pi \tau }\int _{i-\infty }^{i+\infty }e^{i\pi \tau u^{2}}{\frac {\cos(2\pi uz+\pi u+\pi \tau u)}{\sin(\pi u)}}\mathrm {d} u;\\[6pt]\vartheta _{11}(z;\tau )&=e^{i\pi z+{\frac {1}{4}}i\pi \tau }\int _{i-\infty }^{i+\infty }e^{i\pi \tau u^{2}}{\frac {\cos(2\pi uz+\pi \tau u)}{\sin(\pi u)}}\mathrm {d} u.\end{aligned}}}
The theta Nullwert function θ3(q) has the following integral identity:
{\displaystyle \theta _{3}(q)=1+{\frac {4q{\sqrt {\ln(1/q)}}}{\sqrt {\pi }}}\int _{0}^{\infty }{\frac {\exp[-\ln(1/q)\,x^{2}]\{1-q^{2}\cos[2\ln(1/q)\,x]\}}{1-2q^{2}\cos[2\ln(1/q)\,x]+q^{4}}}\,\mathrm {d} x}
This formula was discussed in the essay "Square series generating function transformations" by the mathematician Maxie Schmidt from Atlanta, Georgia.
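The integral identity is straightforward to verify numerically; here is a sketch using simple trapezoidal quadrature at the sample value q = 0.1 (our own choice; the cut-off H = 6 suffices because the Gaussian factor is negligible beyond it):

```python
import math

def theta3_series(q, N=20):
    # Reference value from the rapidly convergent series 1 + 2*sum q^(n^2).
    return 1 + 2 * sum(q ** (n * n) for n in range(1, N + 1))

def theta3_integral(q, H=6.0, steps=60000):
    # Trapezoidal quadrature of the integral identity on [0, H].
    L = math.log(1.0 / q)
    def f(x):
        c = math.cos(2 * L * x)
        return (math.exp(-L * x * x) * (1 - q * q * c)
                / (1 - 2 * q * q * c + q ** 4))
    h = H / steps
    integral = h * ((f(0.0) + f(H)) / 2 + sum(f(i * h) for i in range(1, steps)))
    return 1 + 4 * q * math.sqrt(L / math.pi) * integral

q = 0.1
err = abs(theta3_series(q) - theta3_integral(q))
print(err)  # ~ 0 up to quadrature error
```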
Based on this formula, the following three eminent examples are given:
{\displaystyle {\biggl [}{\frac {2}{\pi }}K{\bigl (}{\frac {1}{2}}{\sqrt {2}}{\bigr )}{\biggr ]}^{1/2}=\theta _{3}{\bigl [}\exp(-\pi ){\bigr ]}=1+4\exp(-\pi )\int _{0}^{\infty }{\frac {\exp(-\pi x^{2})[1-\exp(-2\pi )\cos(2\pi x)]}{1-2\exp(-2\pi )\cos(2\pi x)+\exp(-4\pi )}}\,\mathrm {d} x}
{\displaystyle {\biggl [}{\frac {2}{\pi }}K({\sqrt {2}}-1){\biggr ]}^{1/2}=\theta _{3}{\bigl [}\exp(-{\sqrt {2}}\,\pi ){\bigr ]}=1+4\,{\sqrt[{4}]{2}}\exp(-{\sqrt {2}}\,\pi )\int _{0}^{\infty }{\frac {\exp(-{\sqrt {2}}\,\pi x^{2})[1-\exp(-2{\sqrt {2}}\,\pi )\cos(2{\sqrt {2}}\,\pi x)]}{1-2\exp(-2{\sqrt {2}}\,\pi )\cos(2{\sqrt {2}}\,\pi x)+\exp(-4{\sqrt {2}}\,\pi )}}\,\mathrm {d} x}
{\displaystyle {\biggl \{}{\frac {2}{\pi }}K{\bigl [}\sin {\bigl (}{\frac {\pi }{12}}{\bigr )}{\bigr ]}{\biggr \}}^{1/2}=\theta _{3}{\bigl [}\exp(-{\sqrt {3}}\,\pi ){\bigr ]}=1+4\,{\sqrt[{4}]{3}}\exp(-{\sqrt {3}}\,\pi )\int _{0}^{\infty }{\frac {\exp(-{\sqrt {3}}\,\pi x^{2})[1-\exp(-2{\sqrt {3}}\,\pi )\cos(2{\sqrt {3}}\,\pi x)]}{1-2\exp(-2{\sqrt {3}}\,\pi )\cos(2{\sqrt {3}}\,\pi x)+\exp(-4{\sqrt {3}}\,\pi )}}\,\mathrm {d} x}
Furthermore, the theta values {\displaystyle \theta _{3}({\tfrac {1}{2}})} and {\displaystyle \theta _{3}({\tfrac {1}{3}})} are given below:
{\displaystyle \theta _{3}\left({\frac {1}{2}}\right)=1+2\sum _{n=1}^{\infty }{\frac {1}{2^{n^{2}}}}=1+2\pi ^{-1/2}{\sqrt {\ln(2)}}\int _{0}^{\infty }{\frac {\exp[-\ln(2)\,x^{2}]\{16-4\cos[2\ln(2)\,x]\}}{17-8\cos[2\ln(2)\,x]}}\,\mathrm {d} x}
{\displaystyle \theta _{3}\left({\frac {1}{2}}\right)=2.128936827211877158669\ldots }
{\displaystyle \theta _{3}\left({\frac {1}{3}}\right)=1+2\sum _{n=1}^{\infty }{\frac {1}{3^{n^{2}}}}=1+{\frac {4}{3}}\pi ^{-1/2}{\sqrt {\ln(3)}}\int _{0}^{\infty }{\frac {\exp[-\ln(3)\,x^{2}]\{81-9\cos[2\ln(3)\,x]\}}{82-18\cos[2\ln(3)\,x]}}\,\mathrm {d} x}
{\displaystyle \theta _{3}\left({\frac {1}{3}}\right)=1.691459681681715341348\ldots }
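Both decimal expansions can be reproduced by summing the defining series directly, since the terms decay super-geometrically; a quick Python check:

```python
# theta_3(1/2) = 1 + 2 * sum 2^(-n^2) and theta_3(1/3) = 1 + 2 * sum 3^(-n^2)
theta3_half = 1 + 2 * sum(2.0 ** (-n * n) for n in range(1, 20))
theta3_third = 1 + 2 * sum(3.0 ** (-n * n) for n in range(1, 20))
print(theta3_half)   # ~ 2.128936827211877
print(theta3_third)  # ~ 1.691459681681715
```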
== Some interesting relations ==
If {\displaystyle |q|<1} and {\displaystyle a>0}, then the following theta functions
{\displaystyle \theta _{3}(a,b;q)=\sum _{n=-\infty }^{\infty }q^{an^{2}+bn}}
{\displaystyle \theta _{4}(a,b;q)=\sum _{n=-\infty }^{\infty }(-1)^{n}q^{an^{2}+bn}}
have interesting arithmetic and modular properties. When {\displaystyle a,b,p} are positive integers, then
{\displaystyle \log \left({\frac {\theta _{3}\left({\frac {p}{2}},{\frac {p}{2}}-a;q\right)}{\theta _{3}\left({\frac {p}{2}},{\frac {p}{2}}-b;q\right)}}\right)=-\sum _{n=1}^{\infty }q^{n}\left(\sum _{\begin{array}{cc}d|n\\n/d\equiv \pm a(p)\end{array}}{\frac {(-1)^{d}}{d}}-\sum _{\begin{array}{cc}d|n\\n/d\equiv \pm b(p)\end{array}}{\frac {(-1)^{d}}{d}}\right)}
{\displaystyle \log \left({\frac {\theta _{4}\left({\frac {p}{2}},{\frac {p}{2}}-a;q\right)}{\theta _{4}\left({\frac {p}{2}},{\frac {p}{2}}-b;q\right)}}\right)=-\sum _{n=1}^{\infty }q^{n}\left(\sum _{\begin{array}{cc}d|n\\n/d\equiv \pm a(p)\end{array}}{\frac {1}{d}}-\sum _{\begin{array}{cc}d|n\\n/d\equiv \pm b(p)\end{array}}{\frac {1}{d}}\right)}
Also, if {\displaystyle q=e^{\pi iz}} with {\displaystyle Im(z)>0}, then the functions
{\displaystyle \vartheta _{+}(z)=\theta _{+}(a,p;z)=q^{p/8+a^{2}/(2p)-a/2}\theta _{3}\left({\frac {p}{2}},{\frac {p}{2}}-a;q\right)}
and
{\displaystyle \vartheta _{-}(z)=\theta _{-}(a,p;z)=q^{p/8+a^{2}/(2p)-a/2}\theta _{4}\left({\frac {p}{2}},{\frac {p}{2}}-a;q\right)}
are modular forms of weight {\displaystyle 1/2} in {\displaystyle \Gamma (2p)}; that is, if {\displaystyle a_{1},b_{1},c_{1},d_{1}} are integers such that {\displaystyle a_{1},d_{1}\equiv 1(2p)}, {\displaystyle b_{1},c_{1}\equiv 0(2p)} and {\displaystyle a_{1}d_{1}-b_{1}c_{1}=1}, then there exists {\displaystyle \epsilon _{\pm }=\epsilon _{\pm }(a_{1},b_{1},c_{1},d_{1})} with {\displaystyle (\epsilon _{\pm })^{24}=1}, such that for all complex numbers {\displaystyle z} with {\displaystyle Im(z)>0}, we have
ϑ
±
(
a
1
z
+
b
1
c
1
z
+
d
1
)
=
ϵ
±
c
1
z
+
d
1
ϑ
±
(
z
)
{\displaystyle \vartheta _{\pm }\left({\frac {a_{1}z+b_{1}}{c_{1}z+d_{1}}}\right)=\epsilon _{\pm }{\sqrt {c_{1}z+d_{1}}}\vartheta _{\pm }(z)}
== Explicit values ==
=== Lemniscatic values ===
Proper credit for most of these results goes to Ramanujan. See Ramanujan's lost notebook and a relevant reference at Euler function. The Ramanujan results quoted at Euler function plus a few elementary operations give the results below, so they are either in Ramanujan's lost notebook or follow immediately from it. See also Yi (2004). Define,
{\displaystyle \quad \varphi (q)=\vartheta _{00}(0;\tau )=\theta _{3}(0;q)=\sum _{n=-\infty }^{\infty }q^{n^{2}}}
with the nome {\displaystyle q=e^{\pi i\tau },} {\displaystyle \tau =n{\sqrt {-1}},} and Dedekind eta function {\displaystyle \eta (\tau ).} Then for {\displaystyle n=1,2,3,\dots }
{\displaystyle {\begin{aligned}\varphi \left(e^{-\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}={\sqrt {2}}\,\eta \left({\sqrt {-1}}\right)\\\varphi \left(e^{-2\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {2+{\sqrt {2}}}}{2}}\\\varphi \left(e^{-3\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {1+{\sqrt {3}}}}{\sqrt[{8}]{108}}}\\\varphi \left(e^{-4\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {2+{\sqrt[{4}]{8}}}{4}}\\\varphi \left(e^{-5\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\sqrt {\frac {2+{\sqrt {5}}}{5}}}\\\varphi \left(e^{-6\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {{\sqrt[{4}]{1}}+{\sqrt[{4}]{3}}+{\sqrt[{4}]{4}}+{\sqrt[{4}]{9}}}}{\sqrt[{8}]{12^{3}}}}\\\varphi \left(e^{-7\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {{\sqrt {13+{\sqrt {7}}}}+{\sqrt {7+3{\sqrt {7}}}}}}{{\sqrt[{8}]{14^{3}}}\cdot {\sqrt[{16}]{7}}}}\\\varphi \left(e^{-8\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {{\sqrt {2+{\sqrt {2}}}}+{\sqrt[{8}]{128}}}{4}}\\\varphi \left(e^{-9\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {1+{\sqrt[{3}]{2+2{\sqrt {3}}}}}{3}}\\\varphi \left(e^{-10\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {{\sqrt[{4}]{64}}+{\sqrt[{4}]{80}}+{\sqrt[{4}]{81}}+{\sqrt[{4}]{100}}}}{\sqrt[{4}]{200}}}\\\varphi \left(e^{-11\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {11+{\sqrt {11}}+(5+3{\sqrt {3}}+{\sqrt {11}}+{\sqrt {33}}){\sqrt[{3}]{-44+33{\sqrt {3}}}}+(-5+3{\sqrt {3}}-{\sqrt {11}}+{\sqrt {33}}){\sqrt[{3}]{44+33{\sqrt {3}}}}}}{\sqrt[{8}]{52180524}}}\\\varphi \left(e^{-12\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac 
{3}{4}}\right)}}{\frac {\sqrt {{\sqrt[{4}]{1}}+{\sqrt[{4}]{2}}+{\sqrt[{4}]{3}}+{\sqrt[{4}]{4}}+{\sqrt[{4}]{9}}+{\sqrt[{4}]{18}}+{\sqrt[{4}]{24}}}}{2{\sqrt[{8}]{108}}}}\\\varphi \left(e^{-13\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {13+8{\sqrt {13}}+(11-6{\sqrt {3}}+{\sqrt {13}}){\sqrt[{3}]{143+78{\sqrt {3}}}}+(11+6{\sqrt {3}}+{\sqrt {13}}){\sqrt[{3}]{143-78{\sqrt {3}}}}}}{\sqrt[{4}]{19773}}}\\\varphi \left(e^{-14\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {{\sqrt {13+{\sqrt {7}}}}+{\sqrt {7+3{\sqrt {7}}}}+{\sqrt {10+2{\sqrt {7}}}}+{\sqrt[{8}]{28}}{\sqrt {4+{\sqrt {7}}}}}}{\sqrt[{16}]{28^{7}}}}\\\varphi \left(e^{-15\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt {7+3{\sqrt {3}}+{\sqrt {5}}+{\sqrt {15}}+{\sqrt[{4}]{60}}+{\sqrt[{4}]{1500}}}}{{\sqrt[{8}]{12^{3}}}\cdot {\sqrt {5}}}}\\2\varphi \left(e^{-16\pi }\right)&=\varphi \left(e^{-4\pi }\right)+{\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {\sqrt[{4}]{1+{\sqrt {2}}}}{\sqrt[{16}]{128}}}\\\varphi \left(e^{-17\pi }\right)&={\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\frac {{\sqrt {2}}(1+{\sqrt[{4}]{17}})+{\sqrt[{8}]{17}}{\sqrt {5+{\sqrt {17}}}}}{\sqrt {17+17{\sqrt {17}}}}}\\2\varphi \left(e^{-20\pi }\right)&=\varphi \left(e^{-5\pi }\right)+{\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\sqrt {\frac {3+2{\sqrt[{4}]{5}}}{5{\sqrt {2}}}}}\\6\varphi \left(e^{-36\pi }\right)&=3\varphi \left(e^{-9\pi }\right)+2\varphi \left(e^{-4\pi }\right)-\varphi \left(e^{-\pi }\right)+{\frac {\sqrt[{4}]{\pi }}{\Gamma \left({\frac {3}{4}}\right)}}{\sqrt[{3}]{{\sqrt[{4}]{2}}+{\sqrt[{4}]{18}}+{\sqrt[{4}]{216}}}}\end{aligned}}}
If the reciprocal of the Gelfond constant is raised to the power of the reciprocal of an odd number, then the corresponding {\displaystyle \vartheta _{00}} values or {\displaystyle \phi } values can be represented in a simplified way by using the hyperbolic lemniscatic sine:
{\displaystyle \varphi {\bigl [}\exp(-{\tfrac {1}{5}}\pi ){\bigr ]}={\sqrt[{4}]{\pi }}\,{\Gamma \left({\tfrac {3}{4}}\right)}^{-1}\operatorname {slh} {\bigl (}{\tfrac {1}{5}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {2}{5}}{\sqrt {2}}\,\varpi {\bigr )}}
{\displaystyle \varphi {\bigl [}\exp(-{\tfrac {1}{7}}\pi ){\bigr ]}={\sqrt[{4}]{\pi }}\,{\Gamma \left({\tfrac {3}{4}}\right)}^{-1}\operatorname {slh} {\bigl (}{\tfrac {1}{7}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {2}{7}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {3}{7}}{\sqrt {2}}\,\varpi {\bigr )}}
{\displaystyle \varphi {\bigl [}\exp(-{\tfrac {1}{9}}\pi ){\bigr ]}={\sqrt[{4}]{\pi }}\,{\Gamma \left({\tfrac {3}{4}}\right)}^{-1}\operatorname {slh} {\bigl (}{\tfrac {1}{9}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {2}{9}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {3}{9}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {4}{9}}{\sqrt {2}}\,\varpi {\bigr )}}
{\displaystyle \varphi {\bigl [}\exp(-{\tfrac {1}{11}}\pi ){\bigr ]}={\sqrt[{4}]{\pi }}\,{\Gamma \left({\tfrac {3}{4}}\right)}^{-1}\operatorname {slh} {\bigl (}{\tfrac {1}{11}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {2}{11}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {3}{11}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {4}{11}}{\sqrt {2}}\,\varpi {\bigr )}\operatorname {slh} {\bigl (}{\tfrac {5}{11}}{\sqrt {2}}\,\varpi {\bigr )}}
Here, the letter {\displaystyle \varpi } represents the lemniscate constant.
Note that the following modular identities hold:
{\displaystyle {\begin{aligned}2\varphi \left(q^{4}\right)&=\varphi (q)+{\sqrt {2\varphi ^{2}\left(q^{2}\right)-\varphi ^{2}(q)}}\\3\varphi \left(q^{9}\right)&=\varphi (q)+{\sqrt[{3}]{9{\frac {\varphi ^{4}\left(q^{3}\right)}{\varphi (q)}}-\varphi ^{3}(q)}}\\{\sqrt {5}}\varphi \left(q^{25}\right)&=\varphi \left(q^{5}\right)\cot \left({\frac {1}{2}}\arctan \left({\frac {2}{\sqrt {5}}}{\frac {\varphi (q)\varphi \left(q^{5}\right)}{\varphi ^{2}(q)-\varphi ^{2}\left(q^{5}\right)}}{\frac {1+s(q)-s^{2}(q)}{s(q)}}\right)\right)\end{aligned}}}
where
{\displaystyle s(q)=s\left(e^{\pi i\tau }\right)=-R\left(-e^{-\pi i/(5\tau )}\right)}
is the Rogers–Ramanujan continued fraction:
{\displaystyle {\begin{aligned}s(q)&={\sqrt[{5}]{\tan \left({\frac {1}{2}}\arctan \left({\frac {5}{2}}{\frac {\varphi ^{2}\left(q^{5}\right)}{\varphi ^{2}(q)}}-{\frac {1}{2}}\right)\right)\cot ^{2}\left({\frac {1}{2}}\operatorname {arccot} \left({\frac {5}{2}}{\frac {\varphi ^{2}\left(q^{5}\right)}{\varphi ^{2}(q)}}-{\frac {1}{2}}\right)\right)}}\\&={\cfrac {e^{-\pi i/(25\tau )}}{1-{\cfrac {e^{-\pi i/(5\tau )}}{1+{\cfrac {e^{-2\pi i/(5\tau )}}{1-\ddots }}}}}}\end{aligned}}}
=== Equianharmonic values ===
The mathematician Bruce Berndt found further values of the theta function:
{\displaystyle {\begin{array}{lll}\varphi \left(\exp(-{\sqrt {3}}\,\pi )\right)&=&\pi ^{-1}{\Gamma \left({\tfrac {4}{3}}\right)}^{3/2}2^{-2/3}3^{13/8}\\\varphi \left(\exp(-2{\sqrt {3}}\,\pi )\right)&=&\pi ^{-1}{\Gamma \left({\tfrac {4}{3}}\right)}^{3/2}2^{-2/3}3^{13/8}\cos({\tfrac {1}{24}}\pi )\\\varphi \left(\exp(-3{\sqrt {3}}\,\pi )\right)&=&\pi ^{-1}{\Gamma \left({\tfrac {4}{3}}\right)}^{3/2}2^{-2/3}3^{7/8}({\sqrt[{3}]{2}}+1)\\\varphi \left(\exp(-4{\sqrt {3}}\,\pi )\right)&=&\pi ^{-1}{\Gamma \left({\tfrac {4}{3}}\right)}^{3/2}2^{-5/3}3^{13/8}{\Bigl (}1+{\sqrt {\cos({\tfrac {1}{12}}\pi )}}{\Bigr )}\\\varphi \left(\exp(-5{\sqrt {3}}\,\pi )\right)&=&\pi ^{-1}{\Gamma \left({\tfrac {4}{3}}\right)}^{3/2}2^{-2/3}3^{5/8}\sin({\tfrac {1}{5}}\pi )({\tfrac {2}{5}}{\sqrt[{3}]{100}}+{\tfrac {2}{5}}{\sqrt[{3}]{10}}+{\tfrac {3}{5}}{\sqrt {5}}+1)\end{array}}}
=== Further values ===
Many values of the theta function and especially of the shown phi function can be represented in terms of the gamma function:
{\displaystyle {\begin{array}{lll}\varphi \left(\exp(-{\sqrt {2}}\,\pi )\right)&=&\pi ^{-1/2}\Gamma \left({\tfrac {9}{8}}\right){\Gamma \left({\tfrac {5}{4}}\right)}^{-1/2}2^{7/8}\\\varphi \left(\exp(-2{\sqrt {2}}\,\pi )\right)&=&\pi ^{-1/2}\Gamma \left({\tfrac {9}{8}}\right){\Gamma \left({\tfrac {5}{4}}\right)}^{-1/2}2^{1/8}{\Bigl (}1+{\sqrt {{\sqrt {2}}-1}}{\Bigr )}\\\varphi \left(\exp(-3{\sqrt {2}}\,\pi )\right)&=&\pi ^{-1/2}\Gamma \left({\tfrac {9}{8}}\right){\Gamma \left({\tfrac {5}{4}}\right)}^{-1/2}2^{3/8}3^{-1/2}({\sqrt {3}}+1){\sqrt {\tan({\tfrac {5}{24}}\pi )}}\\\varphi \left(\exp(-4{\sqrt {2}}\,\pi )\right)&=&\pi ^{-1/2}\Gamma \left({\tfrac {9}{8}}\right){\Gamma \left({\tfrac {5}{4}}\right)}^{-1/2}2^{-1/8}{\Bigl (}1+{\sqrt[{4}]{2{\sqrt {2}}-2}}{\Bigr )}\\\varphi \left(\exp(-5{\sqrt {2}}\,\pi )\right)&=&\pi ^{-1/2}\Gamma \left({\tfrac {9}{8}}\right){\Gamma \left({\tfrac {5}{4}}\right)}^{-1/2}{\frac {1}{15}}\,2^{3/8}\times \\&&\times {\biggl [}{\sqrt[{3}]{5}}\,{\sqrt {10+2{\sqrt {5}}}}{\biggl (}{\sqrt[{3}]{5+{\sqrt {2}}+3{\sqrt {3}}}}+{\sqrt[{3}]{5+{\sqrt {2}}-3{\sqrt {3}}}}\,{\biggr )}-{\bigl (}2-{\sqrt {2}}\,{\bigr )}{\sqrt {25-10{\sqrt {5}}}}\,{\biggr ]}\\\varphi \left(\exp(-{\sqrt {6}}\,\pi )\right)&=&\pi ^{-1/2}\Gamma \left({\tfrac {5}{24}}\right){\Gamma \left({\tfrac {5}{12}}\right)}^{-1/2}2^{-13/24}3^{-1/8}{\sqrt {\sin({\tfrac {5}{12}}\pi )}}\\\varphi \left(\exp(-{\tfrac {1}{2}}{\sqrt {6}}\,\pi )\right)&=&\pi ^{-1/2}\Gamma \left({\tfrac {5}{24}}\right){\Gamma \left({\tfrac {5}{12}}\right)}^{-1/2}2^{5/24}3^{-1/8}\sin({\tfrac {5}{24}}\pi )\end{array}}}
== Nome power theorems ==
=== Direct power theorems ===
For the transformation of the nome in the theta functions these formulas can be used:
{\displaystyle \theta _{2}(q^{2})={\tfrac {1}{2}}{\sqrt {2[\theta _{3}(q)^{2}-\theta _{4}(q)^{2}]}}}
{\displaystyle \theta _{3}(q^{2})={\tfrac {1}{2}}{\sqrt {2[\theta _{3}(q)^{2}+\theta _{4}(q)^{2}]}}}
{\displaystyle \theta _{4}(q^{2})={\sqrt {\theta _{4}(q)\theta _{3}(q)}}}
According to the Jacobi identity, the squares of the three theta zero-value functions with the squared nome as inner argument follow the pattern of the Pythagorean triples. Furthermore, this transformation is valid:
{\displaystyle \theta _{3}(q^{4})={\tfrac {1}{2}}\theta _{3}(q)+{\tfrac {1}{2}}\theta _{4}(q)}
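These transformations for the square and the fourth power of the nome can be verified numerically for any 0 < q < 1 by summing the defining series directly; a minimal Python sketch (the helper names are illustrative):

```python
def theta2(q, terms=40):
    # theta_2 null value: 2 q^(1/4) * sum_{n>=0} q^(n(n+1))
    return 2 * q ** 0.25 * sum(q ** (n * (n + 1)) for n in range(terms))

def theta3(q, terms=40):
    return 1 + 2 * sum(q ** (n * n) for n in range(1, terms))

def theta4(q, terms=40):
    return 1 + 2 * sum((-1) ** n * q ** (n * n) for n in range(1, terms))

q = 0.3
t3, t4 = theta3(q), theta4(q)
# square of the nome
lhs_a, rhs_a = theta2(q ** 2), ((t3 ** 2 - t4 ** 2) / 2) ** 0.5
lhs_b, rhs_b = theta3(q ** 2), ((t3 ** 2 + t4 ** 2) / 2) ** 0.5
lhs_c, rhs_c = theta4(q ** 2), (t3 * t4) ** 0.5
# fourth power of the nome
lhs_d, rhs_d = theta3(q ** 4), (t3 + t4) / 2
```

The last identity is transparent from the series: theta_3(q) + theta_4(q) doubles the even-index terms q^{n^2}, which are exactly the terms q^{4m^2} of 2 theta_3(q^4).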
These formulas can be used to compute the theta values of the cube of the nome:
{\displaystyle 27\,\theta _{3}(q^{3})^{8}-18\,\theta _{3}(q^{3})^{4}\theta _{3}(q)^{4}-\,\theta _{3}(q)^{8}=8\,\theta _{3}(q^{3})^{2}\theta _{3}(q)^{2}[2\,\theta _{4}(q)^{4}-\theta _{3}(q)^{4}]}
{\displaystyle 27\,\theta _{4}(q^{3})^{8}-18\,\theta _{4}(q^{3})^{4}\theta _{4}(q)^{4}-\,\theta _{4}(q)^{8}=8\,\theta _{4}(q^{3})^{2}\theta _{4}(q)^{2}[2\,\theta _{3}(q)^{4}-\theta _{4}(q)^{4}]}
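Both octic relations can be spot-checked numerically at a sample nome; a Python sketch (series helpers as before, names illustrative):

```python
def theta3(q, terms=40):
    return 1 + 2 * sum(q ** (n * n) for n in range(1, terms))

def theta4(q, terms=40):
    return 1 + 2 * sum((-1) ** n * q ** (n * n) for n in range(1, terms))

q = 0.2
# first relation, for theta_3 at the cubed nome
a, b = theta3(q ** 3), theta3(q)
lhs3 = 27 * a ** 8 - 18 * a ** 4 * b ** 4 - b ** 8
rhs3 = 8 * a ** 2 * b ** 2 * (2 * theta4(q) ** 4 - theta3(q) ** 4)
# second relation, for theta_4 at the cubed nome
c, d = theta4(q ** 3), theta4(q)
lhs4 = 27 * c ** 8 - 18 * c ** 4 * d ** 4 - d ** 8
rhs4 = 8 * c ** 2 * d ** 2 * (2 * theta3(q) ** 4 - theta4(q) ** 4)
```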
And the following formulas can be used to compute the theta values of the fifth power of the nome:
{\displaystyle [\theta _{3}(q)^{2}-\theta _{3}(q^{5})^{2}][5\,\theta _{3}(q^{5})^{2}-\theta _{3}(q)^{2}]^{5}=256\,\theta _{3}(q^{5})^{2}\theta _{3}(q)^{2}\theta _{4}(q)^{4}[\theta _{3}(q)^{4}-\theta _{4}(q)^{4}]}
{\displaystyle [\theta _{4}(q^{5})^{2}-\theta _{4}(q)^{2}][5\,\theta _{4}(q^{5})^{2}-\theta _{4}(q)^{2}]^{5}=256\,\theta _{4}(q^{5})^{2}\theta _{4}(q)^{2}\theta _{3}(q)^{4}[\theta _{3}(q)^{4}-\theta _{4}(q)^{4}]}
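These quintic relations can likewise be checked numerically at a sample nome; a Python sketch (names illustrative):

```python
def theta3(q, terms=40):
    return 1 + 2 * sum(q ** (n * n) for n in range(1, terms))

def theta4(q, terms=40):
    return 1 + 2 * sum((-1) ** n * q ** (n * n) for n in range(1, terms))

q = 0.15
t3, t4 = theta3(q), theta4(q)
t35, t45 = theta3(q ** 5), theta4(q ** 5)

lhs3 = (t3 ** 2 - t35 ** 2) * (5 * t35 ** 2 - t3 ** 2) ** 5
rhs3 = 256 * t35 ** 2 * t3 ** 2 * t4 ** 4 * (t3 ** 4 - t4 ** 4)

lhs4 = (t45 ** 2 - t4 ** 2) * (5 * t45 ** 2 - t4 ** 2) ** 5
rhs4 = 256 * t45 ** 2 * t4 ** 2 * t3 ** 4 * (t3 ** 4 - t4 ** 4)
```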
=== Transformation at the cube root of the nome ===
The formulas for the theta zero-value functions at the cube root of the elliptic nome are obtained by comparing the two real solutions of the corresponding quartic equations:
{\displaystyle {\biggl [}{\frac {\theta _{3}(q^{1/3})^{2}}{\theta _{3}(q)^{2}}}-{\frac {3\,\theta _{3}(q^{3})^{2}}{\theta _{3}(q)^{2}}}{\biggr ]}^{2}=4-4{\biggl [}{\frac {2\,\theta _{2}(q)^{2}\theta _{4}(q)^{2}}{\theta _{3}(q)^{4}}}{\biggr ]}^{2/3}}
{\displaystyle {\biggl [}{\frac {3\,\theta _{4}(q^{3})^{2}}{\theta _{4}(q)^{2}}}-{\frac {\theta _{4}(q^{1/3})^{2}}{\theta _{4}(q)^{2}}}{\biggr ]}^{2}=4+4{\biggl [}{\frac {2\,\theta _{2}(q)^{2}\theta _{3}(q)^{2}}{\theta _{4}(q)^{4}}}{\biggr ]}^{2/3}}
=== Transformation at the fifth root of the nome ===
The Rogers–Ramanujan continued fraction can be defined in terms of the Jacobi theta function in the following way:
{\displaystyle R(q)=\tan {\biggl \{}{\frac {1}{2}}\arctan {\biggl [}{\frac {1}{2}}-{\frac {\theta _{4}(q)^{2}}{2\,\theta _{4}(q^{5})^{2}}}{\biggr ]}{\biggr \}}^{1/5}\tan {\biggl \{}{\frac {1}{2}}\operatorname {arccot} {\biggl [}{\frac {1}{2}}-{\frac {\theta _{4}(q)^{2}}{2\,\theta _{4}(q^{5})^{2}}}{\biggr ]}{\biggr \}}^{2/5}}
{\displaystyle R(q^{2})=\tan {\biggl \{}{\frac {1}{2}}\arctan {\biggl [}{\frac {1}{2}}-{\frac {\theta _{4}(q)^{2}}{2\,\theta _{4}(q^{5})^{2}}}{\biggr ]}{\biggr \}}^{2/5}\cot {\biggl \{}{\frac {1}{2}}\operatorname {arccot} {\biggl [}{\frac {1}{2}}-{\frac {\theta _{4}(q)^{2}}{2\,\theta _{4}(q^{5})^{2}}}{\biggr ]}{\biggr \}}^{1/5}}
{\displaystyle R(q^{2})=\tan {\biggl \{}{\frac {1}{2}}\arctan {\biggl [}{\frac {\theta _{3}(q)^{2}}{2\,\theta _{3}(q^{5})^{2}}}-{\frac {1}{2}}{\biggr ]}{\biggr \}}^{2/5}\tan {\biggl \{}{\frac {1}{2}}\operatorname {arccot} {\biggl [}{\frac {\theta _{3}(q)^{2}}{2\,\theta _{3}(q^{5})^{2}}}-{\frac {1}{2}}{\biggr ]}{\biggr \}}^{1/5}}
The alternating Rogers–Ramanujan continued fraction function S(q) has the following two identities:
{\displaystyle S(q)={\frac {R(q^{4})}{R(q^{2})R(q)}}=\tan {\biggl \{}{\frac {1}{2}}\arctan {\biggl [}{\frac {\theta _{3}(q)^{2}}{2\,\theta _{3}(q^{5})^{2}}}-{\frac {1}{2}}{\biggr ]}{\biggr \}}^{1/5}\cot {\biggl \{}{\frac {1}{2}}\operatorname {arccot} {\biggl [}{\frac {\theta _{3}(q)^{2}}{2\,\theta _{3}(q^{5})^{2}}}-{\frac {1}{2}}{\biggr ]}{\biggr \}}^{2/5}}
The theta function values from the fifth root of the nome can be represented as a rational combination of the continued fractions R and S and the theta function values from the fifth power of the nome and the nome itself. The following four equations are valid for all values q between 0 and 1:
{\displaystyle {\frac {\theta _{3}(q^{1/5})}{\theta _{3}(q^{5})}}-1={\frac {1}{S(q)}}{\bigl [}S(q)^{2}+R(q^{2}){\bigr ]}{\bigl [}1+R(q^{2})S(q){\bigr ]}}
{\displaystyle 1-{\frac {\theta _{4}(q^{1/5})}{\theta _{4}(q^{5})}}={\frac {1}{R(q)}}{\bigl [}R(q^{2})+R(q)^{2}{\bigr ]}{\bigl [}1-R(q^{2})R(q){\bigr ]}}
{\displaystyle \theta _{3}(q^{1/5})^{2}-\theta _{3}(q)^{2}={\bigl [}\theta _{3}(q)^{2}-\theta _{3}(q^{5})^{2}{\bigr ]}{\biggl [}1+{\frac {1}{R(q^{2})S(q)}}+R(q^{2})S(q)+{\frac {1}{R(q^{2})^{2}}}+R(q^{2})^{2}+{\frac {1}{S(q)}}-S(q){\biggr ]}}
{\displaystyle \theta _{4}(q)^{2}-\theta _{4}(q^{1/5})^{2}={\bigl [}\theta _{4}(q^{5})^{2}-\theta _{4}(q)^{2}{\bigr ]}{\biggl [}1-{\frac {1}{R(q^{2})R(q)}}-R(q^{2})R(q)+{\frac {1}{R(q^{2})^{2}}}+R(q^{2})^{2}-{\frac {1}{R(q)}}+R(q){\biggr ]}}
=== Modulus dependent theorems ===
In combination with the elliptic modulus, the following formulas can be displayed:
These are the formulas for the square of the elliptic nome:
{\displaystyle \theta _{4}[q(k)]=\theta _{4}[q(k)^{2}]{\sqrt[{8}]{1-k^{2}}}}
{\displaystyle \theta _{4}[q(k)^{2}]=\theta _{3}[q(k)]{\sqrt[{8}]{1-k^{2}}}}
{\displaystyle \theta _{3}[q(k)^{2}]=\theta _{3}[q(k)]\cos[{\tfrac {1}{2}}\arcsin(k)]}
And this is an efficient formula for the cube of the nome:
{\displaystyle \theta _{4}{\biggl \langle }q{\bigl \{}\tan {\bigl [}{\tfrac {1}{2}}\arctan(t^{3}){\bigr ]}{\bigr \}}^{3}{\biggr \rangle }=\theta _{4}{\biggl \langle }q{\bigl \{}\tan {\bigl [}{\tfrac {1}{2}}\arctan(t^{3}){\bigr ]}{\bigr \}}{\biggr \rangle }\,3^{-1/2}{\bigl (}{\sqrt {2{\sqrt {t^{4}-t^{2}+1}}-t^{2}+2}}+{\sqrt {t^{2}+1}}\,{\bigr )}^{1/2}}
This formula is valid for all real values {\displaystyle t\in \mathbb {R} }.
Two calculation examples are given for this formula. First example, with the value {\displaystyle t=1} inserted:
Second example, with the value {\displaystyle t=\Phi ^{-2}} inserted:
The constant {\displaystyle \Phi } represents the golden ratio {\displaystyle \Phi ={\tfrac {1}{2}}({\sqrt {5}}+1)} exactly.
== Some series identities ==
=== Sums with theta function in the result ===
The infinite sum of the reciprocals of Fibonacci numbers with odd indices has the identity:
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{F_{2n-1}}}={\frac {\sqrt {5}}{2}}\,\sum _{n=1}^{\infty }{\frac {2(\Phi ^{-2})^{n-1/2}}{1+(\Phi ^{-2})^{2n-1}}}={\frac {\sqrt {5}}{4}}\sum _{a=-\infty }^{\infty }{\frac {2(\Phi ^{-2})^{a-1/2}}{1+(\Phi ^{-2})^{2a-1}}}=}
{\displaystyle ={\frac {\sqrt {5}}{4}}\,\theta _{2}(\Phi ^{-2})^{2}={\frac {\sqrt {5}}{8}}{\bigl [}\theta _{3}(\Phi ^{-1})^{2}-\theta _{4}(\Phi ^{-1})^{2}{\bigr ]}}
Without using the theta function expression, the following identity between two sums can be formulated:
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{F_{2n-1}}}={\frac {\sqrt {5}}{4}}\,{\biggl [}\sum _{n=1}^{\infty }2\,\Phi ^{-(2n-1)^{2}/2}{\biggr ]}^{2}}
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{F_{2n-1}}}=1.82451515740692456814215840626732817332\ldots }
Here again, {\displaystyle \Phi ={\tfrac {1}{2}}({\sqrt {5}}+1)} is the golden ratio.
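This closed form can be checked against a direct summation of the reciprocal odd-indexed Fibonacci series; a Python sketch (theta_2 summed as 2 q^{1/4} sum q^{n(n+1)}; helper names illustrative):

```python
import math

def theta2(q, terms=40):
    # theta_2 null value: 2 q^(1/4) * sum_{n>=0} q^(n(n+1))
    return 2 * q ** 0.25 * sum(q ** (n * (n + 1)) for n in range(terms))

Phi = (math.sqrt(5) + 1) / 2  # golden ratio

# direct sum over odd-indexed Fibonacci numbers F_1, F_3, F_5, ...
fib_sum = 0.0
a, b = 1, 1  # (F_1, F_2)
for _ in range(60):
    fib_sum += 1 / a
    a, b = a + b, a + 2 * b  # advance two indices: (F_{k+2}, F_{k+3})

theta_form = math.sqrt(5) / 4 * theta2(Phi ** -2) ** 2
```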
Infinite sum of the reciprocals of the Fibonacci number squares:
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{F_{n}^{2}}}={\frac {5}{24}}{\bigl [}2\,\theta _{2}(\Phi ^{-2})^{4}-\theta _{3}(\Phi ^{-2})^{4}+1{\bigr ]}={\frac {5}{24}}{\bigl [}\theta _{3}(\Phi ^{-2})^{4}-2\,\theta _{4}(\Phi ^{-2})^{4}+1{\bigr ]}}
Infinite sum of the reciprocals of the Pell numbers with odd indices:
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{P_{2n-1}}}={\frac {1}{\sqrt {2}}}\,\theta _{2}{\bigl [}({\sqrt {2}}-1)^{2}{\bigr ]}^{2}={\frac {1}{2{\sqrt {2}}}}{\bigl [}\theta _{3}({\sqrt {2}}-1)^{2}-\theta _{4}({\sqrt {2}}-1)^{2}{\bigr ]}}
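The Pell-number identity can be checked the same way, summing over the odd-indexed Pell numbers 1, 5, 29, 169, ... generated by the recurrence P_{k+1} = 2 P_k + P_{k-1}; a Python sketch (helper names illustrative):

```python
import math

def theta2(q, terms=40):
    return 2 * q ** 0.25 * sum(q ** (n * (n + 1)) for n in range(terms))

def theta3(q, terms=40):
    return 1 + 2 * sum(q ** (n * n) for n in range(1, terms))

def theta4(q, terms=40):
    return 1 + 2 * sum((-1) ** n * q ** (n * n) for n in range(1, terms))

# direct sum over odd-indexed Pell numbers P_1, P_3, P_5, ...
pell_sum = 0.0
a, b = 1, 2  # (P_1, P_2)
for _ in range(40):
    pell_sum += 1 / a
    a, b = 2 * b + a, 2 * (2 * b + a) + b  # advance two indices

q = (math.sqrt(2) - 1) ** 2
theta_form = theta2(q) ** 2 / math.sqrt(2)
alt_form = (theta3(math.sqrt(2) - 1) ** 2
            - theta4(math.sqrt(2) - 1) ** 2) / (2 * math.sqrt(2))
```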
=== Sums with theta function in the summand ===
The next two series identities were proved by István Mező:
{\displaystyle {\begin{aligned}\theta _{4}^{2}(q)&=iq^{\frac {1}{4}}\sum _{k=-\infty }^{\infty }q^{2k^{2}-k}\theta _{1}\left({\frac {2k-1}{2i}}\ln q,q\right),\\[6pt]\theta _{4}^{2}(q)&=\sum _{k=-\infty }^{\infty }q^{2k^{2}}\theta _{4}\left({\frac {k\ln q}{i}},q\right).\end{aligned}}}
These relations hold for all 0 < q < 1. Specializing the values of q, we have the following parameter-free sums:
{\displaystyle {\sqrt {\frac {\pi {\sqrt {e^{\pi }}}}{2}}}\cdot {\frac {1}{\Gamma ^{2}\left({\frac {3}{4}}\right)}}=i\sum _{k=-\infty }^{\infty }e^{\pi \left(k-2k^{2}\right)}\theta _{1}\left({\frac {i\pi }{2}}(2k-1),e^{-\pi }\right)}
{\displaystyle {\sqrt {\frac {\pi }{2}}}\cdot {\frac {1}{\Gamma ^{2}\left({\frac {3}{4}}\right)}}=\sum _{k=-\infty }^{\infty }{\frac {\theta _{4}\left(ik\pi ,e^{-\pi }\right)}{e^{2\pi k^{2}}}}}
== Zeros of the Jacobi theta functions ==
All zeros of the Jacobi theta functions are simple zeros and are given by the following:
{\displaystyle {\begin{aligned}\vartheta (z;\tau )=\vartheta _{00}(z;\tau )&=0\quad &\Longleftrightarrow &&\quad z&=m+n\tau +{\frac {1}{2}}+{\frac {\tau }{2}}\\[3pt]\vartheta _{11}(z;\tau )&=0\quad &\Longleftrightarrow &&\quad z&=m+n\tau \\[3pt]\vartheta _{10}(z;\tau )&=0\quad &\Longleftrightarrow &&\quad z&=m+n\tau +{\frac {1}{2}}\\[3pt]\vartheta _{01}(z;\tau )&=0\quad &\Longleftrightarrow &&\quad z&=m+n\tau +{\frac {\tau }{2}}\end{aligned}}}
where m, n are arbitrary integers.
== Relation to the Riemann zeta function ==
The relation
{\displaystyle \vartheta \left(0;-{\frac {1}{\tau }}\right)=\left(-i\tau \right)^{\frac {1}{2}}\vartheta (0;\tau )}
was used by Riemann to prove the functional equation for the Riemann zeta function, by means of the Mellin transform
{\displaystyle \Gamma \left({\frac {s}{2}}\right)\pi ^{-{\frac {s}{2}}}\zeta (s)={\frac {1}{2}}\int _{0}^{\infty }{\bigl (}\vartheta (0;it)-1{\bigr )}t^{\frac {s}{2}}{\frac {\mathrm {d} t}{t}}}
which can be shown to be invariant under substitution of s by 1 − s. The corresponding integral for z ≠ 0 is given in the article on the Hurwitz zeta function.
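For purely imaginary argument τ = it with t > 0, the transformation used here reads θ3(e^{-π/t}) = √t · θ3(e^{-πt}), which is easy to confirm numerically; a quick Python check:

```python
import math

def theta3(q, terms=60):
    # theta null value: 1 + 2 * sum_{n>=1} q^(n^2)
    return 1 + 2 * sum(q ** (n * n) for n in range(1, terms))

# theta(0; -1/tau) = (-i tau)^(1/2) theta(0; tau) with tau = i*t:
for t in (1.5, 2.0, 3.7):
    lhs = theta3(math.exp(-math.pi / t))
    rhs = math.sqrt(t) * theta3(math.exp(-math.pi * t))
    print(t, lhs, rhs)
```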
== Relation to the Weierstrass elliptic function ==
The theta function was used by Jacobi to construct (in a form adapted to easy calculation) his elliptic functions as the quotients of the above four theta functions, and could have been used by him to construct Weierstrass's elliptic functions also, since
{\displaystyle \wp (z;\tau )=-{\big (}\log \vartheta _{11}(z;\tau ){\big )}''+c}
where the second derivative is with respect to z and the constant c is defined so that the Laurent expansion of ℘(z) at z = 0 has zero constant term.
== Relation to the q-gamma function ==
The fourth theta function – and thus the others too – is intimately connected to the Jackson q-gamma function via the relation
{\displaystyle \left(\Gamma _{q^{2}}(x)\Gamma _{q^{2}}(1-x)\right)^{-1}={\frac {q^{2x(1-x)}}{\left(q^{-2};q^{-2}\right)_{\infty }^{3}\left(q^{2}-1\right)}}\theta _{4}\left({\frac {1}{2i}}(1-2x)\log q,{\frac {1}{q}}\right).}
== Relations to Dedekind eta function ==
Let η(τ) be the Dedekind eta function, and write the argument of the theta function as the nome q = eπiτ. Then,
{\displaystyle {\begin{aligned}\theta _{2}(q)=\vartheta _{10}(0;\tau )&={\frac {2\eta ^{2}(2\tau )}{\eta (\tau )}},\\[3pt]\theta _{3}(q)=\vartheta _{00}(0;\tau )&={\frac {\eta ^{5}(\tau )}{\eta ^{2}\left({\frac {1}{2}}\tau \right)\eta ^{2}(2\tau )}}={\frac {\eta ^{2}\left({\frac {1}{2}}(\tau +1)\right)}{\eta (\tau +1)}},\\[3pt]\theta _{4}(q)=\vartheta _{01}(0;\tau )&={\frac {\eta ^{2}\left({\frac {1}{2}}\tau \right)}{\eta (\tau )}},\end{aligned}}}
and,
{\displaystyle \theta _{2}(q)\,\theta _{3}(q)\,\theta _{4}(q)=2\eta ^{3}(\tau ).}
See also the Weber modular functions.
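Writing the eta function in the nome q = e^{πiτ} as η = q^{1/12}·∏(1 − q^{2n}), the product identity θ₂θ₃θ₄ = 2η³ can be verified numerically at τ = i; the truncation limits below are arbitrary:

```python
import math

q = math.exp(-math.pi)               # nome for tau = i

def theta2(q, N=20):
    return 2*sum(q**((n + 0.5)**2) for n in range(N))

def theta3(q, N=20):
    return 1 + 2*sum(q**(n*n) for n in range(1, N))

def theta4(q, N=20):
    return 1 + 2*sum((-1)**n * q**(n*n) for n in range(1, N))

def eta(q, N=200):
    """Dedekind eta written in the nome q = e^{pi i tau}: q^{1/12} * prod (1 - q^{2n})."""
    prod = 1.0
    for n in range(1, N):
        prod *= 1 - q**(2*n)
    return q**(1/12) * prod

assert abs(theta2(q)*theta3(q)*theta4(q) - 2*eta(q)**3) < 1e-12
```

The computed η(i) also matches the classical value Γ(1/4)/(2π^{3/4}) ≈ 0.768225.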
== Elliptic modulus ==
The elliptic modulus is
{\displaystyle k(\tau )={\frac {\vartheta _{10}(0;\tau )^{2}}{\vartheta _{00}(0;\tau )^{2}}}}
and the complementary elliptic modulus is
{\displaystyle k'(\tau )={\frac {\vartheta _{01}(0;\tau )^{2}}{\vartheta _{00}(0;\tau )^{2}}}}
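Since θ₂⁴ + θ₄⁴ = θ₃⁴ (the Jacobi identity), the two moduli satisfy k² + k′² = 1. A short check over a few nome values (the sample values are arbitrary):

```python
import math

def theta2(q, N=20): return 2*sum(q**((n + 0.5)**2) for n in range(N))
def theta3(q, N=20): return 1 + 2*sum(q**(n*n) for n in range(1, N))
def theta4(q, N=20): return 1 + 2*sum((-1)**n * q**(n*n) for n in range(1, N))

for q in (0.05, math.exp(-math.pi), 0.3):
    k  = theta2(q)**2 / theta3(q)**2     # elliptic modulus k(tau)
    kp = theta4(q)**2 / theta3(q)**2     # complementary modulus k'(tau)
    assert abs(k*k + kp*kp - 1) < 1e-12  # Jacobi: theta2^4 + theta4^4 = theta3^4
```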
== Derivatives of theta functions ==
These are two equivalent definitions of the complete elliptic integral of the second kind:
{\displaystyle E(k)=\int _{0}^{\pi /2}{\sqrt {1-k^{2}\sin(\varphi )^{2}}}d\varphi }
{\displaystyle E(k)={\frac {\pi }{2}}\sum _{a=0}^{\infty }{\frac {[(2a)!]^{2}}{(1-2a)16^{a}(a!)^{4}}}k^{2a}}
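The two definitions can be compared numerically; the sketch below evaluates the series through a coefficient recurrence and the integral with a simple midpoint rule (function names and step counts are arbitrary choices):

```python
import math

def E_series(k, terms=200):
    s, c = 0.0, 1.0          # c = [(2a)!]^2 / (16^a (a!)^4), updated by recurrence
    for a in range(terms):
        s += c * k**(2*a) / (1 - 2*a)
        c *= ((2*a + 1)*(2*a + 2))**2 / (16.0*(a + 1)**4)
    return math.pi/2 * s

def E_integral(k, steps=100000):
    h = (math.pi/2) / steps  # midpoint rule on [0, pi/2]
    return h*sum(math.sqrt(1 - (k*math.sin((i + 0.5)*h))**2) for i in range(steps))

k = 0.6
assert abs(E_series(k) - E_integral(k)) < 1e-8
```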
The derivatives of the theta zero-value (Nullwert) functions have these Maclaurin series:
{\displaystyle \theta _{2}'(x)={\frac {\mathrm {d} }{\mathrm {d} x}}\,\theta _{2}(x)={\frac {1}{2}}x^{-3/4}+\sum _{n=1}^{\infty }{\frac {1}{2}}(2n+1)^{2}x^{(2n-1)(2n+3)/4}}
{\displaystyle \theta _{3}'(x)={\frac {\mathrm {d} }{\mathrm {d} x}}\,\theta _{3}(x)=2+\sum _{n=1}^{\infty }2(n+1)^{2}x^{n(n+2)}}
{\displaystyle \theta _{4}'(x)={\frac {\mathrm {d} }{\mathrm {d} x}}\,\theta _{4}(x)=-2+\sum _{n=1}^{\infty }2(n+1)^{2}(-1)^{n+1}x^{n(n+2)}}
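The series for θ₃′ can be checked against a central finite difference of the θ₃ series itself; the sample point x = 0.2 and the step size are arbitrary:

```python
def theta3(x, N=30):
    return 1 + 2*sum(x**(n*n) for n in range(1, N))

def theta3_prime(x, N=30):
    # Maclaurin series of the derivative: 2 + sum_{n>=1} 2(n+1)^2 x^{n(n+2)}
    return 2 + sum(2*(n + 1)**2 * x**(n*(n + 2)) for n in range(1, N))

x, h = 0.2, 1e-6
fd = (theta3(x + h) - theta3(x - h)) / (2*h)   # central difference
assert abs(theta3_prime(x) - fd) < 1e-6
```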
The derivatives of theta zero-value functions are as follows:
{\displaystyle \theta _{2}'(x)={\frac {\mathrm {d} }{\mathrm {d} x}}\,\theta _{2}(x)={\frac {1}{2\pi x}}\theta _{2}(x)\theta _{3}(x)^{2}E{\biggl [}{\frac {\theta _{2}(x)^{2}}{\theta _{3}(x)^{2}}}{\biggr ]}}
{\displaystyle \theta _{3}'(x)={\frac {\mathrm {d} }{\mathrm {d} x}}\,\theta _{3}(x)=\theta _{3}(x){\bigl [}\theta _{3}(x)^{2}+\theta _{4}(x)^{2}{\bigr ]}{\biggl \{}{\frac {1}{2\pi x}}E{\biggl [}{\frac {\theta _{3}(x)^{2}-\theta _{4}(x)^{2}}{\theta _{3}(x)^{2}+\theta _{4}(x)^{2}}}{\biggr ]}-{\frac {\theta _{4}(x)^{2}}{4\,x}}{\biggr \}}}
{\displaystyle \theta _{4}'(x)={\frac {\mathrm {d} }{\mathrm {d} x}}\,\theta _{4}(x)=\theta _{4}(x){\bigl [}\theta _{3}(x)^{2}+\theta _{4}(x)^{2}{\bigr ]}{\biggl \{}{\frac {1}{2\pi x}}E{\biggl [}{\frac {\theta _{3}(x)^{2}-\theta _{4}(x)^{2}}{\theta _{3}(x)^{2}+\theta _{4}(x)^{2}}}{\biggr ]}-{\frac {\theta _{3}(x)^{2}}{4\,x}}{\biggr \}}}
The last two formulas are valid for all real numbers in the interval
{\displaystyle -1<x<1,\;x\in \mathbb {R} }
These two theta derivative functions are related to each other in the following way:
{\displaystyle \theta _{4}(x){\biggl [}{\frac {\mathrm {d} }{\mathrm {d} x}}\,\theta _{3}(x){\biggr ]}-\theta _{3}(x){\biggl [}{\frac {\mathrm {d} }{\mathrm {d} x}}\,\theta _{4}(x){\biggr ]}={\frac {1}{4\,x}}\,\theta _{3}(x)\,\theta _{4}(x){\bigl [}\theta _{3}(x)^{4}-\theta _{4}(x)^{4}{\bigr ]}}
The derivative of the quotient of any two of the three theta functions mentioned here can always be expressed rationally in terms of those three functions:
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} x}}\,{\frac {\theta _{2}(x)}{\theta _{3}(x)}}={\frac {\theta _{2}(x)\,\theta _{4}(x)^{4}}{4\,x\,\theta _{3}(x)}}}
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} x}}\,{\frac {\theta _{2}(x)}{\theta _{4}(x)}}={\frac {\theta _{2}(x)\,\theta _{3}(x)^{4}}{4\,x\,\theta _{4}(x)}}}
{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} x}}\,{\frac {\theta _{3}(x)}{\theta _{4}(x)}}={\frac {\theta _{3}(x)^{5}-\theta _{3}(x)\,\theta _{4}(x)^{4}}{4\,x\,\theta _{4}(x)}}}
For the derivation of these formulas, see the articles Nome (mathematics) and Modular lambda function.
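The first of these quotient rules can be spot-checked with finite differences (the sample point and truncation below are arbitrary):

```python
def theta2(x, N=25): return 2*sum(x**((n + 0.5)**2) for n in range(N))
def theta3(x, N=25): return 1 + 2*sum(x**(n*n) for n in range(1, N))
def theta4(x, N=25): return 1 + 2*sum((-1)**n * x**(n*n) for n in range(1, N))

x, h = 0.15, 1e-6
# Finite difference of theta2/theta3 versus the closed form theta2*theta4^4/(4x*theta3).
fd = (theta2(x + h)/theta3(x + h) - theta2(x - h)/theta3(x - h)) / (2*h)
rhs = theta2(x)*theta4(x)**4 / (4*x*theta3(x))
assert abs(fd - rhs) < 1e-5
```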
== Integrals of theta functions ==
The following integrals are valid for the theta functions:
{\displaystyle \int _{0}^{1}\theta _{2}(x)\,\mathrm {d} x=\sum _{k=-\infty }^{\infty }{\frac {4}{(2k+1)^{2}+4}}=\pi \tanh(\pi )\approx 3.129881}
{\displaystyle \int _{0}^{1}\theta _{3}(x)\,\mathrm {d} x=\sum _{k=-\infty }^{\infty }{\frac {1}{k^{2}+1}}=\pi \coth(\pi )\approx 3.153348}
{\displaystyle \int _{0}^{1}\theta _{4}(x)\,\mathrm {d} x=\sum _{k=-\infty }^{\infty }{\frac {(-1)^{k}}{k^{2}+1}}=\pi \,\operatorname {csch} (\pi )\approx 0.272029}
The final results shown here are based on the general Cauchy sum formulas.
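These evaluations come from termwise integration: each term x^{n²} of θ₃ or θ₄ contributes ∫₀¹ x^{n²} dx = 1/(n² + 1), and each term x^{(2k+1)²/4} of θ₂ contributes 4/((2k+1)² + 4). A sketch for θ₃ and θ₄ (truncation limits are arbitrary):

```python
import math

# Termwise integration of the theta series over [0, 1].
s3 = 1 + 2*sum(1/(n*n + 1) for n in range(1, 10**6))
s4 = 1 + 2*sum((-1)**n/(n*n + 1) for n in range(1, 10**5))
assert abs(s3 - math.pi/math.tanh(math.pi)) < 1e-5      # pi * coth(pi)
assert abs(s4 - math.pi/math.sinh(math.pi)) < 1e-8      # pi * csch(pi)
```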
== A solution to the heat equation ==
The Jacobi theta function is the fundamental solution of the one-dimensional heat equation with spatially periodic boundary conditions. Taking z = x to be real and τ = it with t real and positive, we can write
{\displaystyle \vartheta (x;it)=1+2\sum _{n=1}^{\infty }\exp \left(-\pi n^{2}t\right)\cos(2\pi nx)}
which solves the heat equation
{\displaystyle {\frac {\partial }{\partial t}}\vartheta (x;it)={\frac {1}{4\pi }}{\frac {\partial ^{2}}{\partial x^{2}}}\vartheta (x;it).}
This theta-function solution is 1-periodic in x, and as t → 0 it approaches the periodic delta function, or Dirac comb, in the sense of distributions
{\displaystyle \lim _{t\to 0}\vartheta (x;it)=\sum _{n=-\infty }^{\infty }\delta (x-n)}
.
General solutions of the spatially periodic initial value problem for the heat equation may be obtained by convolving the initial data at t = 0 with the theta function.
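The heat-equation property can be verified by finite differences on the series; the sample point and step sizes below are arbitrary:

```python
import math

def theta(x, t, N=40):
    """theta(x; it) for real x and t > 0, truncated cosine series."""
    return 1 + 2*sum(math.exp(-math.pi*n*n*t)*math.cos(2*math.pi*n*x)
                     for n in range(1, N + 1))

x, t, h = 0.3, 0.05, 1e-4
dt  = (theta(x, t + h) - theta(x, t - h)) / (2*h)                  # d/dt
dxx = (theta(x + h, t) - 2*theta(x, t) + theta(x - h, t)) / h**2   # d^2/dx^2
assert abs(dt - dxx/(4*math.pi)) < 1e-3
```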
== Relation to the Heisenberg group ==
The Jacobi theta function is invariant under the action of a discrete subgroup of the Heisenberg group. This invariance is presented in the article on the theta representation of the Heisenberg group.
== Generalizations ==
If F is a quadratic form in n variables, then the theta function associated with F is
{\displaystyle \theta _{F}(z)=\sum _{m\in \mathbb {Z} ^{n}}e^{2\pi izF(m)}}
with the sum extending over the lattice of integers {\displaystyle \mathbb {Z} ^{n}}. This theta function is a modular form of weight n/2 (on an appropriately defined subgroup) of the modular group. In the Fourier expansion,
{\displaystyle {\hat {\theta }}_{F}(z)=\sum _{k=0}^{\infty }R_{F}(k)e^{2\pi ikz},}
the numbers RF(k) are called the representation numbers of the form.
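For a concrete form, take F(m) = m₁² + m₂² in two variables; the representation numbers R_F(k) are then the sums-of-two-squares counts and can be read off by enumerating lattice points (the bound B is an arbitrary truncation):

```python
from collections import Counter

# Quadratic form F(m) = m1^2 + m2^2; R_F(k) counts lattice points with F(m) = k.
B = 20
R = Counter(m1*m1 + m2*m2
            for m1 in range(-B, B + 1)
            for m2 in range(-B, B + 1))

# First representation numbers of the sum-of-two-squares form:
assert [R[k] for k in range(9)] == [1, 4, 4, 0, 4, 8, 0, 0, 4]
```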
=== Theta series of a Dirichlet character ===
For χ a primitive Dirichlet character modulo q and ν = 1 − χ(−1)/2 then
θ
χ
(
z
)
=
1
2
∑
n
=
−
∞
∞
χ
(
n
)
n
ν
e
2
i
π
n
2
z
{\displaystyle \theta _{\chi }(z)={\frac {1}{2}}\sum _{n=-\infty }^{\infty }\chi (n)n^{\nu }e^{2i\pi n^{2}z}}
is a weight 1/2 + ν modular form of level 4q2 and character
{\displaystyle \chi (d)\left({\frac {-1}{d}}\right)^{\nu },}
which means
{\displaystyle \theta _{\chi }\left({\frac {az+b}{cz+d}}\right)=\chi (d)\left({\frac {-1}{d}}\right)^{\nu }\left({\frac {\theta _{1}\left({\frac {az+b}{cz+d}}\right)}{\theta _{1}(z)}}\right)^{1+2\nu }\theta _{\chi }(z)}
whenever
{\displaystyle a,b,c,d\in \mathbb {Z} ^{4},ad-bc=1,c\equiv 0{\bmod {4}}q^{2}.}
=== Ramanujan theta function ===
=== Riemann theta function ===
Let
{\displaystyle \mathbb {H} _{n}=\left\{F\in M(n,\mathbb {C} )\,{\big |}\,F=F^{\mathsf {T}}\,,\,\operatorname {Im} F>0\right\}}
be the set of symmetric square matrices whose imaginary part is positive definite. {\displaystyle \mathbb {H} _{n}} is called the Siegel upper half-space and is the multi-dimensional analog of the upper half-plane. The n-dimensional analogue of the modular group is the symplectic group Sp(2n, {\displaystyle \mathbb {Z} }); for n = 1, Sp(2, {\displaystyle \mathbb {Z} }) = SL(2, {\displaystyle \mathbb {Z} }). The n-dimensional analogue of the congruence subgroups is
{\displaystyle \ker {\big \{}\operatorname {Sp} (2n,\mathbb {Z} )\to \operatorname {Sp} (2n,\mathbb {Z} /k\mathbb {Z} ){\big \}}.}
Then, given τ ∈ {\displaystyle \mathbb {H} _{n}}, the Riemann theta function is defined as
{\displaystyle \theta (z,\tau )=\sum _{m\in \mathbb {Z} ^{n}}\exp \left(2\pi i\left({\tfrac {1}{2}}m^{\mathsf {T}}\tau m+m^{\mathsf {T}}z\right)\right).}
Here, z ∈ {\displaystyle \mathbb {C} ^{n}} is an n-dimensional complex vector, and the superscript T denotes the transpose. The Jacobi theta function is then a special case, with n = 1 and τ ∈ {\displaystyle \mathbb {H} }, where {\displaystyle \mathbb {H} } is the upper half-plane. One major application of the Riemann theta function is that it allows one to give explicit formulas for meromorphic functions on compact Riemann surfaces, as well as other auxiliary objects that figure prominently in their function theory, by taking τ to be the period matrix with respect to a canonical basis for its first homology group.
The Riemann theta function converges absolutely and uniformly on compact subsets of {\displaystyle \mathbb {C} ^{n}\times \mathbb {H} _{n}}.
The functional equation is
{\displaystyle \theta (z+a+\tau b,\tau )=\exp \left(2\pi i\left(-b^{\mathsf {T}}z-{\tfrac {1}{2}}b^{\mathsf {T}}\tau b\right)\right)\theta (z,\tau )}
which holds for all vectors a, b ∈ {\displaystyle \mathbb {Z} ^{n}}, and for all z ∈ {\displaystyle \mathbb {C} ^{n}} and τ ∈ {\displaystyle \mathbb {H} _{n}}.
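The functional equation can be checked numerically for n = 2 with a truncated lattice sum; the sample τ (symmetric, with positive definite imaginary part), z, a, and b below are arbitrary choices:

```python
import cmath

def riemann_theta(z, tau, B=10):
    """Truncated Riemann theta for n = 2; tau is a symmetric 2x2 complex matrix
    (nested tuples) with positive definite imaginary part."""
    total = 0.0
    for m1 in range(-B, B + 1):
        for m2 in range(-B, B + 1):
            quad = (tau[0][0]*m1*m1 + 2*tau[0][1]*m1*m2 + tau[1][1]*m2*m2) / 2
            total += cmath.exp(2j*cmath.pi*(quad + m1*z[0] + m2*z[1]))
    return total

tau = ((1j, 0.5), (0.5, 2j))
z = (0.1 + 0.2j, -0.3 + 0.1j)
a, b = (2, -1), (1, 0)                               # integer vectors
zs = (z[0] + a[0] + tau[0][0]*b[0] + tau[0][1]*b[1],
      z[1] + a[1] + tau[1][0]*b[0] + tau[1][1]*b[1])
btz  = b[0]*z[0] + b[1]*z[1]                         # b^T z
btTb = tau[0][0]*b[0]*b[0] + 2*tau[0][1]*b[0]*b[1] + tau[1][1]*b[1]*b[1]
factor = cmath.exp(2j*cmath.pi*(-btz - btTb/2))
assert abs(riemann_theta(zs, tau) - factor*riemann_theta(z, tau)) < 1e-8
```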
=== Poincaré series ===
The Poincaré series generalizes the theta series to automorphic forms with respect to arbitrary Fuchsian groups.
== Derivation of the theta values ==
=== Identity of the Euler beta function ===
In the following, three important theta function values are derived as examples.
The Euler beta function in its reduced form is defined as:
{\displaystyle \beta (x)={\frac {\Gamma (x)^{2}}{\Gamma (2x)}}}
In general, for all natural numbers {\displaystyle n\in \mathbb {N} }, this formula for the Euler beta function is valid:
{\displaystyle {\frac {4^{-1/(n+2)}}{n+2}}\csc {\bigl (}{\frac {\pi }{n+2}}{\bigr )}\beta {\biggl [}{\frac {n}{2(n+2)}}{\biggr ]}=\int _{0}^{\infty }{\frac {1}{\sqrt {x^{n+2}+1}}}\,\mathrm {d} x}
=== Exemplary elliptic integrals ===
In the following, some elliptic integral singular values are derived:
=== Combination of the integral identities with the nome ===
The elliptic nome function has these important values:
{\displaystyle q({\tfrac {1}{2}}{\sqrt {2}})=\exp(-\pi )}
{\displaystyle q[{\tfrac {1}{4}}({\sqrt {6}}-{\sqrt {2}})]=\exp(-{\sqrt {3}}\,\pi )}
{\displaystyle q({\sqrt {2}}-1)=\exp(-{\sqrt {2}}\,\pi )}
For the proof of the correctness of these nome values, see the article Nome (mathematics).
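These nome values can also be confirmed numerically via q(k) = exp(−π K(k′)/K(k)), computing the complete elliptic integral K by the arithmetic–geometric mean (a standard method; the AGM tolerance below is an arbitrary choice):

```python
import math

def agm(a, b, tol=1e-14):
    """Arithmetic-geometric mean of a, b > 0."""
    while abs(a - b) > tol:
        a, b = (a + b)/2, math.sqrt(a*b)
    return a

def K(k):
    """Complete elliptic integral of the first kind via the AGM."""
    return math.pi / (2*agm(1, math.sqrt(1 - k*k)))

def nome(k):
    """Elliptic nome q(k) = exp(-pi K(k') / K(k))."""
    return math.exp(-math.pi * K(math.sqrt(1 - k*k)) / K(k))

assert abs(nome(math.sqrt(0.5)) - math.exp(-math.pi)) < 1e-12
assert abs(nome((math.sqrt(6) - math.sqrt(2))/4) - math.exp(-math.sqrt(3)*math.pi)) < 1e-12
assert abs(nome(math.sqrt(2) - 1) - math.exp(-math.sqrt(2)*math.pi)) < 1e-12
```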
On the basis of these integral identities and the definitions and identities of the theta functions given above, exemplary theta zero values can now be determined:
{\displaystyle \theta _{3}[\exp(-\pi )]=\theta _{3}[q({\tfrac {1}{2}}{\sqrt {2}})]={\sqrt {2\pi ^{-1}K({\tfrac {1}{2}}{\sqrt {2}})}}=2^{-1/2}\pi ^{-1/2}\beta ({\tfrac {1}{4}})^{1/2}={\sqrt[{4}]{\pi }}\,{\Gamma {\bigl (}{\tfrac {3}{4}}{\bigr )}}^{-1}}
{\displaystyle \theta _{3}[\exp(-{\sqrt {3}}\,\pi )]=\theta _{3}{\bigl \{}q{\bigl [}{\tfrac {1}{4}}({\sqrt {6}}-{\sqrt {2}}){\bigr ]}{\bigr \}}={\sqrt {2\pi ^{-1}K{\bigl [}{\tfrac {1}{4}}({\sqrt {6}}-{\sqrt {2}}){\bigr ]}}}=2^{-1/6}3^{-1/8}\pi ^{-1/2}\beta ({\tfrac {1}{3}})^{1/2}}
{\displaystyle \theta _{3}[\exp(-{\sqrt {2}}\,\pi )]=\theta _{3}[q({\sqrt {2}}-1)]={\sqrt {2\pi ^{-1}K({\sqrt {2}}-1)}}=2^{-1/8}\cos({\tfrac {1}{8}}\pi )\,\pi ^{-1/2}\beta ({\tfrac {3}{8}})^{1/2}}
{\displaystyle \theta _{4}[\exp(-{\sqrt {2}}\,\pi )]=\theta _{4}[q({\sqrt {2}}-1)]={\sqrt[{4}]{2{\sqrt {2}}-2}}\,{\sqrt {2\pi ^{-1}K({\sqrt {2}}-1)}}=2^{-1/4}\cos({\tfrac {1}{8}}\pi )^{1/2}\,\pi ^{-1/2}\beta ({\tfrac {3}{8}})^{1/2}}
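The first of these values, the well-known θ₃(e^{−π}) = ⁴√π / Γ(3/4) ≈ 1.08643, can be confirmed directly from the series definition (the truncation limit is arbitrary):

```python
import math

q = math.exp(-math.pi)
theta3_value = 1 + 2*sum(q**(n*n) for n in range(1, 20))
closed_form = math.pi**0.25 / math.gamma(0.75)   # pi^{1/4} / Gamma(3/4)
assert abs(theta3_value - closed_form) < 1e-12
```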
== Partition sequences and Pochhammer products ==
=== Regular partition number sequence ===
The regular partition sequence {\displaystyle P(n)} indicates the number of ways in which a positive integer {\displaystyle n} can be split into positive integer summands. For the numbers {\displaystyle n=1} to {\displaystyle n=5}, the associated partition numbers {\displaystyle P} with all associated number partitions are listed in the following table:
The generating function of the regular partition number sequence can be represented via Pochhammer product in the following way:
{\displaystyle \sum _{k=0}^{\infty }P(k)x^{k}={\frac {1}{(x;x)_{\infty }}}=\theta _{3}(x)^{-1/6}\theta _{4}(x)^{-2/3}{\biggl [}{\frac {\theta _{3}(x)^{4}-\theta _{4}(x)^{4}}{16\,x}}{\biggr ]}^{-1/24}}
The series expansion of this Pochhammer product is given by the pentagonal number theorem as follows:
{\displaystyle (x;x)_{\infty }=1+\sum _{n=1}^{\infty }{\bigl [}-x^{{\text{Fn}}(2n-1)}-x^{{\text{Kr}}(2n-1)}+x^{{\text{Fn}}(2n)}+x^{{\text{Kr}}(2n)}{\bigr ]}}
The following basic definitions apply to the pentagonal numbers and the card house numbers:
{\displaystyle {\text{Fn}}(z)={\tfrac {1}{2}}z(3z-1)}
{\displaystyle {\text{Kr}}(z)={\tfrac {1}{2}}z(3z+1)}
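The theorem can be verified by expanding the Euler product as a truncated polynomial and comparing against the pentagonal-number pattern of signs (the degree bound D is an arbitrary choice):

```python
D = 30
# Coefficients of prod_{n=1..D} (1 - x^n), truncated at degree D.
poly = [0]*(D + 1)
poly[0] = 1
for n in range(1, D + 1):
    for d in range(D, n - 1, -1):
        poly[d] -= poly[d - n]

def Fn(z): return z*(3*z - 1)//2   # pentagonal numbers
def Kr(z): return z*(3*z + 1)//2   # card house numbers

# Pentagonal number theorem: nonzero coefficients sit at Fn(z), Kr(z) with sign (-1)^z.
pent = [0]*(D + 1)
pent[0] = 1
z = 1
while Fn(z) <= D:
    sign = -1 if z % 2 else 1
    pent[Fn(z)] += sign
    if Kr(z) <= D:
        pent[Kr(z)] += sign
    z += 1
assert poly == pent
```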
As a further application one obtains a formula for the third power of the Euler product:
{\displaystyle (x;x)_{\infty }^{3}=\prod _{n=1}^{\infty }(1-x^{n})^{3}=\sum _{m=0}^{\infty }(-1)^{m}(2m+1)x^{m(m+1)/2}}
=== Strict partition number sequence ===
The strict partition sequence {\displaystyle Q(n)} indicates the number of ways in which a positive integer {\displaystyle n} can be split into positive integer summands such that each summand appears at most once. Exactly the same sequence is also generated if the partition includes only odd summands, which may occur more than once. Both representations for the strict partition number sequence are compared in the following table:
The generating function of the strict partition number sequence can be represented using Pochhammer's product:
{\displaystyle \sum _{k=0}^{\infty }Q(k)x^{k}={\frac {1}{(x;x^{2})_{\infty }}}=\theta _{3}(x)^{1/6}\theta _{4}(x)^{-1/3}{\biggl [}{\frac {\theta _{3}(x)^{4}-\theta _{4}(x)^{4}}{16\,x}}{\biggr ]}^{1/24}}
=== Overpartition number sequence ===
The Maclaurin series for the reciprocal of the function ϑ01 has the overpartition numbers as coefficients, all with positive sign:
{\displaystyle {\frac {1}{\theta _{4}(x)}}=\prod _{n=1}^{\infty }{\frac {1+x^{n}}{1-x^{n}}}=\sum _{k=0}^{\infty }{\overline {P}}(k)x^{k}}
{\displaystyle {\frac {1}{\theta _{4}(x)}}=1+2x+4x^{2}+8x^{3}+14x^{4}+24x^{5}+40x^{6}+64x^{7}+100x^{8}+154x^{9}+232x^{10}+\dots }
If, for a given number {\displaystyle k}, all partitions are set up in such a way that the summand size never increases, and all summands that do not have a summand of the same size to their left are marked, then the resulting number of marked partitions is given by the overpartition function {\displaystyle {\overline {P}}(k)}.
First example:
{\displaystyle {\overline {P}}(4)=14}
These 14 possibilities of partition markings exist for the sum 4:
Second example:
{\displaystyle {\overline {P}}(5)=24}
These 24 possibilities of partition markings exist for the sum 5:
=== Relations of the partition number sequences to each other ===
In the Online Encyclopedia of Integer Sequences (OEIS), the sequence of regular partition numbers {\displaystyle P(n)} appears under the code A000041, the sequence of strict partitions {\displaystyle Q(n)} under the code A000009, and the sequence of overpartitions {\displaystyle {\overline {P}}(n)} under the code A015128. All overpartition numbers from index {\displaystyle n=1} onward are even.
The sequence of overpartitions {\displaystyle {\overline {P}}(n)} can be generated from the regular partition sequence P and the strict partition sequence Q as follows:
{\displaystyle {\overline {P}}(n)=\sum _{k=0}^{n}P(n-k)Q(k)}
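The convolution identity can be checked by computing P and Q with standard partition dynamic programs and comparing the convolution against the overpartition coefficients listed above (the degree bound is an arbitrary choice):

```python
D = 10
# P(n): partitions with repeats allowed -- unbounded-knapsack style DP.
P = [0]*(D + 1)
P[0] = 1
for part in range(1, D + 1):
    for n in range(part, D + 1):
        P[n] += P[n - part]

# Q(n): partitions into distinct parts -- 0/1-knapsack style DP.
Q = [0]*(D + 1)
Q[0] = 1
for part in range(1, D + 1):
    for n in range(D, part - 1, -1):
        Q[n] += Q[n - part]

# Overpartitions via the convolution P-bar(n) = sum_k P(n-k) Q(k).
Pbar = [sum(P[n - k]*Q[k] for k in range(n + 1)) for n in range(D + 1)]
assert P[:6] == [1, 1, 2, 3, 5, 7]
assert Q[:6] == [1, 1, 1, 2, 2, 3]
assert Pbar == [1, 2, 4, 8, 14, 24, 40, 64, 100, 154, 232]
```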
In the following table of number sequences, this formula is used as an example:
Related to this property, the following combination of the two series can be set up via the function ϑ01:
{\displaystyle \theta _{4}(x)={\biggl [}\sum _{k=0}^{\infty }P(k)x^{k}{\biggr ]}^{-1}{\biggl [}\sum _{k=0}^{\infty }Q(k)x^{k}{\biggr ]}^{-1}}
== Notes ==
== References ==
Abramowitz, Milton; Stegun, Irene A. (1964). Handbook of Mathematical Functions. New York: Dover Publications. sec. 16.27ff. ISBN 978-0-486-61272-0.
Akhiezer, Naum Illyich (1990) [1970]. Elements of the Theory of Elliptic Functions. AMS Translations of Mathematical Monographs. Vol. 79. Providence, RI: AMS. ISBN 978-0-8218-4532-5.
Farkas, Hershel M.; Kra, Irwin (1980). Riemann Surfaces. New York: Springer-Verlag. ch. 6. ISBN 978-0-387-90465-8. (for treatment of the Riemann theta)
Hardy, G. H.; Wright, E. M. (1959). An Introduction to the Theory of Numbers (4th ed.). Oxford: Clarendon Press.
Mumford, David (1983). Tata Lectures on Theta I. Boston: Birkhauser. ISBN 978-3-7643-3109-2.
Pierpont, James (1959). Functions of a Complex Variable. New York: Dover Publications.
Rauch, Harry E.; Farkas, Hershel M. (1974). Theta Functions with Applications to Riemann Surfaces. Baltimore: Williams & Wilkins. ISBN 978-0-683-07196-2.
Reinhardt, William P.; Walker, Peter L. (2010), "Theta Functions", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
Whittaker, E. T.; Watson, G. N. (1927). A Course in Modern Analysis (4th ed.). Cambridge: Cambridge University Press. ch. 21. (history of Jacobi's θ functions)
== Further reading ==
Farkas, Hershel M. (2008). "Theta functions in complex analysis and number theory". In Alladi, Krishnaswami (ed.). Surveys in Number Theory. Developments in Mathematics. Vol. 17. Springer-Verlag. pp. 57–87. ISBN 978-0-387-78509-7. Zbl 1206.11055.
Schoeneberg, Bruno (1974). "IX. Theta series". Elliptic modular functions. Die Grundlehren der mathematischen Wissenschaften. Vol. 203. Springer-Verlag. pp. 203–226. ISBN 978-3-540-06382-7.
Ackerman, Michael (1 February 1979). "On the generating functions of certain Eisenstein series". Mathematische Annalen. 244 (1): 75–81. doi:10.1007/BF01420339. S2CID 120045753.
Charles Hermite: Sur la résolution de l'Équation du cinquiéme degré Comptes rendus, C. R. Acad. Sci. Paris, Nr. 11, March 1858.
== External links ==
Moiseev Igor. "Elliptic functions for Matlab and Octave".
This article incorporates material from Integral representations of Jacobi theta functions on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. | Wikipedia/Theta_function |
In mathematics, an algebraic manifold is an algebraic variety which is also a manifold. As such, algebraic manifolds are a generalisation of the concept of smooth curves and surfaces defined by polynomials. An example is the sphere, which can be defined as the zero set of the polynomial x2 + y2 + z2 – 1, and hence is an algebraic variety.
For an algebraic manifold, the ground field will be the real numbers or complex numbers; in the case of the real numbers, the manifold of real points is sometimes called a Nash manifold.
Every sufficiently small local patch of an algebraic manifold is isomorphic to km where k is the ground field. Equivalently the variety is smooth (free from singular points). The Riemann sphere is one example of a complex algebraic manifold, since it is the complex projective line.
== Examples ==
Elliptic curves
Grassmannian
== See also ==
Algebraic geometry and analytic geometry
== References ==
Nash, John Forbes (1952). "Real algebraic manifolds". Annals of Mathematics. 56 (3): 405–21. doi:10.2307/1969649. JSTOR 1969649. MR 0050928. (See also Proc. Internat. Congr. Math., 1950, (AMS, 1952), pp. 516–517.)
== External links ==
K-Algebraic manifold at PlanetMath
Algebraic manifold at Mathworld
Lecture notes on algebraic manifolds | Wikipedia/Projective_algebraic_manifold |
In mathematics, specifically in group theory, two groups are commensurable if they differ only by a finite amount, in a precise sense. The commensurator of a subgroup is another subgroup, related to the normalizer.
== Abstract commensurability ==
Two groups G1 and G2 are said to be (abstractly) commensurable if there are subgroups H1 ⊂ G1 and H2 ⊂ G2 of finite index such that H1 is isomorphic to H2. For example:
A group is finite if and only if it is commensurable with the trivial group.
Any two finitely generated free groups on at least 2 generators are commensurable with each other. The group SL(2,Z) is also commensurable with these free groups.
Any two surface groups of genus at least 2 are commensurable with each other.
In geometric group theory, a finitely generated group is viewed as a metric space using the word metric. If two groups are (abstractly) commensurable, then they are quasi-isometric. It has been fruitful to ask when the converse holds.
== Commensurability of subgroups ==
A different but related notion is used for subgroups of a given group. Namely, two subgroups Γ1 and Γ2 of a group G are said to be commensurable if the intersection Γ1 ∩ Γ2 is of finite index in both Γ1 and Γ2. Clearly this implies that Γ1 and Γ2 are abstractly commensurable.
Example: for nonzero real numbers a and b, the subgroup of R generated by a is commensurable with the subgroup generated by b if and only if the real numbers a and b are commensurable, meaning that a/b belongs to the rational numbers Q. If a and b are commensurable, with smallest positive common integer multiple c, then
{\displaystyle \langle a\rangle \cap \langle b\rangle =\langle c\rangle }, which has index c/|a| in {\displaystyle \langle a\rangle } and c/|b| in {\displaystyle \langle b\rangle }.
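For the subgroup example above, the generator of the intersection is the least common multiple of the two rationals, and the two indices come out as c/|a| and c/|b|; a small sketch using the standard fractions module (the sample values and the helper name are arbitrary):

```python
from fractions import Fraction
from math import gcd, lcm

def rational_lcm(a: Fraction, b: Fraction) -> Fraction:
    """Smallest positive c with <c> = <a> ∩ <b> inside (R, +), for a, b > 0."""
    return Fraction(lcm(a.numerator, b.numerator), gcd(a.denominator, b.denominator))

a, b = Fraction(3, 4), Fraction(5, 6)
c = rational_lcm(a, b)                    # <3/4> ∩ <5/6> = <15/2>
assert c == Fraction(15, 2)
assert (c/a).denominator == 1 and (c/b).denominator == 1   # finite index in both
assert c/a == 10 and c/b == 9             # the indices c/|a| and c/|b|
```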
== Related notions ==
=== In linear algebra ===
There is an analogous notion in linear algebra: two linear subspaces S and T of a vector space V are commensurable if the intersection S ∩ T has finite codimension in both S and T.
=== In topology ===
Two path-connected topological spaces are sometimes called commensurable if they have homeomorphic finite-sheeted covering spaces. Depending on the type of space under consideration, one might want to use homotopy equivalences or diffeomorphisms instead of homeomorphisms in the definition. By the relation between covering spaces and the fundamental group, commensurable spaces have commensurable fundamental groups.
Example: the Gieseking manifold is commensurable with the complement of the figure-eight knot; these are both noncompact hyperbolic 3-manifolds of finite volume. On the other hand, there are infinitely many different commensurability classes of compact hyperbolic 3-manifolds, and also of noncompact hyperbolic 3-manifolds of finite volume.
== Commensurators ==
The commensurator of a subgroup Γ of a group G, denoted CommG(Γ), is the set of elements g of G such that the conjugate subgroup gΓg−1 is commensurable with Γ. In other words,
{\displaystyle \operatorname {Comm} _{G}(\Gamma )=\{g\in G:g\Gamma g^{-1}\cap \Gamma {\text{ has finite index in both }}\Gamma {\text{ and }}g\Gamma g^{-1}\}.}
This is a subgroup of G that contains the normalizer NG(Γ) (and hence contains Γ).
For example, the commensurator of the special linear group SL(n,Z) in SL(n,R) contains SL(n,Q). In particular, the commensurator of SL(n,Z) in SL(n,R) is dense in SL(n,R). More generally, Grigory Margulis showed that the commensurator of a lattice Γ in a semisimple Lie group G is dense in G if and only if Γ is an arithmetic subgroup of G.
== Abstract commensurators ==
The abstract commensurator of a group {\displaystyle G}, denoted {\displaystyle {\text{Comm}}(G)}, is the group of equivalence classes of isomorphisms {\displaystyle \phi :H\to K}, where {\displaystyle H} and {\displaystyle K} are finite index subgroups of {\displaystyle G}, under composition. Elements of {\displaystyle {\text{Comm}}(G)} are called commensurators of {\displaystyle G}.
If {\displaystyle G} is a connected semisimple Lie group not isomorphic to {\displaystyle {\text{PSL}}_{2}(\mathbb {R} )}, with trivial center and no compact factors, then by the Mostow rigidity theorem, the abstract commensurator of any irreducible lattice {\displaystyle \Gamma \leq G} is linear. Moreover, if {\displaystyle \Gamma } is arithmetic, then Comm{\displaystyle (\Gamma )} is virtually isomorphic to a dense subgroup of {\displaystyle G}; otherwise Comm{\displaystyle (\Gamma )} is virtually isomorphic to {\displaystyle \Gamma }.
== Notes ==
== References ==
Druțu, Cornelia; Kapovich, Michael (2018), Geometric Group Theory, American Mathematical Society, ISBN 9781470411046, MR 3753580
Maclachlan, Colin; Reid, Alan W. (2003), The Arithmetic of Hyperbolic 3-Manifolds, Springer Nature, ISBN 0-387-98386-4, MR 1937957
Margulis, Grigory (1991), Discrete Subgroups of Semisimple Lie Groups, Springer Nature, ISBN 3-540-12179-X, MR 1090825 | Wikipedia/Commensurability_(group_theory) |
In mathematics, the concept of abelian variety is the higher-dimensional generalization of the elliptic curve. The equations defining abelian varieties are a topic of study because every abelian variety is a projective variety. In dimension d ≥ 2, however, it is no longer as straightforward to discuss such equations.
There is a large classical literature on this question, which in a reformulation is, for complex algebraic geometry, a question of describing relations between theta functions. The modern geometric treatment now refers to some basic papers of David Mumford, from 1966 to 1967, which reformulated that theory in terms from abstract algebraic geometry valid over general fields.
== Complete intersections ==
The only 'easy' cases are those for d = 1, for an elliptic curve with linear span the projective plane or projective 3-space. In the plane, every elliptic curve is given by a cubic curve. In P3, an elliptic curve can be obtained as the intersection of two quadrics.
In general abelian varieties are not complete intersections. Computer algebra techniques are now able to have some impact on the direct handling of equations for small values of d > 1.
== Kummer surfaces ==
The interest in nineteenth century geometry in the Kummer surface came in part from the way a quartic surface represented a quotient of an abelian variety with d = 2, by the group of order 2 of automorphisms generated by x → −x on the abelian variety.
== General case ==
Mumford defined a theta group associated to an invertible sheaf L on an abelian variety A. This is a group of self-automorphisms of L, and is a finite analogue of the Heisenberg group. The primary results are on the action of the theta group on the global sections of L. When L is very ample, the linear representation can be described, by means of the structure of the theta group. In fact the theta group is abstractly a simple type of nilpotent group, a central extension of a group of torsion points on A, and the extension is known (it is in effect given by the Weil pairing). There is a uniqueness result for irreducible linear representations of the theta group with given central character, or in other words an analogue of the Stone–von Neumann theorem. (It is assumed for this that the characteristic of the field of coefficients doesn't divide the order of the theta group.)
Mumford showed how this abstract algebraic formulation could account for the classical theory of theta functions with theta characteristics, as being the case where the theta group was an extension of the two-torsion of A.
An innovation in this area is to use the Mukai–Fourier transform.
== The coordinate ring ==
The goal of the theory is to prove results on the homogeneous coordinate ring of the embedded abelian variety A, that is, set in a projective space according to a very ample L and its global sections. The graded commutative ring formed by the direct sum of the global sections of L^n, meaning the n-fold tensor product of L with itself, is represented as the quotient ring of a polynomial algebra by a homogeneous ideal I. The graded parts of I have been the subject of intense study.
Quadratic relations were provided by Bernhard Riemann. Koizumi's theorem states the third power of an ample line bundle is normally generated. The Mumford–Kempf theorem states that the fourth power of an ample line bundle is quadratically presented. For a base field of characteristic zero, Giuseppe Pareschi proved a result including these (as the cases p = 0, 1) which had been conjectured by Lazarsfeld: let L be an ample line bundle on an abelian variety A. If n ≥ p + 3, then the n-th tensor power of L satisfies condition Np. Further results have been proved by Pareschi and Popa, including previous work in the field.
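In symbols, the result conjectured by Lazarsfeld and proved by Pareschi, as stated in the prose above, reads:

```latex
% Over a base field of characteristic zero, for an ample line bundle L
% on an abelian variety A:
\[
  n \ge p + 3 \quad\Longrightarrow\quad
  L^{\otimes n} \text{ satisfies property } N_p ,
\]
% with Koizumi's theorem (normal generation of L^{\otimes 3}) and the
% Mumford--Kempf theorem (quadratic presentation of L^{\otimes 4})
% recovered as the cases p = 0 and p = 1.
```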
== See also ==
Timeline of abelian varieties
Horrocks–Mumford bundle
== References ==
David Mumford, "On the equations defining abelian varieties I", Invent. Math. 1 (1966), pp. 287–354
David Mumford, "On the equations defining abelian varieties II–III", Invent. Math. 3 (1967), pp. 71–135; 215–244
David Mumford, Abelian varieties (1974)
Jun-ichi Igusa, Theta functions (1972)
== Further reading ==
David Mumford, Selected papers on the classification of varieties and moduli spaces, editorial comment by G. Kempf and H. Lange, pp. 293–5 | Wikipedia/Equations_defining_abelian_varieties |
Nuclear magnetic resonance crystallography (NMR crystallography) is a method which uses primarily NMR spectroscopy to determine the structure of solid materials on the atomic scale. Solid-state NMR spectroscopy is thus used primarily, possibly supplemented by quantum chemistry calculations (e.g. density functional theory), powder diffraction, etc. If suitable crystals can be grown, any crystallographic method would generally be preferred to determine the crystal structure, comprising, in the case of organic compounds, the molecular structures and the molecular packing. The main interest in NMR crystallography is in microcrystalline materials which are amenable to this method but not to X-ray, neutron and electron diffraction. This is largely because interactions of comparatively short range are measured in NMR crystallography.
== Introduction ==
When applied to organic molecules, NMR crystallography aims to include structural information not only on a single molecule but also on the molecular packing (i.e. the crystal structure). In contrast to X-ray crystallography, single crystals are not necessary for solid-state NMR, and structural information can be obtained from high-resolution spectra of disordered solids. For example, polymorphism is an area of interest for NMR crystallography, since it is encountered occasionally (and may often be previously undiscovered) in organic compounds. In this case a change in the molecular structure and/or in the molecular packing can lead to polymorphism, and this can be investigated by NMR crystallography.
== Dipolar couplings-based approach ==
The spin interaction that is usually employed for structural analyses via solid state NMR spectroscopy is the magnetic dipolar interaction.
Additional knowledge about other interactions within the studied system like the chemical shift or the electric quadrupole interaction can be helpful as well, and in some cases solely the chemical shift has been employed as e.g. for zeolites.
The “dipole coupling”-based approach parallels protein NMR spectroscopy to some extent: for example, multiple residual dipolar couplings are measured for proteins in solution, and these couplings are used as constraints in the protein structure calculation.
In NMR crystallography the observed spins in the case of organic molecules would often be spin-1/2 nuclei of moderate frequency (13C, 15N, 31P, etc.). That is, 1H is usually excluded because its large magnetogyric ratio and high spin concentration lead to a network of strong homonuclear dipolar couplings. There are two solutions with respect to 1H: 1H spin diffusion experiments (see below) and specific labelling with 2H spins (spin = 1). The latter is also popular, e.g. in NMR spectroscopic investigations of hydrogen bonds in solution and the solid state.
Both intra- and intermolecular structural elements can be investigated e.g. via deuterium REDOR (an established solid state NMR pulse sequence to measure dipolar couplings between deuterons and other spins).
This can provide an additional constraint for an NMR crystallographic structural investigation in that it can be used to find and characterize e.g. intermolecular hydrogen bonds.
=== Dipolar interaction ===
The above-mentioned dipolar interaction can be measured directly, e.g. between pairs of heteronuclear spins like 13C/15N in many organic compounds. Furthermore, the strength of the dipolar interaction modulates parameters like the longitudinal relaxation time or the spin diffusion rate which therefore can be examined to obtain structural information. E.g. 1H spin diffusion has been measured providing rich structural information.
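The distance information encoded in the dipolar interaction can be sketched with the standard point-dipole formula D = (μ0/4π)·γ1·γ2·ħ/r³, illustrating the 1/r³ dependence that makes it useful for structure determination. The 13C–15N pair and the 1.5 Å separation below are illustrative choices (a typical one-bond distance in organic compounds), not values taken from the text:

```python
# Sketch: magnitude of the through-space dipolar coupling between two
# heteronuclear spins, showing the 1/r^3 distance dependence exploited
# in dipolar-based NMR crystallography.
import math

MU0_OVER_4PI = 1e-7          # T*m/A
HBAR = 1.0545718e-34         # J*s
GAMMA = {                    # gyromagnetic ratios, rad s^-1 T^-1
    "13C": 6.728e7,
    "15N": -2.7126e7,        # negative sign: 15N precesses "backwards"
}

def dipolar_coupling_hz(nuc1, nuc2, r_m):
    """Magnitude (Hz) of the dipolar coupling constant for two spins
    separated by r_m metres (point-dipole approximation)."""
    d_rad_s = MU0_OVER_4PI * GAMMA[nuc1] * GAMMA[nuc2] * HBAR / r_m**3
    return abs(d_rad_s) / (2 * math.pi)

# A one-bond 13C-15N distance of ~1.5 angstrom gives a coupling of
# roughly 0.9 kHz; doubling the distance reduces it eightfold.
d1 = dipolar_coupling_hz("13C", "15N", 1.5e-10)
d2 = dipolar_coupling_hz("13C", "15N", 3.0e-10)
```

The steep r⁻³ falloff is why directly measured dipolar couplings mainly constrain short (intra- and nearest-neighbour intermolecular) distances.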
=== Chemical shift interaction ===
The chemical shift interaction can be used in conjunction with the dipolar interaction to determine the orientation of the dipolar interaction frame (principal axes system) with respect to the molecular frame (dipolar chemical shift spectroscopy). For some cases there are rules for the chemical shift interaction tensor orientation as for the 13C spin in ketones due to symmetry arguments (sp2 hybridisation). If the orientation of a dipolar interaction (between the spin of interest and e.g. another heteronucleus) is measured with respect to the chemical shift interaction coordinate system, these two pieces of information (chemical shift tensor/molecular orientation and the dipole tensor/chemical shift tensor orientation) combined give the orientation of the dipole tensor in the molecular frame. However, this method is only suitable for small molecules (or polymers with a small repetition unit like polyglycine) and it provides only selective (and usually intramolecular) structural information.
== Crystal Structure Refinements ==
The dipolar interaction yields the most direct structural information, as it makes it possible to measure distances between spins. The sensitivity of this interaction is, however, limited, and even though dipolar-based NMR crystallography makes the elucidation of structures possible, other methods are necessary to obtain high-resolution structures. For this reason much work has been done to include the use of other NMR observables such as the chemical shift anisotropy, J-coupling and the quadrupolar interaction. These anisotropic interactions are highly sensitive to the 3D local environment, making it possible to refine the structures of powdered samples to a quality rivaling that of single-crystal X-ray diffraction. These approaches, however, rely on adequate methods for predicting the interactions, as they do not depend in a straightforward fashion on the structure.
== Comparison with diffraction methods ==
A drawback of NMR crystallography is that the method is typically more time-consuming and more expensive than X-ray crystallography (due to spectrometer costs and isotope labelling). It often elucidates only part of the structure, and isotope labelling and experiments may have to be tailored to obtain key structural information. Also, a given molecular structure may not always be suitable for a purely NMR-based crystallographic approach, but NMR can still play an important role in a multimodality (NMR + diffraction) study.
Unlike in the case of diffraction methods, it appears that NMR crystallography needs to work on a case-by-case basis. The reason is that different molecular systems will exhibit different spin physics and different observables which can be probed. The method may therefore not find widespread use as different systems will require tailored experimental designs to study them.
== References == | Wikipedia/NMR_crystallography |
Imperfections in the crystal lattice of diamond are common. Such defects may be the result of lattice irregularities or extrinsic substitutional or interstitial impurities, introduced during or after the diamond growth. The defects affect the material properties of diamond and determine to which type a diamond is assigned; the most dramatic effects are on the diamond color and electrical conductivity, as explained by the electronic band structure.
The defects can be detected by different types of spectroscopy, including electron paramagnetic resonance (EPR), luminescence induced by light (photoluminescence, PL) or electron beam (cathodoluminescence, CL), and absorption of light in the infrared (IR), visible and UV parts of the spectrum. The absorption spectrum is used not only to identify the defects, but also to estimate their concentration; it can also distinguish natural from synthetic or enhanced diamonds.
== Labeling of diamond centers ==
There is a tradition in diamond spectroscopy to label a defect-induced spectrum by a numbered acronym (e.g. GR1). This tradition has been followed in general with some notable deviations, such as A, B and C centers. Many acronyms are confusing though:
Some symbols are too similar (e.g., 3H and H3).
Accidentally, the same labels were given to different centers detected by EPR and optical techniques (e.g., N3 EPR center and N3 optical center have no relation).
Whereas some acronyms are logical, such as N3 (N for natural, i.e. observed in natural diamond) or H3 (H for heated, i.e. observed after irradiation and heating), many are not. In particular, there is no clear distinction between the meaning of labels GR (general radiation), R (radiation) and TR (type-II radiation).
== Defect symmetry ==
The symmetry of defects in crystals is described by the point groups. They differ from the space groups describing the symmetry of crystals by absence of translations, and thus are much fewer in number. In diamond, only defects of the following symmetries have been observed thus far: tetrahedral (Td), tetragonal (D2d), trigonal (D3d, C3v), rhombic (C2v), monoclinic (C2h, C1h, C2) and triclinic (C1 or CS).
The defect symmetry allows predicting many optical properties. For example, one-phonon (infrared) absorption in pure diamond lattice is forbidden because the lattice has an inversion center. However, introducing any defect (even "very symmetrical", such as N-N substitutional pair) breaks the crystal symmetry resulting in defect-induced infrared absorption, which is the most common tool to measure the defect concentrations in diamond.
In synthetic diamond grown by the high-pressure high-temperature synthesis or chemical vapor deposition, defects with symmetry lower than tetrahedral align to the direction of the growth. Such alignment has also been observed in gallium arsenide and thus is not unique to diamond.
== Extrinsic defects ==
Various elemental analyses of diamond reveal a wide range of impurities. They mostly originate, however, from inclusions of foreign materials in diamond, which can be nanometer-sized and invisible in an optical microscope. Also, virtually any element can be hammered into diamond by ion implantation. More essential are elements that can be introduced into the diamond lattice as isolated atoms (or small atomic clusters) during the diamond growth. As of 2008, those elements are nitrogen, boron, hydrogen, silicon, phosphorus, nickel, cobalt and perhaps sulfur. Manganese and tungsten have been unambiguously detected in diamond, but they might originate from foreign inclusions. Detection of isolated iron in diamond was later re-interpreted in terms of micro-particles of ruby produced during the diamond synthesis. Oxygen is believed to be a major impurity in diamond, but it has not yet been spectroscopically identified. Two electron paramagnetic resonance centers (OK1 and N3) were initially assigned to nitrogen–oxygen complexes, and later to titanium-related complexes. However, the assignment is indirect and the corresponding concentrations are rather low (a few parts per million).
=== Nitrogen ===
The most common impurity in diamond is nitrogen, which can comprise up to 1% of a diamond by mass. Previously, all lattice defects in diamond were thought to be the result of structural anomalies; later research revealed nitrogen to be present in most diamonds and in many different configurations. Most nitrogen enters the diamond lattice as a single atom (i.e. nitrogen-containing molecules dissociate before incorporation), however, molecular nitrogen incorporates into diamond as well.
Absorption of light and other material properties of diamond are highly dependent upon nitrogen content and aggregation state. Although all aggregate configurations cause absorption in the infrared, diamonds containing aggregated nitrogen are usually colorless, i.e. have little absorption in the visible spectrum. The four main nitrogen forms are as follows:
==== C-nitrogen center ====
The C center corresponds to electrically neutral single substitutional nitrogen atoms in the diamond lattice. These are easily seen in electron paramagnetic resonance spectra (in which they are confusingly called P1 centers). C centers impart a deep yellow to brown color; these diamonds are classed as type Ib and are commonly known as "canary diamonds", which are rare in gem form. Most synthetic diamonds produced by high-pressure high-temperature (HPHT) technique contain a high level of nitrogen in the C form; nitrogen impurity originates from the atmosphere or from the graphite source. One nitrogen atom per 100,000 carbon atoms will produce yellow color. Because the nitrogen atoms have five available electrons (one more than the carbon atoms they replace), they act as "deep donors"; that is, each substituting nitrogen has an extra electron to donate and forms a donor energy level within the band gap. Light with energy above ~2.2 eV can excite the donor electrons into the conduction band, resulting in the yellow color.
The C center produces a characteristic infrared absorption spectrum with a sharp peak at 1344 cm−1 and a broader feature at 1130 cm−1. Absorption at those peaks is routinely used to measure the concentration of single nitrogen. Another proposed way, using the UV absorption at ~260 nm, was later discarded as unreliable.
Acceptor defects in diamond ionize the fifth nitrogen electron in the C center converting it into C+ center. The latter has a characteristic IR absorption spectrum with a sharp peak at 1332 cm−1 and broader and weaker peaks at 1115, 1046 and 950 cm−1.
==== A-nitrogen center ====
The A center is probably the most common defect in natural diamonds. It consists of a neutral nearest-neighbor pair of nitrogen atoms substituting for carbon atoms. The A center produces a UV absorption threshold at ~4 eV (310 nm, i.e. invisible to the eye) and thus causes no coloration. Diamond containing nitrogen predominantly in the A form is classed as type IaA.
The A center is diamagnetic, but if ionized by UV light or deep acceptors, it produces an electron paramagnetic resonance spectrum W24, whose analysis unambiguously proves the N=N structure.
The A center shows an IR absorption spectrum with no sharp features, which is distinctly different from that of the C or B centers. Its strongest peak at 1282 cm−1 is routinely used to estimate the nitrogen concentration in the A form.
==== B-nitrogen center ====
There is a general consensus that B center (sometimes called B1) consists of a carbon vacancy surrounded by four nitrogen atoms substituting for carbon atoms. This model is consistent with other experimental results, but there is no direct spectroscopic data corroborating it. Diamonds where most nitrogen forms B centers are rare and are classed as type IaB; most gem diamonds contain a mixture of A and B centers, together with N3 centers.
Similar to the A centers, B centers do not induce color, and no UV or visible absorption can be attributed to them. An early assignment of the N9 absorption system to the B center was later disproven. The B center has a characteristic IR absorption spectrum (see the infrared absorption picture above) with a sharp peak at 1332 cm−1 and a broader feature at 1280 cm−1. The latter is routinely used to estimate the nitrogen concentration in the B form.
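As a rough illustration of how these marker peaks translate into concentrations: the nitrogen content in ppm is taken as proportional to the absorption coefficient (cm⁻¹) of the relevant peak. The calibration factors below are approximate values of the kind quoted in the literature and vary between studies; treat them as illustrative assumptions, not authoritative numbers:

```python
# Sketch of the routine IR-based nitrogen concentration estimate for the
# C, A and B centers. Calibration factors (ppm of N per cm^-1 of
# absorption) are ILLUSTRATIVE literature-style values, not exact.
CALIBRATION_PPM_PER_CM = {
    "C": ("1130 cm^-1", 25.0),   # single substitutional nitrogen
    "A": ("1282 cm^-1", 16.5),   # nearest-neighbor nitrogen pair
    "B": ("1282 cm^-1", 79.4),   # 4N + vacancy complex
}

def nitrogen_ppm(center, absorption_coeff_cm):
    """Estimate nitrogen concentration (ppm) in the given aggregation
    form from the absorption coefficient of its marker peak."""
    peak, factor = CALIBRATION_PPM_PER_CM[center]
    return factor * absorption_coeff_cm

# e.g. an A-center absorption coefficient of 2 cm^-1 at 1282 cm^-1
# corresponds to roughly 33 ppm of nitrogen in the A form.
example = nitrogen_ppm("A", 2.0)
```

In practice the A and B contributions overlap near 1282 cm⁻¹, so measured spectra are decomposed into the component spectra before applying such factors.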
Many optical peaks in diamond accidentally have similar spectral positions, which causes much confusion among gemologists. Spectroscopists use the whole spectrum rather than one peak for defect identification and consider the history of the growth and processing of individual diamond.
==== N3 nitrogen center ====
The N3 center consists of three nitrogen atoms surrounding a vacancy. Its concentration is always just a fraction of the A and B centers. The N3 center is paramagnetic, so its structure is well justified from the analysis of the EPR spectrum P2. This defect produces a characteristic absorption and luminescence line at 415 nm and thus does not induce color on its own. However, the N3 center is always accompanied by the N2 center, having an absorption line at 478 nm (and no luminescence). As a result, diamonds rich in N3/N2 centers are yellow in color.
=== Boron ===
Diamonds containing boron as a substitutional impurity are termed type IIb. Only one percent of natural diamonds are of this type, and most are blue to grey. Boron is an acceptor in diamond: boron atoms have one less available electron than the carbon atoms; therefore, each boron atom substituting for a carbon atom creates an electron hole in the band gap that can accept an electron from the valence band. This allows red light absorption, and due to the small energy (0.37 eV) needed for the electron to leave the valence band, holes can be thermally released from the boron atoms to the valence band even at room temperatures. These holes can move in an electric field and render the diamond electrically conductive (i.e., a p-type semiconductor). Very few boron atoms are required for this to happen—a typical ratio is one boron atom per 1,000,000 carbon atoms.
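The contrast between such a shallow acceptor (0.37 eV) and a deep donor like substitutional nitrogen (ionization energy ~1.7 eV) can be illustrated with a simple Boltzmann factor exp(−E/kT). This is an order-of-magnitude sketch only, not a full semiconductor-statistics treatment (which would include level degeneracy and dopant densities):

```python
# Rough comparison of room-temperature thermal ionization probabilities
# for the boron acceptor vs the deep nitrogen donor in diamond.
import math

KT_300K_EV = 0.02585  # Boltzmann constant x 300 K, in eV

def boltzmann_factor(e_ev, kt_ev=KT_300K_EV):
    return math.exp(-e_ev / kt_ev)

boron = boltzmann_factor(0.37)    # ~1e-6: small, but enough free holes
                                  # for measurable p-type conduction
nitrogen = boltzmann_factor(1.7)  # ~1e-29: effectively no thermal
                                  # ionization; nitrogen needs light
```

The enormous ratio between the two factors is why boron-doped diamond conducts at room temperature while nitrogen-containing diamond does not.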
Boron-doped diamonds transmit light down to ~250 nm and absorb some red and infrared light (hence the blue color); they may phosphoresce blue after exposure to shortwave ultraviolet light. Apart from optical absorption, boron acceptors have been detected by electron paramagnetic resonance.
=== Phosphorus ===
Phosphorus can be intentionally introduced into diamond grown by chemical vapor deposition (CVD) at concentrations up to ~0.01%. Phosphorus substitutes carbon in the diamond lattice. Similar to nitrogen, phosphorus has one more electron than carbon and thus acts as a donor; however, the ionization energy of phosphorus (0.6 eV) is much smaller than that of nitrogen (1.7 eV) and is small enough for room-temperature thermal ionization. This important property of phosphorus in diamond favors electronic applications, such as UV light-emitting diodes (LEDs, at 235 nm).
=== Hydrogen ===
Hydrogen is one of the most technologically important impurities in semiconductors, including diamond. Hydrogen-related defects are very different in natural diamond and in synthetic diamond films. Those films are produced by various chemical vapor deposition (CVD) techniques in an atmosphere rich in hydrogen (typical hydrogen/carbon ratio >100), under strong bombardment of the growing diamond by the plasma ions. As a result, CVD diamond is always rich in hydrogen and lattice vacancies. In polycrystalline films, much of the hydrogen may be located at the boundaries between diamond 'grains', or in non-diamond carbon inclusions. Within the diamond lattice itself, hydrogen-vacancy and hydrogen-nitrogen-vacancy complexes have been identified in negative charge states by electron paramagnetic resonance. In addition, numerous hydrogen-related IR absorption peaks are documented.
It is experimentally demonstrated that hydrogen passivates electrically active boron and phosphorus impurities. As a result of such passivation, shallow donor centers are presumably produced.
In natural diamonds, several hydrogen-related IR absorption peaks are commonly observed; the strongest ones are located at 1405, 3107 and 3237 cm−1 (see IR absorption figure above). The microscopic structure of the corresponding defects is yet unknown and it is not even certain whether or not those defects originate in diamond or in foreign inclusions. Gray color in some diamonds from the Argyle mine in Australia is often associated with those hydrogen defects, but again, this assignment is yet unproven.
=== Nickel, cobalt and chromium ===
When diamonds are grown by the high-pressure high-temperature technique, nickel, cobalt, chromium or some other metals are usually added into the growth medium to catalytically facilitate the conversion of graphite into diamond. As a result, metallic inclusions are formed. In addition, isolated nickel and cobalt atoms incorporate into the diamond lattice, as demonstrated through characteristic hyperfine structure in electron paramagnetic resonance, optical absorption and photoluminescence spectra; the concentration of isolated nickel can reach 0.01%. This is remarkable considering the large difference in size between carbon and transition-metal atoms and the superior rigidity of the diamond lattice.
Numerous Ni-related defects have been detected by electron paramagnetic resonance, optical absorption and photoluminescence, both in synthetic and natural diamonds. Three major structures can be distinguished: substitutional Ni, nickel-vacancy and nickel-vacancy complex decorated by one or more substitutional nitrogen atoms. The "nickel-vacancy" structure, also called "semi-divacancy" is specific for most large impurities in diamond and silicon (e.g., tin in silicon). Its production mechanism is generally accepted as follows: large nickel atom incorporates substitutionally, then expels a nearby carbon (creating a neighboring vacancy), and shifts in-between the two sites.
Although the physical and chemical properties of cobalt and nickel are rather similar, the concentrations of isolated cobalt in diamond are much smaller than those of nickel (parts per billion range). Several defects related to isolated cobalt have been detected by electron paramagnetic resonance and photoluminescence, but their structure is yet unknown.
A chromium-related optical center was reported after ion implantation and subsequent annealing of Type IIA synthetic diamonds. However a subsequent study repeating the annealing conditions but without chromium implantation has questioned the original attribution of the defect centre to chromium.
=== Silicon, germanium, tin and lead ===
Silicon is a common impurity in diamond films grown by chemical vapor deposition and it originates either from silicon substrate or from silica windows or walls of the CVD reactor. It was also observed in natural diamonds in dispersed form. Isolated silicon defects have been detected in diamond lattice through the sharp optical absorption peak at 738 nm and electron paramagnetic resonance. Similar to other large impurities, the major form of silicon in diamond has been identified with a Si-vacancy complex (semi-divacancy site). This center is a deep donor having an ionization energy of 2 eV, and thus again is unsuitable for electronic applications.
Si-vacancy centers constitute a minor fraction of the total silicon. It is believed (though no proof exists) that much of the silicon substitutes for carbon and thus becomes invisible to most spectroscopic techniques, because silicon and carbon atoms have the same configuration of the outer electronic shells.
Germanium, tin and lead are normally absent in diamond, but they can be introduced during the growth or by subsequent ion implantation. Those impurities can be detected optically via the germanium-vacancy, tin-vacancy and lead-vacancy centers, respectively, which have similar properties to those of the Si-vacancy center.
Similar to N-V centers, Si-V, Ge-V, Sn-V and Pb-V complexes all have potential applications in quantum computing.
=== Sulfur ===
Around the year 2000, there was a wave of attempts to dope synthetic CVD diamond films with sulfur, aiming at n-type conductivity with low activation energy. Successful reports were published, but were then dismissed when the conductivity was found to be p-type rather than n-type and associated not with sulfur but with residual boron, which is a highly efficient p-type dopant in diamond.
So far (2009), there is only one reliable evidence (through hyperfine interaction structure in electron paramagnetic resonance) for isolated sulfur defects in diamond. The corresponding center called W31 has been observed in natural type-Ib diamonds in small concentrations (parts per million). It was assigned to a sulfur-vacancy complex – again, as in case of nickel and silicon, a semi-divacancy site.
== Intrinsic defects ==
The easiest way to produce intrinsic defects in diamond is by displacing carbon atoms through irradiation with high-energy particles, such as alpha (helium ions), beta (electrons) or gamma particles, protons, neutrons, ions, etc. The irradiation can occur in the laboratory or in nature (see Diamond enhancement – Irradiation); it produces primary defects named Frenkel defects (carbon atoms knocked off their normal lattice sites to interstitial sites) and the remaining lattice vacancies. An important difference between vacancies and interstitials in diamond is that interstitials are mobile during irradiation, even at liquid-nitrogen temperatures, whereas vacancies start migrating only at temperatures of ~700 °C.
Vacancies and interstitials can also be produced in diamond by plastic deformation, though in much smaller concentrations.
=== Isolated carbon interstitial ===
Isolated interstitial has never been observed in diamond and is considered unstable. Its interaction with a regular carbon lattice atom produces a "split-interstitial", a defect where two carbon atoms share a lattice site and are covalently bonded with the carbon neighbors. This defect has been thoroughly characterized by electron paramagnetic resonance (R2 center) and optical absorption, and unlike most other defects in diamond, it does not produce photoluminescence.
=== Interstitial complexes ===
The isolated split-interstitial moves through the diamond crystal during irradiation. When it meets other interstitials it aggregates into larger complexes of two and three split-interstitials, identified by electron paramagnetic resonance (R1 and O3 centers), optical absorption and photoluminescence.
=== Vacancy-interstitial complexes ===
Most high-energy particles, besides displacing a carbon atom from its lattice site, also transfer to it enough surplus energy for rapid migration through the lattice. However, when relatively gentle gamma irradiation is used, this extra energy is minimal. The interstitials thus remain near the original vacancies and form vacancy–interstitial pairs identified through optical absorption.
Vacancy-di-interstitial pairs have been also produced, though by electron irradiation and through a different mechanism: Individual interstitials migrate during the irradiation and aggregate to form di-interstitials; this process occurs preferentially near the lattice vacancies.
=== Isolated vacancy ===
Isolated vacancy is the most studied defect in diamond, both experimentally and theoretically. Its most important practical property is optical absorption, like in the color centers, which gives diamond green, or sometimes even green–blue color (in pure diamond). The characteristic feature of this absorption is a series of sharp lines called GR1-8, where GR1 line at 741 nm is the most prominent and important.
The vacancy behaves as a deep electron donor/acceptor, whose electronic properties depend on the charge state. The energy level for the +/0 states is at 0.6 eV and for the 0/- states is at 2.5 eV above the valence band.
=== Multivacancy complexes ===
Upon annealing of pure diamond at ~700 °C, vacancies migrate and form divacancies, characterized by optical absorption and electron paramagnetic resonance.
Similar to single interstitials, divacancies do not produce photoluminescence. Divacancies, in turn, anneal out at ~900 °C creating multivacancy chains detected by EPR and presumably hexavacancy rings. The latter should be invisible to most spectroscopies, and indeed, they have not been detected thus far. Annealing of vacancies changes diamond color from green to yellow-brown. A similar mechanism (vacancy aggregation) is also believed to cause the brown color of plastically deformed natural diamonds.
=== Dislocations ===
Dislocations are the most common structural defect in natural diamond. The two major types of dislocations are the glide set, in which bonds break between layers of atoms with different indices (those not lying directly above each other) and the shuffle set, in which the breaks occur between atoms of the same index. The dislocations produce dangling bonds which introduce energy levels into the band gap, enabling the absorption of light. Broadband blue photoluminescence has been reliably identified with dislocations by direct observation in an electron microscope, however, it was noted that not all dislocations are luminescent, and there is no correlation between the dislocation type and the parameters of the emission.
=== Platelets ===
Most natural diamonds contain extended planar defects in the <100> lattice planes, which are called "platelets". Their size ranges from nanometers to many micrometers, and large ones are easily observed in an optical microscope via their luminescence. For a long time, platelets were tentatively associated with large nitrogen complexes — nitrogen sinks produced as a result of nitrogen aggregation at high temperatures of the diamond synthesis. However, the direct measurement of nitrogen in the platelets by EELS (an analytical technique of electron microscopy) revealed very little nitrogen. The currently accepted model of platelets is a large regular array of carbon interstitials.
Platelets produce sharp absorption peaks at 1359–1375 and 330 cm−1 in IR absorption spectra; remarkably, the position of the first peak depends on the platelet size. As with dislocations, a broad photoluminescence centered at ~1000 nm was associated with platelets by direct observation in an electron microscope. By studying this luminescence, it was deduced that platelets have a "bandgap" of ~1.7 eV.
=== Voidites ===
Voidites are octahedral nanometer-sized clusters present in many natural diamonds, as revealed by electron microscopy. Laboratory experiments demonstrated that annealing of type-IaB diamond at high temperatures and pressures (>2600 °C) results in break-up of the platelets and formation of dislocation loops and voidites, i.e. that voidites are a result of thermal degradation of platelets. Contrary to platelets, voidites do contain much nitrogen, in the molecular form.
== Interaction between intrinsic and extrinsic defects ==
Extrinsic and intrinsic defects can interact producing new defect complexes. Such interaction usually occurs if a diamond containing extrinsic defects (impurities) is either plastically deformed or is irradiated and annealed.
Most important is the interaction of vacancies and interstitials with nitrogen. Carbon interstitials react with substitutional nitrogen producing a bond-centered nitrogen interstitial showing strong IR absorption at 1450 cm−1. Vacancies are efficiently trapped by the A, B and C nitrogen centers. The trapping rate is the highest for the C centers, 8 times lower for the A centers and 30 times lower for the B centers. The C center (single nitrogen) by trapping a vacancy forms the famous nitrogen-vacancy center, which can be neutral or negatively charged; the negatively charged state has potential applications in quantum computing. A and B centers upon trapping a vacancy create corresponding 2N-V (H3 and H2 centers, where H2 is simply a negatively charged H3 center) and the neutral 4N-2V (H4 center). The H2, H3 and H4 centers are important because they are present in many natural diamonds and their optical absorption can be strong enough to alter the diamond color (H3 or H4 – yellow, H2 – green).
Boron interacts with carbon interstitials forming a neutral boron–interstitial complex with a sharp optical absorption at 0.552 eV (2250 nm). No evidence is known so far (2009) for complexes of boron and vacancy.
In contrast, silicon does react with vacancies, creating the optical absorption at 738 nm described above. The assumed mechanism is trapping of a migrating vacancy by substitutional silicon, resulting in the Si-V (semi-divacancy) configuration.
A similar mechanism is expected for nickel, for which both substitutional and semi-divacancy configurations are reliably identified (see subsection "nickel and cobalt" above). In an unpublished study, diamonds rich in substitutional nickel were electron irradiated and annealed, with careful optical measurements performed after each annealing step, but no evidence for creation or enhancement of Ni-vacancy centers was obtained.
== See also ==
== References ==
Geological modelling, geologic modelling or geomodelling is the applied science of creating computerized representations of portions of the Earth's crust based on geophysical and geological observations made on and below the Earth surface. A geomodel is the numerical equivalent of a three-dimensional geological map complemented by a description of physical quantities in the domain of interest.
Geomodelling is related to the concept of a Shared Earth Model, which is a multidisciplinary, interoperable and updatable knowledge base about the subsurface.
Geomodelling is commonly used for managing natural resources, identifying natural hazards, and quantifying geological processes, with main applications to oil and gas fields, groundwater aquifers and ore deposits. For example, in the oil and gas industry, realistic geological models are required as input to reservoir simulator programs, which predict the behavior of the rocks under various hydrocarbon recovery scenarios. A reservoir can only be developed and produced once; an error in selecting a development site with poor conditions is therefore costly and wasteful. Using geological models and reservoir simulation allows reservoir engineers to identify which recovery options offer the safest and most economic, efficient, and effective development plan for a particular reservoir.
Geological modelling is a relatively recent subdiscipline of geology which integrates structural geology, sedimentology, stratigraphy, paleoclimatology, and diagenesis.
In two dimensions (2D), a geologic formation or unit is represented by a polygon, which can be bounded by faults, unconformities or by its lateral extent (its crop). In geological models a geological unit is bounded by three-dimensional (3D) triangulated or gridded surfaces. The equivalent of the mapped polygon is the fully enclosed geological unit, defined by a triangulated mesh. For the purpose of property or fluid modelling these volumes can be separated further into an array of cells, often referred to as voxels (volumetric elements). These 3D grids are the equivalent of the 2D grids used to express properties of single surfaces.
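The voxel decomposition described above can be sketched as a 3D array of cells carrying per-cell properties. In this minimal NumPy sketch, the grid dimensions, property names, and horizon position are all illustrative assumptions:

```python
import numpy as np

# Minimal voxel grid for a geological model: each cell carries
# properties used later for rock-type and fluid modelling.
# Grid dimensions and values are illustrative only.
nx, ny, nz = 50, 40, 20          # cells in x (east), y (north), z (depth)

grid = {
    "rock_type": np.zeros((nx, ny, nz), dtype=np.int8),   # coded lithology
    "porosity":  np.full((nx, ny, nz), np.nan),           # fraction, filled later
}

# A bounding surface (e.g. a horizon) splits the grid into units:
# cells above the horizon get rock type 1, cells below rock type 2.
horizon_k = 8                     # illustrative horizon depth index
grid["rock_type"][:, :, :horizon_k] = 1
grid["rock_type"][:, :, horizon_k:] = 2

print(grid["rock_type"].shape)    # (50, 40, 20)
```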
Geomodelling generally involves the following steps:
Preliminary analysis of geological context of the domain of study.
Interpretation of available data and observations as point sets or polygonal lines (e.g. "fault sticks" corresponding to faults on a vertical seismic section).
Construction of a structural model describing the main rock boundaries (horizons, unconformities, intrusions, faults)
Definition of a three-dimensional mesh honoring the structural model to support volumetric representation of heterogeneity (see Geostatistics) and solving the Partial Differential Equations which govern physical processes in the subsurface (e.g. seismic wave propagation, fluid transport in porous media).
== Geological modelling components ==
=== Structural framework ===
Incorporating the spatial positions of the major formation boundaries, including the effects of faulting, folding, and erosion (unconformities). The major stratigraphic divisions are further subdivided into layers of cells with differing geometries with relation to the bounding surfaces (parallel to top, parallel to base, proportional). Maximum cell dimensions are dictated by the minimum sizes of the features to be resolved (everyday example: On a digital map of a city, the location of a city park might be adequately resolved by one big green pixel, but to define the locations of the basketball court, the baseball field, and the pool, much smaller pixels – higher resolution – need to be used).
=== Rock type ===
Each cell in the model is assigned a rock type. In a coastal clastic environment, these might be beach sand, high water energy marine upper shoreface sand, intermediate water energy marine lower shoreface sand, and deeper low energy marine silt and shale. The distribution of these rock types within the model is controlled by several methods, including map boundary polygons, rock type probability maps, or statistically emplaced based on sufficiently closely spaced well data.
=== Reservoir quality ===
Reservoir quality parameters almost always include porosity and permeability, but may include measures of clay content, cementation factors, and other factors that affect the storage and deliverability of fluids contained in the pores of those rocks. Geostatistical techniques are most often used to populate the cells with porosity and permeability values that are appropriate for the rock type of each cell.
=== Fluid saturation ===
Most rock is completely saturated with groundwater. Sometimes, under the right conditions, some of the pore space in the rock is occupied by other liquids or gases. In the energy industry, oil and natural gas are the fluids most commonly being modelled. The preferred methods for calculating hydrocarbon saturations in a geological model incorporate an estimate of pore throat size, the densities of the fluids, and the height of the cell above the water contact, since these factors exert the strongest influence on capillary action, which ultimately controls fluid saturations.
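The saturation-height logic described above can be sketched as follows: a cell at height h above the fluid contact feels a buoyancy-driven capillary pressure Pc = Δρ·g·h, and water saturation is then read from a capillary pressure curve. The Brooks–Corey-style curve and all parameter values here are illustrative assumptions, not from the source:

```python
# Saturation-height sketch: capillary pressure rises with height above
# the free-water level (Pc = Δρ·g·h), and water saturation falls off
# following a Brooks–Corey-style curve. All parameters are illustrative.
G = 9.81                # gravitational acceleration, m/s^2

def water_saturation(height_m, rho_water=1000.0, rho_oil=800.0,
                     pc_entry=5e3, swirr=0.2, lam=2.0):
    """Water saturation in a cell `height_m` above the oil-water contact.
    pc_entry: capillary entry pressure (Pa); swirr: irreducible water
    saturation; lam: pore-size-distribution exponent (all assumed)."""
    pc = (rho_water - rho_oil) * G * height_m   # buoyancy-driven Pc, Pa
    if pc <= pc_entry:
        return 1.0                               # below entry pressure: fully water-wet
    return swirr + (1.0 - swirr) * (pc_entry / pc) ** lam

print(round(water_saturation(1.0), 2))    # near the contact: 1.0
print(round(water_saturation(50.0), 2))   # high above the contact: approaches swirr
```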
=== Geostatistics ===
An important part of geological modelling is related to geostatistics. Because the observed data are often not located on regular grids, interpolation techniques must be used. The most widely used technique is kriging, which exploits the spatial correlation among data and constructs the interpolation via semi-variograms. To reproduce more realistic spatial variability and help assess spatial uncertainty between data, geostatistical simulation based on variograms, training images, or parametric geological objects is often used.
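A minimal ordinary-kriging sketch illustrates the idea: interpolation weights come from solving a linear system built from a semi-variogram model. The exponential model and its sill and range, as well as the sample locations and values, are illustrative assumptions:

```python
import numpy as np

# Minimal ordinary-kriging sketch in 1D: estimate a property at x0
# from scattered observations using an exponential semivariogram.
# Variogram parameters (sill, range) are illustrative assumptions.

def semivariogram(h, sill=1.0, rng=10.0):
    """Exponential semivariogram model γ(h)."""
    return sill * (1.0 - np.exp(-np.abs(h) / rng))

def ordinary_kriging(xs, zs, x0, sill=1.0, rng=10.0):
    """Ordinary-kriging estimate at x0 from samples (xs, zs)."""
    n = len(xs)
    # Kriging system: [γ(xi,xj) 1; 1 0] [w; μ] = [γ(xi,x0); 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = semivariogram(np.subtract.outer(xs, xs), sill, rng)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = semivariogram(np.asarray(xs) - x0, sill, rng)
    w = np.linalg.solve(A, b)[:n]   # kriging weights (sum to 1)
    return float(w @ zs)

xs = [0.0, 5.0, 20.0]        # sample locations (e.g. wells along a line)
zs = [0.12, 0.18, 0.10]      # observed porosity values
print(round(ordinary_kriging(xs, zs, 4.0), 3))
```

Like all exact interpolators, ordinary kriging with a zero-nugget variogram reproduces the observed value at each sample location, while honouring the spatial correlation structure in between.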
=== Mineral Deposits ===
Geologists involved in mining and mineral exploration use geological modelling to determine the geometry and placement of mineral deposits in the subsurface of the earth. Geological models help define the volume and concentration of minerals, to which economic constraints are applied to determine the economic value of the mineralization. Mineral deposits that are deemed to be economic may be developed into a mine.
== Technology ==
Geomodelling and CAD share a lot of common technologies. Software is usually implemented using object-oriented programming technologies in C++, Java or C# on one or multiple computer platforms. The graphical user interface generally consists of one or several 3D and 2D graphics windows to visualize spatial data, interpretations and modelling output. Such visualization is generally achieved by exploiting graphics hardware. User interaction is mostly performed through mouse and keyboard, although 3D pointing devices and immersive environments may be used in some specific cases. GIS (Geographic Information System) is also a widely used tool to manipulate geological data.
Geometric objects are represented with parametric curves and surfaces or discrete models such as polygonal meshes.
== Research in Geomodelling ==
Problems pertaining to Geomodelling cover:
Defining an appropriate Ontology to describe geological objects at various scales of interest,
Integrating diverse types of observations into 3D geomodels: geological mapping data, borehole data and interpretations, seismic images and interpretations, potential field data, well test data, etc.,
Better accounting for geological processes during model building,
Characterizing uncertainty about the geomodels to help assess risk. Therefore, Geomodelling has a close connection to Geostatistics and Inverse problem theory,
Applying recently developed Multiple-Point Geostatistical Simulation (MPS) methods for integrating different data sources,
Automated geometry optimization and topology conservation
== History ==
In the 1970s, geomodelling mainly consisted of automatic 2D cartographic techniques such as contouring, implemented as FORTRAN routines communicating directly with plotting hardware. The advent of workstations with 3D graphics capabilities during the 1980s gave birth to a new generation of geomodelling software with graphical user interfaces, which became mature during the 1990s.
Since its inception, geomodelling has been mainly motivated and supported by the oil and gas industry.
== Geological modelling software ==
Software developers have built several packages for geological modelling purposes. Such software can display, edit, digitise and automatically calculate the parameters required by engineers, geologists and surveyors. Current software is mainly developed and commercialized by oil and gas or mining industry software vendors:
Geological modelling and visualisation
IRAP RMS Suite
GeoticMine
Geomodeller3D
DecisionSpace Geosciences Suite
Dassault Systèmes GEOVIA provides Surpac, GEMS and Minex for geological modeling
GSI3D
Mira Geoscience provides GOCAD Mining Suite, 3D geological modelling software for compiling, modelling, and analysing data to support interpretations that honour all available data.
Seequent provides Leapfrog 3D geological modeling & Geosoft GM-SYS and VOXI 3D modelling software.
Maptek provides Vulcan, 3D modular software visualisation for geological modelling and mine planning
Micromine, an exploration and mine design solution offering integrated tools for modelling, estimation, design, optimisation and scheduling.
Petrel
Rockworks
SGS Genesis
Move
SKUA-GOCAD
Datamine Software provides Studio EM and Studio RM for geological modelling
BGS Groundhog Desktop free-to-use software developed by the GeoAnalytics and Modelling directorate of British Geological Survey.
GeoScene3D
Groundwater modelling
FEFLOW
PORFLOW
FEHM
MODFLOW
GMS
Visual MODFLOW
ZOOMQ3D
Moreover, industry consortia and companies are working specifically to improve the standardization and interoperability of earth science databases and geomodelling software:
Standardization: GeoSciML by the Commission for the Management and Application of Geoscience Information, of the International Union of Geological Sciences.
Standardization: RESQML™ by Energistics
Interoperability: OpenSpirit, by TIBCO®
== See also ==
Numerical modeling (geology)
Petroleum engineering
Seismic to simulation
== References ==
== Footnotes ==
== External links ==
Geological Modelling at the British Geological Survey
Kikuchi lines are patterns of electrons formed by scattering. They pair up to form bands in electron diffraction from single-crystal specimens, where they serve as "roads in orientation space" for microscopists uncertain of what they are looking at. In transmission electron microscopes, they are easily seen in diffraction from regions of the specimen thick enough for multiple scattering. Unlike diffraction spots, which blink on and off as one tilts the crystal, Kikuchi bands mark orientation space with well-defined intersections (called zones or poles) as well as paths connecting one intersection to the next.
Experimental and theoretical maps of Kikuchi band geometry, as well as their direct-space analogs e.g. bend contours, electron channeling patterns, and fringe visibility maps are increasingly useful tools in electron microscopy of crystalline and nanocrystalline materials. Because each Kikuchi line is associated with Bragg diffraction from one side of a single set of lattice planes, these lines can be labeled with the same Miller or reciprocal-lattice indices that are used to identify individual diffraction spots. Kikuchi band intersections, or zones, on the other hand are indexed with direct-lattice indices i.e. indices which represent integer multiples of the lattice basis vectors a, b and c.
Kikuchi lines are formed in diffraction patterns by diffusely scattered electrons, e.g. as a result of thermal atom vibrations. The main features of their geometry can be deduced from a simple elastic mechanism proposed in 1928 by Seishi Kikuchi, although the dynamical theory of diffuse inelastic scattering is needed to understand them quantitatively.
In x-ray scattering, these lines are referred to as Kossel lines (named after Walther Kossel).
== Recording experimental Kikuchi patterns and maps ==
The figure on the left shows the Kikuchi lines leading to a silicon [100] zone, taken with the beam direction approximately 7.9° away from the zone along the (004) Kikuchi band. The dynamic range in the image is so large that only portions of the film are not overexposed. Kikuchi lines are much easier to follow with dark-adapted eyes on a fluorescent screen, than they are to capture unmoving on paper or film, even though eyes and photographic media both have a roughly logarithmic response to illumination intensity. Fully quantitative work on such diffraction features is therefore assisted by the large linear dynamic range of CCD detectors.
This image subtends an angular range of over 10° and required use of a shorter than usual camera length L. The Kikuchi band widths themselves (roughly λL/d where λ/d is approximately twice the Bragg angle for the corresponding plane) are well under 1°, because the wavelength λ of electrons (about 1.97 picometres in this case) is much less than the lattice plane d-spacing itself. For comparison, the d-spacing for silicon (022) is about 192 picometres while the d-spacing for silicon (004) is about 136 picometres.
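The numbers quoted above can be reproduced from the relativistic de Broglie relation and Bragg's law. This sketch assumes 300 keV electrons, which matches the ~1.97 pm wavelength given in the text:

```python
import math

# Relativistic electron wavelength and Kikuchi-band width check for the
# silicon case quoted above (300 keV electrons assumed).
H = 6.62607015e-34      # Planck constant, J·s
M0 = 9.1093837015e-31   # electron rest mass, kg
E0 = 1.602176634e-19    # elementary charge, C
C = 2.99792458e8        # speed of light, m/s

def electron_wavelength_pm(kv: float) -> float:
    """De Broglie wavelength (pm) of electrons accelerated through
    kv kilovolts, including the relativistic correction."""
    ev = E0 * kv * 1e3
    p = math.sqrt(2 * M0 * ev * (1 + ev / (2 * M0 * C**2)))
    return H / p * 1e12

def bragg_angle_deg(lambda_pm: float, d_pm: float) -> float:
    """Bragg angle θ (degrees) from λ = 2 d sin θ."""
    return math.degrees(math.asin(lambda_pm / (2 * d_pm)))

lam = electron_wavelength_pm(300)             # ≈ 1.97 pm
for hkl, d in [("022", 192), ("004", 136)]:   # silicon d-spacings from the text
    theta = bragg_angle_deg(lam, d)
    # Kikuchi band full width ≈ 2θ — well under 1°, as stated above.
    print(hkl, round(theta, 3), round(2 * theta, 3))
```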
The image was taken from a region of the crystal which is thicker than the inelastic mean free path (about 200 nanometres), so that diffuse scattering features (the Kikuchi lines) would be strong in comparison to coherent scattering features (diffraction spots). The fact that surviving diffraction spots appear as disks intersected by bright Kikuchi lines means that the diffraction pattern was taken with a convergent electron beam. In practice, Kikuchi lines are easily seen in thick regions of either selected area or convergent beam electron diffraction patterns, but difficult to see in diffraction from crystals much less than 100 nm in size (where lattice-fringe visibility effects become important instead). This image was recorded in convergent beam, because that too reduces the range of contrasts that have to be recorded on film.
Compiling Kikuchi maps which cover more than a steradian requires that one take many images at tilts changed only incrementally (e.g. by 2° in each direction). This can be tedious work, but may be useful when investigating a crystal with unknown structure as it can clearly unveil the lattice symmetry in three dimensions.
== Kikuchi line maps and their stereographic projection ==
The figure on the left plots Kikuchi lines for a larger section of silicon's orientation space. The angle subtended between the large [011] and [001] zones at the bottom is 45° for silicon. Note that four-fold zone in the lower right (here labeled [001]) has the same symmetry and orientation as the zone labeled [100] in the experimental pattern above, although that experimental pattern only subtends about 10°.
Note also that the figure at left is excerpted from a stereographic projection centered on that [001] zone. Such conformal projections allow one to map pieces of spherical surface onto a plane while preserving the local angles of intersection, and hence zone symmetries. Plotting such maps requires that one be able to draw arcs of circles with a very large radius of curvature. The figure at left, for example, was drawn before the advent of computers and hence required the use of a beam compass. Finding a beam compass today might be fairly difficult, since it is much easier to draw curves having a large radius of curvature (in two or three dimensions) with help from a computer.
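The projection itself is simple to compute: a unit vector (X, Y, Z) with Z ≥ 0 maps to (X/(1+Z), Y/(1+Z)) on the plane tangent at [001], and interzonal angles of a cubic crystal follow from the dot product. A short sketch reproducing the 45° angle between the [011] and [001] zones quoted above:

```python
import math

# Stereographic projection of cubic zone axes onto the plane tangent at
# [001], as used for the Kikuchi map described above. Projection from
# the south pole: (X, Y, Z) -> (X/(1+Z), Y/(1+Z)) for unit vectors, Z >= 0.

def stereographic(uvw):
    """Project direction [u v w] (w >= 0) onto the [001]-centered net."""
    n = math.sqrt(sum(c * c for c in uvw))
    x, y, z = (c / n for c in uvw)
    return (x / (1 + z), y / (1 + z))

def interzonal_angle_deg(a, b):
    """Angle between two zone axes of a cubic crystal (degrees)."""
    dot = sum(i * j for i, j in zip(a, b))
    na = math.sqrt(sum(i * i for i in a))
    nb = math.sqrt(sum(i * i for i in b))
    return math.degrees(math.acos(dot / (na * nb)))

print(stereographic((0, 0, 1)))                               # [001] plots at the origin
print(round(interzonal_angle_deg((0, 1, 1), (0, 0, 1)), 1))   # 45.0, as in the text
print(round(interzonal_angle_deg((1, 1, 1), (0, 0, 1)), 1))   # 54.7
```

Because the projection is conformal, the angles at which great-circle traces intersect on the plot equal the true interplanar angles, which is what preserves the zone symmetries.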
The angle-preserving effect of stereographic plots is even more obvious in the figure at right, which subtends a full 180° of the orientation space of a face-centered cubic (cubic close packed) crystal, e.g. gold or aluminium. The animation follows {220} fringe-visibility bands of that face-centered cubic crystal between <111> zones, at which point rotation by 60° sets up travel to the next <111> zone via a repeat of the original sequence. Fringe-visibility bands have the same global geometry as do Kikuchi bands, but for thin specimens their width is proportional (rather than inversely proportional) to d-spacing. Although the angular field width (and tilt range) obtainable experimentally with Kikuchi bands is generally much smaller, the animation offers a wide-angle view of how Kikuchi bands help informed crystallographers find their way between landmarks in the orientation space of a single crystal specimen.
== Real space analogs ==
Kikuchi lines serve to highlight edge-on lattice planes in diffraction images of thicker specimens. Because Bragg angles in the diffraction of high-energy electrons are very small (~1⁄4 degree for 300 keV), Kikuchi bands are quite narrow in reciprocal space. This also means that in real-space images, edge-on lattice planes are decorated not by diffuse scattering features but by contrast associated with coherent scattering. These coherent scattering features include added diffraction (responsible for bend contours in curved foils), more electron penetration (which gives rise to electron channeling patterns in scanning electron images of crystal surfaces), and lattice fringe contrast (which results in a dependence of lattice fringe intensity on beam orientation which is linked to specimen thickness). Although the contrast details differ, the lattice plane trace geometry of these features and of Kikuchi maps are the same.
=== Bend contours and rocking curves ===
Rocking curves (left) are plots of scattered electron intensity, as a function of the angle between an incident electron beam and the normal to a set of lattice planes in the specimen. As this angle changes in either direction from edge-on (at which orientation the electron beam runs parallel to the lattice planes and perpendicular to their normal), the beam moves into Bragg diffracting condition and more electrons are diffracted outside the microscope's back focal plane aperture, giving rise to the dark-line pairs (bands) seen in the image of the bent silicon foil shown in the image on the right.
The [100] bend contour "spider" of this image, trapped in a region of silicon that was shaped like an oval watchglass less than a micrometre in size, was imaged with 300 keV electrons. When the crystal is tilted, the spider moves toward the edges of the oval as though trying to get out. For example, in this image the spider's [100] intersection moved to the right side of the ellipse as the specimen was tilted to the left.
The spider's legs, and their intersections, can be indexed as shown in precisely the same way as the Kikuchi pattern near [100] in the section on experimental Kikuchi patterns above. In principle, one could therefore use this bend contour to model the foil's vector tilt (with milliradian accuracy) at all points across the oval.
=== Lattice fringe visibility maps ===
As the rocking curve above shows, when specimen thickness moves into the 10 nanometre and smaller range (e.g. for 300 keV electrons and lattice spacings near 0.23 nm), the angular range of tilts that give rise to diffraction and/or lattice-fringe contrast becomes inversely proportional to specimen thickness. The geometry of lattice-fringe visibility therefore becomes useful in the electron microscope study of nanomaterials, just as bend contours and Kikuchi lines are useful in the study of single crystal specimens (e.g. metals and semiconductor specimens with thickness in the tenth-micrometre range). Applications to nanostructure for example include: (i) determining the 3D lattice parameters of individual nanoparticles from images taken at different tilts, (ii) fringe fingerprinting of randomly oriented nanoparticle collections, (iii) particle thickness maps based on fringe contrast changes under tilt, (iv) detection of icosahedral twins from the lattice image of a randomly oriented nanoparticle, and (v) analysis of orientation relationships between nanoparticles and a cylindrical support.
=== Electron channeling patterns ===
The above techniques all involve detection of electrons which have passed through a thin specimen, usually in a transmission electron microscope. Scanning electron microscopes, on the other hand, typically look at electrons "kicked up" when one rasters a focussed electron beam across a thick specimen. Electron channeling patterns are contrast effects associated with edge-on lattice planes that show up in scanning electron microscope secondary and/or backscattered electron images.
The contrast effects are to first order similar to those of bend contours, i.e. electrons which enter a crystalline surface under diffracting conditions tend to channel (penetrate deeper into the specimen without losing energy) and thus kick up fewer electrons near the entry surface for detection. Hence bands form, depending on beam/lattice orientation, with the now-familiar Kikuchi line geometry.
The first scanning electron microscope (SEM) image was an image of electron channeling contrast in silicon steel. However, practical uses for the technique are limited because even a thin layer of abrasion damage or an amorphous coating is generally enough to obscure the contrast. If the specimen had to be given a conductive coating before examination to prevent charging, this too could obscure the contrast. On cleaved surfaces, and surfaces self-assembled on the atomic scale, electron channeling patterns are likely to see growing application with modern microscopes in the years ahead.
== See also ==
Electron diffraction
Electron backscatter diffraction (EBSD)
== References ==
== External links ==
Calculate patterns with WebEMApS at UIUC.
Some interactive 3D maps at UM Saint Louis.
Calculate Kikuchi maps or patterns with the free software PTCLab.
Zeitschrift für Kristallographie – Crystalline Materials is a monthly peer-reviewed scientific journal published in English. The journal publishes theoretical and experimental studies in crystallography of both organic and inorganic substances. The editor-in-chief of the journal is Rainer Pöttgen from the University of Münster. The journal was founded in 1877 under the title Zeitschrift für Krystallographie und Mineralogie by crystallographer and mineralogist Paul Heinrich von Groth, who served as the editor for 44 years. It has used several titles over its history, with the present title having been adopted in 2010. The journal is indexed in a variety of databases and has a 2020 impact factor of 1.616.
== History ==
The journal was established in 1877 by Paul von Groth as a German-language publication under the title Zeitschrift für Krystallographie und Mineralogie, and he served as its editor until the end of 1920. Groth was appointed as the inaugural Professor of Mineralogy at the University of Strasbourg in 1872 and made great contributions to the disciplines of mineralogy and crystallography both there and, from 1883, as the curator at the Deutsches Museum in Munich. Groth was the first to classify minerals according to their chemical composition and contributed to the understanding of isomorphism and morphotropy in crystalline systems. Using the data from 55 volumes of the journal covering 39 years of publications (1877–1915) plus other sources, Groth produced the five volume work Chemische Krystallographie between 1906 and 1919. This work catalogued the chemical and physical properties of the between 9,000 and 10,000 crystalline substances known at the time.
It has used a series of names over its history (see table below), finally becoming Zeitschrift für Kristallographie – Crystalline Materials in 2010, a name distinguishing it from the 1987 spin-off journal Zeitschrift für Kristallographie – New Crystal Structures.
=== Special issues ===
Beginning in December 2002, the journal has produced special issues with articles grouped around a single theme. Topics covered include the analysis of complex materials using pair distribution function methods, borates (double issue), hydrogen storage, in situ crystallisation, mathematical crystallography, mineral structures, nanocrystallography, phononic crystals, photocrystallography, the application of precession electron diffraction methods, twinned crystals, and zeolites (double issue). On four occasions, one or two issues of the journal have been dedicated to the memory of a crystallographer or mineralogist, usually with a theme associated with the individual's work and a description of their contribution to the field. These are summarised in the table below:
== Abstracting and indexing ==
The journal is abstracted and indexed in:
Chemical Abstracts Service
Current Contents/Physical, Chemical and Earth Sciences
EBSCO databases
Inspec
Science Citation Index Expanded
Scopus
According to the Journal Citation Reports, the journal has a 2015 impact factor of 2.560, and it is ranked 8th amongst the 26 crystallography journals.
== References ==
== External links ==
Official website
Low-energy electron diffraction (LEED) is a technique for the determination of the surface structure of single-crystalline materials by bombardment with a collimated beam of low-energy electrons (30–200 eV) and observation of diffracted electrons as spots on a fluorescent screen.
LEED may be used in one of two ways:
Qualitatively, where the diffraction pattern is recorded and analysis of the spot positions gives information on the symmetry of the surface structure. In the presence of an adsorbate the qualitative analysis may reveal information about the size and rotational alignment of the adsorbate unit cell with respect to the substrate unit cell.
Quantitatively, where the intensities of diffracted beams are recorded as a function of incident electron beam energy to generate the so-called I–V curves. By comparison with theoretical curves, these may provide accurate information on atomic positions on the surface at hand.
== Historical perspective ==
An electron-diffraction experiment similar to modern LEED was the first to observe the wavelike properties of electrons, but LEED was established as a ubiquitous tool in surface science only with advances in vacuum generation and electron detection techniques.
=== Davisson and Germer's discovery of electron diffraction ===
The theoretical possibility of electron diffraction first emerged in 1924, when Louis de Broglie introduced wave mechanics and proposed the wavelike nature of all particles. In his Nobel Prize-winning work, de Broglie postulated that the wavelength of a particle with linear momentum p is given by h/p, where h is the Planck constant.
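For the 30–200 eV energies used in LEED, the non-relativistic form λ = h/√(2mE) of the de Broglie relation suffices and yields wavelengths on the order of interatomic spacings, which is why these electrons diffract strongly from surface lattices. A quick check:

```python
import math

# Non-relativistic de Broglie wavelength λ = h / sqrt(2 m E) for the
# low electron energies used in LEED.
H = 6.62607015e-34      # Planck constant, J·s
M = 9.1093837015e-31    # electron mass, kg
E0 = 1.602176634e-19    # elementary charge, C

def debroglie_angstrom(energy_ev: float) -> float:
    """Electron wavelength in ångströms for kinetic energy in eV."""
    p = math.sqrt(2 * M * energy_ev * E0)
    return H / p * 1e10

# Across the LEED range the wavelength spans roughly 2.2 Å down to 0.9 Å,
# comparable to typical interatomic distances.
for e in (30, 150, 200):
    print(e, "eV ->", round(debroglie_angstrom(e), 2), "Å")
```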
The de Broglie hypothesis was confirmed experimentally at Bell Labs in 1927, when Clinton Davisson and Lester Germer fired low-energy electrons at a crystalline nickel target and observed that the angular dependence of the intensity of backscattered electrons showed diffraction patterns. These observations were consistent with the diffraction theory for X-rays developed by Bragg and Laue earlier. Before the acceptance of the de Broglie hypothesis, diffraction was believed to be an exclusive property of waves.
Davisson and Germer published notes of their electron-diffraction results in Nature and in Physical Review in 1927. One month after Davisson and Germer's work appeared, Thomson and Reid published their electron-diffraction work at higher kinetic energy (a thousand times higher than the energy used by Davisson and Germer) in the same journal. Those experiments revealed the wave property of electrons and opened up an era of electron-diffraction study.
=== Development of LEED as a tool in surface science ===
Though electron diffraction was discovered in 1927, low-energy electron diffraction did not become a popular tool for surface analysis until the early 1960s. The main reasons were that monitoring the directions and intensities of diffracted beams was a difficult experimental process owing to inadequate vacuum techniques and slow detection methods such as a Faraday cup. Also, since LEED is a surface-sensitive method, it required well-ordered surface structures; techniques for the preparation of clean metal surfaces only became available much later.
Nonetheless, H. E. Farnsworth and coworkers at Brown University pioneered the use of LEED as a method for characterizing the adsorption of gases onto clean metal surfaces and the associated regular adsorption phases, from shortly after the Davisson and Germer discovery into the 1970s.
In the early 1960s LEED experienced a renaissance, as ultra-high vacuum became widely available and the post-acceleration detection method was introduced by Germer and his coworkers at Bell Labs using a flat phosphor screen. Using this technique, diffracted electrons were accelerated to high energies to produce clear and visible diffraction patterns on the screen. Ironically, the post-acceleration method had already been proposed by Ehrenberg in 1934. In 1962 Lander and colleagues introduced the modern hemispherical screen with associated hemispherical grids. In the mid-1960s, modern LEED systems became commercially available as part of the ultra-high-vacuum instrumentation suite of Varian Associates and triggered an enormous boost of activity in surface science. Notably, the future Nobel Prize winner Gerhard Ertl started his studies of surface chemistry and catalysis on such a Varian system.
It soon became clear that the kinematic (single-scattering) theory, which had been successfully used to explain X-ray diffraction experiments, was inadequate for the quantitative interpretation of experimental data obtained from LEED. At this stage a detailed determination of surface structures, including adsorption sites, bond angles and bond lengths was not possible.
A dynamical electron-diffraction theory, which took into account the possibility of multiple scattering, was established in the late 1960s. With this theory, it later became possible to reproduce experimental data with high precision.
== Experimental setup ==
In order to keep the studied sample clean and free from unwanted adsorbates, LEED experiments are performed in an ultra-high-vacuum environment (residual gas pressure below 10⁻⁷ Pa).
=== LEED optics ===
The main components of a LEED instrument are:
An electron gun from which monochromatic electrons are emitted by a cathode filament that is at a negative potential, typically 10–600 V, with respect to the sample. The electrons are accelerated and focused into a beam, typically about 0.1 to 0.5 mm wide, by a series of electrodes serving as electron lenses. Some of the electrons incident on the sample surface are backscattered elastically, and diffraction can be detected if sufficient order exists on the surface. This typically requires a region of single crystal surface as wide as the electron beam, although sometimes polycrystalline surfaces such as highly oriented pyrolytic graphite (HOPG) are sufficient.
A high-pass filter for scattered electrons in the form of a retarding field analyzer, which blocks all but elastically scattered electrons. It usually contains three or four hemispherical concentric grids. Because only radial fields around the sample point are acceptable, and because the geometry of the sample and its surroundings is not spherical, the space between the sample and the analyzer has to be field-free. The first grid therefore separates the space above the sample from the retarding field. The next grid is at a negative potential to block low-energy electrons and is called the suppressor or the gate. To make the retarding field homogeneous and mechanically more stable, another grid at the same potential is added behind the second grid. The fourth grid is necessary only when the LEED optics is used as a tetrode and the current at the screen is measured, in which case it serves as a screen between the gate and the anode.
A hemispherical, positively biased fluorescent screen on which the diffraction pattern can be directly observed, or a position-sensitive electron detector. Most new LEED systems use a reverse-view scheme with a miniaturized electron gun, in which the pattern is viewed from behind through a transmission screen and a viewport. Recently, a digitized position-sensitive detector called a delay-line detector, with better dynamic range and resolution, has been developed.
=== Sample ===
The sample of the desired surface crystallographic orientation is initially cut and prepared outside the vacuum chamber. The correct alignment of the crystal can be achieved with the help of X-ray diffraction methods such as Laue diffraction. After being mounted in the UHV chamber the sample is cleaned and flattened. Unwanted surface contaminants are removed by ion sputtering or by chemical processes such as oxidation and reduction cycles. The surface is flattened by annealing at high temperatures.
Once a clean and well-defined surface is prepared, monolayers can be adsorbed on the surface by exposing it to a gas consisting of the desired adsorbate atoms or molecules.
Often the annealing process will let bulk impurities diffuse to the surface and therefore give rise to re-contamination after each cleaning cycle. The problem is that impurities that adsorb without changing the basic symmetry of the surface cannot easily be identified in the diffraction pattern. Therefore, in many LEED experiments Auger electron spectroscopy is used to accurately determine the purity of the sample.
=== Using the detector for Auger electron spectroscopy ===
LEED optics is, in some instruments, also used for Auger electron spectroscopy. To improve the measured signal, the gate voltage is scanned in a linear ramp. An RC circuit serves to derive the second derivative, which is then amplified and digitized. To reduce the noise, multiple passes are summed. The first derivative is very large due to the residual capacitive coupling between the gate and the anode and may degrade the performance of the circuit; this can be compensated by applying a negative ramp to the screen. It is also possible to add a small sine modulation to the gate voltage; a high-Q RLC circuit tuned to the second harmonic then detects the second derivative.
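The second-harmonic scheme above can be sketched numerically. In the following illustration (not instrument code; the test function, modulation amplitude, and sample count are invented), expanding I(V0 + a sin t) shows that the Fourier component at twice the modulation frequency appears as −(a²/4)·I″(V0)·cos(2t), so detecting the second harmonic yields the second derivative of the collected current:

```python
import numpy as np

def second_derivative_via_2nd_harmonic(I, V0, a=0.05, n=4096):
    """Estimate I''(V0) from the 2nd-harmonic content of I(V0 + a sin t)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    signal = I(V0 + a * np.sin(t))
    # Project the modulated signal onto cos(2t); the Taylor expansion puts
    # the 2nd-harmonic term at -(a**2/4) * I''(V0) * cos(2t).
    c2 = (2.0 / n) * np.sum(signal * np.cos(2.0 * t))
    return -4.0 * c2 / a ** 2

# For I(V) = V**3 the second derivative is 6 V, so at V0 = 2 we expect 12.
print(second_derivative_via_2nd_harmonic(lambda V: V ** 3, V0=2.0))
```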
=== Data acquisition ===
A modern data acquisition system usually contains a CCD/CMOS camera pointed to the screen for diffraction pattern visualization and a computer for data recording and further analysis. More expensive instruments have in-vacuum position sensitive electron detectors that measure the current directly, which helps in the quantitative I–V analysis of the diffraction spots.
== Theory ==
=== Surface sensitivity ===
The basic reason for the high surface sensitivity of LEED is that for low-energy electrons the interaction between the solid and electrons is especially strong. Upon penetrating the crystal, primary electrons will lose kinetic energy due to inelastic scattering processes such as plasmon and phonon excitations, as well as electron–electron interactions.
In cases where the detailed nature of the inelastic processes is unimportant, they are commonly treated by assuming an exponential decay of the primary electron-beam intensity I0 in the direction of propagation:
{\displaystyle I(d)=I_{0}\,e^{-d/\Lambda (E)}.}
Here d is the penetration depth, and Λ(E) denotes the inelastic mean free path, defined as the distance an electron can travel before its intensity has decreased by the factor 1/e. While the inelastic scattering processes, and consequently the mean free path, depend on the electron energy, they are relatively independent of the material. The mean free path turns out to be minimal (5–10 Å) in the energy range of low-energy electrons (20–200 eV). This effective attenuation means that only a few atomic layers are sampled by the electron beam, and, as a consequence, the contribution of deeper atoms to the diffraction progressively decreases.
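The consequence of this attenuation for depth sensitivity can be illustrated with a short calculation. The mean free path and interlayer spacing below are assumed, order-of-magnitude values; the factor of two accounts for the electron's path into the crystal and back out:

```python
import numpy as np

Lambda_mfp = 5.0   # inelastic mean free path, Angstrom (assumed typical value)
d = 2.0            # interlayer spacing, Angstrom (assumed)
layers = np.arange(6)

# Elastic signal from layer n is attenuated by exp(-2 n d / Lambda)
weights = np.exp(-2 * d * layers / Lambda_mfp)
weights /= weights.sum()   # normalized contribution of each layer
for n, w in zip(layers, weights):
    print(f"layer {n}: {100 * w:5.1f}% of the elastic signal")
```

With these numbers, more than half of the elastic signal comes from the topmost layer alone, which is the quantitative content of LEED's surface sensitivity.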
=== Kinematic theory: single scattering ===
Kinematic diffraction is defined as the situation where electrons impinging on a well-ordered crystal surface are elastically scattered only once by that surface. In the theory the electron beam is represented by a plane wave with a wavelength given by the de Broglie hypothesis:
{\displaystyle \lambda ={\frac {h}{\sqrt {2m_{\text{e}}E}}},\quad \lambda {\text{ [nm]}}\approx {\sqrt {\frac {1.5}{E{\text{ [eV]}}}}}.}
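As a quick check of this relation, the sketch below evaluates the exact expression against the practical rule of thumb (constants are CODATA values; the energies are arbitrary examples):

```python
import numpy as np

h = 6.62607015e-34       # Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # J per eV

def wavelength_nm(E_eV):
    """Exact non-relativistic de Broglie wavelength in nm."""
    return 1e9 * h / np.sqrt(2 * m_e * E_eV * eV)

for E in (20, 100, 200):
    print(f"{E:3d} eV: exact {wavelength_nm(E):.4f} nm,"
          f" rule of thumb {np.sqrt(1.5 / E):.4f} nm")
```

At 100 eV both give about 0.123 nm, i.e. a wavelength comparable to interatomic distances, which is why LEED electrons diffract from crystal surfaces.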
The interaction between the scatterers present in the surface and the incident electrons is most conveniently described in reciprocal space. In three dimensions the primitive reciprocal lattice vectors are related to the real space lattice {a, b, c} in the following way:
{\displaystyle \mathbf {a} ^{*}=2\pi {\frac {\mathbf {b} \times \mathbf {c} }{\mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} )}},\quad \mathbf {b} ^{*}=2\pi {\frac {\mathbf {c} \times \mathbf {a} }{\mathbf {b} \cdot (\mathbf {c} \times \mathbf {a} )}},\quad \mathbf {c} ^{*}=2\pi {\frac {\mathbf {a} \times \mathbf {b} }{\mathbf {c} \cdot (\mathbf {a} \times \mathbf {b} )}}.}
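These definitions translate directly into code. The sketch below (the simple cubic cell of side 2 is an arbitrary example) also verifies the defining property aᵢ*·aⱼ = 2π δᵢⱼ:

```python
import numpy as np

def reciprocal_basis(a, b, c):
    """Reciprocal lattice vectors a*, b*, c* from a real-space basis."""
    volume = np.dot(a, np.cross(b, c))   # cell volume a . (b x c)
    a_star = 2 * np.pi * np.cross(b, c) / volume
    b_star = 2 * np.pi * np.cross(c, a) / volume
    c_star = 2 * np.pi * np.cross(a, b) / volume
    return a_star, b_star, c_star

a = np.array([2.0, 0.0, 0.0])
b = np.array([0.0, 2.0, 0.0])
c = np.array([0.0, 0.0, 2.0])
a_star, b_star, c_star = reciprocal_basis(a, b, c)

# Defining property of the reciprocal basis: a_i* . a_j = 2 pi delta_ij
print(np.dot(a_star, a) / (2 * np.pi))   # 1.0
print(np.dot(a_star, b))                 # 0.0
```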
For an incident electron with wave vector k_i (|k_i| = 2π/λ_i) and scattered wave vector k_f (|k_f| = 2π/λ_f), the condition for constructive interference, and hence diffraction of the scattered electron waves, is given by the Laue condition:
{\displaystyle \mathbf {k} _{f}-\mathbf {k} _{i}=\mathbf {G} _{hkl},}
where (h, k, l) is a set of integers and G_hkl = h a* + k b* + l c* is a vector of the reciprocal lattice. These vectors specify the Fourier components of the charge density in reciprocal (momentum) space, and the incoming electrons scatter at these density modulations within the crystal lattice. The magnitudes of the wave vectors are unchanged, i.e. |k_f| = |k_i|, because only elastic scattering is considered.
Since the mean free path of low-energy electrons in a crystal is only a few angstroms, only the first few atomic layers contribute to the diffraction. This means that there are no diffraction conditions in the direction perpendicular to the sample surface. As a consequence, the reciprocal lattice of a surface is a 2D lattice with rods extending perpendicular from each lattice point. The rods can be pictured as regions where the reciprocal lattice points are infinitely dense.
Therefore, in the case of diffraction from a surface the Laue condition reduces to the 2D form:
{\displaystyle \mathbf {k} _{f}^{\parallel }-\mathbf {k} _{i}^{\parallel }=\mathbf {G} _{hk}=h\mathbf {a} ^{*}+k\mathbf {b} ^{*},}
where a* and b* are the primitive translation vectors of the 2D reciprocal lattice of the surface, and k_f^∥, k_i^∥ denote the components of the reflected and incident wave vectors parallel to the sample surface. The vectors a* and b* are related to the real-space surface lattice, with n̂ as the surface normal, in the following way:
{\displaystyle \mathbf {a} ^{*}=2\pi {\frac {\mathbf {b} \times {\hat {\mathbf {n} }}}{|\mathbf {a} \times \mathbf {b} |}},\quad \mathbf {b} ^{*}=2\pi {\frac {{\hat {\mathbf {n} }}\times \mathbf {a} }{|\mathbf {a} \times \mathbf {b} |}}.}
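A minimal sketch of these two surface relations, assuming an arbitrary rectangular surface cell for illustration:

```python
import numpy as np

def surface_reciprocal(a, b, n_hat):
    """2D surface reciprocal vectors: a* = 2pi (b x n)/|a x b|, b* = 2pi (n x a)/|a x b|."""
    area = np.linalg.norm(np.cross(a, b))   # area of the surface unit cell
    a_star = 2 * np.pi * np.cross(b, n_hat) / area
    b_star = 2 * np.pi * np.cross(n_hat, a) / area
    return a_star, b_star

# Rectangular 3 x 4 Angstrom surface cell (assumed example), normal along z
a = np.array([3.0, 0.0, 0.0])
b = np.array([0.0, 4.0, 0.0])
n_hat = np.array([0.0, 0.0, 1.0])
a_star, b_star = surface_reciprocal(a, b, n_hat)
print(a_star)   # along +x with magnitude 2*pi/3
print(b_star)   # along +y with magnitude 2*pi/4
```

Note the inverse relationship: the longer real-space vector maps to the shorter reciprocal vector, which is why widely spaced rows produce closely spaced LEED spots.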
The Laue-condition equation can readily be visualized using the Ewald sphere construction. Figures 3 and 4 show a simple illustration of this principle: the wave vector k_i of the incident electron beam is drawn such that it terminates at a reciprocal lattice point. The Ewald sphere is then the sphere with radius |k_i| and origin at the center of the incident wave vector. By construction, every wave vector centered at the origin and terminating at an intersection between a rod and the sphere will then satisfy the 2D Laue condition and thus represent an allowed diffracted beam.
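The Ewald construction also explains how the number of visible beams grows with energy: the rod at G_hk = h a* + k b* yields a diffracted beam whenever |G_hk| ≤ |k_i|. A sketch, assuming a square surface lattice with 3 Å spacing and the practical wavelength formula λ[Å] ≈ √(150.4/E[eV]):

```python
import numpy as np

def allowed_beams(E_eV, a_len=3.0):
    """List the (h, k) beams whose rods intersect the Ewald sphere at normal incidence."""
    lam = np.sqrt(150.4 / E_eV)      # electron wavelength, Angstrom
    k = 2.0 * np.pi / lam            # |k_i|, 1/Angstrom
    g = 2.0 * np.pi / a_len          # square lattice: |a*| = |b*| = g
    return [(h, kk)
            for h in range(-10, 11) for kk in range(-10, 11)
            if np.hypot(h * g, kk * g) <= k]

# The Ewald sphere grows with energy, so more rods intersect it:
print(len(allowed_beams(50)))    # 9 beams
print(len(allowed_beams(200)))   # 37 beams
```

This is the quantitative version of the statement below that the incident electron energy controls the number of diffraction spots on the screen.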
=== Interpretation of LEED patterns ===
Figure 4 shows the Ewald sphere for the case of normal incidence of the primary electron beam, as would be the case in an actual LEED setup. It is apparent that the pattern observed on the fluorescent screen is a direct picture of the reciprocal lattice of the surface. The spots are indexed according to the values of h and k. The size of the Ewald sphere, and hence the number of diffraction spots on the screen, is controlled by the incident electron energy. From knowledge of the reciprocal lattice, models for the real-space lattice can be constructed and the surface can be characterized, at least qualitatively, in terms of the surface periodicity and the point group. Figure 7 shows a model of an unreconstructed (100) face of a simple cubic crystal together with the expected LEED pattern. Since these patterns can be inferred from the crystal structure of the bulk crystal, known from other, more quantitative diffraction techniques, LEED is most interesting in the cases where the surface layers of a material reconstruct, or where surface adsorbates form their own superstructures.
==== Superstructures ====
Overlaying superstructures on a substrate surface may introduce additional spots in the known (1×1) arrangement. These are known as extra spots or super spots. Figure 6 shows many such spots appearing after a simple hexagonal surface of a metal has been covered with a layer of graphene. Figure 7 shows a schematic of real and reciprocal space lattices for a simple (1×2) superstructure on a square lattice.
For a commensurate superstructure, the symmetry and the rotational alignment with respect to the adsorbent surface can be determined from the LEED pattern. This is most easily shown using a matrix notation, in which the primitive translation vectors of the superlattice {as, bs} are linked to the primitive translation vectors of the underlying (1×1) lattice {a, b} in the following way:
{\displaystyle {\begin{aligned}{\textbf {a}}_{s}&=G_{11}{\textbf {a}}+G_{12}{\textbf {b}},\\{\textbf {b}}_{s}&=G_{21}{\textbf {a}}+G_{22}{\textbf {b}}.\end{aligned}}}
The matrix for the superstructure then is
{\displaystyle G=\left({\begin{array}{cc}G_{11}&G_{12}\\G_{21}&G_{22}\end{array}}\right).}
Similarly, the primitive translation vectors of the lattice describing the extra spots {a∗s, b∗s} are linked to the primitive translation vectors of the reciprocal lattice {a∗, b∗} in the following way:
{\displaystyle {\begin{aligned}{\textbf {a}}_{s}^{*}&=G_{11}^{*}{\textbf {a}}^{*}+G_{12}^{*}{\textbf {b}}^{*},\\{\textbf {b}}_{s}^{*}&=G_{21}^{*}{\textbf {a}}^{*}+G_{22}^{*}{\textbf {b}}^{*}.\end{aligned}}}
G∗ is related to G in the following way:
{\displaystyle G^{*}=(G^{-1})^{\text{T}}={\frac {1}{\det(G)}}\left({\begin{array}{cc}G_{22}&-G_{21}\\-G_{12}&G_{11}\end{array}}\right).}
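The matrix relation can be checked numerically. The (1×2) superstructure matrix below is an assumed example; the inverse-transpose and the closed form agree:

```python
import numpy as np

# Superstructure matrix for an assumed (1x2) overlayer on a square lattice
G = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# Reciprocal superstructure matrix G* = (G^-1)^T
G_star = np.linalg.inv(G).T

# Closed form: (1/det G) [[G22, -G21], [-G12, G11]]
closed = np.array([[G[1, 1], -G[1, 0]],
                   [-G[0, 1], G[0, 0]]]) / np.linalg.det(G)

print(G_star)                        # reciprocal superstructure matrix
print(np.allclose(G_star, closed))   # True
```

For this example G* = diag(1, 1/2): doubling the real-space period along b halves the spot spacing along b*, i.e. it inserts the half-order "extra" spots.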
==== Domains ====
An essential problem when considering LEED patterns is the existence of symmetrically equivalent domains. Domains may lead to diffraction patterns that have a higher symmetry than the actual surface at hand. The reason is that usually the cross-sectional area of the primary electron beam (~1 mm²) is large compared to the average domain size on the surface, and hence the LEED pattern may be a superposition of diffraction beams from domains oriented along different axes of the substrate lattice.
However, since the average domain size is generally larger than the coherence length of the probing electrons, interference between electrons scattered from different domains can be neglected. Therefore, the total LEED pattern emerges as the incoherent sum of the diffraction patterns associated with the individual domains.
Figure 8 shows the superposition of the diffraction patterns for the two orthogonal domains (2×1) and (1×2) on a square lattice, i.e. for the case where one structure is just rotated by 90° with respect to the other. The (1×2) structure and the respective LEED pattern are shown in Figure 7. It is apparent that the local symmetry of the surface structure is twofold while the LEED pattern exhibits a fourfold symmetry.
Figure 1 shows a real diffraction pattern of the same situation for the case of a Si(100) surface. However, here the (2×1) structure is formed due to surface reconstruction.
=== Dynamical theory: multiple scattering ===
Inspection of the LEED pattern gives a qualitative picture of the surface periodicity, i.e. the size of the surface unit cell and, to a certain degree, the surface symmetries. However, it gives no information about the atomic arrangement within a surface unit cell or the sites of adsorbed atoms. For instance, when the whole superstructure in Figure 7 is shifted such that the atoms adsorb in bridge sites instead of on-top sites, the LEED pattern stays the same, although the individual spot intensities may differ somewhat.
A more quantitative analysis of LEED experimental data can be achieved by analysis of so-called I–V curves, which are measurements of the intensity versus incident electron energy. The I–V curves can be recorded by using a camera connected to computer-controlled data handling or by direct measurement with a movable Faraday cup. The experimental curves are then compared to computer calculations based on the assumption of a particular model system. The model is changed in an iterative process until a satisfactory agreement between experimental and theoretical curves is achieved. A quantitative measure of this agreement is the so-called reliability factor, or R-factor. A commonly used reliability factor is the one proposed by Pendry. It is expressed in terms of the logarithmic derivative of the intensity:
{\displaystyle L(E)=I'/I.}
The R-factor is then given by:
{\displaystyle R=\sum _{g}\int {\big (}Y_{g,{\text{th}}}(E)-Y_{g,{\text{expt}}}(E){\big )}^{2}\,dE{\bigg /}\sum _{g}\int {\big (}Y_{g,{\text{th}}}^{2}(E)+Y_{g,{\text{expt}}}^{2}(E){\big )}\,dE,}
where Y(E) = L⁻¹/(L⁻² + V_oi²), and V_oi is the imaginary part of the electron self-energy. In general, R_p ≤ 0.2 is considered good agreement, R_p ≃ 0.3 mediocre, and R_p ≃ 0.5 poor agreement. Figure 9 shows examples of the comparison between experimental I–V spectra and theoretical calculations.
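A compact sketch of the Pendry comparison follows. The energy grid, test curves, and the value of V_oi are invented for illustration; Y(E) is evaluated in the algebraically equivalent form L/(1 + V_oi²L²), which stays finite where I′ vanishes:

```python
import numpy as np

def pendry_Y(E, I, V0i=4.0):
    """Pendry Y function, Y = L^-1 / (L^-2 + V0i^2), with L = I'/I."""
    L = np.gradient(I, E) / I
    # Same expression rewritten as L / (1 + (V0i*L)**2) to avoid 1/L at L = 0
    return L / (1.0 + (V0i * L) ** 2)

def pendry_R(E, I_th, I_expt, V0i=4.0):
    """Single-beam Pendry R-factor on a uniform energy grid (step cancels)."""
    Y_th = pendry_Y(E, I_th, V0i)
    Y_ex = pendry_Y(E, I_expt, V0i)
    return np.sum((Y_th - Y_ex) ** 2) / np.sum(Y_th ** 2 + Y_ex ** 2)

E = np.linspace(50.0, 300.0, 500)           # energy grid, eV (invented)
I_th = 1.0 + 0.5 * np.sin(E / 15.0)          # mock "theoretical" I-V curve
I_ex = 1.0 + 0.5 * np.sin(E / 14.0)          # mock "experimental" curve

print(pendry_R(E, I_th, I_th))   # identical curves give R = 0
print(pendry_R(E, I_th, I_ex))   # mismatched curves give R > 0
```

In a real analysis the sum runs over all measured beams g, and the model structure is varied until R drops into the "good agreement" range quoted above.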
=== Dynamical LEED calculations ===
The term dynamical stems from the studies of X-ray diffraction and describes the situation where the response of the crystal to an incident wave is included self-consistently and multiple scattering can occur. The aim of any dynamical LEED theory is to calculate the intensities of diffraction of an electron beam impinging on a surface as accurately as possible.
A common method to achieve this is the self-consistent multiple scattering approach. One essential point in this approach is the assumption that the scattering properties of the surface, i.e. of the individual atoms, are known in detail. The main task then reduces to the determination of the effective wave field incident on the individual scatterers present in the surface, where the effective field is the sum of the primary field and the field emitted from all the other atoms. This must be done in a self-consistent way, since the emitted field of an atom depends on the effective field incident upon it. Once the effective field incident on each atom is determined, the total field emitted from all atoms can be found, and its asymptotic value far from the crystal then gives the desired intensities.
A common approach in LEED calculations is to describe the scattering potential of the crystal by a "muffin tin" model, where the crystal potential can be imagined being divided up by non-overlapping spheres centered at each atom such that the potential has a spherically symmetric form inside the spheres and is constant everywhere else. The choice of this potential reduces the problem to scattering from spherical potentials, which can be dealt with effectively. The task is then to solve the Schrödinger equation for an incident electron wave in that "muffin tin" potential.
== Related techniques ==
=== Tensor LEED ===
In LEED the exact atomic configuration of a surface is determined by a trial-and-error process in which measured I–V curves are compared to computer-calculated spectra under the assumption of a model structure. From an initial reference structure a set of trial structures is created by varying the model parameters. The parameters are changed until an optimal agreement between theory and experiment is achieved. However, for each trial structure a full LEED calculation with multiple scattering corrections must be conducted. For systems with a large parameter space the computational time required may become significant. This is the case for complex surface structures or when considering large molecules as adsorbates.
Tensor LEED is an attempt to reduce the computational effort needed by avoiding full LEED calculations for each trial structure. The scheme is as follows: One first defines a reference surface structure for which the I–V spectrum is calculated. Next a trial structure is created by displacing some of the atoms. If the displacements are small the trial structure can be considered as a small perturbation of the reference structure and first-order perturbation theory can be used to determine the I–V curves of a large set of trial structures.
=== Spot profile analysis low-energy electron diffraction (SPA-LEED) ===
A real surface is not perfectly periodic but has many imperfections in the form of dislocations, atomic steps, terraces and the presence of unwanted adsorbed atoms. This departure from a perfect surface leads to a broadening of the diffraction spots and adds to the background intensity in the LEED pattern.
SPA-LEED is a technique where the profile and shape of the intensity of diffraction beam spots is measured. The spots are sensitive to the irregularities in the surface structure and their examination therefore permits more-detailed conclusions about some surface characteristics. Using SPA-LEED may for instance permit a quantitative determination of the surface roughness, terrace sizes, dislocation arrays, surface steps and adsorbates.
Although some degree of spot profile analysis can be performed in regular LEED and even LEEM setups, dedicated SPA-LEED setups, which scan the profile of the diffraction spot over a dedicated channeltron detector, allow for much higher dynamic range and profile resolution.
=== Other ===
Spin-polarized low energy electron diffraction
Inelastic low energy electron diffraction
Very low-energy electron diffraction (VLEED)
Reflection high-energy electron diffraction (RHEED)
Ultrafast low-energy electron diffraction (ULEED)
== See also ==
List of surface analysis methods
== External links ==
LEED program packages
LEED pattern analyzer (LEEDpat)
== References == | Wikipedia/Low-energy_electron_diffraction |
This is a timeline of crystallography.
== 17th century ==
1669 - In his book De solido intra solidum naturaliter contento Nicolas Steno asserted that, although the number and size of crystal faces may vary from one crystal to another, the angles between corresponding faces are always the same. This was the original statement of the first law of crystallography (Steno's law).
== 18th century ==
1723 - Moritz Anton Cappeller introduced the term crystallography in his book Prodromus Crystallographiae De Crystallis Improprie Sic Dictis Commentarium.
1766 - Pierre-Joseph Macquer, in his Dictionnaire de Chymie, promoted mechanisms of crystallization based on the idea that crystals are composed of polyhedral molecules (primitive integrantes).
1772 - Jean-Baptiste L. Romé de l'Isle developed geometrical ideas on crystal structure in his Essai de Cristallographie. He also described the twinning phenomenon in crystals.
1781 - Abbé René Just Haüy (often termed the "Father of Modern Crystallography") discovered that crystals always cleave along crystallographic planes. Based on this observation, and the fact that the inter-facial angles in each crystal species always have the same value, Haüy concluded that crystals must be periodic and composed of regularly arranged rows of tiny polyhedra (molécules intégrantes). This theory explained why all crystal planes are related by small rational numbers (the law of rational indices).
1783 - Jean-Baptiste L. Romé de l'Isle in the second edition of his Cristallographie used the contact goniometer to discover the law of constancy of interfacial angles: angles are constant and characteristic for crystals of the same chemical substance.
1784 - René Just Haüy published his law of decrements: a crystal is composed of molecules arranged periodically in three dimensions.
1795 - René Just Haüy lectured on his law of symmetry: "the manner in which Nature creates crystals is always obeying ... the law of the greatest possible symmetry, in the sense that oppositely situated but corresponding parts are always equal in number, arrangement, and form of their faces".
== 19th century ==
1801 - René Just Haüy published his multi-volume Traité de Minéralogie in Paris. A second edition under the title Traité de Cristallographie was published in 1822.
1801 - Déodat de Dolomieu published his Sur la philosophie minéralogique et sur l'espèce minéralogique in Paris.
1815 - René Just Haüy published his law of symmetry.
1815 - Christian Samuel Weiss, founder of the dynamist school of crystallography, developed a geometric treatment of crystals in which crystallographic axes are the basis for classification of crystals rather than Haüy's polyhedral molecules.
1819 - Eilhard Mitscherlich discovered crystallographic isomorphism.
1822 - Friedrich Mohs attempted to bring the molecular approach of Haüy and the geometric approach of Weiss into agreement.
1823 - Franz Ernst Neumann invented a system of crystal face notation, by using the reciprocals of the intercepts with crystal axes, which becomes the standard for the next 60 years.
1824 - Ludwig August Seeber conceived of the concept of using an array of discrete (molecular) points to represent a crystal.
1826 - Moritz Ludwig Frankenheim derived the 32 crystal classes by using the crystallographic restriction, consistent with Haüy's laws, that only 2, 3, 4 and 6-fold rotational axes are permitted.
1830 - Johann F. C. Hessel published an independent geometrical derivation of the 32 point groups (crystal classes).
1832 - Friedrich Wöhler and Justus von Liebig discovered polymorphism in molecular crystals, using the example of benzamide.
1839 - William Hallowes Miller invented zonal relations by projecting the faces of a crystal upon the surface of a circumscribed sphere. Miller indices are defined which form a notation system in crystallography for planes in crystal (Bravais) lattices.
1840 - Gabriel Delafosse, independently of Seeber, represented crystal structure as an array of discrete points generated by defined translations.
1842 - Moritz Frankenheim derived 15 different theoretical networks of points in space not dependent on molecular shape.
1848 - Louis Pasteur discovered that sodium ammonium tartrate can crystallize in left- and right-handed forms and showed that the two forms can rotate polarized light in opposite directions. This was the first demonstration of molecular chirality, and also the first explanation of isomerism.
1850 - Auguste Bravais derived the 14 space lattices.
1869 - Axel Gadolin, independently of Hessel, derived the 32 crystal classes using stereographic projection.
1877 - Paul Heinrich von Groth founded the journal Zeitschrift für Krystallographie und Mineralogie, and served as its editor for 44 years.
1877 - Ernest-François Mallard, building on the work of Auguste Bravais, published a memoir on optically "anomalous" crystals (that is, those crystals the morphology of which seems to be of greater symmetry than their optics), in which the importance of crystal twinning and "pseudosymmetry" were used as explanatory concepts.
1879 - Leonhard Sohncke listed the 65 crystallographic point systems using rotations and reflections in addition to translations.
1888 - Friedrich Reinitzer discovered the existence of liquid crystals during investigations of cholesteryl benzoate.
1889 - Otto Lehmann, after receiving a letter from Friedrich Reinitzer, used polarizing light to explain the phenomenon of liquid crystals.
1891 - Derivation of the 230 space groups (by adding mirror-image symmetry to Sohncke's work) by a collaborative effort of Evgraf Fedorov and Arthur Schoenflies.
1894 - William Barlow, using a sphere packing approach, independently derived the 230 space groups.
1894 - Pierre Curie described what is now called Curie's principle for the symmetry properties of crystals.
1895 - Wilhelm Conrad Röntgen on 8 November 1895 produced and detected electromagnetic radiation in a wavelength range now known as X-rays or Röntgen rays, an achievement that earned him the first Nobel Prize in Physics in 1901. X-rays became the major mode of crystallographic research in the 20th century.
1899 - Hermanus Haga and Cornelis Wind observed X-ray diffuse broadening through a slit and deduced that the wavelength of X-rays is on the order of an angstrom.
== 20th century ==
1905 - Charles Glover Barkla discovered the X-ray polarization effect.
1908 - Bernhard Walter and Robert Wichard Pohl observed X-ray diffraction from a slit.
1912 - Max von Laue discovered diffraction patterns from crystals in an x-ray beam.
1912 - Bragg diffraction, expressed through Bragg's law, is first presented by Lawrence Bragg on 11 November 1912 to the Cambridge Philosophical Society.
1912 - Heinrich Baumhauer discovered and described polytypism in crystals of carborundum, or silicon carbide.
1913 - Lawrence Bragg published the first observation of x-ray diffraction by crystals. Similar observations were also published by Torahiko Terada in the same year.
1913 - Georges Friedel stated Friedel's law, a property of Fourier transforms of real functions. Friedel's law is used in X-ray diffraction, crystallography and scattering from real potential within the Born approximation.
1914 - Max von Laue won the Nobel Prize in Physics "for his discovery of the diffraction of X-rays by crystals."
1915 - William and Lawrence Bragg published the book X rays and crystal structure and shared the Nobel Prize in Physics "for their services in the analysis of crystal structure by means of X-rays."
1916 - Peter Debye and Paul Scherrer discovered powder (polycrystalline) diffraction.
1916 - Paul Peter Ewald predicted the Pendellösung effect, which is a foundational aspect of the dynamical diffraction theory of X rays.
1917 - Albert W. Hull independently discovered powder diffraction in researching the crystal structure of metals.
1920 - Reginald Oliver Herzog and Willi Jancke published the first systematic analysis of X-ray diffraction patterns of cellulose extracted from a variety of sources.
1921 - Paul Peter Ewald introduced a spherical construction for explaining the occurrence of diffraction spots, which is now called Ewald's sphere.
1922 - Charles Galton Darwin formulated the theory of X-ray diffraction from imperfect crystals and introduced the concept of mosaicity in crystallography.
1922 - Ralph Wyckoff published a book containing tables with the positional coordinates permitted by the symmetry elements. These positions are now known as Wyckoff positions. This book was the forerunner of the International tables for crystallography, which first appeared in 1935.
1923 - Roscoe Dickinson and Albert Raymond, and independently, H.J. Gonell and Hermann Mark, first showed that an organic molecule, specifically hexamethylenetetramine, could be characterized by X-ray crystallography.
1923 - William H. Bragg and Reginald E. Gibbs elucidated the structure of quartz.
1923 - Paul Peter Ewald published his book Kristalle und Röntgenstrahlen (Crystals and X-rays).
1924 - Louis de Broglie in his PhD thesis Recherches sur la théorie des quanta introduced his theory of electron waves. This was the start of electron and neutron diffraction and crystallography.
1924 - J.D. Bernal established the structure of graphite.
1926 - Victor Goldschmidt distinguished between atomic and ionic radii and postulated some rules for atom substitution in crystal structures.
1927 - Frits Zernike and Jan Albert Prins proposed the pair distribution function for analyzing molecular structures in solution-phase diffraction.
1927 - Two groups demonstrated electron diffraction, the first the Davisson–Germer experiment, the other by George Paget Thomson and Alexander Reid. Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident.
1928 - Felix Machatschki, working with Goldschmidt, showed that silicon can be replaced by aluminium in feldspar structures.
1928 - Kathleen Lonsdale used X-rays to determine that the structure of benzene is a flat hexagonal ring.
1928 - Paul Niggli introduced reduced cells for simplifying structures using a technique now known as Niggli reduction.
1928 - Hans Bethe published the first non-relativistic explanation of electron diffraction based upon Schrödinger's equation, which remains central to all further analysis.
1928 - Carl Hermann introduced and Charles Mauguin modified the international standard notation for crystallographic groups called Hermann–Mauguin notation.
1929 - Linus Pauling formulated a set of rules (later called Pauling's rules) to describe the structure of complex ionic crystals.
1929 - William Howard Barnes published the crystal structure of ice.
1930 - Lawrence Bragg assembled the first classification of silicates, describing their structure in terms of grouping of SiO4 tetrahedra.
1930 - Gas electron diffraction was developed by Herman Mark and Raymond Wierl.
1931 - Paul Ewald and Carl Hermann published the first volume of the Strukturbericht (Structure Report), which established the systematic classification of crystal structure prototypes, also known as the Strukturbericht designation.
1931 - Fritz Laves enumerated the Laves tilings for the first time.
1932 - W. H. Zachariasen published an article entitled The atomic arrangement in glass, which perhaps had more influence than any other published work on the science of glass.
1932 - Friedrich Rinne introduced the concept of paracrystallinity for liquid crystals and amorphous materials.
1932 - Vadim E. Lashkaryov and Ilya D. Usyskin determined the positions of hydrogen atoms in ammonium chloride crystals using electron diffraction.
1934 - Arthur Patterson introduced the Patterson function which uses diffraction intensities to determine the interatomic distances within a crystal, setting limits to the possible phase values for the reflected X-rays.
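The Patterson function is the Fourier transform of the measured intensities |F|², which equals the autocorrelation of the density: its peaks sit at interatomic difference vectors, so it can be computed without any phase information. A 1-D sketch with two hypothetical point atoms in an 8-cell unit cell:

```python
def patterson(rho):
    """Cyclic autocorrelation P[u] = sum_x rho[x] * rho[(x+u) mod N],
    equivalent to the Fourier transform of the intensities |F|^2."""
    n = len(rho)
    return [sum(rho[x] * rho[(x + u) % n] for x in range(n)) for u in range(n)]

# Two unit "atoms" separated by 3 cells in an 8-cell unit cell
rho = [0] * 8
rho[1] = rho[4] = 1
P = patterson(rho)
# P peaks at u = 0 (self-vectors) and at u = 3 and u = 8 - 3 = 5 (the interatomic vector)
```

The origin peak P[0] counts the self-vectors of all atoms; the off-origin peaks at u = 3 and u = 5 encode the 3-cell separation, which is what Patterson methods exploit to locate heavy atoms.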
1934 - Martin Julian Buerger developed the equi-inclination Weissenberg X-ray camera. Buerger invented the precession camera in 1942.
1934 - C. Arnold Beevers and Henry Lipson invented the Beevers–Lipson strip as a calculation aid for Fourier methods for the determination of the crystal structure of CuSO4·5H2O.
1934 - Fritz Laves investigated the structures of intermetallic compounds of formula AB2. These structures were subsequently named Laves phases.
1935 - First publication of the International tables for the determination of crystal structures edited by Carl Hermann. The successor volumes are currently published by IUCr as the International tables for crystallography.
1935 - William Astbury established the structure of keratin using X-ray crystallography; this work provided the foundation for Linus Pauling's 1951 discovery of the α-helix.
1936 - Peter Debye won the Nobel Prize in Chemistry "for his contributions to our knowledge of molecular structure through his investigations on dipole moments and on the diffraction of X-rays and electrons in gases."
1936 - Hans Boersch showed that electron microscopes could be used as micro-diffraction cameras with an aperture, the birth of selected area electron diffraction.
1937 - Clinton Joseph Davisson and George Paget Thomson shared the Nobel Prize in Physics "for their experimental discovery of the diffraction of electrons by crystals."
1939 - Linus Pauling published the book The Nature of the Chemical Bond and the Structure of Molecules and Crystals.
1939 - André Guinier discovered small-angle X-ray scattering.
1939 - Walther Kossel and Gottfried Möllenstedt published the first work on convergent beam electron diffraction (CBED). It was extended by Peter Goodman and Gunter Lehmpfuhl, and then mainly by the groups of John Steeds and Michiyoshi Tanaka, who showed how to use CBED patterns to determine point groups and space groups.
1941 - The International Centre for Diffraction Data was founded.
1945 - George W. Brindley and Keith Robinson solved the crystal structure of kaolinite.
1945 - The crystal structure of the perovskite BaTiO3 was first published by Helen Megaw based on barium titanate X-ray diffraction data.
1945 - A.F. Wells published the classic reference book, Structural inorganic chemistry, which subsequently went through five editions.
1946 - Foundation of the International Union of Crystallography.
1946 - James Batcheller Sumner shared the Nobel Prize in Chemistry "for his discovery that enzymes can be crystallized".
1947 - Lewis Stephen Ramsdell systematically classified the polytypes of silicon carbide, and introduced the Ramsdell notation.
1948 - The first congress and general assembly of the International Union of Crystallography was held at Harvard University.
1948 - Acta Crystallographica was founded by the International Union of Crystallography (IUCr) with P.P. Ewald as its first editor.
1948 - Ernest O. Wollan and Clifford Shull published the first series of neutron diffraction experiments for crystallography performed at the Oak Ridge National Laboratory.
1948 - George Pake used solid state NMR spectroscopy to determine hydrogen atom distances in a single crystal of gypsum.
1949 - Clifford Shull opened a new field of magnetic crystallography based on neutron diffraction.
1950 - Jerome Karle and Herbert A. Hauptman introduced formulae for phase determination known as direct methods.
1951 - Johannes Martin Bijvoet and his colleagues, using anomalous scattering, confirmed that Emil Fischer's arbitrary assignment of absolute configuration, made in relation to the direction of optical rotation of polarized light, was correct in practice.
1951 - Linus Pauling determined the structure of the α-helix and the β-sheet in polypeptide chains.
1951 - Alexei Vasilievich Shubnikov published Symmetry and antisymmetry of finite figures which opened up the field of antisymmetry in magnetic structures.
1952 - David Sayre suggested that the phase problem could be more easily solved by having at least one more intensity measurement beyond those of the Bragg peaks in each dimension. This concept is understood today as oversampling.
1952 - Geoffrey Wilkinson and Ernst Otto Fischer determined the structure of ferrocene, the first metallic sandwich compound, for which they won the 1973 Nobel Prize in Chemistry. The structure was soon refined by Jack Dunitz, Leslie Orgel, and Alexander Rich.
1953 - Arne Magnéli introduced the term homologous series to describe polytypes of transition metal oxides that exhibit crystallographic shear structures.
1953 - Determination of the structure of DNA by three British teams, for which James Watson, Francis Crick and Maurice Wilkins won the 1962 Nobel Prize in Physiology or Medicine (Rosalind Franklin's death in 1958 made her ineligible for the award).
1954 - Ukichiro Nakaya's book Snow Crystals: Natural and Artificial, dedicated to the modern study of snow crystals, is published.
1954 - Linus Pauling won the Nobel Prize in Chemistry "for his research into the nature of the chemical bond and its application to the elucidation of the structure of complex substances."
1956 - Durward W. J. Cruickshank developed the theoretical framework for anisotropic displacement parameters, also known as the thermal ellipsoid.
1956 - James Menter published the first electron microscope images showing the lattice structure of a material.
1958 - William Burton Pearson published A Handbook of Lattice Spacings and Structures of Metals and Alloys, where he introduced the Pearson symbols for crystal structure types.
1959 - Norio Kato and Andrew Richard Lang observed Pendellösung fringes in X-ray diffraction from silicon and quartz. The observation of similar fringes in neutron diffraction was made by Clifford Shull in 1968.
1960 - John Kendrew determined the structure of myoglobin for which he shared the 1962 Nobel Prize in Chemistry.
1960 - After many years of research, Max Perutz determined the structure of haemoglobin for which he shared the 1962 Nobel Prize in Chemistry.
1960 - Lester Germer and his coworkers at Bell Labs built the first modern low-energy electron diffraction camera, combining a flat phosphor screen with ultra-high vacuum; this was the start of quantitative surface crystallography.
1962 - Alan Mackay demonstrated that spheres can be close-packed to yield icosahedral structures.
1962 - Michael Rossmann and David Blow laid the foundation for the molecular replacement approach which provides phase information without requiring additional experimental effort.
1962 - Max Perutz and John Kendrew shared the Nobel Prize for Chemistry "for their studies of the structures of globular proteins", namely haemoglobin and myoglobin respectively.
1962 - James Watson, Francis Crick and Maurice Wilkins won the Nobel Prize in Physiology or Medicine "for their discoveries concerning the molecular structure of nucleic acids and its significance for information transfer in living material," specifically for their determination of the structure of DNA.
1963 - Isabella Karle developed the symbolic addition procedure in direct methods for inverting X-ray diffraction data.
1963 - Jürg Waser introduced the restrained least-squares method, also known as regularized least squares, for crystallographic structure fitting.
1964 - Dorothy Hodgkin won the Nobel Prize for Chemistry "for her determinations by X-ray techniques of the structures of important biochemical substances." The substances included penicillin and vitamin B12.
1965 - David Chilton Phillips, Louise Johnson and their co-workers published the structure of lysozyme, the first enzyme to have its structure determined.
1965 - Olga Kennard established the Cambridge Structural Database.
1967 - Hugo Rietveld invented the Rietveld refinement method for computation of crystal structures.
1968 - Erwin Félix Lewy-Bertaut introduced magnetic space groups to account for the spin ordering of magnetic structures observed in neutron crystallography.
1968 - Aaron Klug and David DeRosier used electron microscopy to visualise the structure of the tail of bacteriophage T4, a common virus, thus signalling a breakthrough in macromolecular structure determination.
1968 - Dorothy Hodgkin, after 35 years of work, finally deciphered the structure of insulin.
1969 - Benno P. Schoenborn conducted the first structural study of macromolecules (myoglobin) by neutron diffraction at the Brookhaven National Laboratory.
1970 - Albert Crewe demonstrated imaging of single atoms in a scanning transmission electron microscope.
1971 - Establishment of the Protein Data Bank (PDB). At the PDB, Edgar Meyer developed the first general software tools for handling and visualizing protein structural data.
1971 - Gerd Rosenbaum, Kenneth Holmes, and Jean Witz first discussed the potential of synchrotron X-ray diffraction for biological applications.
1972 - The first quantitative matching of atomic scale images and dynamical simulations was published by J. G. Allpress, E. A. Hewat, A. F. Moodie and J. V. Sanders.
1972 - Michael Glazer established the classification of octahedral tilting patterns in perovskite crystal structures, later also known as the Glazer tilts.
1973 - Alex Rich's group published the first report of a polynucleotide crystal structure, that of the yeast transfer RNA (tRNA) for phenylalanine.
1973 - Geoffrey Wilkinson and Ernst Fischer shared the Nobel Prize in Chemistry "for their pioneering work, performed independently, on the chemistry of the organometallic, so called sandwich compounds", specifically the structure of ferrocene.
1976 - Douglas L. Dorset and Herbert A. Hauptman used direct methods to solve crystal structures from electron diffraction data.
1976 - Boris Delaunay, building on his work in the 1930s, proved that the regularity of a system of points, an (r, R) system or Delone set, can be established by postulating the points' congruence within a sphere of a defined finite radius.
1976 - William Lipscomb won the Nobel Prize in Chemistry "for his studies on the structure of boranes illuminating problems of chemical bonding."
1978 - Stephen C. Harrison provided the first high-resolution structure of a virus: tomato bushy stunt virus which is icosahedral in form.
1978 - Günter Bergerhoff and I. David Brown initiated the Inorganic Crystal Structure Database.
1979 - The first award of the Gregori Aminoff Prize for a contribution in the field of crystallography is made by the Royal Swedish Academy of Sciences to Paul Peter Ewald.
1979 - A team involving Alfred Y. Cho and others at Bell Labs made the first reconstruction of atomic structures at the materials interface between gallium arsenide and aluminium using X-ray diffraction.
1980 - Jerome Karle and Wayne Hendrickson developed multi-wavelength anomalous dispersion (MAD), a technique to facilitate the determination of the three-dimensional structure of biological macromolecules via a solution of the phase problem.
1982 - Aaron Klug won the Nobel Prize in Chemistry "for his development of crystallographic electron microscopy and his structural elucidation of biologically important nucleic acid-protein complexes."
1983 - John R. Helliwell promoted the use of synchrotron radiation in the crystallography of molecular biology.
1983 - Effectively simultaneously, Ian Robinson used surface X-ray diffraction (SXRD) to solve the structure of the gold 2x1 (110) surface, Laurence D. Marks used electron microscopy, and Gerd Binnig and Heinrich Rohrer used the scanning tunneling microscope.
1984 - A team led by Dan Shechtman also involving Ilan Blech, Denis Gratias, and John W. Cahn discovered quasicrystals in a metallic alloy. These structures have no unit cell and no periodic translational order but have long-range bond orientational order, which generates a defined diffraction pattern.
1984 - Aaron Klug and his colleagues provided an advance in determining the structure of protein–nucleic acid complexes when they solved the structure of the 206-kDa nucleosome core particle.
1985 - Jerome Karle shared the Nobel Prize in Chemistry with Herbert A. Hauptman "for their outstanding achievements in the development of direct methods for the determination of crystal structures". Karle developed the theoretical basis for multiple-wavelength anomalous diffraction (MAD).
1985 - Hartmut Michel and his colleagues reported the first high-resolution X-ray crystal structure of an integral membrane protein when they published the structure of a photosynthetic reaction centre.
1985 - Kunio Takayanagi led a team that solved the structure of the 7x7 reconstruction of the silicon (111) surface using Patterson function methods with ultra-high vacuum electron diffraction. This surface structure had defeated many prior attempts.
1986 - Ernst Ruska shared the Nobel Prize in Physics "for his fundamental work in electron optics, and for the design of the first electron microscope".
1987 - John M. Cowley and Alexander F. Moodie shared the first IUCr Ewald Prize "for their outstanding achievements in electron diffraction and microscopy. They carried out pioneering work on the dynamical scattering of electrons and the direct imaging of crystal structures and structure defects by high-resolution electron microscopy. The physical optics approach used by Cowley and Moodie takes into account many hundreds of scattered beams, and represents a far-reaching extension of the dynamical theory for X-rays, first developed by P.P. Ewald".
1987 - Don Craig Wiley and Jack L. Strominger solved the structure of the soluble portion of a class I MHC molecule known as HLA-A2. This structure revealed the presence of a pocket which holds the antigenic peptide, which is recognized by the receptors of T cells only when firmly bound to the MHC product and presented at the surface of an infected cell. This structure strongly influenced the concept of T cell recognition in future work.
1988 - Johann Deisenhofer, Robert Huber and Hartmut Michel shared the Nobel Prize in Chemistry "for the determination of the three-dimensional structure of a photosynthetic reaction centre."
1989 - Gautam R. Desiraju defined crystal engineering as "the understanding of intermolecular interactions in the context of crystal packing and the utilization of such understanding in the design of new solids with desired physical and chemical properties."
1991 - Georg E. Schulz and colleagues reported the structure of a bacterial porin, a membrane protein with a cylindrical shape (a 'β-barrel').
1991 - The crystallographic information file (CIF) format was introduced by Sydney R. Hall, Frank H. Allen, and I. David Brown based on the self-defining text archive and retrieval (STAR) file format developed by Sydney R. Hall.
1991 - Sumio Iijima used electron diffraction to determine the structure of carbon nanotubes.
1992 - The International Union of Crystallography changed its definition of a crystal to "any solid having an essentially discrete diffraction pattern", thus formally recognizing quasicrystals.
1992 - First release of the CNS software package by Axel T. Brunger. CNS is an extension of X-PLOR released in 1987, and is used for solving structures based on X-ray diffraction or solution NMR data.
1994 - Jan Pieter Abrahams et al. reported the structure of an F1-ATPase which uses the proton-motive force across the inner mitochondrial membrane to facilitate the synthesis of adenosine triphosphate (ATP).
1994 - Roger Vincent and Paul Midgley invented the precession electron diffraction method for electron crystallography in a transmission electron microscope.
1994 - Bertram Brockhouse and Clifford Shull shared the Nobel Prize in Physics "for pioneering contributions to the development of neutron scattering techniques for studies of condensed matter". Specifically, Brockhouse "for the development of neutron spectroscopy" and Shull "for the development of the neutron diffraction technique."
1994 - Philip Coppens led a team of researchers to uncover the transient structure of sodium nitroprusside, a first example of X-ray excited-state crystallography.
1995 - Douglas L. Dorset published Structural Electron Crystallography, a major text on electron crystallography.
1997 - The Bilbao Crystallographic Server was launched at the University of the Basque Country, led by Mois Ilia Aroyo and Juan Manuel Perez-Mato.
1997 - The X-ray crystal structure of bacteriorhodopsin was the first time the lipidic cubic phase (LCP) was used to facilitate the crystallization of a membrane protein; LCP has since been used to obtain the structures of many unique membrane proteins, including G protein-coupled receptors (GPCRs).
1997 - Paul D. Boyer and John E. Walker shared one half of the Nobel Prize in Chemistry "for their elucidation of the enzymatic mechanism underlying the synthesis of adenosine triphosphate (ATP)". Walker determined the crystal structure of ATP synthase, and this structure confirmed a mechanism earlier proposed by Boyer, mainly on the basis of isotopic studies.
1997 - Nobuo Niimura led a team that first used a neutron image plate for structure determination of lysozyme at the Institut Laue–Langevin.
1998 - The structure of tubulin and the location of the taxol-binding site is first determined by Eva Nogales and her team using electron crystallography.
1998 - A group led by Jon Gjønnes combined three-dimensional electron diffraction with precession electron diffraction and direct methods to solve the structure of an intermetallic compound, combining this with dynamical refinements.
1999 - Jianwei Miao, Janos Kirz, David Sayre and co-workers performed the first experiment to extend crystallography to allow structural determination of non-crystalline specimens which has become known as coherent diffraction imaging (CDI), lensless imaging, or computational microscopy.
1999 - A team led by Michael O'Keefe and Omar Yaghi synthesized and determined the structure of MOF-5, the first metal-organic framework (MOF) compound. In the ensuing years, the duo and mathematician Olaf Delgado-Friedrichs further developed the periodic net theory proposed by Alexander F. Wells to characterize MOFs.
== 21st century ==
2000 - Janos Hajdu, Richard Neutze, and colleagues calculated that they could use Sayre's ideas from the 1950s to implement a 'diffraction before destruction' concept, using an X-ray free-electron laser (XFEL).
2001 - Harry F. Noller's group published the 5.5-Å structure of the complete Thermus thermophilus 70S ribosome. This structure revealed that the major functional regions of the ribosome were based on RNA, establishing the primordial role of RNA in translation.
2001 - Roger Kornberg's group published the 2.8-Å structure of Saccharomyces cerevisiae RNA polymerase. The structure allowed both transcription initiation and elongation mechanisms to be deduced. Simultaneously, this group reported the structure of free RNA polymerase II, which contributed towards the eventual visualisation of the interaction between DNA, RNA, and the ribosome.
2003 - Raimond Ravelli et al. demonstrated X-ray radiation damage-induced phasing method for structure determination.
2005 - The first X-ray free-electron laser in the soft X-ray regime, FLASH, became an operational user facility at DESY for X-ray diffraction experiments.
2007 - Ute Kolb and co-workers developed automated diffraction tomography for electron crystallography by combining diffraction and tomography within a transmission electron microscope.
2007 - Two X-ray crystal structures of a GPCR, the human β2 adrenergic receptor, were published. Because many drugs elicit their biological effect(s) by binding to a GPCR, the structures of these and other GPCRs may be used to develop efficacious drugs with few side effects.
2009 - The first hard X-ray free-electron laser, the Linac Coherent Light Source, became operational at the SLAC National Accelerator Laboratory.
2009 - Luca Bindi, Paul Steinhardt, Nan Yao, and Peter Lu identified the first naturally occurring quasicrystal using X-ray and electron crystallography.
2009 - Venkatraman Ramakrishnan, Thomas A. Steitz and Ada E. Yonath shared the Nobel Prize in Chemistry "for studies of the structure and function of the ribosome."
2009 - Judith Howard and her collaborators created the Olex2 crystallographic software package.
2011 - Gustaaf Van Tendeloo led a team, including Sandra Van Aert and Kees Joost Batenburg, that determined the 3D atomic positions of a silver nanoparticle using electron tomography.
2011 - Dan Shechtman received the Nobel Prize in chemistry "for the discovery of quasicrystals."
2011 - Henry N. Chapman, Petra Fromme, John C. H. Spence and 85 co-workers used femtosecond pulses from an X-ray free-electron laser (XFEL) to examine the structure of nanocrystals of Photosystem I. By using very brief X-ray pulses, most radiation damage is mitigated; the technique is called serial femtosecond crystallography.
2012 - Jianwei Miao and his co-workers applied the coherent diffraction imaging (CDI) method in Atomic Electron Tomography (AET).
2013 - Tamir Gonen and his co-workers demonstrated microcrystal electron diffraction (microED) for lysozyme microcrystals at the Janelia Farm Research Campus.
2014 - Carmelo Giacovazzo published Phasing in Crystallography: A Modern Perspective, a comprehensive opus on phasing methods in X-ray and electron crystallography.
2014 - The International Union of Crystallography and UNESCO named 2014 the International Year of Crystallography to commemorate the century of progress since the discovery of X-ray diffraction.
2017 - Lukas Palatinus and co-workers used dynamical structure refinement to resolve hydrogen atom positions in nanocrystals using electron diffraction.
2017 - Jacques Dubochet, Joachim Frank and Richard Henderson shared the Nobel Prize in chemistry "for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution."
2019 - The Cambridge Structural Database reached the milestone of one million structures.
2020 - Two independent groups led respectively by Holger Stark and Sjors Scheres demonstrated that single-particle cryo-electron microscopy has reached atomic resolution.
2021 - Kenneth G. Libbrecht published the book Snow Crystals: A Case Study in Spontaneous Structure Formation, summarizing his decade-spanning work on the subject for engineering conditions for designer ice crystals.
2022 - Leonid Dubrovinsky, Igor A. Abrikosov, and Natalia Dubrovinskaia led a team that demonstrated high-pressure crystallography in the terapascal regime.
2024 - A team led by Anders Madsen developed a deep learning model, PhAI, to solve the crystallographic phase problem for small molecules.
== See also ==
Physical crystallography before X-rays
== References ==
== Further reading ==
=== Crystallography before 20th century ===
Whitlock, H. P. (1934). "A century of progress in crystallography" (PDF). The American Mineralogist. 19: 93–100.
Burke, John G. (1966), Origins of the science of crystals, University of California Press. LCCN 66-13584
Lima-de-Faria, José (ed.) (1990), Historical atlas of crystallography, Springer Netherlands
Kubbinga, Henk (2012). "Crystallography from Haüy to Laue: Controversies on the molecular and atomistic nature of solids". Zeitschrift für Kristallographie. 227 (1): 1–26. Bibcode:2012ZK....227....1K. doi:10.1524/zkri.2012.1459.
Molčanov, Krešimir; Stilinović, Vladimir (2014-01-13). "Chemical Crystallography before X-ray Diffraction". Angewandte Chemie International Edition. 53 (3): 638–652. doi:10.1002/anie.201301319. ISSN 1433-7851. PMID 24065378.
"Bernard MAITTE René-Just Haüy (1743-1822) et la naissance de la cristallographie*". annales.org. Retrieved 2024-05-15.
=== Crystallography in the 20th century and beyond ===
"100 Years of X-ray Crystallography". Chemical & Engineering News. Retrieved 2024-05-14.
Milestones in crystallography, Nature, August 2014
Schwarzenbach, Dieter (2012-01-01). "The success story of crystallography". Acta Crystallographica Section A. 68 (1): 57–67. Bibcode:2012AcCrA..68...57S. doi:10.1107/S0108767311030303. ISSN 0108-7673. PMID 22186283.
"Timelines of Crystallography". iycr2014.org. Retrieved 2024-08-19.
McMahon, Malcolm I. (2011), Rissanen, Kari (ed.), "High-Pressure Crystallography", Advanced X-Ray Crystallography, Topics in Current Chemistry, vol. 315, Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 69–109, doi:10.1007/128_2011_132, ISBN 978-3-642-27406-0, PMID 21567312, retrieved 2024-05-22
Baur, Werner H. (2014-04-03). "One hundred years of inorganic crystal chemistry – a personal view". Crystallography Reviews. 20 (2): 64–116. Bibcode:2014CryRv..20...64B. doi:10.1080/0889311X.2013.879648. ISSN 0889-311X.
Pinheiro, Carlos Basílio; Abakumov, Artem M. (2015-01-01). "Superspace crystallography: a key to the chemistry and properties". IUCrJ. 2 (1): 137–154. Bibcode:2015IUCrJ...2..137P. doi:10.1107/S2052252514023550. ISSN 2052-2525. PMC 4285887. PMID 25610634.
Kopský, Vojtěch (2015-02-02). "Crystallography and Magnetic Phenomena". Symmetry. 7 (1): 125–145. Bibcode:2015Symm....7..125K. doi:10.3390/sym7010125. ISSN 2073-8994.
Gratias, Denis; Quiquandon, Marianne (2019-05-23). "Discovery of quasicrystals: The early days". Comptes Rendus. Physique. 20 (7–8): 803–816. Bibcode:2019CRPhy..20..803G. doi:10.1016/j.crhy.2019.05.009. ISSN 1878-1535.
=== History of X-ray crystallography ===
Ewald, P. P. (ed.) (1962), 50 Years of X-ray Diffraction, IUCr, Oosthoek
Arndt, U. W. (2001-09-22). "Instrumentation in X-ray crystallography: Past, present and future". Notes and Records of the Royal Society of London. 55 (3): 457–472. doi:10.1098/rsnr.2001.0157. ISSN 0035-9149.
Watkin, David J. (2010). "Chemical crystallography–science, technology or a black art". Crystallography Reviews. 16 (3): 197–230. Bibcode:2010CryRv..16..197W. doi:10.1080/08893110903483246. ISSN 0889-311X.
Authier, André (2013), Early days of x-ray crystallography, Oxford Univ. Press. ISBN 9780198754053
Etter, Martin; Dinnebier, Robert E. (2014). "A Century of Powder Diffraction: a Brief History". Zeitschrift für anorganische und allgemeine Chemie. 640 (15): 3015–3028. doi:10.1002/zaac.201400526. ISSN 0044-2313.
Mingos, D. Michael P.; Raithby, Paul R., eds. (2020). 21st Century Challenges in Chemical Crystallography I: History and Technical Developments. Structure and Bonding. Vol. 185. Cham: Springer International Publishing. doi:10.1007/978-3-030-64743-8. ISBN 978-3-030-64742-1.
=== History of electron crystallography ===
Thomson, George (1968). "The early history of electron diffraction". Contemporary Physics. 9 (1): 1–15. Bibcode:1968ConPh...9....1T. doi:10.1080/00107516808204390. ISSN 0010-7514.
Tong, S.Y (1994). "Electron-diffraction for surface studies — the first 30 years". Surface Science. 299–300: 358–374. Bibcode:1994SurSc.299..358T. doi:10.1016/0039-6028(94)90667-X.
Dorset, D. L. (1996-10-01). "Electron crystallography". Acta Crystallographica Section B. 52 (5): 753–769. Bibcode:1996AcCrB..52..753D. doi:10.1107/S0108768196005599. ISSN 0108-7681. PMID 8900031.
Saha, Ambarneil; Nia, Shervin S.; Rodríguez, José A. (2022-09-14). "Electron Diffraction of 3D Molecular Crystals". Chemical Reviews. 122 (17): 13883–13914. doi:10.1021/acs.chemrev.1c00879. ISSN 0009-2665. PMC 9479085. PMID 35970513.
=== History of neutron crystallography ===
Schoenborn, B. P.; Nunes, A. C. (1972). "Neutron Scattering". Annual Review of Biophysics and Bioengineering. 1 (1): 529–552. doi:10.1146/annurev.bb.01.060172.002525. ISSN 0084-6589. PMID 4567759.
Bacon, G. E., ed. (1986). Fifty years of neutron diffraction: the advent of neutron scattering. Bristol: A. Hilger, published with the assistance of the International Union of Crystallography. ISBN 978-0-85274-587-8.
Harrison, R. J. (2006-01-01). "Neutron Diffraction of Magnetic Materials". Reviews in Mineralogy and Geochemistry. 63 (1): 113–143. Bibcode:2006RvMG...63..113H. doi:10.2138/rmg.2006.63.6. ISSN 1529-6466.
Blakeley, M.P. (2009). "Neutron macromolecular crystallography". Crystallography Reviews. 15 (3): 157–218. Bibcode:2009CryRv..15..157B. doi:10.1080/08893110902965003. ISSN 0889-311X.
Mason, T. E.; Gawne, T. J.; Nagler, S. E.; Nestor, M. B.; Carpenter, J. M. (2013-01-01). "The early development of neutron diffraction: science in the wings of the Manhattan Project". Acta Crystallographica Section A. 69 (1): 37–44. doi:10.1107/S0108767312036021. ISSN 0108-7673. PMC 3526866. PMID 23250059.
=== History of NMR crystallography ===
Andrew, E.R.; Szczesniak, E. (1995). "A historical account of NMR in the solid state". Progress in Nuclear Magnetic Resonance Spectroscopy. 28 (1): 11–36. Bibcode:1995PNMRS..28...11A. doi:10.1016/0079-6565(95)01018-1.
Harris, Robin K. (2008-12-15), "Crystallography and NMR: An Overview", in Harris, Robin K. (ed.), Encyclopedia of Magnetic Resonance, Chichester, UK: John Wiley & Sons, Ltd, doi:10.1002/9780470034590.emrstm1007, ISBN 978-0-470-03459-0, retrieved 2024-05-17
=== History of structure determination ===
Beevers, C. A.; Lipson, H. (1985). "A Brief History of Fourier Methods in Crystal-structure Determination". Australian Journal of Physics. 38 (3): 263. Bibcode:1985AuJPh..38..263B. doi:10.1071/PH850263. ISSN 0004-9506.
Hauptman, Herbert (1997-10-01). "Phasing methods for protein crystallography". Current Opinion in Structural Biology. 7 (5): 672–680. doi:10.1016/S0959-440X(97)80077-2. ISSN 0959-440X. PMID 9345626.
Hendrickson, Wayne A. (2013). "Evolution of diffraction methods for solving crystal structures". Acta Crystallographica Section A. 69 (1): 51–59. Bibcode:2013AcCrA..69...51H. doi:10.1107/S0108767312050453. ISSN 0108-7673. PMID 23250061.
Agirre, Jon; Dodson, Eleanor (2018). "Forty years of collaborative computational crystallography". Protein Science. 27 (1): 202–206. doi:10.1002/pro.3298. ISSN 0961-8368. PMC 5734308. PMID 28901632.
Hendrickson, Wayne A. (2023-09-01). "Facing the phase problem". IUCrJ. 10 (5): 521–543. Bibcode:2023IUCrJ..10..521H. doi:10.1107/S2052252523006449. ISSN 2052-2525. PMC 10478523. PMID 37668214.
=== History of macromolecular crystallography ===
Berman, Helen M. (2008-01-01). "The Protein Data Bank: a historical perspective". Acta Crystallographica Section A. 64 (1): 88–95. doi:10.1107/S0108767307035623. ISSN 0108-7673. PMID 18156675.
Jaskolski, Mariusz; Dauter, Zbigniew; Wlodawer, Alexander (2014). "A brief history of macromolecular crystallography, illustrated by a family tree and its Nobel fruits". The FEBS Journal. 281 (18): 3985–4009. doi:10.1111/febs.12796. ISSN 1742-464X. PMC 6309182. PMID 24698025.
Haas, David J. (2020-03-01). "The early history of cryo-cooling for macromolecular crystallography". IUCrJ. 7 (2): 148–157. Bibcode:2020IUCrJ...7..148H. doi:10.1107/S2052252519016993. ISSN 2052-2525. PMC 7055388. PMID 32148843.
Khusainov, Georgii; Standfuss, Joerg; Weinert, Tobias (2024-03-01). "The time revolution in macromolecular crystallography". Structural Dynamics. 11 (2): 020901. doi:10.1063/4.0000247. ISSN 2329-7778. PMC 11015943. PMID 38616866.
=== History of crystallographic organizations and journals ===
Kamminga, H. (1989-09-01). "The International Union of Crystallography: its formation and early development". Acta Crystallographica Section A. 45 (9): 581–601. Bibcode:1989AcCrA..45..581K. doi:10.1107/S0108767389003910. ISSN 0108-7673.
Cruickshank, D. W. J. (1998-11-01). "Aspects of the History of the International Union of Crystallography". Acta Crystallographica Section A. 54 (6): 687–696. Bibcode:1998AcCrA..54..687C. doi:10.1107/S0108767398011477.
Authier, André (2009-05-01). "60 years of IUCr journals". Acta Crystallographica Section A. 65 (3): 167–182. Bibcode:2009AcCrA..65..167A. doi:10.1107/S0108767309007235. ISSN 0108-7673. PMID 19349661. | Wikipedia/Timeline_of_crystallography |
Forensic geophysics is a branch of forensic science concerned with the study, search, localization and mapping of objects or features buried beneath the soil or water, using geophysical tools for legal purposes. There are various geophysical techniques for forensic investigations in which the targets are buried and differ in size (from weapons or metallic barrels to human burials and bunkers). Geophysical methods have the potential to aid the search for and recovery of these targets because they can non-destructively and rapidly investigate large areas where a suspect, illegal burial or, more generally, a forensic target is hidden in the subsoil. When there is a subsurface contrast in physical properties between a target and the material in which it is buried, it is possible to locate and precisely define the target's place of concealment. It is also possible to recognize evidence of human soil occupation or excavation, both recent and older. Forensic geophysics is an evolving discipline that is gaining popularity and prestige in law enforcement.
Objects searched for obviously include clandestine graves of murder victims, but they also include unmarked burials in graveyards and cemeteries, weapons used in criminal activity, and material dumped illegally in environmental crimes.
There are various near-surface geophysical techniques that can be used to detect a near-surface buried object; the choice of technique should be site- and case-specific. A thorough desk study (including historical maps), utility survey, site reconnaissance and control studies should be undertaken before trial geophysical surveys, and then full geophysical surveys, are undertaken in phased investigations. Note also that other search techniques should be used first to prioritise suspect areas, for example cadaver dogs or forensic geomorphologists.
== Techniques ==
For large-scale buried objects, seismic surveys may be appropriate, but these have, at best, 2 m vertical resolution, so they may not be ideal for certain targets; more typically they are used to detect bedrock below the surface.
For relatively quick site surveys, bulk ground electrical conductivity data can be collected to identify areas of disturbed ground, but these surveys can suffer from a lack of resolution. A recent Black Death investigation in central London shows one example; another is a successful search for a cold-case burial in woodland in New Zealand.
Ground-penetrating radar (GPR) has a typical maximum depth below ground level (bgl) of 10 m, depending upon the antenna frequencies used, typically 50 MHz to 1.2 GHz. The higher the frequency, the smaller the object that can be resolved, but penetration depth also decreases, so operators need to think carefully when choosing antenna frequencies and, ideally, undertake trial surveys using different antennae over a target at a known depth onsite. GPR is the most popular technique in forensic search, but it is not suitable in certain soil types and environments, e.g. coastal (i.e. salt-rich) and clay-rich soils, where penetration is poor. 2D profiles can be collected relatively quickly and, if time permits, successive profiles can be combined into 3D datasets which may resolve more subtle targets. Recent studies have used GPR to locate mass graves from the Spanish Civil War in mountainous and urban environments.
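As a rough feel for the frequency/resolution trade-off described above, here is a minimal sketch assuming a quarter-wavelength vertical-resolution criterion and a simple low-loss soil model with relative permittivity eps_r; both assumptions are illustrative and not from the text:

```python
# Approximate GPR wavelength and vertical resolution in soil (sketch).
# Assumptions: wave velocity v = c / sqrt(eps_r) in a low-loss soil, and
# the common quarter-wavelength vertical-resolution criterion.

C = 3.0e8  # speed of light in vacuum, m/s

def gpr_resolution(freq_hz, eps_r):
    """Return (wavelength_m, approx_vertical_resolution_m) in the ground."""
    v = C / eps_r ** 0.5        # wave velocity in the soil
    wavelength = v / freq_hz
    return wavelength, wavelength / 4.0

# A low-frequency vs a high-frequency antenna in a dry sandy soil (eps_r ~ 4):
for f in (100e6, 900e6):
    lam, res = gpr_resolution(f, 4.0)
    print(f"{f / 1e6:.0f} MHz: wavelength ~{lam:.3f} m, resolution ~{res:.3f} m")
```

At 100 MHz this gives roughly decimetre-scale resolution; at 900 MHz, centimetre-scale, at the cost of much shallower penetration — the trade-off the text describes.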
Electrical resistivity methods can also detect objects, especially in clay-rich soils that would preclude the use of GPR. There are different equipment configurations; the dipole–dipole (fixed-offset) method is the most common, as it can be traversed across an area, measuring resistivity variations at a set depth (typically 1–2× the probe separation), and it has been used in forensic searches. Slower methods deploy many probes and collect data both horizontally and vertically in space; this is called electrical resistivity imaging (ERI), and combining multiple 2D profiles is termed electrical resistivity tomography (ERT).
Magnetometry can detect buried metal (or indeed fired objects such as bricks, or even where surface fires occurred) using simple total-field magnetometers, through to fluxgate gradiometers and high-end alkali vapour gradiometers, depending upon the accuracy (and cost) required. Surface magnetic susceptibility has also shown recent promise for forensic search.
Water-based searches are also becoming more common, with specialist marine magnetometers, side-scan sonar and other acoustic methods and even water-penetrating radar methods used to rapidly scan bottoms of ponds, lakes, rivers and near-shore depositional environments.
== Controlled research ==
There have been recent efforts to undertake research over known buried and below-water simulated forensic targets in order to gain insight into optimum search techniques and/or equipment configurations. Most commonly, this has involved the burial of porcine cadavers and long-term monitoring of soilwater, seasonal effects on electrical resistivity surveys, and burials in walls and beneath concrete, with long-term monitoring in the UK, the US and Latin America. Finally, there have been surveys in graveyards over graves of known ages to determine the responses of multiple geophysical techniques with increasing burial age.
== See also ==
Forensic geology
== References ==
== Further reading == | Wikipedia/Forensic_geophysics |
Crystallography is a book of poetry and prose published in 1994 and revised in 2003 by Canadian author Christian Bök. Based around a pataphysical conceit that language is a crystallization process, the book includes several forms of poetry including concrete poetry, as well as pseudohistorical texts, diagrams, charts, and English gematria.
Major poems in the book include Geodes and Diamonds.
Bök explains the title in an introduction. Crystallography refers to both the science of crystallography and a reanalysis of the word's roots: crystal meaning "clear", and "graph" meaning "writing":
Inspired by the etymology of the word "crystallography", such a work represents an act of lucid writing, which uses the language of geological science to misread the poetics of rhetorical language. Such lucid writing does not concern itself with the transparent transmission of a message (so that, ironically, the poetry often seems "opaque"); instead, lucid writing concerns itself with the exploratory examination of its own pattern (in a manner reminiscent of lucid dreaming). (Bök, 2003)
== References ==
=== Further reading ===
Bök, Christian. Crystallography. Toronto: Coach House Press, 2003 (2nd. Ed.) | Wikipedia/Crystallography_(book) |
Gas chromatography (GC) is a common type of chromatography used in analytical chemistry for separating and analyzing compounds that can be vaporized without decomposition. Typical uses of GC include testing the purity of a particular substance, or separating the different components of a mixture. In preparative chromatography, GC can be used to prepare pure compounds from a mixture.
Gas chromatography is also sometimes known as vapor-phase chromatography (VPC), or gas–liquid partition chromatography (GLPC). These alternative names, as well as their respective abbreviations, are frequently used in scientific literature.
Gas chromatography is the process of separating compounds in a mixture by injecting a gaseous or liquid sample into a mobile phase, typically called the carrier gas, and passing the gas through a stationary phase. The mobile phase is usually an inert gas or an unreactive gas such as helium, argon, nitrogen or hydrogen. The stationary phase can be solid or liquid, although most GC systems today use a polymeric liquid stationary phase. The stationary phase is contained inside of a separation column. Today, most GC columns are fused silica capillaries with an inner diameter of 100–320 micrometres (0.0039–0.0126 in) and a length of 5–60 metres (16–197 ft). The GC column is located inside an oven where the temperature of the gas can be controlled and the effluent coming off the column is monitored by a suitable detector.
== Operating principle ==
A gas chromatograph is made of a narrow tube, known as the column, through which the vaporized sample passes, carried along by a continuous flow of inert or nonreactive gas. Components of the sample pass through the column at different rates, depending on their chemical and physical properties and the resulting interactions with the column lining or filling, called the stationary phase. The column is typically enclosed within a temperature controlled oven. As the chemicals exit the end of the column, they are detected and identified electronically.
== History ==
=== Background ===
Chromatography dates to 1903 and the work of the Russian scientist Mikhail Semenovich Tswett, who separated plant pigments via liquid column chromatography.
=== Invention ===
The invention of gas chromatography is generally attributed to Anthony T. James and Archer J.P. Martin. Their gas chromatograph used partition chromatography as the separating principle, rather than adsorption chromatography. The popularity of gas chromatography quickly rose after the development of the flame ionization detector.
Martin had earlier worked with another colleague, Richard Synge, with whom he shared the 1952 Nobel Prize in Chemistry; in an earlier paper they had noted that chromatography might also be used to separate gases. Synge pursued other work while Martin continued his work with James.
=== Gas adsorption chromatography precursors ===
German physical chemist Erika Cremer in 1947 together with Austrian graduate student Fritz Prior developed what could be considered the first gas chromatograph that consisted of a carrier gas, a column packed with silica gel, and a thermal conductivity detector. They exhibited the chromatograph at ACHEMA in Frankfurt, but nobody was interested in it.
N.C. Turner with the Burrell Corporation introduced in 1943 a massive instrument that used a charcoal column and mercury vapors. Stig Claesson of Uppsala University published in 1946 his work on a charcoal column that also used mercury.
Gerhard Hesse, while a professor at the University of Marburg/Lahn decided to test the prevailing opinion among German chemists that molecules could not be separated in a moving gas stream. He set up a simple glass column filled with starch and successfully separated bromine and iodine using nitrogen as the carrier gas. He then built a system that flowed an inert gas through a glass condenser packed with silica gel and collected the eluted fractions.
Courtenay S. G. Phillips of Oxford University investigated separation in a charcoal column using a thermal conductivity detector. He consulted with Claesson and decided to use displacement as his separating principle. After learning about the results of James and Martin, he switched to partition chromatography.
=== Column technology ===
Early gas chromatography used packed columns, typically 1–5 m long and 1–5 mm in diameter, filled with particles. The resolution of packed columns was improved by the invention of the capillary column, in which the stationary phase is coated on the inner wall of the capillary.
== Physical components ==
=== Autosamplers ===
The autosampler provides the means to introduce a sample automatically into the inlets. Manual insertion of the sample is possible but no longer common. Automatic insertion provides better reproducibility and time-optimization. Different kinds of autosamplers exist. Autosamplers can be classified by sample capacity (auto-injectors vs. autosamplers, where auto-injectors can handle only a small number of samples), by robotic technology (XYZ robot vs. rotating robot – the most common), or by the type of analysis:
Liquid
Static head-space by syringe technology
Dynamic head-space by transfer-line technology
Solid phase microextraction (SPME)
=== Inlets ===
The column inlet (or injector) provides the means to introduce a sample into a continuous flow of carrier gas. The inlet is a piece of hardware attached to the column head.
Common inlet types are:
S/SL (split/splitless) injector – a sample is introduced into a heated small chamber via a syringe through a septum – the heat facilitates volatilization of the sample and sample matrix. The carrier gas then sweeps either the entirety (splitless mode) or a portion (split mode) of the sample into the column. In split mode, a part of the sample/carrier gas mixture in the injection chamber is exhausted through the split vent. Split injection is preferred when working with samples with high analyte concentrations (>0.1%) whereas splitless injection is best suited for trace analysis with low amounts of analytes (<0.01%). In splitless mode the split valve opens after a pre-set amount of time to purge heavier elements that would otherwise contaminate the system. This pre-set (splitless) time should be optimized: a shorter time (e.g., 0.2 min) ensures less tailing but some loss in response, while a longer time (e.g., 2 min) increases tailing but also the signal.
On-column inlet – the sample is here introduced directly into the column in its entirety without heat, or at a temperature below the boiling point of the solvent. The low temperature condenses the sample into a narrow zone. The column and inlet can then be heated, releasing the sample into the gas phase. This ensures the lowest possible temperature for chromatography and keeps samples from decomposing above their boiling point.
PTV injector – Temperature-programmed sample introduction was first described by Vogt in 1979. Originally Vogt developed the technique as a method for the introduction of large sample volumes (up to 250 μL) in capillary GC. Vogt introduced the sample into the liner at a controlled injection rate. The temperature of the liner was chosen slightly below the boiling point of the solvent. The low-boiling solvent was continuously evaporated and vented through the split line. Based on this technique, Poy developed the programmed temperature vaporising injector; PTV. By introducing the sample at a low initial liner temperature many of the disadvantages of the classic hot injection techniques could be circumvented.
Gas source inlet or gas switching valve – gaseous samples in collection bottles are connected to what is most commonly a six-port switching valve. The carrier gas flow is not interrupted while a sample can be expanded into a previously evacuated sample loop. Upon switching, the contents of the sample loop are inserted into the carrier gas stream.
P/T (purge-and-trap) system – An inert gas is bubbled through an aqueous sample causing insoluble volatile chemicals to be purged from the matrix. The volatiles are 'trapped' on an absorbent column (known as a trap or concentrator) at ambient temperature. The trap is then heated and the volatiles are directed into the carrier gas stream. Samples requiring preconcentration or purification can be introduced via such a system, usually hooked up to the S/SL port.
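The split/splitless arithmetic in the S/SL entry above is easy to quantify. A minimal sketch, assuming the common convention that a split ratio of N:1 means N volumes vented per volume reaching the column (vendor conventions vary):

```python
# Fraction of an injected sample that reaches the column in split mode.
# Assumed convention: split ratio N:1 = vent flow : column flow, so the
# on-column fraction is 1 / (N + 1).

def on_column_amount(injected_ng, split_ratio):
    """Mass reaching the column for a given split ratio (N:1)."""
    return injected_ng / (split_ratio + 1.0)

# 100 ng injected at a 50:1 split leaves ~2 ng on column; splitless (0:1)
# transfers everything:
print(on_column_amount(100.0, 50))   # ~1.96 ng
print(on_column_amount(100.0, 0))    # 100.0 ng
```

This is why split mode suits concentrated samples while splitless mode is reserved for trace analysis, as noted above.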
The choice of carrier gas (mobile phase) is important. Hydrogen has a range of flow rates that are comparable to helium in efficiency. However, helium may be more efficient and provide the best separation if flow rates are optimized. Helium is non-flammable and works with a greater number of detectors and older instruments. Therefore, helium is the most common carrier gas used. However, the price of helium has gone up considerably over recent years, causing an increasing number of chromatographers to switch to hydrogen gas. Historical use, rather than rational consideration, may contribute to the continued preferential use of helium.
=== Detectors ===
Commonly used detectors are the flame ionization detector (FID) and the thermal conductivity detector (TCD). While TCDs are beneficial in that they are non-destructive, their relatively poor detection limits for most analytes inhibit widespread use. FIDs are sensitive primarily to hydrocarbons, and are more sensitive to them than TCDs. FIDs cannot detect water or carbon dioxide, which makes them well suited to the analysis of organic analytes in environmental samples. FID is two to three times more sensitive to analytes than TCD.
The TCD relies on the thermal conductivity of matter passing around a thin wire of tungsten-rhenium with a current traveling through it. In this set up helium or nitrogen serve as the carrier gas because of their relatively high thermal conductivity which keep the filament cool and maintain uniform resistivity and electrical efficiency of the filament. When analyte molecules elute from the column, mixed with carrier gas, the thermal conductivity decreases while there is an increase in filament temperature and resistivity resulting in fluctuations in voltage ultimately causing a detector response. Detector sensitivity is proportional to filament current while it is inversely proportional to the immediate environmental temperature of that detector as well as flow rate of the carrier gas.
In a flame ionization detector (FID), electrodes are placed adjacent to a flame fueled by hydrogen / air near the exit of the column, and when carbon containing compounds exit the column they are pyrolyzed by the flame. This detector works only for organic / hydrocarbon containing compounds due to the ability of the carbons to form cations and electrons upon pyrolysis which generates a current between the electrodes. The increase in current is translated and appears as a peak in a chromatogram. FIDs have low detection limits (a few picograms per second) but they are unable to generate ions from carbonyl containing carbons. FID compatible carrier gasses include helium, hydrogen, nitrogen, and argon.
In FID, sometimes the stream is modified before entering the detector. A methanizer converts carbon monoxide and carbon dioxide into methane so that it can be detected. A different technology is the polyarc, by Activated Research Inc, that converts all compounds to methane.
Alkali flame detector (AFD) or alkali flame ionization detector (AFID) has high sensitivity to nitrogen and phosphorus, similar to NPD. However, the alkaline metal ions are supplied with the hydrogen gas, rather than a bead above the flame. For this reason AFD does not suffer the "fatigue" of the NPD, but provides a constant sensitivity over long period of time. In addition, when alkali ions are not added to the flame, AFD operates like a standard FID. A catalytic combustion detector (CCD) measures combustible hydrocarbons and hydrogen. Discharge ionization detector (DID) uses a high-voltage electric discharge to produce ions.
Flame photometric detector (FPD) uses a photomultiplier tube to detect spectral lines of the compounds as they are burned in a flame. Compounds eluting off the column are carried into a hydrogen-fueled flame which excites specific elements in the molecules, and the excited elements (P, S, halogens, some metals) emit light at characteristic wavelengths. The emitted light is filtered and detected by a photomultiplier tube. In particular, phosphorus emission is around 510–536 nm and sulfur emission is at 394 nm. With an atomic emission detector (AED), a sample eluting from a column enters a chamber which is energized by microwaves that induce a plasma. The plasma causes the analyte sample to decompose, and certain elements generate atomic emission spectra. The atomic emission spectra are diffracted by a diffraction grating and detected by a series of photomultiplier tubes or photodiodes.
Electron capture detector (ECD) uses a radioactive beta particle (electron) source to measure the degree of electron capture. ECDs are used for the detection of molecules containing electronegative/electron-withdrawing elements and functional groups such as halogens, carbonyls, nitriles, nitro groups, and organometallics. In this type of detector either nitrogen or 5% methane in argon is used as the mobile-phase carrier gas. The carrier gas passes between two electrodes placed at the end of the column; adjacent to the cathode (negative electrode) resides a radioactive foil such as 63Ni. The radioactive foil emits beta particles (electrons) which collide with and ionize the carrier gas, generating more ions and a standing current. When analyte molecules containing electronegative/electron-withdrawing elements or functional groups pass through, electrons are captured, resulting in a decrease in current and thus a detector response.
Nitrogen–phosphorus detector (NPD), a form of thermionic detector where nitrogen and phosphorus alter the work function on a specially coated bead and a resulting current is measured.
Dry electrolytic conductivity detector (DELCD) uses an air phase and high temperature (v. Coulsen) to measure chlorinated compounds.
Mass spectrometer (MS), also called GC-MS; highly effective and sensitive, even in a small quantity of sample. This detector can be used to identify the analytes in chromatograms by their mass spectrum. Some GC-MS are connected to an NMR spectrometer which acts as a backup detector. This combination is known as GC-MS-NMR. Some GC-MS-NMR are connected to an infrared spectrophotometer which acts as a backup detector. This combination is known as GC-MS-NMR-IR. It must, however, be stressed this is very rare as most analyses needed can be concluded via purely GC-MS.
Vacuum ultraviolet (VUV) represents the most recent development in gas chromatography detectors. Most chemical species absorb and have unique gas phase absorption cross sections in the approximately 120–240 nm VUV wavelength range monitored. Where absorption cross sections are known for analytes, the VUV detector is capable of absolute determination (without calibration) of the number of molecules present in the flow cell in the absence of chemical interferences.
Olfactometric detector, also called GC-O, uses a human assessor to analyse the odour activity of compounds. With an odour port or a sniffing port, the quality of the odour, the intensity of the odour and the duration of the odour activity of a compound can be assessed.
Other detectors include the Hall electrolytic conductivity detector (ElCD), helium ionization detector (HID), infrared detector (IRD), photo-ionization detector (PID), pulsed discharge ionization detector (PDD), and thermionic ionization detector (TID).
== Methods ==
The method is the collection of conditions in which the GC operates for a given analysis. Method development is the process of determining what conditions are adequate and/or ideal for the analysis required.
Conditions which can be varied to accommodate a required analysis include inlet temperature, detector temperature, column temperature and temperature program, carrier gas and carrier gas flow rates, the column's stationary phase, diameter and length, inlet type and flow rates, sample size and injection technique. Depending on the detector(s) (see below) installed on the GC, there may be a number of detector conditions that can also be varied. Some GCs also include valves which can change the route of sample and carrier flow. The timing of the opening and closing of these valves can be important to method development.
=== Carrier gas selection and flow rates ===
Typical carrier gases include helium, nitrogen, argon, and hydrogen. Which gas to use is usually determined by the detector being used, for example, a DID requires helium as the carrier gas. When analyzing gas samples the carrier is also selected based on the sample's matrix, for example, when analyzing a mixture in argon, an argon carrier is preferred because the argon in the sample does not show up on the chromatogram. Safety and availability can also influence carrier selection.
The purity of the carrier gas is also frequently determined by the detector, though the level of sensitivity needed can also play a significant role. Typically, purities of 99.995% or higher are used. The most common purity grades required by modern instruments for the majority of sensitivities are 5.0 grades, or 99.999% pure meaning that there is a total of 10 ppm of impurities in the carrier gas that could affect the results. The highest purity grades in common use are 6.0 grades, but the need for detection at very low levels in some forensic and environmental applications has driven the need for carrier gases at 7.0 grade purity and these are now commercially available. Trade names for typical purities include "Zero Grade", "Ultra-High Purity (UHP) Grade", "4.5 Grade" and "5.0 Grade".
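The purity-grade shorthand above follows a simple pattern — grade N.M denotes N nines followed by the digit M — which can be decoded mechanically. A sketch of that convention as stated (e.g. 5.0 corresponds to 99.999% purity and 10 ppm impurities):

```python
# Decode a gas purity "grade" such as "4.5" or "5.0" into percent purity
# and total impurity content in ppm. Convention: grade N.M = N nines then M.

def grade_to_purity(grade):
    """Return (purity_percent, impurities_ppm) for a grade string like '5.0'."""
    n, m = (int(x) for x in grade.split("."))
    impurity_fraction = (10 - m) * 10.0 ** (-(n + 1))
    return 100.0 * (1.0 - impurity_fraction), impurity_fraction * 1e6

for g in ("4.5", "5.0", "6.0"):
    purity, ppm = grade_to_purity(g)
    print(f"grade {g}: {purity:.5f}% pure, about {ppm:.0f} ppm impurities")
```

So moving from 5.0 to 6.0 grade cuts the total impurity budget from 10 ppm to 1 ppm, which is why the highest grades matter for trace-level forensic and environmental work.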
The carrier gas linear velocity affects the analysis in the same way that temperature does (see above). The higher the linear velocity the faster the analysis, but the lower the separation between analytes. Selecting the linear velocity is therefore the same compromise between the level of separation and length of analysis as selecting the column temperature. The linear velocity will be implemented by means of the carrier gas flow rate, with regards to the inner diameter of the column.
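The relation between flow rate and linear velocity mentioned above is, to first order, just volumetric flow divided by the column's cross-sectional area. A sketch (ignoring gas compressibility along the column, so this is only an average/outlet approximation; the example flow and column dimensions are assumptions):

```python
import math

# Average carrier-gas linear velocity from volumetric flow and column i.d.
# u = F / (pi * r^2); gas compressibility along the column is ignored.

def linear_velocity_cm_s(flow_ml_min, id_mm):
    radius_cm = id_mm / 10.0 / 2.0            # mm diameter -> cm radius
    area_cm2 = math.pi * radius_cm ** 2
    return (flow_ml_min / 60.0) / area_cm2    # mL/min -> cm^3/s, then / area

# 1 mL/min through a 0.25 mm i.d. capillary:
print(f"{linear_velocity_cm_s(1.0, 0.25):.1f} cm/s")  # ~34 cm/s
```

Values in the tens of cm/s are typical working linear velocities for capillary columns, which is why flow rates around 1 mL/min are so common with 0.25 mm columns.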
With GCs made before the 1990s, carrier flow rate was controlled indirectly by controlling the carrier inlet pressure, or "column head pressure". The actual flow rate was measured at the outlet of the column or the detector with an electronic flow meter, or a bubble flow meter, and could be an involved, time consuming, and frustrating process. It was not possible to vary the pressure setting during the run, and thus the flow was essentially constant during the analysis. The relation between flow rate and inlet pressure is calculated with Poiseuille's equation for compressible fluids.
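The Poiseuille relation for compressible fluids mentioned above can be sketched numerically. For an ideal gas, the volumetric flow measured at the column outlet is F = π·r⁴·(p_in² − p_out²)/(16·η·L·p_out); the column dimensions, helium viscosity and pressures below are illustrative assumptions, not values from the text:

```python
import math

# Hagen-Poiseuille flow for a compressible (ideal) gas through a capillary,
# expressed as the volumetric flow at the outlet pressure.

def outlet_flow_m3_s(radius_m, length_m, eta_pa_s, p_in_pa, p_out_pa):
    return (math.pi * radius_m ** 4 * (p_in_pa ** 2 - p_out_pa ** 2)
            / (16.0 * eta_pa_s * length_m * p_out_pa))

# Assumed example: 30 m x 0.25 mm i.d. column, helium (eta ~ 2.3e-5 Pa*s),
# 1 bar gauge head pressure venting to 1 bar absolute:
f = outlet_flow_m3_s(0.125e-3, 30.0, 2.3e-5, 200e3, 100e3)
print(f"{f * 1e6 * 60:.2f} mL/min at the outlet")  # ~1.25 mL/min
```

This is the calculation older, pressure-regulated GCs performed implicitly: setting the column head pressure fixed the flow, whereas modern electronic pressure control inverts the same relation to hold a programmed flow.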
Many modern GCs, however, electronically measure the flow rate, and electronically control the carrier gas pressure to set the flow rate. Consequently, carrier pressures and flow rates can be adjusted during the run, creating pressure/flow programs similar to temperature programs.
=== Stationary compound selection ===
The polarity of the solute is crucial for the choice of stationary compound, which in an optimal case would have a similar polarity as the solute. Common stationary phases in open tubular columns are cyanopropylphenyl dimethyl polysiloxane, carbowax polyethyleneglycol, biscyanopropyl cyanopropylphenyl polysiloxane and diphenyl dimethyl polysiloxane. For packed columns more options are available.
=== Inlet types and flow rates ===
The choice of inlet type and injection technique depends on if the sample is in liquid, gas, adsorbed, or solid form, and on whether a solvent matrix is present that has to be vaporized. Dissolved samples can be introduced directly onto the column via a COC injector, if the conditions are well known; if a solvent matrix has to be vaporized and partially removed, a S/SL injector is used (most common injection technique); gaseous samples (e.g., air cylinders) are usually injected using a gas switching valve system; adsorbed samples (e.g., on adsorbent tubes) are introduced using either an external (on-line or off-line) desorption apparatus such as a purge-and-trap system, or are desorbed in the injector (SPME applications).
=== Sample size and injection technique ===
==== Sample injection ====
The real chromatographic analysis starts with the introduction of the sample onto the column. The development of capillary gas chromatography resulted in many practical problems with the injection technique. The technique of on-column injection, often used with packed columns, is usually not possible with capillary columns. In the injection system of the capillary gas chromatograph, the amount injected should not overload the column, and the width of the injected plug should be small compared to the spreading due to the chromatographic process. Failure to comply with this latter requirement will reduce the separation capability of the column. As a general rule, the volume injected, Vinj, and the volume of the detector cell, Vdet, should be about 1/10 of the volume occupied by the portion of sample containing the molecules of interest (analytes) when they exit the column.
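The 1/10 rule of thumb above can be turned into a quick estimate of the maximum allowable injection-plug (or detector-cell) volume, given the column flow and the base width of the narrowest peak; the numbers used below are illustrative assumptions:

```python
# Maximum injected-plug volume under the ~1/10 rule: the plug should occupy
# no more than about a tenth of the volume swept out while a peak elutes.

def max_plug_volume_ul(flow_ml_min, peak_width_s):
    peak_volume_ml = (flow_ml_min / 60.0) * peak_width_s  # volume the peak spans
    return 0.1 * peak_volume_ml * 1000.0                  # mL -> uL

# 1 mL/min column flow, narrowest peak 3 s wide at the base:
print(f"{max_plug_volume_ul(1.0, 3.0):.2f} uL")  # 5.00 uL
```

Narrower peaks or lower flows shrink this budget quickly, which is why capillary columns need split injection or other band-narrowing techniques rather than direct syringe injection.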
Some general requirements which a good injection technique should fulfill are that it should be possible to obtain the column's optimum separation efficiency, it should allow accurate and reproducible injections of small amounts of representative samples, it should induce no change in sample composition, it should not exhibit discrimination based on differences in boiling point, polarity, concentration or thermal/catalytic stability, and it should be applicable for trace analysis as well as for undiluted samples.
However, there are a number of problems inherent in the use of syringes for injection. Even the best syringes claim an accuracy of only 3%, and in unskilled hands the errors are much larger. The needle may cut small pieces of rubber from the septum as it injects sample through it. These can block the needle and prevent the syringe from filling the next time it is used, and it may not be obvious that this has happened. A fraction of the sample may become trapped in the rubber, to be released during subsequent injections; this can give rise to ghost peaks in the chromatogram. There may also be selective loss of the more volatile components of the sample by evaporation from the tip of the needle.
=== Column selection ===
The choice of column depends on the sample and the analytes to be measured. The main chemical attribute considered when choosing a column is the polarity of the mixture, but functional groups can play a large part in column selection. The polarity of the sample must closely match the polarity of the column stationary phase to increase resolution and separation while reducing run time. The separation and run time also depend on the film thickness (of the stationary phase), the column diameter and the column length.
=== Column temperature and temperature program ===
The column(s) in a GC are contained in an oven, the temperature of which is precisely controlled electronically. (When discussing the "temperature of the column," an analyst is technically referring to the temperature of the column oven. The distinction, however, is not important and will not subsequently be made in this article.)
The rate at which a sample passes through the column increases with the column temperature: the higher the column temperature, the faster the sample moves through the column. However, the faster a sample moves through the column, the less it interacts with the stationary phase, and the less well the analytes are separated.
In general, the column temperature is selected to compromise between the length of the analysis and the level of separation.
A method which holds the column at the same temperature for the entire analysis is called "isothermal". Most methods, however, increase the column temperature during the analysis; the initial temperature, rate of temperature increase (the temperature "ramp"), and final temperature together constitute the temperature program.
A temperature program allows analytes that elute early in the analysis to separate adequately, while shortening the time it takes for late-eluting analytes to pass through the column.
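A single-ramp temperature program of the kind described above can be sketched as a simple piecewise function; the hold time, ramp rate, and temperatures used as defaults below are illustrative, not a recommended method.

```python
def oven_temperature(t_min, t_initial=40.0, hold_min=2.0,
                     ramp_c_per_min=10.0, t_final=280.0):
    """Oven temperature (°C) at run time t_min for a single-ramp
    program: initial hold, linear ramp, then final hold.
    All default values here are illustrative."""
    if t_min <= hold_min:
        return t_initial
    return min(t_initial + ramp_c_per_min * (t_min - hold_min), t_final)

t10 = oven_temperature(10.0)   # 40 + 10*(10 - 2) = 120.0 °C
```

Real methods often chain several ramps and holds, but each segment follows the same hold-then-ramp logic.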
== Data reduction and analysis ==
=== Qualitative analysis ===
Generally, chromatographic data is presented as a graph of detector response (y-axis) against retention time (x-axis), which is called a chromatogram. This provides a spectrum of peaks representing the analytes that elute from the column at different times. Retention time can be used to identify analytes if the method conditions are constant. Also, the pattern of peaks will be constant for a sample under constant conditions and can identify complex mixtures of analytes. However, in most modern applications, the GC is connected to a mass spectrometer or similar detector that is capable of identifying the analytes represented by the peaks.
=== Quantitative analysis ===
The area under a peak is proportional to the amount of analyte present in the chromatogram. By calculating the area of the peak using the mathematical function of integration, the concentration of an analyte in the original sample can be determined. Concentration can be calculated using a calibration curve created by finding the response for a series of concentrations of analyte, or by determining the relative response factor of an analyte. The relative response factor is the expected ratio of an analyte to an internal standard (or external standard) and is calculated by finding the response of a known amount of analyte and a constant amount of internal standard (a chemical added to the sample at a constant concentration, with a distinct retention time to the analyte).
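The internal-standard calculation described above can be sketched in a few lines; the peak areas and concentrations in the example are invented for illustration.

```python
def relative_response_factor(area_analyte, conc_analyte,
                             area_istd, conc_istd):
    """RRF from a standard of known concentration run together
    with a constant amount of internal standard."""
    return (area_analyte / conc_analyte) / (area_istd / conc_istd)

def quantify(area_analyte, area_istd, conc_istd, rrf):
    """Analyte concentration in an unknown sample, using the RRF
    determined from the standards."""
    return area_analyte / (rrf * area_istd / conc_istd)

# Calibration: 5.0 ppm analyte gives area 1200; 10 ppm ISTD gives area 2000
rrf = relative_response_factor(1200.0, 5.0, 2000.0, 10.0)   # 1.2
# Unknown: analyte area 900 with the same ISTD level
c = quantify(900.0, 2000.0, 10.0, rrf)                      # 3.75 ppm
```

Using the ratio to an internal standard cancels much of the run-to-run variation in injection volume and detector response.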
In most modern GC-MS systems, computer software is used to draw and integrate peaks, and match MS spectra to library spectra.
== Applications ==
In general, substances that vaporize below 300 °C (and therefore are stable up to that temperature) can be measured quantitatively. The samples are also required to be salt-free; they should not contain ions. Very minute amounts of a substance can be measured, but it is often required that the sample must be measured in comparison to a sample containing the pure, suspected substance known as a reference standard.
Various temperature programs can be used to make the readings more meaningful; for example to differentiate between substances that behave similarly during the GC process.
Professionals working with GC analyze the content of a chemical product, for example in assuring the quality of products in the chemical industry; or measuring chemicals in soil, air or water, such as soil gases. GC is very accurate if used properly and can measure picomoles of a substance in a 1 ml liquid sample, or parts-per-billion concentrations in gaseous samples.
In practical courses at colleges, students sometimes become acquainted with the GC by studying the contents of lavender oil or measuring the ethylene that is secreted by Nicotiana benthamiana plants after artificial injury of their leaves. GC is also used to analyse hydrocarbons (C2–C40+). In a typical experiment, a packed column is used to separate the light gases, which are then detected with a TCD. The hydrocarbons are separated using a capillary column and detected with a FID. A complication with light-gas analyses that include H2 is that He, the most common and most sensitive inert carrier (sensitivity is proportional to molecular mass), has an almost identical thermal conductivity to hydrogen (it is the difference in thermal conductivity between two separate filaments in a Wheatstone bridge arrangement that shows when a component has been eluted). For this reason, dual-TCD instruments with a separate channel for hydrogen that uses nitrogen as a carrier are common. Argon is often used when analysing gas-phase chemistry reactions such as F-T synthesis so that a single carrier gas can be used rather than two separate ones. The sensitivity is reduced, but this is a trade-off for simplicity in the gas supply.
Gas chromatography is used extensively in forensic science. Disciplines as diverse as solid drug dose (pre-consumption form) identification and quantification, arson investigation, paint chip analysis, and toxicology cases, employ GC to identify and quantify various biological specimens and crime-scene evidence.
== See also ==
Analytical chemistry
Chromatography
Gas chromatography–mass spectrometry
Gas chromatography-olfactometry
High-performance liquid chromatography
Inverse gas chromatography
Proton transfer reaction mass spectrometry
Secondary electrospray ionization
Selected ion flow tube mass spectrometry
Standard addition
Thin layer chromatography
Unresolved complex mixture
== References ==
== External links ==
Media related to Gas chromatography at Wikimedia Commons
Chromatographic Columns in the Chemistry LibreTexts Library
Surface science is the study of physical and chemical phenomena that occur at the interface of two phases, including solid–liquid interfaces, solid–gas interfaces, solid–vacuum interfaces, and liquid–gas interfaces. It includes the fields of surface chemistry and surface physics. Some related practical applications are classed as surface engineering. The science encompasses concepts such as heterogeneous catalysis, semiconductor device fabrication, fuel cells, self-assembled monolayers, and adhesives. Surface science is closely related to interface and colloid science: interfacial chemistry and physics are common to both, though the methods differ. In addition, interface and colloid science studies macroscopic phenomena that occur in heterogeneous systems due to peculiarities of interfaces.
== History ==
The field of surface chemistry started with heterogeneous catalysis pioneered by Paul Sabatier on hydrogenation and Fritz Haber on the Haber process. Irving Langmuir was also one of the founders of this field, and the scientific journal on surface science, Langmuir, bears his name. The Langmuir adsorption equation is used to model monolayer adsorption in which all surface adsorption sites have the same affinity for the adsorbing species and do not interact with each other. In 1974 Gerhard Ertl described for the first time the adsorption of hydrogen on a palladium surface using a novel technique called LEED. Similar studies with platinum, nickel, and iron followed. Among the most recent developments in surface science are the advancements of Gerhard Ertl, winner of the 2007 Nobel Prize in Chemistry, specifically his investigation of the interaction between carbon monoxide molecules and platinum surfaces.
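The Langmuir adsorption equation mentioned above gives the fractional monolayer coverage θ = K·p / (1 + K·p) under its assumptions of identical, non-interacting sites; a one-function sketch:

```python
def langmuir_coverage(p, K):
    """Fractional monolayer coverage from the Langmuir adsorption
    equation, θ = K*p / (1 + K*p), assuming identical,
    non-interacting adsorption sites."""
    return K * p / (1.0 + K * p)

# Coverage is ≈ K*p at low pressure and approaches 1 at high pressure:
theta_low = langmuir_coverage(0.01, 1.0)    # ≈ 0.0099
theta_high = langmuir_coverage(100.0, 1.0)  # ≈ 0.99
```

The equilibrium constant K encodes the site affinity; stronger adsorption (larger K) saturates the monolayer at lower pressure.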
== Chemistry ==
Surface chemistry can be roughly defined as the study of chemical reactions at interfaces. It is closely related to surface engineering, which aims at modifying the chemical composition of a surface by incorporation of selected elements or functional groups that produce various desired effects or improvements in the properties of the surface or interface. Surface science is of particular importance to the fields of heterogeneous catalysis, electrochemistry, and geochemistry.
=== Catalysis ===
The adhesion of gas or liquid molecules to the surface is known as adsorption. This can be due to either chemisorption or physisorption, and the strength of molecular adsorption to a catalyst surface is critically important to the catalyst's performance (see Sabatier principle). However, it is difficult to study these phenomena in real catalyst particles, which have complex structures. Instead, well-defined single crystal surfaces of catalytically active materials such as platinum are often used as model catalysts. Multi-component materials systems are used to study interactions between catalytically active metal particles and supporting oxides; these are produced by growing ultra-thin films or particles on a single crystal surface.
Relationships between the composition, structure, and chemical behavior of these surfaces are studied using ultra-high vacuum techniques, including adsorption and temperature-programmed desorption of molecules, scanning tunneling microscopy, low energy electron diffraction, and Auger electron spectroscopy. Results can be fed into chemical models or used toward the rational design of new catalysts. Reaction mechanisms can also be clarified due to the atomic-scale precision of surface science measurements.
=== Electrochemistry ===
Electrochemistry is the study of processes driven through an applied potential at a solid–liquid or liquid–liquid interface. The behavior of an electrode–electrolyte interface is affected by the distribution of ions in the liquid phase next to the interface forming the electrical double layer. Adsorption and desorption events can be studied at atomically flat single-crystal surfaces as a function of applied potential, time and solution conditions using spectroscopy, scanning probe microscopy and surface X-ray scattering. These studies link traditional electrochemical techniques such as cyclic voltammetry to direct observations of interfacial processes.
=== Geochemistry ===
Geological phenomena such as iron cycling and soil contamination are controlled by the interfaces between minerals and their environment. The atomic-scale structure and chemical properties of mineral–solution interfaces are studied using in situ synchrotron X-ray techniques such as X-ray reflectivity, X-ray standing waves, and X-ray absorption spectroscopy as well as scanning probe microscopy. For example, studies of heavy metal or actinide adsorption onto mineral surfaces reveal molecular-scale details of adsorption, enabling more accurate predictions of how these contaminants travel through soils or disrupt natural dissolution–precipitation cycles.
== Physics ==
Surface physics can be roughly defined as the study of physical interactions that occur at interfaces. It overlaps with surface chemistry. Some of the topics investigated in surface physics include friction, surface states, surface diffusion, surface reconstruction, surface phonons and plasmons, epitaxy, the emission and tunneling of electrons, spintronics, and the self-assembly of nanostructures on surfaces. Techniques to investigate processes at surfaces include surface X-ray scattering, scanning probe microscopy, surface-enhanced Raman spectroscopy and X-ray photoelectron spectroscopy.
== Analysis techniques ==
The study and analysis of surfaces involves both physical and chemical analysis techniques.
Several modern methods probe the topmost 1–10 nm of surfaces exposed to vacuum. These include angle-resolved photoemission spectroscopy (ARPES), X-ray photoelectron spectroscopy (XPS), Auger electron spectroscopy (AES), low-energy electron diffraction (LEED), electron energy loss spectroscopy (EELS), thermal desorption spectroscopy (TPD), ion scattering spectroscopy (ISS), secondary ion mass spectrometry, dual-polarization interferometry, and other surface analysis methods included in the list of materials analysis methods. Many of these techniques require vacuum, as they rely on the detection of electrons or ions emitted from the surface under study. Moreover, ultra-high vacuum, in the range of 10⁻⁷ pascal or better, is generally necessary to reduce surface contamination by residual gas by lowering the number of molecules reaching the sample over a given time period. At 0.1 mPa (10⁻⁶ torr) partial pressure of a contaminant and standard temperature, it takes only on the order of 1 second to cover a surface with a one-to-one monolayer of contaminant to surface atoms, so much lower pressures are needed for measurements. This follows from an order-of-magnitude estimate of the (number) specific surface area of materials and the impingement-rate formula from the kinetic theory of gases.
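The order-of-magnitude estimate mentioned at the end of the paragraph can be reproduced in a few lines. The surface-site density (~10¹⁹ m⁻²) and unit sticking coefficient below are assumptions made for the estimate, not measured values.

```python
import math

def monolayer_time(pressure_pa, temp_k=300.0, molar_mass_kg=0.028,
                   site_density=1e19, sticking=1.0):
    """Time (s) to form one monolayer, from the kinetic-theory
    impingement rate Z = P / sqrt(2*pi*m*kB*T). Assumes every
    impinging molecule sticks and ~1e19 sites per m^2."""
    kB = 1.380649e-23        # Boltzmann constant, J/K
    NA = 6.02214076e23       # Avogadro constant, 1/mol
    m = molar_mass_kg / NA   # mass of one molecule, kg
    z = pressure_pa / math.sqrt(2.0 * math.pi * m * kB * temp_k)
    return site_density / (sticking * z)

# At 0.1 mPa (1e-6 torr) of N2 and room temperature:
t = monolayer_time(1.0e-4)   # on the order of a second
```

Dropping the pressure to 10⁻⁷ Pa stretches the same estimate to roughly an hour, which is why ultra-high vacuum is required for clean-surface measurements.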
Purely optical techniques can be used to study interfaces under a wide variety of conditions. Reflection-absorption infrared, dual polarisation interferometry, surface-enhanced Raman spectroscopy and sum frequency generation spectroscopy can be used to probe solid–vacuum as well as solid–gas, solid–liquid, and liquid–gas surfaces. Multi-parametric surface plasmon resonance works in solid–gas, solid–liquid, liquid–gas surfaces and can detect even sub-nanometer layers. It probes the interaction kinetics as well as dynamic structural changes such as liposome collapse or swelling of layers in different pH. Dual-polarization interferometry is used to quantify the order and disruption in birefringent thin films. This has been used, for example, to study the formation of lipid bilayers and their interaction with membrane proteins.
Acoustic techniques, such as quartz crystal microbalance with dissipation monitoring, are used for time-resolved measurements of solid–vacuum, solid–gas and solid–liquid interfaces. The method allows for analysis of molecule–surface interactions as well as structural changes and viscoelastic properties of the adlayer.
X-ray scattering and spectroscopy techniques are also used to characterize surfaces and interfaces. While some of these measurements can be performed using laboratory X-ray sources, many require the high intensity and energy tunability of synchrotron radiation. X-ray crystal truncation rods (CTR) and X-ray standing wave (XSW) measurements probe changes in surface and adsorbate structures with sub-Ångström resolution. Surface-extended X-ray absorption fine structure (SEXAFS) measurements reveal the coordination structure and chemical state of adsorbates. Grazing-incidence small angle X-ray scattering (GISAXS) yields the size, shape, and orientation of nanoparticles on surfaces. The crystal structure and texture of thin films can be investigated using grazing-incidence X-ray diffraction (GIXD, GIXRD).
X-ray photoelectron spectroscopy (XPS) is a standard tool for measuring the chemical states of surface species and for detecting the presence of surface contamination. Surface sensitivity is achieved by detecting photoelectrons with kinetic energies of about 10–1000 eV, which have corresponding inelastic mean free paths of only a few nanometers. This technique has been extended to operate at near-ambient pressures (ambient pressure XPS, AP-XPS) to probe more realistic gas–solid and liquid–solid interfaces. Performing XPS with hard X-rays at synchrotron light sources yields photoelectrons with kinetic energies of several keV (hard X-ray photoelectron spectroscopy, HAXPES), enabling access to chemical information from buried interfaces.
Modern physical analysis methods include scanning-tunneling microscopy (STM) and a family of methods descended from it, including atomic force microscopy (AFM). These microscopies have considerably increased the ability of surface scientists to measure the physical structure of many surfaces. For example, they make it possible to follow reactions at the solid–gas interface in real space, if those proceed on a time scale accessible by the instrument.
== See also ==
== References ==
== Further reading ==
Kolasinski, Kurt W. (2012-04-30). Surface Science: Foundations of Catalysis and Nanoscience (3 ed.). Wiley. ISBN 978-1119990352.
Attard, Gary; Barnes, Colin (January 1998). Surfaces. Oxford Chemistry Primers. ISBN 978-0198556862.
== External links ==
"Ram Rao Materials and Surface Science", a video from the Vega Science Trust
Surface Chemistry Discoveries
Surface Metrology Guide
Kinks are deviations of a dislocation defect along its glide plane. In edge dislocations, the constant glide plane allows short regions of the dislocation to turn, converting into screw dislocations and producing kinks. Screw dislocations have rotatable glide planes, thus kinks that are generated along screw dislocations act as an anchor for the glide plane. Kinks differ from jogs in that kinks are strictly parallel to the glide plane, while jogs shift away from the glide plane.
== Energy ==
Pure-edge and screw dislocations are conceptually straight in order to minimize their length and, through it, the strain energy of the system. Low-angle mixed dislocations, on the other hand, can be thought of as primarily edge dislocations with screw kinks in a staircase structure (or vice versa), switching between straight pure-edge and pure-screw segments. In reality, kinks are not sharp transitions. Both the total length of the dislocation and the kink angle depend on the free energy of the system. The primary dislocation regions lie in Peierls–Nabarro potential minima, while the kink requires additional energy in the form of an energy peak. To minimize free energy, the kink equilibrates at a certain length and angle. Large energy peaks create short but sharp kinks in order to minimize the dislocation length within the high-energy region, while small energy peaks create long, drawn-out kinks in order to minimize total dislocation length.
== Kink movement ==
Kinks facilitate the movement of dislocations along their glide planes under shear stress and are directly responsible for the plastic deformation of crystals. When a crystal undergoes shear force, e.g. when cut with scissors, the applied shear force causes dislocations to move through the material, displacing atoms and deforming the material. The entire dislocation does not move at once – rather, the dislocation produces a pair of kinks, which then propagate in opposite directions down the length of the dislocation, eventually shifting the entire dislocation by a Burgers vector. The velocity of dislocations through kink propagation is therefore also limited by the nucleation frequency of kinks, as a lack of kinks removes the mechanism by which dislocations move.
As the shear stress grows very large, the velocity at which dislocations migrate is limited by the physical properties of the material, saturating at the material's sound velocity. At lower shear stresses, the dislocation velocity follows a power law in the applied shear stress:

v0 = Cτ^p,

where τ is the applied shear stress, and C and p are experimentally determined constants.
The above equation gives the upper limit on dislocation velocity. Interactions of the moving dislocation with its environment, particularly with other defects such as jogs and precipitates, produce drag and slow the dislocation:

vD = v0 e^(−D/τ),

where D is the drag parameter of the crystal.
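Combining the two expressions shows how drag modifies the kink-limited velocity; the numerical values below are invented for illustration, since C, p and D must be measured for each crystal.

```python
import math

def dislocation_velocity(tau, C, p, D):
    """Drag-limited dislocation velocity: v_D = v0 * exp(-D/tau),
    with the power-law upper limit v0 = C * tau**p.
    C, p and D are material constants (illustrative here)."""
    v0 = C * tau ** p
    return v0 * math.exp(-D / tau)

# Drag strongly suppresses motion at low stress and matters
# less as the applied shear stress rises:
v_low = dislocation_velocity(tau=1.0, C=1.0, p=2.0, D=5.0)
v_high = dislocation_velocity(tau=10.0, C=1.0, p=2.0, D=5.0)
```

The exponential factor dominates at small τ, so even modest drag parameters can pin dislocations almost completely at low stress.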
Kink movement is strongly dependent on temperature as well. Higher thermal energy assists in the generation of kinks, as well as increasing atomic vibrations and promoting dislocation motion.
Kinks may also form under compressive stress due to the buckling of crystal planes into a cavity. At high compressive forces, masses of dislocations move at once. Kinks align with each other, forming walls of kinks that propagate all at once. At sufficient forces, the tensile force produced by the dislocation core exceeds the fracture stress of the material, combining kink boundaries into sharp kinks and de-laminating the basal planes of the crystal.
== References ==
A crystallographic defect is an interruption of the regular patterns of arrangement of atoms or molecules in crystalline solids. The positions and orientations of particles, which repeat at fixed distances determined by the unit cell parameters, define a periodic crystal structure, but this structure is usually imperfect. Several types of defects are commonly distinguished: point defects, line defects, planar defects, and bulk defects. Homotopy theory provides a mathematical method for characterizing them topologically.
== Point defects ==
Point defects are defects that occur only at or around a single lattice point. They are not extended in space in any dimension. Strict limits for how small a point defect is are generally not defined explicitly. However, these defects typically involve at most a few extra or missing atoms. Larger defects in an ordered structure are usually considered dislocation loops. For historical reasons, many point defects, especially in ionic crystals, are called centers: for example, a vacancy in many ionic solids is called a luminescence center, a color center, or F-center. These defects permit ionic transport through crystals, leading to electrochemical reactions. They are frequently specified using Kröger–Vink notation.
Vacancy defects are lattice sites which would be occupied in a perfect crystal, but are vacant. If a neighboring atom moves to occupy the vacant site, the vacancy moves in the opposite direction to the site which used to be occupied by the moving atom. The stability of the surrounding crystal structure guarantees that the neighboring atoms will not simply collapse around the vacancy. In some materials, neighboring atoms actually move away from a vacancy, because they experience attraction from atoms in the surroundings. A vacancy (or pair of vacancies in an ionic solid) is sometimes called a Schottky defect.
Interstitial defects are atoms that occupy a site in the crystal structure at which there is usually not an atom. They are generally high energy configurations. Small atoms (mostly impurities) in some crystals can occupy interstices without high energy, such as hydrogen in palladium.
A nearby pair of a vacancy and an interstitial is often called a Frenkel defect or Frenkel pair. This is caused when an ion moves into an interstitial site and creates a vacancy.
Due to fundamental limitations of material purification methods, materials are never 100% pure, which by definition induces defects in crystal structure. In the case of an impurity, the atom is often incorporated at a regular atomic site in the crystal structure. This is neither a vacant site nor is the atom on an interstitial site and it is called a substitutional defect. The atom is not supposed to be anywhere in the crystal, and is thus an impurity. In some cases where the radius of the substitutional atom (ion) is substantially smaller than that of the atom (ion) it is replacing, its equilibrium position can be shifted away from the lattice site. These types of substitutional defects are often referred to as off-center ions. There are two different types of substitutional defects: Isovalent substitution and aliovalent substitution. Isovalent substitution is where the ion that is substituting the original ion is of the same oxidation state as the ion it is replacing. Aliovalent substitution is where the ion that is substituting the original ion is of a different oxidation state than the ion it is replacing. Aliovalent substitutions change the overall charge within the ionic compound, but the ionic compound must be neutral. Therefore, a charge compensation mechanism is required. Hence either one of the metals is partially or fully oxidised or reduced, or ion vacancies are created.
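As a worked example of vacancy-compensated aliovalent substitution: each Ca²⁺ placed on a Na⁺ site in NaCl carries one unit of excess positive charge, which one Na⁺ vacancy (effective charge −1) neutralizes. A toy calculation, assuming vacancy compensation only (redox of the host, as noted above, is the alternative mechanism):

```python
def cation_vacancies_per_dopant(host_charge, dopant_charge):
    """Host-cation vacancies needed to compensate one aliovalent
    dopant cation on a host site, assuming vacancy compensation.
    Each host-cation vacancy has effective charge -host_charge."""
    excess = dopant_charge - host_charge   # effective charge of the dopant defect
    return excess / host_charge

# Ca2+ on a Na+ site in NaCl: one Na+ vacancy per Ca2+ dopant
n = cation_vacancies_per_dopant(host_charge=1, dopant_charge=2)   # 1.0
```

The same arithmetic gives two Na⁺ vacancies per Al³⁺ dopant, keeping the ionic compound neutral overall.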
Antisite defects occur in an ordered alloy or compound when atoms of different type exchange positions. For example, some alloys have a regular structure in which every other atom is a different species; for illustration assume that type A atoms sit on the corners of a cubic lattice, and type B atoms sit in the center of the cubes. If one cube has an A atom at its center, the atom is on a site usually occupied by a B atom, and is thus an antisite defect. This is neither a vacancy nor an interstitial, nor an impurity.
Topological defects are regions in a crystal where the normal chemical bonding environment is topologically different from the surroundings. For instance, in a perfect sheet of graphite (graphene) all atoms are in rings containing six atoms. If the sheet contains regions where the number of atoms in a ring is different from six, while the total number of atoms remains the same, a topological defect has formed. An example is the Stone–Wales defect in nanotubes, which consists of two adjacent 5-membered and two 7-membered atom rings.
Amorphous solids may contain defects. These are naturally somewhat hard to define, but sometimes their nature can be quite easily understood. For instance, in ideally bonded amorphous silica all Si atoms have 4 bonds to O atoms and all O atoms have 2 bonds to Si atom. Thus e.g. an O atom with only one Si bond (a dangling bond) can be considered a defect in silica. Moreover, defects can also be defined in amorphous solids based on empty or densely packed local atomic neighbourhoods, and the properties of such 'defects' can be shown to be similar to normal vacancies and interstitials in crystals.
Complexes can form between different kinds of point defects. For example, if a vacancy encounters an impurity, the two may bind together if the impurity is too large for the lattice. Interstitials can form 'split interstitial' or 'dumbbell' structures where two atoms effectively share an atomic site, resulting in neither atom actually occupying the site.
== Line defects ==
Line defects can be described by gauge theories.
Dislocations are linear defects, around which the atoms of the crystal lattice are misaligned.
There are two basic types of dislocations, the edge dislocation and the screw dislocation. "Mixed" dislocations, combining aspects of both types, are also common.
Edge dislocations are caused by the termination of a plane of atoms in the middle of a crystal. In such a case, the adjacent planes are not straight, but instead bend around the edge of the terminating plane so that the crystal structure is perfectly ordered on either side. The analogy with a stack of paper is apt: if half a piece of paper is inserted in a stack of paper, the defect in the stack is only noticeable at the edge of the half sheet.
The screw dislocation is more difficult to visualise, but basically comprises a structure in which a helical path is traced around the linear defect (dislocation line) by the atomic planes of atoms in the crystal lattice.
The presence of dislocation results in lattice strain (distortion). The direction and magnitude of such distortion is expressed in terms of a Burgers vector (b). For an edge type, b is perpendicular to the dislocation line, whereas in the cases of the screw type it is parallel. In metallic materials, b is aligned with close-packed crystallographic directions and its magnitude is equivalent to one interatomic spacing.
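For a concrete instance of a Burgers vector magnitude: in FCC metals the perfect dislocation is b = (a/2)<110>, half a face diagonal, so |b| = a/√2 — one interatomic spacing along the close-packed direction. A quick check for aluminium, taking the lattice parameter as ≈0.405 nm:

```python
import math

def burgers_magnitude_fcc(a):
    """|b| for the (a/2)<110> perfect dislocation in an FCC metal:
    half the face diagonal, i.e. a / sqrt(2)."""
    return a / math.sqrt(2.0)

# Aluminium, a ≈ 0.405 nm:
b = burgers_magnitude_fcc(0.405)   # ≈ 0.286 nm
```

This matches the statement above that |b| equals one interatomic spacing along a close-packed direction.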
Dislocations can move if the atoms from one of the surrounding planes break their bonds and rebond with the atoms at the terminating edge.
It is the presence of dislocations and their ability to readily move (and interact) under the influence of stresses induced by external loads that leads to the characteristic malleability of metallic materials.
Dislocations can be observed using transmission electron microscopy, field ion microscopy and atom probe techniques.
Deep-level transient spectroscopy has been used for studying the electrical activity of dislocations in semiconductors, mainly silicon.
Disclinations are line defects corresponding to "adding" or "subtracting" an angle around a line. Basically, this means that if you track the crystal orientation around the line defect, you get a rotation. Usually, they were thought to play a role only in liquid crystals, but recent developments suggest that they might have a role also in solid materials, e.g. leading to the self-healing of cracks.
== Planar defects ==
Grain boundaries occur where the crystallographic direction of the lattice abruptly changes. This usually occurs when two crystals begin growing separately and then meet.
Antiphase boundaries occur in ordered alloys: in this case, the crystallographic direction remains the same, but each side of the boundary has an opposite phase: For example, if the ordering is usually ABABABAB (hexagonal close-packed crystal), an antiphase boundary takes the form of ABABBABA.
Stacking faults occur in a number of crystal structures, but the common example is in close-packed structures. They are formed by a local deviation of the stacking sequence of layers in a crystal. An example would be the ABABCABAB stacking sequence.
A twin boundary is a defect that introduces a plane of mirror symmetry in the ordering of a crystal. For example, in cubic close-packed crystals, the stacking sequence of a twin boundary would be ABCABCBACBA.
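The stacking sequences quoted above can be compared against an ideal periodic repetition to locate where a fault begins; this is a toy string comparison, not a crystallographic analysis tool. Note that a single inserted layer shifts the register of every layer after it, so all subsequent positions flag as deviations:

```python
def stacking_deviations(sequence, period):
    """Indices where a stacking sequence deviates from perfect
    periodic repetition of its first `period` layers."""
    motif = sequence[:period]
    return [i for i, layer in enumerate(sequence)
            if layer != motif[i % period]]

# Perfect FCC stacking ABCABC... shows no deviations:
perfect = stacking_deviations("ABCABCABC", 3)        # []
# The faulted HCP-like sequence from the text: the inserted C at
# index 4 shifts the register of everything that follows it.
faults = stacking_deviations("ABABCABAB", 2)         # [4, 5, 6, 7, 8]
```

A twin boundary would instead show up as a mirror reversal of the sequence about one layer, as in ABCABCBACBA.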
On planes of single crystals, steps between atomically flat terraces can also be regarded as planar defects. It has been shown that such defects and their geometry have significant influence on the adsorption of organic molecules.
== Bulk defects ==
Three-dimensional macroscopic or bulk defects, such as pores, cracks, or inclusions
Voids — small regions where there are no atoms, and which can be thought of as clusters of vacancies
Impurities can cluster together to form small regions of a different phase. These are often called precipitates.
== Mathematical classification methods ==
A successful mathematical classification method for physical lattice defects, which works not only with the theory of dislocations and other defects in crystals but also, e.g., for disclinations in liquid crystals and for excitations in superfluid 3He, is homotopy theory, a branch of topology.
== Computer simulation methods ==
Density functional theory, classical molecular dynamics and kinetic Monte Carlo simulations are widely used to study the properties of defects in solids with computer simulations.
Simulating jamming of hard spheres of different sizes and/or in containers with non-commensurable sizes using the Lubachevsky–Stillinger algorithm can be an effective technique for demonstrating some types of crystallographic defects.
== See also ==
Bjerrum defect
Crystallographic defects in diamond
Kröger–Vink notation
F-center
== References ==
== Further reading ==
Hagen Kleinert, Gauge Fields in Condensed Matter, Vol. II, "Stresses and defects", pp. 743–1456, World Scientific (Singapore, 1989); Paperback ISBN 9971-5-0210-0
Hermann Schmalzried: Solid State Reactions. Verlag Chemie, Weinheim 1981, ISBN 3-527-25872-8. | Wikipedia/Crystallographic_defect |
Deoxyribonucleic acid ( ; DNA) is a polymer composed of two polynucleotide chains that coil around each other to form a double helix. The polymer carries genetic instructions for the development, functioning, growth and reproduction of all known organisms and many viruses. DNA and ribonucleic acid (RNA) are nucleic acids. Alongside proteins, lipids and complex carbohydrates (polysaccharides), nucleic acids are one of the four major types of macromolecules that are essential for all known forms of life.
The two DNA strands are known as polynucleotides as they are composed of simpler monomeric units called nucleotides. Each nucleotide is composed of one of four nitrogen-containing nucleobases (cytosine [C], guanine [G], adenine [A] or thymine [T]), a sugar called deoxyribose, and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds (known as the phosphodiester linkage) between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. The nitrogenous bases of the two separate polynucleotide strands are bound together, according to base pairing rules (A with T and C with G), with hydrogen bonds to make double-stranded DNA. The complementary nitrogenous bases are divided into two groups, the single-ringed pyrimidines and the double-ringed purines. In DNA, the pyrimidines are thymine and cytosine; the purines are adenine and guanine.
Both strands of double-stranded DNA store the same biological information. This information is replicated when the two strands separate. A large part of DNA (more than 98% for humans) is non-coding, meaning that these sections do not serve as patterns for protein sequences. The two strands of DNA run in opposite directions to each other and are thus antiparallel. Attached to each sugar is one of four types of nucleobases (or bases). It is the sequence of these four nucleobases along the backbone that encodes genetic information. RNA strands are created using DNA strands as a template in a process called transcription, where DNA bases are exchanged for their corresponding bases except in the case of thymine (T), for which RNA substitutes uracil (U). Under the genetic code, these RNA strands specify the sequence of amino acids within proteins in a process called translation.
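The base-exchange rule of transcription described above (each DNA base replaced by its RNA partner, with uracil standing in for thymine's partner) can be sketched as a toy function; the example sequence is hypothetical.

```python
# Transcription sketch: the template DNA strand is read and each base is
# replaced by its RNA complement; adenine on the template pairs with
# uracil (U) in RNA rather than thymine (T).
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_strand: str) -> str:
    """Return the mRNA made from a DNA template strand; the template is
    read 3'->5', so the mRNA (written 5'->3') is built from the reverse."""
    return "".join(DNA_TO_RNA[b] for b in reversed(template_strand))

print(transcribe("TACGGT"))  # -> ACCGUA
```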
Within eukaryotic cells, DNA is organized into long structures called chromosomes. Before typical cell division, these chromosomes are duplicated in the process of DNA replication, providing a complete set of chromosomes for each daughter cell. Eukaryotic organisms (animals, plants, fungi and protists) store most of their DNA inside the cell nucleus as nuclear DNA, and some in the mitochondria as mitochondrial DNA or in chloroplasts as chloroplast DNA. In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm, in circular chromosomes. Within eukaryotic chromosomes, chromatin proteins, such as histones, compact and organize DNA. These compacting structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed.
== Properties ==
DNA is a long polymer made from repeating units called nucleotides. The structure of DNA is dynamic along its length, being capable of coiling into tight loops and other shapes. In all species it is composed of two helical chains, bound to each other by hydrogen bonds. Both chains are coiled around the same axis, and have the same pitch of 34 ångströms (3.4 nm). The pair of chains has a radius of 10 Å (1.0 nm). According to another study, when measured in a different solution, the DNA chain measured 22–26 Å (2.2–2.6 nm) wide, and one nucleotide unit measured 3.3 Å (0.33 nm) long. The buoyant density of most DNA is 1.7 g/cm³.
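The quoted dimensions are mutually consistent: with a helical pitch of 34 Å per turn and a rise of roughly 3.3–3.4 Å per nucleotide, one turn holds about ten base pairs. A quick arithmetic check (using the 3.4 Å rise figure):

```python
# Helix geometry check: pitch per turn divided by rise per nucleotide
# gives the number of base pairs per helical turn.
pitch_A = 34.0   # helical pitch, ångströms per turn (from the text)
rise_A = 3.4     # rise per nucleotide, ångströms
bp_per_turn = pitch_A / rise_A
print(bp_per_turn)  # -> 10.0
```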
DNA does not usually exist as a single strand, but instead as a pair of strands that are held tightly together. These two long strands coil around each other, in the shape of a double helix. The nucleotide contains both a segment of the backbone of the molecule (which holds the chain together) and a nucleobase (which interacts with the other DNA strand in the helix). A nucleobase linked to a sugar is called a nucleoside, and a base linked to a sugar and to one or more phosphate groups is called a nucleotide. A biopolymer comprising multiple linked nucleotides (as in DNA) is called a polynucleotide.
The backbone of the DNA strand is made from alternating phosphate and sugar groups. The sugar in DNA is 2-deoxyribose, which is a pentose (five-carbon) sugar. The sugars are joined by phosphate groups that form phosphodiester bonds between the third and fifth carbon atoms of adjacent sugar rings. These are known as the 3′-end (three prime end), and 5′-end (five prime end) carbons, the prime symbol being used to distinguish these carbon atoms from those of the base to which the deoxyribose forms a glycosidic bond.
Therefore, any DNA strand normally has one end at which there is a phosphate group attached to the 5′ carbon of a ribose (the 5′ phosphoryl) and another end at which there is a free hydroxyl group attached to the 3′ carbon of a ribose (the 3′ hydroxyl). The orientation of the 3′ and 5′ carbons along the sugar-phosphate backbone confers directionality (sometimes called polarity) to each DNA strand. In a nucleic acid double helix, the direction of the nucleotides in one strand is opposite to their direction in the other strand: the strands are antiparallel. The asymmetric ends of DNA strands are said to have a directionality of five prime end (5′ ), and three prime end (3′), with the 5′ end having a terminal phosphate group and the 3′ end a terminal hydroxyl group. One major difference between DNA and RNA is the sugar, with the 2-deoxyribose in DNA being replaced by the related pentose sugar ribose in RNA.
The DNA double helix is stabilized primarily by two forces: hydrogen bonds between nucleotides and base-stacking interactions among aromatic nucleobases. The four bases found in DNA are adenine (A), cytosine (C), guanine (G) and thymine (T). These four bases are attached to the sugar-phosphate to form the complete nucleotide, as shown for adenosine monophosphate. Adenine pairs with thymine and guanine pairs with cytosine, forming A-T and G-C base pairs.
=== Nucleobase classification ===
The nucleobases are classified into two types: the purines, A and G, which are fused five- and six-membered heterocyclic compounds, and the pyrimidines, the six-membered rings C and T. A fifth pyrimidine nucleobase, uracil (U), usually takes the place of thymine in RNA and differs from thymine by lacking a methyl group on its ring. In addition to RNA and DNA, many artificial nucleic acid analogues have been created to study the properties of nucleic acids, or for use in biotechnology.
=== Non-canonical bases ===
Modified bases occur in DNA. The first of these recognized was 5-methylcytosine, which was found in the genome of Mycobacterium tuberculosis in 1925. The reason for the presence of these noncanonical bases in bacterial viruses (bacteriophages) is to avoid the restriction enzymes present in bacteria. This enzyme system acts at least in part as a molecular immune system protecting bacteria from infection by viruses. Modifications of the bases cytosine and adenine, the most commonly modified DNA bases, play vital roles in the epigenetic control of gene expression in plants and animals.
A number of noncanonical bases are known to occur in DNA. Most of these are modifications of the canonical bases plus uracil.
Modified Adenine
N6-carbamoyl-methyladenine
N6-Methyladenine
Modified Guanine
7-Deazaguanine
7-Methylguanine
Modified Cytosine
N4-Methylcytosine
5-Carboxylcytosine
5-Formylcytosine
5-Glycosylhydroxymethylcytosine
5-Hydroxycytosine
5-Methylcytosine
Modified Thymidine
α-Glutamylthymidine
α-Putrescinylthymine
Uracil and modifications
Base J
Uracil
5-Dihydroxypentauracil
5-Hydroxymethyldeoxyuracil
Others
Deoxyarchaeosine
2,6-Diaminopurine (2-Aminoadenine)
=== Grooves ===
Twin helical strands form the DNA backbone. Another double helix may be found tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not symmetrically located with respect to each other, the grooves are unequally sized. The major groove is 22 ångströms (2.2 nm) wide, while the minor groove is 12 Å (1.2 nm) in width. Due to the larger width of the major groove, the edges of the bases are more accessible in the major groove than in the minor groove. As a result, proteins such as transcription factors that can bind to specific sequences in double-stranded DNA usually make contact with the sides of the bases exposed in the major groove. This situation varies in unusual conformations of DNA within the cell (see below), but the major and minor grooves are always named to reflect the differences in width that would be seen if the DNA was twisted back into the ordinary B form.
=== Base pairing ===
In a DNA double helix, each type of nucleobase on one strand bonds with just one type of nucleobase on the other strand. This is called complementary base pairing. Purines form hydrogen bonds to pyrimidines, with adenine bonding only to thymine in two hydrogen bonds, and cytosine bonding only to guanine in three hydrogen bonds. This arrangement of two nucleotides binding together across the double helix (from six-carbon ring to six-carbon ring) is called a Watson-Crick base pair. DNA with high GC-content is more stable than DNA with low GC-content. A Hoogsteen base pair (hydrogen bonding the 6-carbon ring to the 5-carbon ring) is a rare variation of base-pairing. As hydrogen bonds are not covalent, they can be broken and rejoined relatively easily. The two strands of DNA in a double helix can thus be pulled apart like a zipper, either by a mechanical force or high temperature. As a result of this base pair complementarity, all the information in the double-stranded sequence of a DNA helix is duplicated on each strand, which is vital in DNA replication. This reversible and specific interaction between complementary base pairs is critical for all the functions of DNA in organisms.
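The pairing rules and hydrogen-bond counts described above (A–T with two bonds, G–C with three) can be captured in a short illustrative sketch; the example strand is hypothetical.

```python
# Watson-Crick complementarity: A pairs with T (two hydrogen bonds),
# G pairs with C (three hydrogen bonds).
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
H_BONDS = {frozenset("AT"): 2, frozenset("GC"): 3}

def complement(strand: str) -> str:
    """Return the complementary strand, base by base."""
    return "".join(COMPLEMENT[b] for b in strand)

def hydrogen_bonds(strand: str) -> int:
    """Count the inter-strand hydrogen bonds a strand forms in a duplex."""
    return sum(H_BONDS[frozenset((b, COMPLEMENT[b]))] for b in strand)

print(complement("ATGC"))      # -> TACG
print(hydrogen_bonds("ATGC"))  # 2 + 2 + 3 + 3 = 10
```

The bond count makes the GC-content stability argument visible: each G–C pair contributes one more hydrogen bond than an A–T pair (although, as the next subsection notes, base stacking is the larger contributor to duplex stability).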
==== ssDNA vs. dsDNA ====
Most DNA molecules are actually two polymer strands, bound together in a helical fashion by noncovalent bonds; this double-stranded (dsDNA) structure is maintained largely by the intrastrand base stacking interactions, which are strongest for G,C stacks. The two strands can come apart—a process known as melting—to form two single-stranded DNA (ssDNA) molecules. Melting occurs at high temperatures, low salt and high pH (low pH also melts DNA, but since DNA is unstable due to acid depurination, low pH is rarely used).
The stability of the dsDNA form depends not only on the GC-content (% G,C basepairs) but also on sequence (since stacking is sequence specific) and also length (longer molecules are more stable). The stability can be measured in various ways; a common way is the melting temperature (also called Tm value), which is the temperature at which 50% of the double-strand molecules are converted to single-strand molecules; melting temperature is dependent on ionic strength and the concentration of DNA. As a result, it is both the percentage of GC base pairs and the overall length of a DNA double helix that determines the strength of the association between the two strands of DNA. Long DNA helices with a high GC-content have more strongly interacting strands, while short helices with high AT content have more weakly interacting strands. In biology, parts of the DNA double helix that need to separate easily, such as the TATAAT Pribnow box in some promoters, tend to have a high AT content, making the strands easier to pull apart.
In the laboratory, the strength of this interaction can be measured by finding the melting temperature Tm necessary to break half of the hydrogen bonds. When all the base pairs in a DNA double helix melt, the strands separate and exist in solution as two entirely independent molecules. These single-stranded DNA molecules have no single common shape, but some conformations are more stable than others.
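The qualitative dependence of Tm on GC-content can be illustrated with the Wallace rule, a rough empirical estimate for short oligonucleotides; it is not mentioned in the text above and ignores the ionic-strength and concentration effects just described, so treat it purely as a sketch.

```python
def wallace_tm(seq: str) -> float:
    """Rough melting-temperature estimate for short oligos (< ~14 nt):
    Tm ~ 2 degC per A/T base + 4 degC per G/C base. G/C bases contribute
    more, reflecting their three hydrogen bonds and stronger stacking."""
    at = sum(seq.count(b) for b in "AT")
    gc = sum(seq.count(b) for b in "GC")
    return 2.0 * at + 4.0 * gc

print(wallace_tm("TATAAT"))  # the AT-rich Pribnow box: 2 * 6 = 12.0
print(wallace_tm("GCGCGC"))  # a GC-rich sequence of equal length: 24.0
```

Even this crude rule reproduces the biological point from the text: the AT-rich Pribnow box melts far more easily than a GC-rich sequence of the same length.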
=== Amount ===
In humans, the total female diploid nuclear genome per cell spans 6.37 gigabase pairs (Gbp), is 208.23 cm long and weighs 6.51 picograms (pg). Male values are 6.27 Gbp, 205.00 cm, 6.41 pg. Each DNA polymer can contain hundreds of millions of nucleotides, such as in chromosome 1. Chromosome 1 is the largest human chromosome with approximately 220 million base pairs, and would be 85 mm long if straightened.
In eukaryotes, in addition to nuclear DNA, there is also mitochondrial DNA (mtDNA) which encodes certain proteins used by the mitochondria. The mtDNA is usually relatively small in comparison to the nuclear DNA. For example, the human mitochondrial DNA forms closed circular molecules, each of which contains 16,569 DNA base pairs, with each such molecule normally containing a full set of the mitochondrial genes. Each human mitochondrion contains, on average, approximately 5 such mtDNA molecules. Each human cell contains approximately 100 mitochondria, giving a total number of mtDNA molecules per human cell of approximately 500. However, the amount of mitochondria per cell also varies by cell type, and an egg cell can contain 100,000 mitochondria, corresponding to up to 1,500,000 copies of the mitochondrial genome (constituting up to 90% of the DNA of the cell).
=== Sense and antisense ===
A DNA sequence is called a "sense" sequence if it is the same as that of a messenger RNA copy that is translated into protein. The sequence on the opposite strand is called the "antisense" sequence. Both sense and antisense sequences can exist on different parts of the same strand of DNA (i.e. both strands can contain both sense and antisense sequences). In both prokaryotes and eukaryotes, antisense RNA sequences are produced, but the functions of these RNAs are not entirely clear. One proposal is that antisense RNAs are involved in regulating gene expression through RNA-RNA base pairing.
A few DNA sequences in prokaryotes and eukaryotes, and more in plasmids and viruses, blur the distinction between sense and antisense strands by having overlapping genes. In these cases, some DNA sequences do double duty, encoding one protein when read along one strand, and a second protein when read in the opposite direction along the other strand. In bacteria, this overlap may be involved in the regulation of gene transcription, while in viruses, overlapping genes increase the amount of information that can be encoded within the small viral genome.
=== Supercoiling ===
DNA can be twisted like a rope in a process called DNA supercoiling. With DNA in its "relaxed" state, a strand usually circles the axis of the double helix once every 10.4 base pairs, but if the DNA is twisted the strands become more tightly or more loosely wound. If the DNA is twisted in the direction of the helix, this is positive supercoiling, and the bases are held more tightly together. If they are twisted in the opposite direction, this is negative supercoiling, and the bases come apart more easily. In nature, most DNA has slight negative supercoiling that is introduced by enzymes called topoisomerases. These enzymes are also needed to relieve the twisting stresses introduced into DNA strands during processes such as transcription and DNA replication.
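Supercoiling is commonly quantified with the standard linking-number relation Lk = Tw + Wr (not spelled out in the text above); the sketch below uses the 10.4 bp/turn relaxed figure from the paragraph, with a hypothetical plasmid as the example.

```python
def supercoiling(bp: int, linking_number: float, bp_per_turn: float = 10.4):
    """Return (delta_Lk, sigma) for a closed circular DNA.

    Lk0 = bp / bp_per_turn is the relaxed linking number; sigma < 0 means
    negative supercoiling (bases come apart more easily), sigma > 0
    positive supercoiling (bases held more tightly).
    """
    lk0 = bp / bp_per_turn
    delta_lk = linking_number - lk0
    sigma = delta_lk / lk0        # superhelical density
    return delta_lk, sigma

# A hypothetical 5,200 bp plasmid underwound by 30 turns:
dlk, sigma = supercoiling(5200, 5200 / 10.4 - 30)
print(round(dlk), round(sigma, 3))  # -> -30 -0.06
```

The resulting superhelical density of about -0.06 is in the range typically produced by topoisomerases in vivo, matching the statement that most natural DNA is slightly negatively supercoiled.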
=== Alternative DNA structures ===
DNA exists in many possible conformations that include A-DNA, B-DNA, and Z-DNA forms, although only B-DNA and Z-DNA have been directly observed in functional organisms. The conformation that DNA adopts depends on the hydration level, DNA sequence, the amount and direction of supercoiling, chemical modifications of the bases, the type and concentration of metal ions, and the presence of polyamines in solution.
The first published reports of A-DNA X-ray diffraction patterns—and also B-DNA—used analyses based on Patterson functions that provided only a limited amount of structural information for oriented fibers of DNA. An alternative analysis was proposed by Wilkins et al. in 1953 for the in vivo B-DNA X-ray diffraction-scattering patterns of highly hydrated DNA fibers in terms of squares of Bessel functions. In the same journal, James Watson and Francis Crick presented their molecular modeling analysis of the DNA X-ray diffraction patterns to suggest that the structure was a double helix.
Although the B-DNA form is most common under the conditions found in cells, it is not a well-defined conformation but a family of related DNA conformations that occur at the high hydration levels present in cells. Their corresponding X-ray diffraction and scattering patterns are characteristic of molecular paracrystals with a significant degree of disorder.
Compared to B-DNA, the A-DNA form is a wider right-handed spiral, with a shallow, wide minor groove and a narrower, deeper major groove. The A form occurs under non-physiological conditions in partly dehydrated samples of DNA, while in the cell it may be produced in hybrid pairings of DNA and RNA strands, and in enzyme-DNA complexes. Segments of DNA where the bases have been chemically modified by methylation may undergo a larger change in conformation and adopt the Z form. Here, the strands turn about the helical axis in a left-handed spiral, the opposite of the more common B form. These unusual structures can be recognized by specific Z-DNA binding proteins and may be involved in the regulation of transcription.
=== Alternative DNA chemistry ===
For many years, exobiologists have proposed the existence of a shadow biosphere, a postulated microbial biosphere of Earth that uses radically different biochemical and molecular processes than currently known life. One of the proposals was the existence of lifeforms that use arsenic instead of phosphorus in DNA. A report in 2010 of the possibility in the bacterium GFAJ-1 was announced, though the research was disputed, and evidence suggests the bacterium actively prevents the incorporation of arsenic into the DNA backbone and other biomolecules.
=== Quadruplex structures ===
At the ends of the linear chromosomes are specialized regions of DNA called telomeres. The main function of these regions is to allow the cell to replicate chromosome ends using the enzyme telomerase, as the enzymes that normally replicate DNA cannot copy the extreme 3′ ends of chromosomes. These specialized chromosome caps also help protect the DNA ends, and stop the DNA repair systems in the cell from treating them as damage to be corrected. In human cells, telomeres are usually lengths of single-stranded DNA containing several thousand repeats of a simple TTAGGG sequence.
These guanine-rich sequences may stabilize chromosome ends by forming structures of stacked sets of four-base units, rather than the usual base pairs found in other DNA molecules. Here, four guanine bases, known as a guanine tetrad, form a flat plate. These flat four-base units then stack on top of each other to form a stable G-quadruplex structure. These structures are stabilized by hydrogen bonding between the edges of the bases and chelation of a metal ion in the centre of each four-base unit. Other structures can also be formed, with the central set of four bases coming from either a single strand folded around the bases, or several different parallel strands, each contributing one base to the central structure.
In addition to these stacked structures, telomeres also form large loop structures called telomere loops, or T-loops. Here, the single-stranded DNA curls around in a long circle stabilized by telomere-binding proteins. At the very end of the T-loop, the single-stranded telomere DNA is held onto a region of double-stranded DNA by the telomere strand disrupting the double-helical DNA and base pairing to one of the two strands. This triple-stranded structure is called a displacement loop or D-loop.
=== Branched DNA ===
In DNA, fraying occurs when non-complementary regions exist at the end of an otherwise complementary double-strand of DNA. However, branched DNA can occur if a third strand of DNA is introduced and contains adjoining regions able to hybridize with the frayed regions of the pre-existing double-strand. Although the simplest example of branched DNA involves only three strands of DNA, complexes involving additional strands and multiple branches are also possible. Branched DNA can be used in nanotechnology to construct geometric shapes, see the section on uses in technology below.
=== Artificial bases ===
Several artificial nucleobases have been synthesized, and successfully incorporated in the eight-base DNA analogue named Hachimoji DNA. Dubbed S, B, P, and Z, these artificial bases are capable of bonding with each other in a predictable way (S–B and P–Z), maintaining the double helix structure of DNA, and being transcribed to RNA. Their existence could be seen as an indication that there is nothing special about the four natural nucleobases that evolved on Earth. On the other hand, DNA is closely related to RNA, which not only acts as a transcript of DNA but also performs many tasks in cells as a molecular machine. For this purpose, it has to fold into a structure. It has been shown that at least four bases are required for the corresponding RNA to form all possible structures, and while a higher number is also possible, it would go against the natural principle of least effort.
=== Acidity ===
The phosphate groups of DNA give it similar acidic properties to phosphoric acid, and it can be considered a strong acid. It will be fully ionized at a normal cellular pH, releasing protons which leave behind negative charges on the phosphate groups. These negative charges protect DNA from breakdown by hydrolysis by repelling nucleophiles which could hydrolyze it.
=== Macroscopic appearance ===
Pure DNA extracted from cells forms white, stringy clumps.
== Chemical modifications and altered DNA packaging ==
=== Base modifications and DNA packaging ===
The expression of genes is influenced by how the DNA is packaged in chromosomes, in a structure called chromatin. Base modifications can be involved in packaging, with regions that have low or no gene expression usually containing high levels of methylation of cytosine bases. DNA packaging and its influence on gene expression can also occur by covalent modifications of the histone protein core around which DNA is wrapped in the chromatin structure or else by remodeling carried out by chromatin remodeling complexes (see Chromatin remodeling). There is, further, crosstalk between DNA methylation and histone modification, so they can coordinately affect chromatin and gene expression.
For one example, cytosine methylation produces 5-methylcytosine, which is important for X-inactivation of chromosomes. The average level of methylation varies between organisms—the worm Caenorhabditis elegans lacks cytosine methylation, while vertebrates have higher levels, with up to 1% of their DNA containing 5-methylcytosine. Despite the importance of 5-methylcytosine, it can deaminate to leave a thymine base, so methylated cytosines are particularly prone to mutations. Other base modifications include adenine methylation in bacteria, the presence of 5-hydroxymethylcytosine in the brain, and the glycosylation of uracil to produce the "J-base" in kinetoplastids.
=== Damage ===
DNA can be damaged by many sorts of mutagens, which change the DNA sequence. Mutagens include oxidizing agents, alkylating agents and also high-energy electromagnetic radiation such as ultraviolet light and X-rays. The type of DNA damage produced depends on the type of mutagen. For example, UV light can damage DNA by producing thymine dimers, which are cross-links between pyrimidine bases. On the other hand, oxidants such as free radicals or hydrogen peroxide produce multiple forms of damage, including base modifications, particularly of guanosine, and double-strand breaks. A typical human cell contains about 150,000 bases that have suffered oxidative damage. Of these oxidative lesions, the most dangerous are double-strand breaks, as these are difficult to repair and can produce point mutations, insertions, deletions from the DNA sequence, and chromosomal translocations. These mutations can cause cancer. Because of inherent limits in the DNA repair mechanisms, if humans lived long enough, they would all eventually develop cancer. DNA damages that are naturally occurring, due to normal cellular processes that produce reactive oxygen species, the hydrolytic activities of cellular water, etc., also occur frequently. Although most of these damages are repaired, in any cell some DNA damage may remain despite the action of repair processes. These remaining DNA damages accumulate with age in mammalian postmitotic tissues. This accumulation appears to be an important underlying cause of aging.
Many mutagens fit into the space between two adjacent base pairs; this is called intercalation. Most intercalators are aromatic and planar molecules; examples include ethidium bromide, acridines, daunomycin, and doxorubicin. For an intercalator to fit between base pairs, the bases must separate, distorting the DNA strands by unwinding of the double helix. This inhibits both transcription and DNA replication, causing toxicity and mutations. As a result, DNA intercalators may be carcinogens, and in the case of thalidomide, a teratogen. Others such as benzo[a]pyrene diol epoxide and aflatoxin form DNA adducts that induce errors in replication. Nevertheless, due to their ability to inhibit DNA transcription and replication, other similar toxins are also used in chemotherapy to inhibit rapidly growing cancer cells.
== Biological functions ==
DNA usually occurs as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The set of chromosomes in a cell makes up its genome; the human genome has approximately 3 billion base pairs of DNA arranged into 46 chromosomes. The information carried by DNA is held in the sequence of pieces of DNA called genes. Transmission of genetic information in genes is achieved via complementary base pairing. For example, in transcription, when a cell uses the information in a gene, the DNA sequence is copied into a complementary RNA sequence through the attraction between the DNA and the correct RNA nucleotides. Usually, this RNA copy is then used to make a matching protein sequence in a process called translation, which depends on the same interaction between RNA nucleotides. In an alternative fashion, a cell may copy its genetic information in a process called DNA replication. The details of these functions are covered in other articles; here the focus is on the interactions between DNA and other molecules that mediate the function of the genome.
=== Genes and genomes ===
Genomic DNA is tightly and orderly packed in the process called DNA condensation, to fit the small available volumes of the cell. In eukaryotes, DNA is located in the cell nucleus, with small amounts in mitochondria and chloroplasts. In prokaryotes, the DNA is held within an irregularly shaped body in the cytoplasm called the nucleoid. The genetic information in a genome is held within genes, and the complete set of this information in an organism is called its genotype. A gene is a unit of heredity and is a region of DNA that influences a particular characteristic in an organism. Genes contain an open reading frame that can be transcribed, and regulatory sequences such as promoters and enhancers, which control transcription of the open reading frame.
In many species, only a small fraction of the total sequence of the genome encodes protein. For example, only about 1.5% of the human genome consists of protein-coding exons, with over 50% of human DNA consisting of non-coding repetitive sequences. The reasons for the presence of so much noncoding DNA in eukaryotic genomes and the extraordinary differences in genome size, or C-value, among species, represent a long-standing puzzle known as the "C-value enigma". However, some DNA sequences that do not code protein may still encode functional non-coding RNA molecules, which are involved in the regulation of gene expression.
Some noncoding DNA sequences play structural roles in chromosomes. Telomeres and centromeres typically contain few genes but are important for the function and stability of chromosomes. An abundant form of noncoding DNA in humans are pseudogenes, which are copies of genes that have been disabled by mutation. These sequences are usually just molecular fossils, although they can occasionally serve as raw genetic material for the creation of new genes through the process of gene duplication and divergence.
=== Transcription and translation ===
A gene is a sequence of DNA that contains genetic information and can influence the phenotype of an organism. Within a gene, the sequence of bases along a DNA strand defines a messenger RNA sequence, which then defines one or more protein sequences. The relationship between the nucleotide sequences of genes and the amino-acid sequences of proteins is determined by the rules of translation, known collectively as the genetic code. The genetic code consists of three-letter 'words' called codons formed from a sequence of three nucleotides (e.g. ACT, CAG, TTT).
In transcription, the codons of a gene are copied into messenger RNA by RNA polymerase. This RNA copy is then decoded by a ribosome that reads the RNA sequence by base-pairing the messenger RNA to transfer RNA, which carries amino acids. Since there are 4 bases in 3-letter combinations, there are 64 possible codons (4³ combinations). These encode the twenty standard amino acids, giving most amino acids more than one possible codon. There are also three 'stop' or 'nonsense' codons signifying the end of the coding region; these are the TAG, TAA, and TGA codons (UAG, UAA, and UGA on the mRNA).
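The codon counting above can be verified directly: enumerating all three-letter words over the four RNA bases gives 64 codons, of which 61 remain after removing the three stop codons named in the text.

```python
from itertools import product

# All 4**3 = 64 possible codons over the four RNA bases.
codons = ["".join(c) for c in product("UCAG", repeat=3)]
print(len(codons))  # -> 64

# The three stop codons named above, as they appear on the mRNA.
STOP = {"UAG", "UAA", "UGA"}
sense_codons = [c for c in codons if c not in STOP]
print(len(sense_codons))  # -> 61 codons encoding the 20 amino acids
```

With 61 sense codons for 20 amino acids, most amino acids necessarily have more than one codon, which is the degeneracy the text describes.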
=== Replication ===
Cell division is essential for an organism to grow, but, when a cell divides, it must replicate the DNA in its genome so that the two daughter cells have the same genetic information as their parent. The double-stranded structure of DNA provides a simple mechanism for DNA replication. Here, the two strands are separated and then each strand's complementary DNA sequence is recreated by an enzyme called DNA polymerase. This enzyme makes the complementary strand by finding the correct base through complementary base pairing and bonding it onto the original strand. As DNA polymerases can only extend a DNA strand in a 5′ to 3′ direction, different mechanisms are used to copy the antiparallel strands of the double helix. In this way, the base on the old strand dictates which base appears on the new strand, and the cell ends up with a perfect copy of its DNA.
=== Extracellular nucleic acids ===
Naked extracellular DNA (eDNA), most of it released by cell death, is nearly ubiquitous in the environment. Its concentration in soil may be as high as 2 μg/L, and its concentration in natural aquatic environments may be as high as 88 μg/L. Various possible functions have been proposed for eDNA: it may be involved in horizontal gene transfer; it may provide nutrients; and it may act as a buffer to recruit or titrate ions or antibiotics. Extracellular DNA acts as a functional extracellular matrix component in the biofilms of several bacterial species. It may act as a recognition factor to regulate the attachment and dispersal of specific cell types in the biofilm; it may contribute to biofilm formation; and it may contribute to the biofilm's physical strength and resistance to biological stress.
Cell-free fetal DNA is found in the blood of the mother, and can be sequenced to determine a great deal of information about the developing fetus.
Under the name of environmental DNA eDNA has seen increased use in the natural sciences as a survey tool for ecology, monitoring the movements and presence of species in water, air, or on land, and assessing an area's biodiversity.
=== Neutrophil extracellular traps ===
Neutrophil extracellular traps (NETs) are networks of extracellular fibers, primarily composed of DNA, which allow neutrophils, a type of white blood cell, to kill extracellular pathogens while minimizing damage to the host cells.
== Interactions with proteins ==
All the functions of DNA depend on interactions with proteins. These protein interactions can be non-specific, or the protein can bind specifically to a single DNA sequence. Enzymes can also bind to DNA and of these, the polymerases that copy the DNA base sequence in transcription and DNA replication are particularly important.
=== DNA-binding proteins ===
Structural proteins that bind DNA are well-understood examples of non-specific DNA-protein interactions. Within chromosomes, DNA is held in complexes with structural proteins. These proteins organize the DNA into a compact structure called chromatin. In eukaryotes, this structure involves DNA binding to a complex of small basic proteins called histones, while in prokaryotes multiple types of proteins are involved. The histones form a disk-shaped complex called a nucleosome, which contains two complete turns of double-stranded DNA wrapped around its surface. These non-specific interactions are formed through basic residues in the histones, making ionic bonds to the acidic sugar-phosphate backbone of the DNA, and are thus largely independent of the base sequence. Chemical modifications of these basic amino acid residues include methylation, phosphorylation, and acetylation. These chemical changes alter the strength of the interaction between the DNA and the histones, making the DNA more or less accessible to transcription factors and changing the rate of transcription. Other non-specific DNA-binding proteins in chromatin include the high-mobility group proteins, which bind to bent or distorted DNA. These proteins are important in bending arrays of nucleosomes and arranging them into the larger structures that make up chromosomes.
A distinct group of DNA-binding proteins is made up of those that specifically bind single-stranded DNA. In humans, replication protein A is the best-understood member of this family and is used in processes where the double helix is separated, including DNA replication, recombination, and DNA repair. These binding proteins seem to stabilize single-stranded DNA and protect it from forming stem-loops or being degraded by nucleases.
In contrast, other proteins have evolved to bind to particular DNA sequences. The most intensively studied of these are the various transcription factors, which are proteins that regulate transcription. Each transcription factor binds to one particular set of DNA sequences and activates or inhibits the transcription of genes that have these sequences close to their promoters. The transcription factors do this in two ways. Firstly, they can bind the RNA polymerase responsible for transcription, either directly or through other mediator proteins; this locates the polymerase at the promoter and allows it to begin transcription. Alternatively, transcription factors can bind enzymes that modify the histones at the promoter. This changes the accessibility of the DNA template to the polymerase.
As these DNA targets can occur throughout an organism's genome, changes in the activity of one type of transcription factor can affect thousands of genes. Consequently, these proteins are often the targets of the signal transduction processes that control responses to environmental changes or cellular differentiation and development. The specificity of these transcription factors' interactions with DNA comes from the proteins making multiple contacts to the edges of the DNA bases, allowing them to "read" the DNA sequence. Most of these base-interactions are made in the major groove, where the bases are most accessible.
=== DNA-modifying enzymes ===
==== Nucleases and ligases ====
Nucleases are enzymes that cut DNA strands by catalyzing the hydrolysis of the phosphodiester bonds. Nucleases that hydrolyse nucleotides from the ends of DNA strands are called exonucleases, while endonucleases cut within strands. The most frequently used nucleases in molecular biology are the restriction endonucleases, which cut DNA at specific sequences. For instance, the EcoRV enzyme shown to the left recognizes the 6-base sequence 5′-GATATC-3′ and makes a cut at the horizontal line. In nature, these enzymes protect bacteria against phage infection by digesting the phage DNA when it enters the bacterial cell, acting as part of the restriction modification system. In technology, these sequence-specific nucleases are used in molecular cloning and DNA fingerprinting.
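The behaviour of a restriction endonuclease such as EcoRV can be sketched as a string operation: find every occurrence of the 6-base recognition sequence and cut bluntly between GAT and ATC. The function names are illustrative, not a real library API:

```python
ECORV_SITE = "GATATC"

def ecorv_cut_positions(dna: str):
    """Indices where EcoRV cuts: 3 bases into each GATATC site (GAT|ATC)."""
    cuts, i = [], dna.find(ECORV_SITE)
    while i != -1:
        cuts.append(i + 3)
        i = dna.find(ECORV_SITE, i + 1)
    return cuts

def digest(dna: str):
    """Fragments produced by a complete EcoRV digest of a linear sequence."""
    bounds = [0] + ecorv_cut_positions(dna) + [len(dna)]
    return [dna[a:b] for a, b in zip(bounds, bounds[1:])]

print(digest("AAGATATCTT"))  # ['AAGAT', 'ATCTT']
```

Comparing fragment lengths produced by such digests is the basis of classical restriction mapping and early DNA fingerprinting.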
Enzymes called DNA ligases can rejoin cut or broken DNA strands. Ligases are particularly important in lagging strand DNA replication, as they join the short segments of DNA produced at the replication fork into a complete copy of the DNA template. They are also used in DNA repair and genetic recombination.
==== Topoisomerases and helicases ====
Topoisomerases are enzymes with both nuclease and ligase activity. These proteins change the amount of supercoiling in DNA. Some of these enzymes work by cutting the DNA helix and allowing one section to rotate, thereby reducing its level of supercoiling; the enzyme then seals the DNA break. Other types of these enzymes are capable of cutting one DNA helix and then passing a second strand of DNA through this break, before rejoining the helix. Topoisomerases are required for many processes involving DNA, such as DNA replication and transcription.
Helicases are proteins that are a type of molecular motor. They use the chemical energy in nucleoside triphosphates, predominantly adenosine triphosphate (ATP), to break hydrogen bonds between bases and unwind the DNA double helix into single strands. These enzymes are essential for most processes where enzymes need to access the DNA bases.
==== Polymerases ====
Polymerases are enzymes that synthesize polynucleotide chains from nucleoside triphosphates. The sequence of their products is created based on existing polynucleotide chains—which are called templates. These enzymes function by repeatedly adding a nucleotide to the 3′ hydroxyl group at the end of the growing polynucleotide chain. As a consequence, all polymerases work in a 5′ to 3′ direction. In the active site of these enzymes, the incoming nucleoside triphosphate base-pairs to the template: this allows polymerases to accurately synthesize the complementary strand of their template. Polymerases are classified according to the type of template that they use.
In DNA replication, DNA-dependent DNA polymerases make copies of DNA polynucleotide chains. To preserve biological information, it is essential that the sequence of bases in each copy are precisely complementary to the sequence of bases in the template strand. Many DNA polymerases have a proofreading activity. Here, the polymerase recognizes the occasional mistakes in the synthesis reaction by the lack of base pairing between the mismatched nucleotides. If a mismatch is detected, a 3′ to 5′ exonuclease activity is activated and the incorrect base removed. In most organisms, DNA polymerases function in a large complex called the replisome that contains multiple accessory subunits, such as the DNA clamp or helicases.
RNA-dependent DNA polymerases are a specialized class of polymerases that copy the sequence of an RNA strand into DNA. They include reverse transcriptase, which is a viral enzyme involved in the infection of cells by retroviruses, and telomerase, which is required for the replication of telomeres. For example, HIV reverse transcriptase is required for the replication of HIV, the virus that causes AIDS. Telomerase is an unusual polymerase because it contains its own RNA template as part of its structure. It synthesizes telomeres at the ends of chromosomes. Telomeres prevent fusion of the ends of neighboring chromosomes and protect chromosome ends from damage.
Transcription is carried out by a DNA-dependent RNA polymerase that copies the sequence of a DNA strand into RNA. To begin transcribing a gene, the RNA polymerase binds to a sequence of DNA called a promoter and separates the DNA strands. It then copies the gene sequence into a messenger RNA transcript until it reaches a region of DNA called the terminator, where it halts and detaches from the DNA. As with human DNA-dependent DNA polymerases, RNA polymerase II, the enzyme that transcribes most of the genes in the human genome, operates as part of a large protein complex with multiple regulatory and accessory subunits.
== Genetic recombination ==
A DNA helix usually does not interact with other segments of DNA, and in human cells, the different chromosomes even occupy separate areas in the nucleus called "chromosome territories". This physical separation of different chromosomes is important for the ability of DNA to function as a stable repository for information, as one of the few times chromosomes interact is in chromosomal crossover which occurs during sexual reproduction, when genetic recombination occurs. Chromosomal crossover is when two DNA helices break, swap a section and then rejoin.
Recombination allows chromosomes to exchange genetic information and produces new combinations of genes, which increases the efficiency of natural selection and can be important in the rapid evolution of new proteins. Genetic recombination can also be involved in DNA repair, particularly in the cell's response to double-strand breaks.
The most common form of chromosomal crossover is homologous recombination, where the two chromosomes involved share very similar sequences. Non-homologous recombination can be damaging to cells, as it can produce chromosomal translocations and genetic abnormalities. The recombination reaction is catalyzed by enzymes known as recombinases, such as RAD51. The first step in recombination is a double-stranded break caused by either an endonuclease or damage to the DNA. A series of steps catalyzed in part by the recombinase then leads to joining of the two helices by at least one Holliday junction, in which a segment of a single strand in each helix is annealed to the complementary strand in the other helix. The Holliday junction is a tetrahedral junction structure that can be moved along the pair of chromosomes, swapping one strand for another. The recombination reaction is then halted by cleavage of the junction and re-ligation of the released DNA. Only strands of like polarity exchange DNA during recombination. There are two types of cleavage: east–west cleavage and north–south cleavage. The north–south cleavage nicks both strands of DNA, while the east–west cleavage leaves one strand of DNA intact. The formation of a Holliday junction during recombination enables genetic diversity, the exchange of genes between chromosomes, and the expression of wild-type viral genomes.
== Evolution ==
DNA contains the genetic information that allows all forms of life to function, grow and reproduce. However, it is unclear how long in the 4-billion-year history of life DNA has performed this function, as it has been proposed that the earliest forms of life may have used RNA as their genetic material. RNA may have acted as the central part of early cell metabolism as it can both transmit genetic information and carry out catalysis as part of ribozymes. This ancient RNA world where nucleic acid would have been used for both catalysis and genetics may have influenced the evolution of the current genetic code based on four nucleotide bases. This would occur, since the number of different bases in such an organism is a trade-off between a small number of bases increasing replication accuracy and a large number of bases increasing the catalytic efficiency of ribozymes. However, there is no direct evidence of ancient genetic systems, as recovery of DNA from most fossils is impossible because DNA survives in the environment for less than one million years, and slowly degrades into short fragments in solution. Claims for older DNA have been made, most notably a report of the isolation of a viable bacterium from a salt crystal 250 million years old, but these claims are controversial.
Building blocks of DNA (adenine, guanine, and related organic molecules) may have been formed extraterrestrially in outer space. Complex DNA and RNA organic compounds of life, including uracil, cytosine, and thymine, have also been formed in the laboratory under conditions mimicking those found in outer space, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), the most carbon-rich chemical found in the universe, may have been formed in red giants or in interstellar cosmic dust and gas clouds.
Ancient DNA has been recovered from ancient organisms at a timescale where genome evolution can be directly observed, including from extinct organisms up to millions of years old, such as the woolly mammoth.
== Uses in technology ==
=== Genetic engineering ===
Methods have been developed to purify DNA from organisms, such as phenol-chloroform extraction, and to manipulate it in the laboratory, such as restriction digests and the polymerase chain reaction. Modern biology and biochemistry make intensive use of these techniques in recombinant DNA technology. Recombinant DNA is a man-made DNA sequence that has been assembled from other DNA sequences. Such sequences can be transformed into organisms in the form of plasmids or, in the appropriate format, by using a viral vector. The genetically modified organisms produced can be used to produce products such as recombinant proteins, used in medical research, or be grown in agriculture.
=== DNA profiling ===
Forensic scientists can use DNA in blood, semen, skin, saliva or hair found at a crime scene to identify a matching DNA of an individual, such as a perpetrator. This process is formally termed DNA profiling, also called DNA fingerprinting. In DNA profiling, the lengths of variable sections of repetitive DNA, such as short tandem repeats and minisatellites, are compared between people. This method is usually an extremely reliable technique for identifying a matching DNA. However, identification can be complicated if the scene is contaminated with DNA from several people. DNA profiling was developed in 1984 by British geneticist Sir Alec Jeffreys, and first used in forensic science to convict Colin Pitchfork in the 1988 Enderby murders case.
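STR comparison ultimately reduces to counting tandem repeat units at a locus; a toy sketch of that idea (the GATA unit and the sequences are illustrative, not a real forensic locus):

```python
def max_tandem_repeats(seq: str, unit: str) -> int:
    """Longest run of consecutive copies of `unit` anywhere in `seq`."""
    best = 0
    for start in range(len(seq)):
        n = 0
        while seq[start + n * len(unit): start + (n + 1) * len(unit)] == unit:
            n += 1
        best = max(best, n)
    return best

# Two samples differing in repeat count at a hypothetical GATA locus
# would give different numbers here, distinguishing the individuals.
print(max_tandem_repeats("AAGATAGATAGATACC", "GATA"))  # 3
```

Real profiles compare such counts across a dozen or more loci, which is what makes the combined match probability so small.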
The development of forensic science and the ability to now obtain genetic matching on minute samples of blood, skin, saliva, or hair has led to re-examining many cases. Evidence can now be uncovered that was scientifically impossible at the time of the original examination. Combined with the removal of the double jeopardy law in some places, this can allow cases to be reopened where prior trials have failed to produce sufficient evidence to convince a jury. People charged with serious crimes may be required to provide a sample of DNA for matching purposes. The most obvious defense to DNA matches obtained forensically is to claim that cross-contamination of evidence has occurred. This has resulted in meticulous strict handling procedures with new cases of serious crime.
DNA profiling is also used successfully to positively identify victims of mass casualty incidents, bodies or body parts in serious accidents, and individual victims in mass war graves, via matching to family members.
DNA profiling is also used in DNA paternity testing to determine if someone is the biological parent or grandparent of a child; the probability of parentage is typically 99.99% when the alleged parent is biologically related to the child. Normal DNA sequencing methods are applied after birth, but there are new methods that can test paternity while the mother is still pregnant.
=== DNA enzymes or catalytic DNA ===
Deoxyribozymes, also called DNAzymes or catalytic DNA, were first discovered in 1994. They are mostly single-stranded DNA sequences isolated from a large pool of random DNA sequences through a combinatorial approach called in vitro selection or systematic evolution of ligands by exponential enrichment (SELEX). DNAzymes catalyze a variety of chemical reactions including RNA-DNA cleavage, RNA-DNA ligation, amino acid phosphorylation-dephosphorylation, and carbon-carbon bond formation. DNAzymes can enhance the catalytic rate of chemical reactions up to 100,000,000,000-fold over the uncatalyzed reaction. The most extensively studied class of DNAzymes is the RNA-cleaving type, which has been used to detect different metal ions and to design therapeutic agents. Several metal-specific DNAzymes have been reported, including the GR-5 DNAzyme (lead-specific), the CA1-3 DNAzymes (copper-specific), the 39E DNAzyme (uranyl-specific) and the NaA43 DNAzyme (sodium-specific). The NaA43 DNAzyme, which is reported to be more than 10,000-fold selective for sodium over other metal ions, was used to make a real-time sodium sensor in cells.
=== Bioinformatics ===
Bioinformatics involves the development of techniques to store, data mine, search and manipulate biological data, including DNA nucleic acid sequence data. These have led to widely applied advances in computer science, especially string searching algorithms, machine learning, and database theory. String searching or matching algorithms, which find an occurrence of a sequence of letters inside a larger sequence of letters, were developed to search for specific sequences of nucleotides. The DNA sequence may be aligned with other DNA sequences to identify homologous sequences and locate the specific mutations that make them distinct. These techniques, especially multiple sequence alignment, are used in studying phylogenetic relationships and protein function. Data sets representing entire genomes' worth of DNA sequences, such as those produced by the Human Genome Project, are difficult to use without the annotations that identify the locations of genes and regulatory elements on each chromosome. Regions of DNA sequence that have the characteristic patterns associated with protein- or RNA-coding genes can be identified by gene finding algorithms, which allow researchers to predict the presence of particular gene products and their possible functions in an organism even before they have been isolated experimentally. Entire genomes may also be compared, which can shed light on the evolutionary history of particular organisms and permit the examination of complex evolutionary events.
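A minimal example of the string-matching idea, extended to tolerate substitutions so that mutated copies of a motif can also be located; this is a brute-force sketch, not a production aligner:

```python
def approx_find(seq: str, motif: str, max_mismatches: int = 0):
    """Return (start, mismatch_count) for every window of `seq` that matches
    `motif` with at most `max_mismatches` substitutions."""
    hits = []
    for i in range(len(seq) - len(motif) + 1):
        mismatches = sum(a != b for a, b in zip(seq[i:i + len(motif)], motif))
        if mismatches <= max_mismatches:
            hits.append((i, mismatches))
    return hits

print(approx_find("ACGTACGT", "ACG"))     # [(0, 0), (4, 0)] - exact hits
print(approx_find("ACGTACGT", "ACC", 1))  # [(0, 1), (4, 1)] - one substitution
```

Practical tools replace this O(n·m) scan with indexed or dynamic-programming methods, but the mismatch-counting core is the same idea used when locating point mutations.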
=== DNA nanotechnology ===
DNA nanotechnology uses the unique molecular recognition properties of DNA and other nucleic acids to create self-assembling branched DNA complexes with useful properties. DNA is thus used as a structural material rather than as a carrier of biological information. This has led to the creation of two-dimensional periodic lattices (both tile-based and using the DNA origami method) and three-dimensional structures in the shapes of polyhedra. Nanomechanical devices and algorithmic self-assembly have also been demonstrated, and these DNA structures have been used to template the arrangement of other molecules such as gold nanoparticles and streptavidin proteins. DNA and other nucleic acids are the basis of aptamers, synthetic oligonucleotide ligands for specific target molecules used in a range of biotechnology and biomedical applications.
=== History and anthropology ===
Because DNA collects mutations over time, which are then inherited, it contains historical information, and, by comparing DNA sequences, geneticists can infer the evolutionary history of organisms, their phylogeny. This field of phylogenetics is a powerful tool in evolutionary biology. If DNA sequences within a species are compared, population geneticists can learn the history of particular populations. This can be used in studies ranging from ecological genetics to anthropology.
=== Information storage ===
DNA as a storage device for information has enormous potential since it has much higher storage density compared to electronic devices. However, high costs, slow read and write times (memory latency), and insufficient reliability have prevented its practical use.
== History ==
DNA was first isolated by the Swiss physician Friedrich Miescher who, in 1869, discovered a microscopic substance in the pus of discarded surgical bandages. As it resided in the nuclei of cells, he called it "nuclein". In 1878, Albrecht Kossel isolated the non-protein component of "nuclein", nucleic acid, and later isolated its five primary nucleobases.
In 1909, Phoebus Levene identified the base, sugar, and phosphate nucleotide unit of RNA (then named "yeast nucleic acid"). In 1929, Levene identified deoxyribose sugar in "thymus nucleic acid" (DNA). Levene suggested that DNA consisted of a string of four nucleotide units linked together through the phosphate groups ("tetranucleotide hypothesis"). Levene thought the chain was short and the bases repeated in a fixed order. In 1927, Nikolai Koltsov proposed that inherited traits would be transmitted via a "giant hereditary molecule" made up of "two mirror strands that would replicate in a semi-conservative fashion using each strand as a template". In 1928, Frederick Griffith in his experiment discovered that traits of the "smooth" form of Pneumococcus could be transferred to the "rough" form of the same bacteria by mixing killed "smooth" bacteria with the live "rough" form. This system provided the first clear suggestion that DNA carries genetic information.
In 1933, while studying virgin sea urchin eggs, Jean Brachet suggested that DNA is found in the cell nucleus and that RNA is present exclusively in the cytoplasm. At the time, "yeast nucleic acid" (RNA) was thought to occur only in plants, while "thymus nucleic acid" (DNA) only in animals. The latter was thought to be a tetramer, with the function of buffering cellular pH.
In 1937, William Astbury produced the first X-ray diffraction patterns that showed that DNA had a regular structure.
In 1943, Oswald Avery, along with co-workers Colin MacLeod and Maclyn McCarty, identified DNA as the transforming principle, supporting Griffith's suggestion (Avery–MacLeod–McCarty experiment). Erwin Chargaff developed and published observations now known as Chargaff's rules, stating that in DNA from any species of any organism, the amount of guanine should be equal to cytosine and the amount of adenine should be equal to thymine.
Late in 1951, Francis Crick started working with James Watson at the Cavendish Laboratory within the University of Cambridge. DNA's role in heredity was confirmed in 1952 when Alfred Hershey and Martha Chase in the Hershey–Chase experiment showed that DNA is the genetic material of the enterobacteria phage T2.
In May 1952, Raymond Gosling, a graduate student working under the supervision of Rosalind Franklin, took an X-ray diffraction image, labeled as "Photo 51", at high hydration levels of DNA. This photo was given to Watson and Crick by Maurice Wilkins and was critical to their obtaining the correct structure of DNA. Franklin told Crick and Watson that the backbones had to be on the outside. Before then, Linus Pauling, and Watson and Crick, had erroneous models with the chains inside and the bases pointing outwards. Franklin's identification of the space group for DNA crystals revealed to Crick that the two DNA strands were antiparallel. In February 1953, Linus Pauling and Robert Corey proposed a model for nucleic acids containing three intertwined chains, with the phosphates near the axis, and the bases on the outside. Watson and Crick completed their model, which is now accepted as the first correct model of the double helix of DNA. On 28 February 1953 Crick interrupted patrons' lunchtime at The Eagle pub in Cambridge, England to announce that he and Watson had "discovered the secret of life".
The 25 April 1953 issue of the journal Nature published a series of five articles giving the Watson and Crick double-helix structure of DNA and evidence supporting it. The structure was reported in a letter titled "MOLECULAR STRUCTURE OF NUCLEIC ACIDS A Structure for Deoxyribose Nucleic Acid", in which they said, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material." This letter was followed by a letter from Franklin and Gosling, which was the first publication of their own X-ray diffraction data and of their original analysis method. Then followed a letter by Wilkins and two of his colleagues, which contained an analysis of in vivo B-DNA X-ray patterns, and which supported the presence in vivo of the Watson and Crick structure.
In April 2023, scientists, based on new evidence, concluded that Rosalind Franklin was a contributor and "equal player" in the discovery process of DNA, rather than a peripheral figure, as had often been suggested in accounts after the discovery.
In 1962, after Franklin's death, Watson, Crick, and Wilkins jointly received the Nobel Prize in Physiology or Medicine. Nobel Prizes are awarded only to living recipients. A debate continues about who should receive credit for the discovery.
In an influential presentation in 1957, Crick laid out the central dogma of molecular biology, which foretold the relationship between DNA, RNA, and proteins, and articulated the "adaptor hypothesis". Final confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 through the Meselson–Stahl experiment. Further work by Crick and co-workers showed that the genetic code was based on non-overlapping triplets of bases, called codons, allowing Har Gobind Khorana, Robert W. Holley, and Marshall Warren Nirenberg to decipher the genetic code. These findings represent the birth of molecular biology.
In 1986, DNA analysis was first used in a criminal investigation when police in the UK requested Alec Jeffreys of the University of Leicester to prove or disprove the involvement in a particular case of a suspect who claimed innocence in the matter. Although the suspect had already confessed to committing a recent rape-murder, he was denying any involvement in a similar crime committed three years earlier. Yet the details of the two cases were so alike that the police concluded both crimes had been committed by the same person. However, all charges against the suspect were dropped when Jeffreys' DNA testing exonerated the suspect of both the earlier murder and the one to which he'd confessed. Further such DNA profiling led to positive identification of another suspect who, in 1988, was found guilty of both rape-murders.
== See also ==
== References ==
== Further reading ==
== External links ==
DNA binding site prediction on protein
DNA the Double Helix Game From the official Nobel Prize web site
DNA under electron microscope
Dolan DNA Learning Center
Double Helix: 50 years of DNA, Nature
Proteopedia DNA
Proteopedia Forms_of_DNA
ENCODE threads explorer ENCODE home page at Nature
Double Helix 1953–2003 National Centre for Biotechnology Education
Genetic Education Modules for Teachers – DNA from the Beginning Study Guide
PDB Molecule of the Month DNA
"Clue to chemistry of heredity found". The New York Times, June 1953. First American newspaper coverage of the discovery of the DNA structure
DNA from the Beginning Another DNA Learning Center site on DNA, genes, and heredity from Mendel to the human genome project.
The Register of Francis Crick Personal Papers 1938 – 2007 at Mandeville Special Collections Library, University of California, San Diego
Seven-page, handwritten letter that Crick sent to his 12-year-old son Michael in 1953 describing the structure of DNA. See Crick's medal goes under the hammer, Nature, 5 April 2013.
The Gerchberg–Saxton (GS) algorithm is an iterative phase retrieval algorithm for retrieving the phase of a complex-valued wavefront from two intensity measurements acquired in two different planes. Typically, the two planes are the image plane and the far field (diffraction) plane, and the wavefront propagation between these two planes is given by the Fourier transform. The original paper by Gerchberg and Saxton considered the image and the diffraction pattern of a sample acquired in an electron microscope.
It is often necessary to know only the phase distribution from one of the planes, since the phase distribution on the other plane can be obtained by performing a Fourier transform on the plane whose phase is known. Although often used for two-dimensional signals, the GS algorithm is also valid for one-dimensional signals.
The pseudocode below performs the GS algorithm to obtain a phase distribution for the plane "Source", such that its Fourier transform would have the amplitude distribution of the plane "Target".
The Gerchberg–Saxton algorithm is one of the most prevalent methods used to create computer-generated holograms.
== Pseudocode algorithm ==
Let:
FT – forward Fourier transform
IFT – inverse Fourier transform
i – the imaginary unit, √−1 (square root of −1)
exp – exponential function (exp(x) = ex)
Target and Source be the Target and Source Amplitude planes respectively
A, B, C & D be complex planes with the same dimension as Target and Source
Amplitude – Amplitude-extracting function:
e.g. for complex z = x + iy, amplitude(z) = sqrt(x·x + y·y)
for real x, amplitude(x) = |x|
Phase – Phase extracting function:
e.g. Phase(z) = arctan(y / x)
end Let
algorithm Gerchberg–Saxton(Source, Target, Retrieved_Phase) is
    A := IFT(Target)
    while error criterion is not satisfied
        B := Amplitude(Source) × exp(i × Phase(A))
        C := FT(B)
        D := Amplitude(Target) × exp(i × Phase(C))
        A := IFT(D)
    end while
    Retrieved_Phase := Phase(A)
end algorithm
This is just one of the many ways to implement the GS algorithm. Aside from optimizations, others may start by performing a forward Fourier transform to the source distribution.
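A direct, self-contained translation of the pseudocode into Python, using a naive O(n²) DFT so no external libraries are needed; a practical implementation would use an FFT and stop on an error criterion rather than a fixed iteration count:

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (FT / IFT in the pseudocode)."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def gerchberg_saxton(source_amp, target_amp, iterations=20):
    """Retrieve a source-plane phase whose Fourier transform has (approximately)
    the target amplitude. Amplitude() and Phase() from the pseudocode are
    abs() and cmath.phase() here."""
    a = dft([complex(t) for t in target_amp], inverse=True)   # A := IFT(Target)
    for _ in range(iterations):
        b = [s * cmath.exp(1j * cmath.phase(v))               # B := |Source|·e^{i·Phase(A)}
             for s, v in zip(source_amp, a)]
        c = dft(b)                                            # C := FT(B)
        d = [t * cmath.exp(1j * cmath.phase(v))               # D := |Target|·e^{i·Phase(C)}
             for t, v in zip(target_amp, c)]
        a = dft(d, inverse=True)                              # A := IFT(D)
    return [cmath.phase(v) for v in a]                        # Retrieved_Phase
```

For a flat source amplitude and a target that is a single bright spot, the retrieved phase is a linear ramp, a textbook case in which the iteration reaches a fixed point immediately.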
== See also ==
Phase retrieval
Fourier optics
Holography
Adaptive-additive algorithm
== References ==
== External links ==
Dr W. Owen Saxton's pages [1] Archived 2008-06-13 at the Wayback Machine, [2]
Applications and publications on phase retrieval from the University of Rochester, Institute of Optics
A Python-Script of the GS by Dominik Doellerer
MATLAB GS algorithms [3], [4]
Low-energy electron microscopy, or LEEM, is an analytical surface science technique used to image atomically clean surfaces, atom-surface interactions, and thin (crystalline) films.
== Operation ==
High-energy electrons (15-20 keV) are emitted from an electron gun, focused using a set of condenser optics, and sent through a magnetic beam deflector (usually 60˚ or 90˚). The “fast” electrons travel through an objective lens and begin decelerating to low energies (1-100 eV) near the sample surface because the sample is held at a potential near that of the gun. The low-energy electrons are now termed “surface-sensitive” and the near-surface sampling depth can be varied by tuning the energy of the incident electrons (difference between the sample and gun potentials minus the work functions of the sample and system). The low-energy elastically backscattered electrons travel back through the objective lens, reaccelerate to the gun voltage (because the objective lens is grounded), and pass through the beam separator again. However, now the electrons travel away from the condenser optics and into the projector lenses. Imaging of the back focal plane of the objective lens into the object plane of the projector lens (using an intermediate lens) produces a diffraction pattern (low-energy electron diffraction, LEED) at the imaging plane, which can be recorded in a number of different ways. The intensity distribution of the diffraction pattern will depend on the periodicity at the sample surface and is a direct result of the wave nature of the electrons. One can produce individual images of the diffraction pattern spot intensities by turning off the intermediate lens and inserting a contrast aperture in the back focal plane of the objective lens (or, in state-of-the-art instruments, in the center of the separator, as chosen by the excitation of the objective lens), thus allowing for real-time observations of dynamic processes at surfaces. Such phenomena include (but are not limited to): tomography, phase transitions, adsorption, reaction, segregation, thin film growth, etching, strain relief, sublimation, and magnetic microstructure.
These investigations are only possible because of the accessibility of the sample, allowing for a wide variety of in situ studies over a wide temperature range. LEEM was invented by Ernst Bauer in 1962; however, it was not fully developed (by Ernst Bauer and Wolfgang Telieps) until 1985.
== Introduction ==
LEEM differs from conventional electron microscopes in four main ways:
The sample must be illuminated on the same side of the imaging optics, i.e. through the objective lens, because samples are not transparent to low-energy electrons;
In order to separate the incident and elastically scattered low energy electrons, scientists use magnetic “electron prism” beam separators which focus electrons both in and out of the plane of the beampath (to avoid distortions in the image and diffraction patterns);
An electrostatic immersion objective lens holds the sample at a potential close to that of the gun, slowing the high-energy electrons to the desired low energy only just before they interact with the sample surface;
The instrument must be able to work under ultra-high vacuum (UHV), or 10−10 torr (760 torr = 1 atm, atmospheric pressure), although "near-ambient pressure" (NAP-LEEM) instruments have been developed by adding a higher pressure compartment and differential pumping stages, allowing for sample room pressures up to 10−1 mbar.
== Surface diffraction ==
Kinematic or elastic backscattering occurs when low-energy (1-100 eV) electrons impinge on a clean, well-ordered crystalline specimen. It is assumed that each electron undergoes only one scattering event, and the incident electron beam is described as a plane wave with the wavelength:
{\displaystyle \lambda ={\frac {h}{\sqrt {2mE}}},\qquad \lambda [{\textrm {\AA}}]={\sqrt {\frac {150}{E[{\textrm {eV}}]}}}}
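The exact de Broglie relation and the practical rule of thumb quoted above can be compared directly. A minimal sketch (standard physical constants, non-relativistic electrons):

```python
import math

# De Broglie wavelength of a low-energy electron, lambda = h / sqrt(2 m E),
# compared against the rule of thumb lambda[Angstrom] ~ sqrt(150 / E[eV]).
H = 6.62607015e-34          # Planck constant, J s
M_E = 9.1093837015e-31      # electron mass, kg
J_PER_EV = 1.602176634e-19  # joules per electronvolt

def wavelength_angstrom(energy_eV):
    """Exact non-relativistic de Broglie wavelength in Angstroms."""
    lam_m = H / math.sqrt(2 * M_E * energy_eV * J_PER_EV)
    return lam_m * 1e10

def wavelength_rule_of_thumb(energy_eV):
    """Approximate formula quoted in the text."""
    return math.sqrt(150 / energy_eV)

for e in (1, 10, 100):
    print(e, wavelength_angstrom(e), wavelength_rule_of_thumb(e))
```

At 100 eV both give about 1.23 Å, i.e. on the order of interatomic spacings, which is why these electrons diffract from crystal surfaces.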
Inverse space is used to describe the periodicity of the lattice and the interaction of the plane wave with the sample surface. In inverse space (or "k-space"), the wave vectors of the incident and scattered waves are {\displaystyle {\textbf {k}}_{0}=2\pi /\lambda _{0}} and {\displaystyle {\textbf {k}}=2\pi /\lambda }, respectively,
and constructive interference occurs at the Laue condition:
{\displaystyle {\textbf {k}}-{\textbf {k}}_{0}={\textbf {G}}_{\textrm {hkl}}}
where (h,k,l) is a set of integers and {\displaystyle {\textbf {G}}_{\textrm {hkl}}=h{\textbf {a}}^{*}+k{\textbf {b}}^{*}+l{\textbf {c}}^{*}} is a vector of the reciprocal lattice.
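The reciprocal lattice entering the Laue condition can be constructed explicitly from the real-space lattice vectors. A short sketch, using an assumed simple-cubic lattice with a 3 Å spacing purely for illustration:

```python
import numpy as np

# Reciprocal lattice vectors a*, b*, c* built from real-space vectors a, b, c
# via a* = 2*pi*(b x c)/V etc., so that a_i . a_j* = 2*pi*delta_ij.
# The simple-cubic 3 Angstrom lattice below is an illustrative assumption.
a = np.array([3.0, 0.0, 0.0])
b = np.array([0.0, 3.0, 0.0])
c = np.array([0.0, 0.0, 3.0])

volume = np.dot(a, np.cross(b, c))
a_star = 2 * np.pi * np.cross(b, c) / volume
b_star = 2 * np.pi * np.cross(c, a) / volume
c_star = 2 * np.pi * np.cross(a, b) / volume

def g_hkl(h, k, l):
    """Reciprocal lattice vector G_hkl = h a* + k b* + l c*."""
    return h * a_star + k * b_star + l * c_star

# Orthogonality check: the Gram matrix of real vs reciprocal vectors
# must equal 2*pi times the identity.
real = [a, b, c]
recip = [a_star, b_star, c_star]
gram = np.array([[np.dot(r, s) for s in recip] for r in real])
print(np.allclose(gram, 2 * np.pi * np.eye(3)))  # True
print(g_hkl(1, 1, 0))
```

Each LEED spot corresponds to one such G vector satisfying the Laue condition at the surface.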
== Experimental setup ==
A typical LEEM setup consists of an electron gun, used to generate electrons by way of thermionic or field emission from a source tip. In thermionic emission, electrons escape a source tip (usually made of LaB6) by resistive heating and the application of an electric field, which effectively lowers the energy needed for electrons to escape the surface. Once sufficient thermal vibrational energy is attained, electrons may overcome this electrostatic energy barrier, allowing them to travel into vacuum and accelerate down the lens column to the gun potential (because the lenses are at ground). In field emission, rather than heating the tip to vibrationally excite electrons from the surface, the source tip (usually tungsten) is sharpened to a small point so that when large electric fields are applied, they concentrate at the tip, lowering the barrier to escape the surface as well as making tunneling of electrons from the tip to the vacuum level more feasible.
Condenser/illumination optics are used to focus electrons leaving the electron gun and manipulate and/or translate the illumination electron beam. Electromagnetic quadrupole electron lenses are used, the number of which depends on how much resolution and focusing flexibility the designer wishes. However, the ultimate resolution of LEEM is usually determined by that of the objective lens.
Illumination beam aperture allows researchers to control the area of the specimen which is illuminated (LEEM's version of electron microscopy's “selected area diffraction”, termed microdiffraction) and is located in the beam separator on the illumination side.
Magnetic beam separator is needed to resolve the illuminating and imaging beam (while in turn spatially separating the optics for each). There has been much development on the technology of electron beam separators; the early separators introduced distortion in either the image or diffraction plane. However, IBM recently developed a hybrid prism array/nested quadratic field design, focusing the electron beams both in and out of the plane of the beampath, allowing for deflection and transfer of the image and diffraction planes without distortion or energy dispersion.
Electrostatic immersion objective lens is used to form a real image of the sample by way of a 2/3-magnification virtual image behind the sample. The spherical and chromatic aberrations of this lens, larger than those of any other lens in the instrument, together with the uniformity of the electrostatic field between the objective lens and specimen, ultimately determine the overall performance of the instrument.
Contrast aperture is located in the center of the beam separator, on the projector-lens side. In most electron microscopes, the contrast aperture is introduced into the back focal plane of the objective lens (where the actual diffraction plane lies). However, this is not done in LEEM, because dark-field imaging (imaging of nonspecular beams) would then require the aperture to move laterally, and for large shifts it would intercept the incident beam. Therefore, researchers adjust the excitation of the objective lens so as to produce an image of the diffraction pattern in the middle of the beam separator and choose the desired spot intensity to image using a contrast aperture inserted there. This aperture allows scientists to image diffraction intensities that may be of particular interest (dark field).
Projector optics are employed to magnify the image or diffraction pattern and project it onto the imaging plate or screen.
Imaging plate or screen is used to record the electron intensity so that it can be viewed. This can be done in many different ways, including phosphorescent screens, imaging plates, and CCDs, among others.
== Specialized imaging techniques ==
=== Low energy electron diffraction (LEED) ===
After a parallel beam of low-energy electrons interacts with a specimen, the electrons form a diffraction or LEED pattern which depends on periodicity present at the surface and is a direct result of the wave nature of an electron. It is important to point out that in regular LEED the entire sample surface is being illuminated by a parallel beam of electrons, and thus the diffraction pattern will contain information about the entire surface.
LEED performed in a LEEM instrument (sometimes referred to as Very Low-Energy Electron Diffraction (VLEED), due to the even lower electron energies), limits the area illuminated to the beam spot, typically in the order of square micrometers.
The diffraction pattern is formed in the back focal plane of the objective lens, imaged into the object plane of the projective lens (using an intermediate lens), and the final pattern appears on the phosphorescent screen, photographic plate or CCD.
As the reflected electrons are bent away from the electron source by the prism, the specularly reflected electrons can be measured even down to zero landing energy, because no shadow of the source is cast on the screen (which prevents such measurements in regular LEED instruments).
It is worth noting that the spacing of diffracted beams does not increase with kinetic energy as it does in conventional LEED systems. This is because the imaged electrons are accelerated to the high energy of the imaging column and are therefore imaged with a constant size of k-space regardless of the incident electron energy.
=== Microdiffraction ===
Microdiffraction is conceptually exactly like LEED. However, unlike in a LEED experiment, where the sampled surface area is some square millimeters, one inserts the illumination beam aperture into the beam path while imaging a surface and thus reduces the size of the sampled surface area. The chosen area ranges from a fraction of a square micrometer to several square micrometers. If the surface is not homogeneous, a diffraction pattern obtained from a LEED experiment appears convoluted and is therefore hard to analyze. In a microdiffraction experiment researchers may focus on a particular island, terrace, domain and so on, and retrieve a diffraction pattern composed solely of a single surface feature, making the technique extremely useful.
=== Bright field imaging ===
Bright field imaging uses the specularly reflected (0,0) beam to form an image. Also known as phase or interference contrast imaging, bright field imaging makes particular use of the wave nature of the electron to generate vertical diffraction contrast, making steps on the surface visible.
=== Dark field imaging ===
In dark field imaging (also termed diffraction contrast imaging) researchers choose a desired diffraction spot and use a contrast aperture to pass only those electrons that contribute to that particular spot. In the image planes after the contrast aperture it is then possible to observe where the electrons originate from in real space. This technique allows scientists to study on which areas of a specimen a structure with a certain lattice vector (periodicity) exists.
=== Spectroscopy ===
Both (micro-)diffraction as well as bright field and dark field imaging can be performed as a function of the electron landing energy, measuring a diffraction pattern or an image for a range of energies. This way of measuring (often called LEEM-IV) yields spectra for each diffraction spot or sample position. In its simplest form, this spectrum gives a 'fingerprint' of the surface, enabling the identification of different surface structures.
A particular application of bright field spectroscopy is the counting of the exact number of layers in layered materials such as (few layer) graphene, hexagonal boron nitride and some transition metal dichalcogenides.
=== Photoemission electron microscopy (PEEM) ===
In photoemission electron microscopy (PEEM), upon exposure to electromagnetic radiation (photons), secondary electrons are excited from the surface and imaged. PEEM was first developed in the early 1930s, using ultraviolet (UV) light to induce photoemission of (secondary) electrons. Since then, the technique has made many advances, the most important of which was the pairing of PEEM with a synchrotron light source, providing tunable, linearly polarized, left- and right-circularly polarized radiation in the soft X-ray range. Such applications allow scientists to retrieve topographical, elemental, chemical, and magnetic contrast of surfaces.
LEEM instruments are often equipped with light sources to perform PEEM imaging. This helps in system alignment and enables collection of LEEM, PEEM and ARPES data from a single sample in a single instrument.
=== Mirror electron microscopy (MEM) ===
In mirror electron microscopy, electrons are slowed in the retarding field of the condenser lens to the limit of the instrument and are thus only allowed to interact with the “near-surface” region of the sample. It is very complicated to understand exactly where the contrast variations come from, but the important point here is that height variations at the surface of the region change the properties of the retarding field, thereby influencing the reflected (specular) beam. No LEED pattern is formed, because no scattering events have taken place, and therefore the reflected intensity is high.
=== Low-energy electron holography ===
Low-energy electron holography is realized with electrons with kinetic energies in the range 30-250 eV. The source of the coherent electron beam is a sharp metal tip, and the electrons are extracted by field emission. The wave transmitted through the sample propagates to the detector, where the interference pattern formed by superposition of the scattered wave with the non-scattered (reference) wave is acquired, constituting an in-line hologram. The structure of the object (macromolecule) is then reconstructed from the hologram by numerical methods. Low-energy electron holography has successfully been applied for imaging of individual biological molecules, including: purple membrane, DNA molecules, phthalocyaninato polysiloxane molecules, the tobacco mosaic virus, a bacteriophage, ferritin and individual proteins (bovine serum albumin, cytochrome C and hemoglobin). The resolution achieved by low-energy electron holography is about 0.7-1 nm.
=== Reflectivity contrast imaging ===
The elastic backscattering of low energy electrons from surfaces is strong. The reflectivity coefficients of surfaces depend strongly on the energy of incident electrons and the nuclear charge, in a non-monotonic fashion. Therefore, contrast can be maximized by varying the energy of the electrons incident at the surface.
=== Spin-polarized LEEM (SPLEEM) ===
SPLEEM uses spin-polarized illumination electrons to image the magnetic structure of a surface by way of spin–spin coupling of the incident electrons with that of the surface.
=== Other ===
Other advanced techniques include:
Low-Energy Electron Potentiometry: measuring the shift of LEEM spectra allows determination of the local work function and electrical potential.
ARRES: Angular Resolved Reflected Electron Spectroscopy.
eV-TEM: Transmission Electron Microscopy at LEEM energies.
== References ==
Bauer, Ernst (1998). "LEEM basics". Surface Review and Letters. 5 (6): 1275–1286. Bibcode:1998SRL.....5.1275B. doi:10.1142/S0218625X98001614.
Bauer, Ernst (1994). "Low energy electron microscopy". Rep. Prog. Phys. 57 (9): 895–938. Bibcode:1994RPPh...57..895B. doi:10.1088/0034-4885/57/9/002. S2CID 250913078.
Tromp, R. M. (2000). "Low-energy electron microscopy" (PDF). IBM J. Res. Dev. 44 (4): 503–516. doi:10.1147/rd.444.0503. S2CID 37638137.
Anders, S.; Padmore, Howard A.; Duarte, Robert M.; Renner, Timothy; Stammler, Thomas; Scholl, Andreas; Scheinfein, Michael R.; Stöhr, Joachim; et al. (1999). "Photoemission electron microscope for the study of magnetic materials". Review of Scientific Instruments. 70 (10): 3973–3981. Bibcode:1999RScI...70.3973A. doi:10.1063/1.1150023. Archived from the original on 2013-02-23. Retrieved 2020-03-19.
Physical crystallography before X-rays describes how physical crystallography developed as a science up to the discovery of X-rays by Wilhelm Conrad Röntgen in 1895. In the period before X-rays, crystallography can be divided into three broad areas: geometric crystallography culminating in the discovery of the 230 space groups in 1891–4, chemical crystallography and physical crystallography.
Physical crystallography is concerned with the physical properties of crystals, such as their optical, electrical, and magnetic properties. The effect of electromagnetic radiation on crystals is covered in the following sections: double refraction, rotary polarization, conical refraction, absorption and pleochroism, luminescence, fluorescence and phosphorescence, reflection from opaque materials, and infrared optics. The effect of temperature change on crystals is covered in: thermal expansion, thermal conduction, thermoelectricity, and pyroelectricity. The effect of electricity and magnetism on crystals is covered in: electrical conduction, magnetic properties, and dielectric properties. The effect of mechanical force on crystals is covered in: photoelasticity, elastic properties, and piezoelectricity.
The study of crystals in the time before X-rays was focused more on their geometry and mathematical analysis than their physical properties. Unlike geometrical crystallography, the history of physical crystallography has no central story, but is a collection of developments in different areas.
== Symmetry ==
During the 19th century crystallography was progressively transformed into an empirical and mathematical science by the adoption of symmetry concepts. In 1832 Franz Ernst Neumann used symmetry considerations when studying double refraction. Woldemar Voigt, who was a student of Neumann, in 1885 formalized Neumann's principle as "if a crystal is invariant with respect to certain symmetry operations, any of its physical properties must also be invariant with respect to the same symmetry operations". Neumann's principle is sometimes referred to as the Neumann–Minnigerode–Curie principle based on later work by Bernhard Minnigerode (another student of Neumann) and Pierre Curie. Curie's principle "the symmetries of the causes are to be found in the effects" is a generalization of Neumann's principle. At the end of the 19th century Voigt introduced tensor calculus to model the physical properties of anisotropic crystals.
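Neumann's principle, as formalized by Voigt, can be checked numerically for a second-rank property tensor: applying a symmetry operation R of the crystal must leave the tensor unchanged, T = R T Rᵀ. The sketch below uses an assumed tetragonal-like tensor, chosen only to illustrate the principle:

```python
import numpy as np

# Neumann's principle for a second-rank property tensor: if R is a symmetry
# operation of the crystal, the tensor must satisfy R @ T @ R.T == T.
# Example: a crystal with a 4-fold axis along z and an assumed
# conductivity-like tensor diag(s_perp, s_perp, s_par).
T = np.diag([2.0, 2.0, 5.0])

# 90-degree rotation about z, a symmetry operation of the 4-fold axis.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
R = np.array([[c,  -s,  0.0],
              [s,   c,  0.0],
              [0.0, 0.0, 1.0]])
print(np.allclose(R @ T @ R.T, T))  # True: the tensor is invariant

# A rotation that is NOT a symmetry of the crystal need not leave it invariant:
t = 0.3  # arbitrary angle about the y axis
R_y = np.array([[np.cos(t), 0.0, np.sin(t)],
                [0.0,       1.0, 0.0],
                [-np.sin(t), 0.0, np.cos(t)]])
print(np.allclose(R_y @ T @ R_y.T, T))  # False
```

This is exactly the constraint Voigt's tensor calculus exploits: crystal symmetry reduces the number of independent components of a physical property.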
== Double refraction ==
Double refraction occurs when a ray of light incident upon a birefringent material is split by polarization into two rays taking slightly different paths. The double refraction and rhomboidal cleavage of crystals of calcite, or Iceland spar, were first recorded in 1669 by Rasmus Bartholin. In 1690 Christiaan Huygens analyzed double refraction in his book Traité de la lumière. Huygens reasoned that the cleavage rhombohedron resulted from the stacking of spherical particles and that the peculiarities of the transmission of light can be traced to the particular asymmetry of the crystal.
In 1810 Étienne-Louis Malus determined that natural light, too, when reflected through a certain angle, behaves like one of the rays exiting a double-refracting crystal. Malus called this phenomenon polarization. In 1812 Jean-Baptiste Biot defined optically positive and negative crystals for the first time. In 1819 David Brewster found that all crystals could be classified as isotropic, uniaxial or biaxial. Augustin-Jean Fresnel was a significant researcher in the whole field of crystal optics, and published a detailed paper on double refraction in 1827 in which he described the phenomenon in terms of polarization, understanding light as a wave with field components in transverse polarization. Crystal optics was an active research area during the 19th century and comprehensive accounts of the field were published by Lazarus Fletcher (1891), Theodor Liebisch (1891) and Friedrich Pockels (1906).
== Thermal expansion ==
In 1824 Eilhard Mitscherlich observed that the angle between the cleavage faces of calcite changed with the temperature of the crystal. Mitscherlich concluded that, on heating, calcite contracts (has a negative coefficient of thermal expansion) in a direction perpendicular to the trigonal axis while expanding (positive coefficient) along that axis. This implies that there is a cone of directions along which there is no thermal expansion. In 1864 Hippolyte Fizeau used an optical interference method to make measurements on many crystals. The measurements of the change of interfacial angle and the expansion of cut plates and bars were applied to crystals of all symmetries.
Crystals with less than cubic symmetry are anisotropic and will generally have different expansion coefficients in different directions. If the crystal symmetry is monoclinic or triclinic, even the angles between the axes are subject to thermal changes. In these cases the coefficient of thermal expansion is a tensor. If the temperature T of a crystal is raised by an amount ΔT, a deformation takes place that is described by the strain tensor uij = αijΔT. The quantities αij are the coefficients of thermal expansion. Since uij is a symmetrical polar tensor of second rank and T is a scalar, αij is a symmetric tensor of second rank. The contemporary usage of the term tensor was introduced by Woldemar Voigt in 1898.
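The tensor relation uij = αijΔT, and Mitscherlich's cone of zero expansion in calcite, can be made concrete numerically. The expansion coefficients below are approximate literature magnitudes for calcite, assumed here only for illustration:

```python
import numpy as np

# Strain from uniform heating: u_ij = alpha_ij * dT, with alpha_ij a
# symmetric second-rank tensor. Calcite-like coefficients (approximate,
# assumed): ~ +25e-6 /K along the trigonal axis (z), ~ -5.8e-6 /K
# perpendicular to it.
alpha = np.diag([-5.8e-6, -5.8e-6, 25e-6])  # /K, in principal axes
dT = 100.0  # K

u = alpha * dT  # strain tensor for this temperature change
print(u[2, 2])  # expansion along the axis
print(u[0, 0])  # contraction perpendicular to it

# Cone of zero expansion: along a unit vector n, the length strain is
# n . u . n, which vanishes when tan^2(theta) = alpha_par / (-alpha_perp),
# with theta measured from the trigonal axis.
theta = np.arctan(np.sqrt(25e-6 / 5.8e-6))
n = np.array([np.sin(theta), 0.0, np.cos(theta)])
print(abs(n @ u @ n) < 1e-9)  # True: no net expansion along the cone
```

The positive u33 and negative u11 reproduce Mitscherlich's observation, and the direction n sits on the cone along which the crystal neither expands nor contracts.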
== Thermal conduction ==
Joseph Fourier was an early researcher in thermal conduction, publishing Théorie analytique de la chaleur in 1822. The first experiments on thermal conduction in crystals were carried out by Jean-Marie Duhamel in 1832.
Henri Hureau de Sénarmont conducted experiments to determine if heat would move through crystals with directional dependence. He found that, for non-cubic crystals, the isothermal envelope surrounding a point source of heat in a crystal plate had an elliptical shape whose exact form depended on the orientation of the crystal. Sénarmont's results qualitatively established that thermal conductivity is directionally dependent (thermal anisotropy), with characteristic directions related to crystallographic axes. In 1848 Duhamel provided an analysis of Sénarmont's findings.
George Gabriel Stokes and William Thomson provided mathematical theories to explain Sénarmont's observations. Stokes acknowledged the connection between the phenomena and the symmetry of the crystal, and showed that the number of constants of heat conductivity reduces from nine to six in the case of two planes of symmetry. The matrix of thermal conductivity components resulting from Stokes's derivation constituted a tensor. Experiments by Franz Stenger in 1884 examined the theories put forward by Stokes and Thomson and disproved some of their theoretical speculations.
== Thermoelectricity ==
Thomas Johann Seebeck discovered the thermoelectric effect in 1821, although it has been claimed that Alessandro Volta should be given the priority. In 1844 Wilhelm Gottlieb Hankel investigated thermoelectricity in cobalt and iron sulfide crystals. Hankel showed that when certain external faces were developed the crystals were thermoelectrically positive relative to copper, whereas with other facial forms they were negative. In 1850 Jöns Svanberg used bismuth and antimony crystals to demonstrate a directional variation of the thermoelectric effect. In 1854 William Thomson put forward a mechanical theory of thermoelectric currents in crystalline solids. In 1889 Theodor Liebisch analyzed the dependence of the thermoelectric force on the crystallographic direction in anisotropic crystals.
== Electrical conduction ==
The first observations on the variation of electrical conductivity with direction in a crystal (anisotropy) were made by Henri Hureau de Sénarmont in 1850 on 36 different substances. The results showed a correlation between the axes of symmetry and the directions of maximum or minimum conductivity. In 1855 Carlo Matteucci performed experiments on bismuth. In 1888, Helge Bäckström performed electrical conduction measurements on hematite, another crystal of rhombohedral symmetry.
Electrical conductivity in a crystal is now defined as a second-rank symmetric tensor relating two vectors:

{\displaystyle J_{i}=\sigma _{ij}E_{j},}

where {\displaystyle J_{i}} is the current density, {\displaystyle \sigma _{ij}} is the electrical conductivity tensor, and {\displaystyle E_{j}} is the electric field intensity.
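A consequence of the tensor form of Ohm's law is that in an anisotropic crystal the current density need not be parallel to the applied field. A short sketch, with an assumed conductivity tensor for a hypothetical uniaxial crystal:

```python
import numpy as np

# Ohm's law in an anisotropic crystal: J_i = sigma_ij * E_j.
# Assumed conductivity tensor (S/m) for a hypothetical uniaxial crystal,
# with a different value along z than in the basal plane.
sigma = np.diag([1.0e4, 1.0e4, 4.0e4])

E = np.array([1.0, 0.0, 1.0])  # field at 45 degrees to the unique axis (V/m)
J = sigma @ E
print(J)  # J is NOT parallel to E

# The angle between J and E vanishes only along principal directions:
cos_angle = J @ E / (np.linalg.norm(J) * np.linalg.norm(E))
print(round(float(cos_angle), 3))
```

Sénarmont's correlation between symmetry axes and directions of extreme conductivity corresponds to the principal axes of this tensor, along which J and E are parallel.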
== Magnetic properties ==
Until the 19th century crystals were regarded either as magnetic or nonmagnetic. Magnetic crystals are now called ferromagnetic to distinguish them from the several other kinds which have since been discovered. Siméon Denis Poisson (1826) put forward a theory of magnetism as applied to crystals and predicted the behaviour of crystals in a magnetic field which was verified by Julius Plücker in 1847. Plücker studied various natural crystals, such as quartz, and related the reaction of the crystal to a magnetic field to its symmetry. All these crystals were repelled from a strong field, unlike ferromagnetic crystals. They were therefore called diamagnetic. In 1850 a number of investigations were carried out by Plücker and August Beer using torsion balances to measure the small forces involved in most observations. Not only were some crystals repelled from a strong field but others were slightly attracted. These were called paramagnetic. Between 1850 and 1856 John Tyndall studied diamagnetism in crystals.
By the end of the 19th century the three types of crystal, ferromagnetic, diamagnetic and paramagnetic, were well established and successful theories had related diamagnetic and paramagnetic crystals to their crystal symmetry. Ferromagnetic properties were dealt with by Pierre Weiss (1896) who explained the hysteresis by assuming that the atoms have permanent magnetic poles which are normally in random positions, but arrange themselves in parallel under the influence of a magnetic field. On removing the field the mutual effect of the parallel dipoles tends to maintain the magnetized state. He further postulated that there were domains within which all the atomic dipoles were similarly orientated and that the N-S axis could be differently orientated in neighbouring domains.
== Dielectric properties ==
A dielectric is an electrical insulator that can be polarised by an applied electric field. In 1851 the first experiments on the behaviour of crystals in an electric field were carried out by Hermann Knoblauch in a manner similar to that used for the study of magnetic properties. The conductivity of the crystals, both over the surface and through the body of the crystal, made these experiments unreliable. In 1876 Elihu Root avoided some of these difficulties by employing a rapidly alternating field between parallel plates. In 1893 Friedrich Pockels gave an account of the abnormally large piezoelectric constants of Rochelle salt. A brief history on the theories of dielectrics in the 19th century has been written.
== Rotary polarization ==
In 1811 François Arago, who favoured the corpuscular theory of light, discovered the rotation of the plane of polarization of light travelling through quartz. In 1812 Jean-Baptiste Biot, who favoured the wave theory of light, enunciated the laws of rotary polarization and their application to the analysis of various substances. Biot discovered that while some crystals rotate the light to the right others rotate it to the left, and determined that the rotation is proportional to the thickness of substance traversed and inversely proportional to the square of the wavelength of the light.
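Biot's laws can be put into a small numerical sketch. The rotatory power of quartz at the sodium D line (~21.7 deg/mm) is a literature magnitude assumed here only for illustration:

```python
# Biot's laws for optical rotation: the rotation angle grows linearly with
# path length and approximately as the inverse square of the wavelength.
# The quartz rotatory power at 589 nm (~21.7 deg/mm) is an assumed value.
def rotation_deg(thickness_mm, wavelength_nm, rho_589=21.7):
    """Approximate rotation angle using inverse-square wavelength scaling."""
    return rho_589 * thickness_mm * (589.0 / wavelength_nm) ** 2

print(rotation_deg(1.0, 589.0))  # 21.7 deg for 1 mm at the sodium D line
print(rotation_deg(2.0, 589.0))  # doubles with thickness: 43.4
print(rotation_deg(1.0, 430.0))  # shorter wavelengths are rotated more
```

The proportionality to thickness and the strong wavelength dependence are exactly what made rotatory polarization useful for the quantitative analysis of substances.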
In 1821 John Herschel pointed out the relation between the direction of rotation and the development of faces on quartz crystals. Suspecting that rotatory polarization is an effect of a lack of symmetry, Herschel established that quartz crystals often present faces placed in such a way that those belonging to certain crystals are mirror images of the corresponding faces of other crystals. He explained the connection between this arrangement and the respective rotation of light to the right and to the left. In 1822 Augustin-Jean Fresnel explained the rotation by postulating oppositely circularly polarized beams travelling with different velocities along the optic axis. In 1831 George Biddell Airy gave an explanation of the formation of the spirals which bear his name. In 1846 Michael Faraday discovered that the plane of polarization may also be rotated when light passes through an isotropic medium when it is in a magnetic field. The corresponding Kerr effect can be observed on reflecting plane-polarized light from a polished ferromagnetic mirror when in a magnetized state.
In 1848 Louis Pasteur gave the general relation between crystal morphology and rotatory polarization. Pasteur solved the mystery of polarized light acting differently with chemically identical crystals and solutions. Pasteur discovered the phenomenon of molecular asymmetry, that is that molecules could be chiral and exist as a pair of enantiomers. Pasteur's method was to physically separate the crystals of a racemic mixture of sodium ammonium tartrate into right- and left-handed crystals, and then dissolve them to make two separate solutions which rotated polarized light in opposite directions.
In 1855 Christian August Hermann Marbach discovered that crystals of sodium chlorate, sodium bromate, sodium ammonium sulfate and sodium amyl acetate have the property of rotating the polarization plane. In 1857 Alfred Des Cloizeaux advanced a general theory of rotatory polarization whilst studying cinnabar and strychnine sulphate. In 1864 Josef Stefan introduced the banded spectrum in the study of rotatory polarization. Theories of magnetic optics in ferromagnetic crystals were published in 1892 by D. A. Goldhammer, and in 1893 by Paul Drude.
== Conical refraction ==
Conical refraction is an optical phenomenon in which a ray of light, passing through a biaxial crystal along certain directions, is refracted into a hollow cone of light. There are two possible conical refractions, one internal and one external.
In 1821-1822 Augustin-Jean Fresnel developed a theory of double refraction in both uniaxial and biaxial crystals. Fresnel derived the equation for the wavevector surface in 1823, and André-Marie Ampère rederived it in 1828. Many others investigated the wavevector surface of the biaxial crystal, but they all missed its physical implications.
William Rowan Hamilton, in his work on Hamiltonian optics, discovered the wavevector surface has four conoidal points and four tangent conics. This implies that, under certain conditions, a ray of light could be refracted into a cone of light within the crystal. He termed this phenomenon "conical refraction" and predicted two distinct types: internal and external, corresponding respectively to the conoidal points and tangent conics. Hamilton announced his discovery on 22 October 1832. He then asked Humphrey Lloyd to prove his theory experimentally. Lloyd first observed conical refraction on 14 December 1832 with a specimen of aragonite, and published his results in early 1833. In 1833 James MacCullagh claimed that Hamilton's work was a special case of a theorem he had published in 1830. Hamilton also exchanged letters with George Biddell Airy who was skeptical that conical refraction could be observed experimentally but became convinced after Lloyd's report.
Hamilton and Lloyd's discovery was a significant victory for the wave theory of light and solidified Fresnel's theory of double refraction. The discovery of conical refraction is an example of a mathematical prediction being subsequently verified by experiment.
Later theoretical work on conical refraction was published in 1860 by Robert Bellamy Clifton and in 1874 by Jules Antoine Lissajous, and experimental work in 1888 by Theodor Liebisch and in 1889 by Albrecht Schrauf.
== Photoelasticity ==
Photoelasticity describes changes in the optical properties of a material under mechanical deformation. The photoelastic phenomenon in transparent, non-crystalline materials (gels and glasses) was first discovered by David Brewster in 1815. Brewster then detected the effect in crystals and showed that uniaxial crystals could be made biaxial. In 1822 Augustin-Jean Fresnel experimentally confirmed that the photoelastic effect was a stress-induced birefringence.
Franz Ernst Neumann investigated double refraction in stressed transparent bodies. In 1841 Neumann published his elastic equations, which describe, in differential form, the changes which polarized light experiences when travelling through a stressed body. The Neumann equations are the basis of all subsequent photoelasticity research.
The photoelastic effect was analyzed by Friedrich Pockels, who also discovered the Pockels electro-optic effect, (the production of birefringence of light on the application of an electric field). In 1889/90 Pockels produced a phenomenological theory for both of these effects for all crystal classes.
== Absorption and pleochroism ==
In 1809 Louis Cordier discovered the phenomenon of pleochroism while investigating a new mineral that he named dichroïte. Dichroïte (cordierite) crystals showed different colors when viewed along different axes. From 1817 to 1819 David Brewster made a systematic study of light absorption and pleochroism in various minerals and showed that, in uniaxial crystals, the absorption is smallest in the direction of, and greatest at right angles to, the optical axis. In 1820 John Herschel studied the absorption of light in biaxial crystals and explained the interference rings first observed by David Brewster. In 1838 Jacques Babinet discovered that the greatest absorption in a crystal generally coincided with the direction of greatest refractive index. In 1845 Wilhelm Haidinger published a general account of pleochroism in crystals. In 1854 Henri Hureau de Sénarmont showed that transparent crystals stained by a dye during crystal growth became pleochroic.
In 1877 Paul Glan performed photometric observations on absorption. In 1880 Hugo Laspeyres pointed out the existence of absorption axes (directions of least, intermediate, and greatest absorption). He investigated certain biaxial crystals and found that the absorption axes, although subject to the symmetry of the crystal, did not necessarily coincide with the principal directions of the indicatrix. In 1888 Henri Becquerel made qualitative and quantitative observations. Woldemar Voigt (1885) and Paul Drude (1890) presented theories of the absorption of light in crystals. In 1906 Friedrich Pockels published his Lehrbuch der Kristalloptik which gave an overview of the subject.
== Luminescence, fluorescence and phosphorescence ==
Luminescence is the non-thermal emission of visible light by a substance; an example is the emission of visible light by minerals in response to irradiation by ultraviolet light. The term luminescence was first used by Eilhard Wiedemann in 1888; he stated that luminescence was separate from thermal radiation, and he distinguished six different forms of luminescence according to their excitation, for example photoluminescence, electroluminescence, etc.
Fluorescence is luminescence which occurs during the irradiation of a substance by electromagnetic radiation; fluorescent materials generally cease to glow nearly immediately when the radiation source stops. The term fluorescence was coined by George Stokes in 1852, and was derived from the behavior of fluorite when exposed to ultraviolet light.
Phosphorescence is long-lived luminescence; phosphorescent materials continue to emit light for some time after the radiation stops. In 1857 Edmond Becquerel invented the phosphoroscope, and in a detailed study of phosphorescence and fluorescence, showed that the duration of phosphorescence varies by substance, and that phosphorescence in solids is due to the presence of finely dispersed foreign substances. Becquerel suggested that fluorescence is simply phosphorescence of a very short duration. The most prominent phosphorescent material for 130 years was ZnS doped with Cu+, or later Co2+, ions. The material was discovered in 1866 by Théodore Sidot who succeeded in growing tiny ZnS crystals by a sublimation method.
Crystalloluminescence is the emission of light during crystal growth from solution. The first observation was that of potassium sulfate which was reported by a number of researchers in the eighteenth century; other substances reported in the early literature which exhibit crystalloluminescence include strontium nitrate, cobalt sulfate, potassium hydrogen sulfate, sodium sulfate, and arsenious acid. In 1918 Harry Weiser summarised the research on crystalloluminescence up to that date. Neither the spectral distribution nor the excitation mechanisms of crystalloluminescence are understood.
Triboluminescence is the generation of light when certain materials, for example quartz, are rubbed; fractoluminescence is the emission of light from the fracture of a crystal. The first recorded observation is attributed to Francis Bacon when he recorded in his 1620 Novum Organum that sugar sparkles when broken or scraped in the dark. The scientist Robert Boyle also reported on some of his work on triboluminescence in 1664.
In 1677 Henry Oldenburg described the luminescence of fluorite, CaF2, on heating. In 1830 Thomas Pearsall observed that colourless fluorite could be coloured by discharging sparks from a Leyden jar held against it. In 1881 luminescence excited by cathode rays was described by William Crookes. In 1885 Edmond Becquerel found that when crystals were bombarded by cathode rays they became coloured and also emitted light. In 1894 Eugen Goldstein showed that ultraviolet light has the same effect as cathode rays.
== Reflection from opaque materials ==
The study of the optical properties of opaque substances has been closely linked with the development of suitable microscopes. The first instrument adapted to reflected light was the Lieberkühn reflector attributed to Johann Nathanael Lieberkühn. The use of polished and etched surfaces for this type of study was introduced by Jöns Jacob Berzelius in 1813. A theory of the light reflected from metals was put forward by Augustin-Louis Cauchy in 1848. In 1858 Henry Clifton Sorby established the technique of cutting minerals and crystals into thin sections for examination under the polarizing microscope. In 1864 Sorby studied the microscopical structure of minerals from meteorites. In 1888 Paul Drude published work on reflection from antimony sulfide.
== Infrared optics ==
Heinrich Rubens measured the dependence of the refractive index of quartz on wavelength, and found absorption in particular infrared wavelength ranges. By 1896 Rubens saw these bands as a potential filter that would allow him to separate out an almost monochromatic beam from the broad range of infrared radiation that his sources produced. In 1897 Rubens and his student Ernest Fox Nichols studied the reststrahlen (residual rays) obtained when infrared rays of appropriate wavelength are reflected from the surfaces of crystals.
== Pyroelectricity ==
Pyroelectricity is the generation of a temporary voltage in a crystal when subjected to a temperature change. The appearance of electrostatic charges upon a change of temperature has been observed since ancient times, in particular with tourmaline, and was described by, among others, Steno, Linnaeus, Aepinus and René Just Haüy. Aepinus published an account of his observations in 1756. Haüy made detailed investigations of pyroelectricity; he detected pyroelectricity in calamine and showed that electricity in tourmaline was strongest at the poles of the crystal and became imperceptible at the middle. Haüy published a book on electricity and magnetism in 1787. Haüy later showed that hemihedral crystals are electrified by temperature change while holohedral (symmetric) crystals are not.
Research into pyroelectricity became more quantitative in the 19th century. In 1824 David Brewster gave the effect the name it has today. In 1840 Gabriel Delafosse, Haüy's student, theorized that only molecules which are not symmetrical can be polarized electrically. Both William Thomson in 1878 and Woldemar Voigt in 1897 helped develop a theory for the processes behind pyroelectricity.
A detailed history of pyroelectricity has been written by Sidney Lang; shorter histories have also been published.
== Elastic properties ==
Some minerals, for example mica, are highly elastic, springing back to their original shape after being bent. Others, for example talc, may be readily bent but do not return to their original form when released. The first theories of the elasticity of solid bodies were developed in the 1820s. Augustin-Louis Cauchy and Siméon Denis Poisson published theories of the mutual action of a regular arrangement of particles for a non-cubic body in 1823 and 1829 respectively. In 1827 Claude-Louis Navier published a theory for an isotropic body. Also during the 1820s Friedrich Mohs introduced his eponymous scale of hardness. In 1834 Franz Ernst Neumann published a paper on the elasticity of homohedral crystals.
In 1828 Cauchy generalised the problem and showed that 36 independent constants were required to describe elasticity in crystals. George Green (1837) introduced the limitation that the force between any two elements of a crystal, however small, must lie along the line joining their centres. This reduced the number of constants from 36 to 21. William Thomson (1857) showed that Green's assumption was unnecessary and that the thermodynamic requirements of a reversible process require only 21 constants, without any special assumptions. In 1874 Woldemar Voigt measured the elasticity of rock salt and G. Baumgarten measured the elasticity of calcite. In 1887 Wilhelm Röntgen and J. Schneider measured the cubic compressibility of sodium and potassium chlorides. In 1877 Lambros Koromilas measured the elasticity of gypsum and mica by twisting mineral bars; in 1881 H. Klang carried out similar experiments with fluorite.
In the period 1874–1888 Voigt was the leading researcher on the elasticity of crystals. Voigt showed that the number of elasticity constants reduces as more symmetry is introduced into the crystal. For a triclinic crystal, which is the most general case, 21 elasticity constants are required. For a monoclinic crystal there are 13 elasticity constants, for a rhombic crystal 9, for a hexagonal crystal 7, for a tetragonal crystal 6, and finally for a cubic crystal there are only 3. A summary of developments in the field was published by W. A. Wooster.
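These counts can be restated with a small calculation: the 21-constant general case is just the number of independent entries in a symmetric 6×6 stiffness matrix in Voigt notation, and the smaller figures follow as symmetry constraints are imposed. The sketch below tabulates the numbers as quoted in the text; note that modern tabulations give 5 constants for the hexagonal system (the figure of 7 corresponds to crystal classes now assigned to the trigonal system).

```python
# Independent entries of a symmetric 6x6 stiffness matrix (Voigt notation):
# the count for the most general, triclinic crystal.
general = 6 * 7 // 2
print(general)  # 21

# Counts as quoted in the text (Voigt's era). Modern references list 5
# constants for hexagonal crystals; 6 or 7 for the trigonal classes.
constants_by_system = {
    "triclinic": 21,
    "monoclinic": 13,
    "rhombic (orthorhombic)": 9,
    "hexagonal": 7,
    "tetragonal": 6,
    "cubic": 3,
}
assert constants_by_system["triclinic"] == general
```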
== Piezoelectricity ==
In 1880 Pierre and Jacques Curie discovered piezoelectricity (an electric charge that accumulates in response to applied mechanical stress) in certain crystals, including quartz, tourmaline, cane sugar and sodium chlorate. The Curies, however, did not predict the converse piezoelectric effect (the internal generation of a mechanical strain resulting from an applied electric field). The converse effect was deduced by Gabriel Lippmann in 1881. The Curies immediately confirmed the existence of the effect, and went on to obtain quantitative proof of the complete reversibility of electro-elasto-mechanical deformations in piezoelectric crystals.
In 1890 Woldemar Voigt published a phenomenological theory of the piezoelectric effect based on the symmetry of crystals without centrosymmetry.
== Research community ==
Before the 20th century crystallography was not a well-established academic discipline. There were no academic positions specifically in crystallography. Workers in the field normally carried out their crystallographic research as an adjunct to other employment, or had independent means. The leading workers in the field of physical crystallography were employed as follows:
Professors
Mathematics or science: Airy, Arago, E. Becquerel, Biot, Curie, Drude, Hamilton, Linnaeus, Mitscherlich, Pasteur, Pockels, Plücker, Stokes, Tyndall, Thomson, Voigt
Mineralogy: Groth, Haüy, Liebisch, Mohs, Neumann, Sénarmont
Other employment: Bartholinus (physician), Brewster (editor), Fresnel (engineer), Hooke (municipal official), Malus (military officer)
Independently wealthy: Herschel, Huygens
In the nineteenth century there were informal schools of physical crystallography researchers in France (Arago, E. Becquerel, Biot, Fresnel, Haüy, Sénarmont), Germany (Drude, Groth, Liebisch, Mitscherlich, Mohs, Neumann, Pockels, Voigt) and the British Isles (Airy, Brewster, Hamilton, Stokes, Thomson).
Until the founding of Zeitschrift für Krystallographie und Mineralogie by Paul Groth in 1877 there was no leading journal for the publication of crystallographic papers. The majority of crystallographic research was published in the journals of national scientific societies, or in mineralogical journals. The inauguration of Groth's journal marked the emergence of crystallography as a mature science independent of geology.
== See also ==
Timeline of crystallography
== Citations ==
== Works cited ==
The law of symmetry is a law in the field of crystallography concerning crystal structure. The law states that all crystals of the same substance possess the same elements of symmetry. The law is also named the law of constancy of symmetry, Haüy's law or the third law of crystallography.
== Definition ==
The way in which the law of symmetry was originally defined by Haüy in 1815 was based on his law of decrements and his conception of crystals being assembled of tiny parallelepipeds (molécules intégrantes) stacked up in three dimensions without leaving any gaps. The modern definition of the law of symmetry is based on symmetry elements, and is more in the German dynamistic crystallographic tradition of Christian Samuel Weiss, Moritz Ludwig Frankenheim and Johann F. C. Hessel. Weiss and his followers studied the external symmetry of crystals rather than their internal structure.
René Just Haüy first lectured about his law of symmetry in 1795 but it was not until 1815 that it was finally published. Haüy states the law as follows: "It consists in this, that any one method of decrement (décroissement) is repeated on all those parts of the nucleus of which the resemblance is such, that one can be substituted for the other by changing the position of this nucleus with respect to the eye, without it (the nucleus) ceasing to be presented in the same aspect".
Later authors stated the law in clearer forms:
"The law of symmetry by René Just Haüy (1815): if the shape of a crystal is altered, corresponding parts (faces, edges, angles) of the crystal are simultaneously and similarly modified."
"One of the most important results of Haüy's researches was the discovery of the law of symmetry, according to which when one form of crystallization is modified by its combination with other forms, all the similar parts, the edges, angles and faces, are always modified at the same time and in the same way."
"The way in which nature produces crystals is always that of the greatest symmetry, in that opposite and corresponding parts are always equal in number, arrangement and shape." René Haüy, 1815.
The law of symmetry stands out as one of the foremost contributions by Haüy. It is more an intuition than a true scientific law, but it alerted crystallographers to the importance of symmetry. This is one of its possible formulations: "A given type of decrement repeats itself on all the parts of the nucleus that are so similar that they can be substituted one for the other, when changing the position of this nucleus with respect to the eye. I call this [sic] parts identical."
== Symmetry elements ==
Haüy's method of building crystals from stacked parallelepipeds has been replaced in modern crystallography by three-dimensional lattices (Bravais lattices). The 32 crystallographic point groups combine the following symmetry elements.
=== Axis of symmetry ===
If a crystal can be rotated about an axis through its centre into a position where it appears identical to the starting position, then that axis is an axis of symmetry. A crystal may have zero, one, or multiple axes of symmetry but, by the crystallographic restriction theorem, the order of rotation may only be 2-fold, 3-fold, 4-fold, or 6-fold for each axis. An exception is made for quasicrystals, which may have other orders of rotation, for example 5-fold. An axis of symmetry is also known as a proper rotation.
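The restriction can be checked directly: a rotation that maps a lattice onto itself is represented by an integer matrix in a lattice basis, so its trace, which in three dimensions equals 2cos(2π/n) + 1, must be an integer. A minimal sketch of this standard argument (order 1 is the trivial identity):

```python
import math

# Crystallographic restriction: a rotation by 2*pi/n compatible with a 3D
# lattice has an integer matrix in the lattice basis, so its trace
# 2*cos(2*pi/n) + 1 must be an integer.
def lattice_compatible(n: int) -> bool:
    trace = 2 * math.cos(2 * math.pi / n) + 1
    return abs(trace - round(trace)) < 1e-9

allowed = [n for n in range(1, 13) if lattice_compatible(n)]
print(allowed)  # [1, 2, 3, 4, 6]
```

Five-fold rotation fails the test (its trace is the irrational golden ratio), which is why 5-fold symmetry appears only in quasicrystals.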
=== Plane of symmetry ===
If a crystal can be divided by a plane into two mirror-image halves, then the plane is a plane of symmetry. A crystal may have zero, one, or multiple planes of symmetry. For example, a cube has nine planes of symmetry. A plane of symmetry is also known as reflection symmetry or mirror symmetry.
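The count of nine mirror planes for the cube can be verified by brute force, assuming NumPy: among the 48 signed-permutation matrices that form the cube's full symmetry group, the mirror reflections are exactly the improper elements with eigenvalues (1, 1, −1), i.e. determinant −1 and trace +1.

```python
import itertools
import numpy as np

# Enumerate the 48 signed-permutation matrices (the full symmetry group of
# the cube) and count the mirror reflections: for an improper element the
# trace is 2*cos(theta) - 1, which equals +1 only for a pure reflection.
mirrors = 0
for perm in itertools.permutations(range(3)):
    for signs in itertools.product([1, -1], repeat=3):
        m = np.zeros((3, 3))
        for row, (col, s) in enumerate(zip(perm, signs)):
            m[row, col] = s
        if np.isclose(np.linalg.det(m), -1) and np.isclose(np.trace(m), 1):
            mirrors += 1
print(mirrors)  # 9
```

The nine break down as three planes parallel to the cube faces plus six diagonal planes through opposite edges.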
=== Centre of symmetry ===
If every face of a crystal has another identical face at an equal distance from a central point, then this point is called the centre of symmetry, symbolised as i. A crystal can only have one centre of symmetry. A centre of symmetry is also known as point reflection, inversion symmetry, or centrosymmetry.
=== Rotoinversion symmetry ===
A rotoinversion, symbolised as 1̄, 2̄, 3̄, 4̄ or 6̄, is a combination of a rotation about an axis and an inversion through a point on that axis. As an example, a two-fold rotoinversion (2̄), which is equivalent to a reflection in the plane perpendicular to the axis, is illustrated in the figure. Rotoinversion is also known as improper rotation, rotoreflection, or rotation-reflection.
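A small numerical illustration, assuming NumPy: composing a 180° rotation about the z-axis with inversion through the origin yields the matrix diag(1, 1, −1), a reflection in the xy-plane, which shows why the two-fold rotoinversion is equivalent to a mirror.

```python
import numpy as np

# Two-fold rotoinversion about z: rotate 180 degrees about z, then invert
# through the origin. The result is a reflection in the plane z = 0.
rot_z_180 = np.diag([-1.0, -1.0, 1.0])  # 180-degree rotation about z
inversion = -np.eye(3)                  # inversion through the origin, -I
two_bar = inversion @ rot_z_180
print(two_bar)  # diag(1, 1, -1), i.e. a mirror in the xy-plane
```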
== History ==
René Just Haüy showed in 1784 that the law of constancy of interfacial angles could be accounted for if a crystal were made up of minute building blocks (molécules intégrantes), such as cubes, parallelepipeds, or rhombohedra. Haüy's method is named the law of decrements (p. 322). The law of rational indices was not stated in its modern form by Haüy, but it is directly implied by his law of decrements (p. 333).
Haüy spoke for the first time about a law of symmetry in his physics classes at the École Normale Supérieure in 1795. In his memoir of 1815 Haüy related the number and the position of the faces observed on the external form of crystals to the symmetry of the hypothetical nucleus. However, he deliberately excluded certain crystals, among others boracite, quartz, and the tourmalines (pp. 7–8). He was forced to exclude some substances because their crystals did not exhibit holohedry (all of the edges and faces behave in an equivalent manner), as required by his law of symmetry, but rather hemihedry (half of the edges and faces are equivalent and the other half act differently). In the figure, a cube is transformed into an octahedron when all the faces are decremented by an identical amount at each vertex (holohedry), but into a tetrahedron when only alternate faces are decremented (hemihedry), an example being boracite. Haüy knew about the pyroelectric effect and the polarity induced in tourmaline by a change of temperature; he thought that the hemihedry these crystals exhibited might be accounted for by different electric forces acting on the two extremities of the axis of the crystal during growth (pp. 328–329).
Haüy discovered that in some quartz crystals, certain faces were inclined more towards one side than the other. He called this type of quartz crystal 'plagihedral' and differentiated right from left plagihedra, depending on which direction the face was inclined (pp. 138–139). In practice, Haüy knew that there were counter-examples to his law of symmetry, such as plagihedral quartz, but as he did not have an explanation for them, he dismissed them merely as rare anomalies. In summary, hemimorphic forms, such as quartz and tourmaline, caused Haüy's law of symmetry great difficulties (p. 180). In 1819, Weiss demonstrated the generality of this phenomenon and gave it the name of hemihedry, thus challenging Haüy's atomistic approach. The modern definition of hemihedry is: "The point group of a crystal is called hemihedry if it is a subgroup of index 2 of the point group of its lattice." The point group Td (tetrahedral symmetry) is a subgroup of index 2 of point group Oh (octahedral symmetry).
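The index-2 relation between Td and Oh can be verified computationally, assuming NumPy: enumerate the 48 elements of Oh as signed-permutation matrices and count those that permute the four vertices of a regular tetrahedron inscribed in the cube. The survivors form Td, of order 24, so the index is 48/24 = 2.

```python
import itertools
import numpy as np

# Vertices of a regular tetrahedron inscribed in the cube [-1, 1]^3.
tetra = {(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)}

def preserves_tetra(m):
    """True if the matrix m maps the tetrahedron's vertex set to itself."""
    return {tuple(int(x) for x in m @ np.array(v)) for v in tetra} == tetra

# Build Oh as the 48 signed-permutation matrices.
group_Oh = []
for perm in itertools.permutations(range(3)):
    for signs in itertools.product([1, -1], repeat=3):
        m = np.zeros((3, 3), dtype=int)
        for row, (col, s) in enumerate(zip(perm, signs)):
            m[row, col] = s
        group_Oh.append(m)

order_Td = sum(preserves_tetra(m) for m in group_Oh)
print(len(group_Oh), order_Td, len(group_Oh) // order_Td)  # 48 24 2
```

Geometrically this mirrors the cube-to-tetrahedron decrement example above: halving the equivalent faces halves the symmetry group.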
In his 1815 law of symmetry papers Haüy postulated the idea of rotational symmetry in crystals but he considered only a single (vertical) axis of rotation, which made it difficult to explain the observed crystal forms with additional (horizontal) axes of rotation. As an example, Haüy did not recognize the existence of a horizontal axis of two-fold symmetry in cobaltite, and so could not include this mineral in his law of symmetry.
The German mineralogists led by Weiss were interested in the optical properties of minerals and the systematic descriptions of crystals. Their approach led to the first two determinations of all 32 point groups by Frankenheim in 1826 and Hessel, using a different approach which combined symmetry elements, in 1830 (p. 367). Their work was not influential at the time, and, in 1867, Axel Gadolin independently rediscovered their results.
Gabriel Delafosse continued Haüy's work in France. He was the first to use the terms lattice (réseau) and unit cell (maille). He stated that the orientation of the molecular axes in a substance is constant, which implies symmetry of translation (a defining feature of a lattice), and that the external symmetry of a crystal reflects its inner symmetry, namely the symmetry of the constituent atoms and their arrangement. In other words, the law of symmetry applies to both the inside and the outside of a crystal (pp. 370–371).
French scientists did not adopt the dynamic crystallographic theory, but they did attempt to learn from it. Delafosse built on Haüy's crystallographic approach by stating that the structure and physical properties of crystals should exhibit the same symmetry. Delafosse aimed to resolve the apparent counter-examples to Haüy's law of symmetry by explaining that the symmetry of the physical phenomena revealed the inner structure of crystals. This structure is sometimes more complex than the external morphology. Crystals, in these cases, are of lower symmetry than the lattice. This substructure explained the behaviour of hemihedral crystals, which were not adequately accounted for by Haüy (p. 40).
Later work by Auguste Bravais in 1851 in which he defined the Bravais lattices can be considered as drawing on a combination of the approaches of Haüy and Weiss (pp. 11–12).
== See also ==
Law of constancy of interfacial angles
Law of rational indices
== References ==